Method for the application of deep reinforcement learning for optimised control of industrial energy supply systems by the example of a central cooling system
Author
Date
2021
Type
Journal Article
Citations
Cited 6 times in Web of Science
Cited 10 times in Scopus
ETH Bibliography
yes
Abstract
This paper presents a method for data- and model-driven control optimisation of industrial energy supply systems (IESS) by means of deep reinforcement learning (DRL). The method consists of five steps: system boundary definition and data accumulation; system modelling and validation; implementation of DRL algorithms; performance comparison; and adaptation or application of the control strategy. The method is successfully applied to a simulation of an industrial cooling system using the PPO (proximal policy optimisation) algorithm. Significant reductions in electricity cost of 3% to 17% and in CO2 emissions of 2% to 11% are achieved. The DRL-based control strategy is interpreted and three main reasons for the performance increase are identified. The DRL controller reduces energy cost by utilising the storage capacity of the cooling system and shifting electricity demand to times of lower prices. In addition, the DRL-based control strategy for the cooling towers and compression chillers reduces both electricity cost and wear-related cost. (C) 2021 CIRP. Published by Elsevier Ltd. All rights reserved.
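The abstract describes training a PPO agent to control a cooling system with thermal storage under time-varying electricity prices. The sketch below is a minimal illustration of that kind of setup, not the authors' implementation: the ToyCoolingEnv environment, its thermal model, price profile, chiller power rating and temperature band are all illustrative assumptions, and PPO is taken from the stable-baselines3 library rather than from the paper.

```python
# Minimal sketch (not the paper's implementation): a toy cooling-system
# environment with thermal storage and a day/night electricity price,
# trained with PPO from stable-baselines3. All dynamics, prices and
# parameters below are illustrative assumptions, not values from the paper.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class ToyCoolingEnv(gym.Env):
    """Cold-water storage tank cooled by a chiller; reward = -(electricity cost + band penalty)."""

    def __init__(self, episode_hours: int = 168):
        super().__init__()
        self.episode_hours = episode_hours
        # Action: normalised chiller load in [0, 1].
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(1,), dtype=np.float32)
        # Observation: [tank temperature degC, electricity price EUR/kWh, hour of day / 24].
        self.observation_space = spaces.Box(
            low=np.array([0.0, 0.0, 0.0], dtype=np.float32),
            high=np.array([30.0, 1.0, 1.0], dtype=np.float32),
        )

    def _price(self, hour: int) -> float:
        # Assumed day/night price spread (EUR/kWh) so the agent can shift load.
        return 0.30 if 8 <= hour % 24 < 20 else 0.15

    def _obs(self):
        hour = self.t % 24
        return np.array([self.temp, self._price(hour), hour / 24.0], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.temp = 12.0  # assumed initial tank temperature in degC
        return self._obs(), {}

    def step(self, action):
        load = float(np.clip(action[0], 0.0, 1.0))
        hour = self.t % 24
        # Assumed first-order thermal balance: production heat gain vs. chiller cooling.
        heat_gain = 1.0        # K/h added by the production load
        cooling = 2.0 * load   # K/h removed at full chiller load
        self.temp = float(np.clip(self.temp + heat_gain - cooling, 0.0, 30.0))
        # Electricity cost of running the chiller this hour (assumed 500 kW at full load).
        cost = load * 500.0 * self._price(hour)
        # Penalise leaving the allowed temperature band (assumed 6..16 degC).
        penalty = 100.0 * max(0.0, self.temp - 16.0) + 100.0 * max(0.0, 6.0 - self.temp)
        reward = -(cost + penalty)
        self.t += 1
        truncated = self.t >= self.episode_hours
        return self._obs(), reward, False, truncated, {}


if __name__ == "__main__":
    env = ToyCoolingEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=50_000)  # short demo run, far fewer steps than a real study
    obs, _ = env.reset()
    action, _ = model.predict(obs, deterministic=True)
    print("chiller load at first step:", float(action[0]))
```

In a real application of the five-step method, this toy environment would be replaced by the validated plant simulation from the modelling step, and the reward would reflect actual electricity tariffs, CO2 factors and wear-related cost rather than the assumed values above.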
Publication status
published
Journal / series
CIRP Annals
Volume
Pages / Article No.
Publisher
CIRP
Subject
Machine learning; Energy efficiency; CO2 reduced production