Reinforcement learning in deregulated energy market: A comprehensive review

Ziqing Zhu, Ze Hu, Ka Wing Chan, Siqi Bu*, Bin Zhou, Shiwei Xia

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

19 Citations (Scopus)

Abstract

The increasing penetration of renewable generation, along with the deregulation and marketization of the power industry, is transforming energy market operation paradigms. Optimal bidding strategies and dispatching methodologies under these new paradigms are priority concerns for both market participants and power system operators. In contrast with conventional solution methodologies, Reinforcement Learning (RL), an emerging machine learning technique with more favorable computational performance, is playing an increasingly significant role in both academia and industry. This paper presents a comprehensive review of RL applications in deregulated energy market operation, including bidding and dispatching strategy optimization, based on more than 150 carefully selected papers. For each application, in addition to a paradigmatic summary of the generalized methodology, in-depth discussions of applicability and of the obstacles encountered when deploying RL techniques are provided. Finally, several RL techniques with great potential for deployment in bidding and dispatching problems are recommended and discussed.
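To illustrate the kind of problem the review surveys, the following is a minimal, hypothetical sketch of tabular Q-learning applied to a single generator's bidding decision in a stylized uniform-price market. It is not taken from the paper: the clearing_price model, the markup grid MARKUPS, the demand discretization, and all cost and capacity figures are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical, simplified RL bidding example (not from the reviewed paper):
# a single generator submits a price markup (action) to a toy uniform-price
# market and learns a bidding policy via tabular Q-learning.
rng = np.random.default_rng(0)

MARKUPS = np.linspace(1.0, 2.0, 5)   # bid = marginal cost * markup (assumed grid)
DEMAND_LEVELS = 3                    # discretized demand states: low / mid / high
MARGINAL_COST = 20.0                 # $/MWh, assumed constant
CAPACITY = 100.0                     # MW offered by the agent

alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((DEMAND_LEVELS, len(MARKUPS)))

def clearing_price(demand_state, bid):
    """Toy market clearing: a competitive fringe sets a demand-dependent
    reference price; the agent is dispatched only if its bid is below it."""
    fringe_price = 25.0 + 10.0 * demand_state + rng.normal(0.0, 1.0)
    return fringe_price if bid <= fringe_price else None

for episode in range(5000):
    state = rng.integers(DEMAND_LEVELS)          # observe current demand level
    # epsilon-greedy selection over the markup grid
    if rng.random() < epsilon:
        action = rng.integers(len(MARKUPS))
    else:
        action = int(np.argmax(Q[state]))

    bid = MARGINAL_COST * MARKUPS[action]
    price = clearing_price(state, bid)
    # profit = (clearing price - marginal cost) * dispatched quantity, 0 if not cleared
    reward = (price - MARGINAL_COST) * CAPACITY if price is not None else 0.0

    next_state = rng.integers(DEMAND_LEVELS)     # demand evolves exogenously here
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])

print("Learned markup per demand state:", MARKUPS[np.argmax(Q, axis=1)])
```

Real studies covered by the review replace this toy single-agent setting with multi-agent interactions, network constraints, and deep RL function approximation; the sketch only shows the state-action-reward framing that those works build on.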

Original language: English
Article number: 120212
Journal: Applied Energy
Volume: 329
Early online date: 8 Nov 2022
DOIs
Publication status: Published - 1 Jan 2023

Scopus Subject Areas

  • Building and Construction
  • Renewable Energy, Sustainability and the Environment
  • Mechanical Engineering
  • Energy (all)
  • Management, Monitoring, Policy and Law

User-Defined Keywords

  • Bidding strategy
  • Energy market
  • Optimal dispatching
  • Reinforcement learning

