Abstract
Real-time pricing and demand response (RTP-DR) is a key problem for profit maximization and policy making in the deregulated retail electricity market (REM). However, previous studies have overlooked the non-convexity and multiple equilibria caused by network constraints and by the temporally coupled, non-linear power consumption characteristics of end-users (EUs) in a privacy-protected environment. This paper employs the mixed strategy Nash equilibrium (MSNE) to analyze the multiple equilibria of the non-convex game underlying the RTP-DR problem, providing a comprehensive view of the potential transaction outcomes. A novel multi-agent Q-learning algorithm is developed to estimate the subgame perfect equilibrium (SPE) of the proposed game. As a multi-agent reinforcement learning (MARL) algorithm, it treats the players in the game as rational "agents" that learn by trial and error to make optimal decisions across time periods. Moreover, the proposed algorithm has a bi-level structure and represents Q-values as probability distributions that capture the agents' beliefs about the environment's response. Validated on a Northern Illinois utility dataset, the proposed approach demonstrates notable advantages over benchmark algorithms: it yields more profitable pricing decisions for the monopoly retailer in the REM while leading to strategic outcomes for the EUs. The numerical results also show that multiple optimal daily pricing decisions coexist, providing almost identical profits to the retailer while leading to different energy consumption patterns and significant differences in total energy usage on the demand side.
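
To make the abstract's algorithmic idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of a bi-level, Stackelberg-style multi-agent Q-learning loop: a retailer leader posts a price for each period, EU followers respond with their consumption, and Q-values are stored as probability distributions over discretized returns rather than point estimates. The price menu, toy demand/utility model, reward scaling, and all hyper-parameters are hypothetical assumptions introduced only for illustration.

```python
# Minimal sketch of bi-level multi-agent Q-learning with distributional
# ("belief") Q-values. All models and numbers below are illustrative
# assumptions, not details taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

T = 4                                    # pricing periods in a day (toy value)
PRICES = [0.10, 0.15, 0.20]              # retailer's discrete price menu ($/kWh), assumed
LOADS = [1.0, 2.0, 3.0]                  # each EU's discrete consumption choices (kWh), assumed
RETURN_BINS = np.linspace(0.0, 1.0, 11)  # support of the belief over normalized returns
N_EU = 3                                 # number of end-users (toy value)
WHOLESALE = 0.08                         # retailer's purchase cost ($/kWh), assumed

def init_belief(n_actions):
    """Uniform belief over return bins for every action."""
    return np.full((n_actions, len(RETURN_BINS)), 1.0 / len(RETURN_BINS))

# One belief table per period for the retailer, and per (EU, period) for the followers.
retailer_Q = [init_belief(len(PRICES)) for _ in range(T)]
eu_Q = [[init_belief(len(LOADS)) for _ in range(T)] for _ in range(N_EU)]

def expected_value(belief_row):
    return float(belief_row @ RETURN_BINS)

def select(belief, eps=0.1):
    """Epsilon-greedy on the expected value of each action's belief."""
    if rng.random() < eps:
        return int(rng.integers(len(belief)))
    return int(np.argmax([expected_value(row) for row in belief]))

def update_belief(belief, action, reward, lr=0.1):
    """Shift probability mass toward the bin nearest the observed reward."""
    target = np.zeros(len(RETURN_BINS))
    target[np.argmin(np.abs(RETURN_BINS - reward))] = 1.0
    belief[action] = (1 - lr) * belief[action] + lr * target

for episode in range(2000):
    for t in range(T):
        # Upper level: the retailer (leader) announces a price for period t.
        p_idx = select(retailer_Q[t])
        price = PRICES[p_idx]

        # Lower level: each EU (follower) chooses consumption given the announced price.
        total_load = 0.0
        for i in range(N_EU):
            a_idx = select(eu_Q[i][t])
            load = LOADS[a_idx]
            total_load += load
            # Toy EU utility: concave benefit of consumption minus cost, rescaled to [0, 1].
            utility = np.sqrt(load) - price * load
            eu_reward = np.clip(utility / np.sqrt(LOADS[-1]), 0.0, 1.0)
            update_belief(eu_Q[i][t], a_idx, eu_reward)

        # Retailer profit = (price - wholesale cost) * served load, rescaled to [0, 1].
        profit = (price - WHOLESALE) * total_load
        max_profit = (max(PRICES) - WHOLESALE) * LOADS[-1] * N_EU
        update_belief(retailer_Q[t], p_idx, np.clip(profit / max_profit, 0.0, 1.0))

# Read off the greedy price path after training (one candidate equilibrium strategy).
greedy_prices = [PRICES[int(np.argmax([expected_value(row) for row in Q]))] for Q in retailer_Q]
print("learned price path:", greedy_prices)
```

The per-period belief tables and the leader-then-follower ordering inside each period mirror the bi-level, temporally structured decision-making described above; the specific belief-update rule (a convex shift toward the observed return bin) is one simple choice among many and is not claimed to match the paper's.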
| Field | Value |
| --- | --- |
| Original language | English |
| Article number | 125815 |
| Number of pages | 11 |
| Journal | Applied Energy |
| Volume | 391 |
| Early online date | 14 Apr 2025 |
| DOIs | |
| Publication status | E-pub ahead of print - 14 Apr 2025 |
User-Defined Keywords
- Demand response
- Mixed strategy Nash equilibrium
- Real-time pricing
- Reinforcement learning
- Stackelberg game