


Link to original content: https://doi.org/10.1631/FITEE.2300438
Multi-agent evaluation for energy management by practically scaling α-rank | Frontiers of Information Technology & Electronic Engineering


Multi-agent evaluation for energy management by practically scaling α-rank

Chinese title (translated): Application of multi-agent strategy evaluation methods based on extended α-rank to energy management

  • Research Article
  • Published in: Frontiers of Information Technology & Electronic Engineering

Abstract

Decarbonization has become an emerging trend in power systems. However, the growing number of photovoltaic units distributed across a distribution network can cause voltage issues, posing challenges for voltage regulation over a large-scale power grid. Reinforcement learning based intelligent control of smart inverters and other smart building energy management (EM) systems can be leveraged to alleviate these issues. To find the best EM strategy for building microgrids in a power system, this paper presents two large-scale multi-agent strategy evaluation methods that preserve building occupants' comfort while pursuing system-level objectives. The EM problem is formulated as a general-sum game to optimize benefits at both the system and building levels. The α-rank algorithm can solve general-sum games with theoretical ranking guarantees, but its interaction complexity makes it hard to apply to practical power systems. A new evaluation algorithm (TcEval) is therefore proposed that practically scales α-rank through tensor completion to reduce the interaction complexity. Then, considering the noise prevalent in practice, a noise processing model with domain knowledge is built to calculate strategy payoffs, yielding the TcEval-AS algorithm for noisy settings. Both evaluation algorithms greatly reduce the interaction complexity compared with existing approaches, including ResponseGraphUCB (RG-UCB) and αInformationGain (α-IG). Finally, the effectiveness of the proposed algorithms is verified in an EM case with realistic data.
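The α-rank procedure at the core of TcEval ranks strategies by the stationary distribution of an evolutionary Markov chain built from pairwise payoffs. The sketch below is a minimal single-population illustration in plain Python; the payoff table, selection intensity `alpha`, and population size `m` are hypothetical values chosen for demonstration, not parameters from the paper:

```python
import math

def fixation_prob(f_mut, f_res, alpha=5.0, m=50):
    # Probability that a single mutant takes over a resident population
    # (Fermi/logit form used by alpha-rank); d > 0 favors the mutant.
    d = f_mut - f_res
    if abs(d) < 1e-12:
        return 1.0 / m  # neutral drift
    return (1.0 - math.exp(-alpha * d)) / (1.0 - math.exp(-alpha * m * d))

def alpha_rank(payoff, alpha=5.0, m=50, iters=5000):
    # Build the Markov chain over monomorphic states and return its
    # stationary distribution: higher mass = higher-ranked strategy.
    n = len(payoff)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        leave = 0.0
        for j in range(n):
            if j != i:
                # Mutant j invading resident i: compare M[j][i] against M[i][j].
                C[i][j] = fixation_prob(payoff[j][i], payoff[i][j],
                                        alpha, m) / (n - 1)
                leave += C[i][j]
        C[i][i] = 1.0 - leave
    pi = [1.0 / n] * n
    for _ in range(iters):  # power iteration on the row-stochastic chain
        pi = [sum(pi[i] * C[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy 3-strategy general-sum payoff table (hypothetical numbers).
M = [[0.0, 1.0, 0.2],
     [0.0, 0.0, 1.0],
     [1.0, 0.5, 0.0]]
ranking = alpha_rank(M)
print(ranking)  # stationary mass per strategy; sums to 1
```

The interaction complexity that TcEval targets comes from filling in the payoff table `M`: every entry normally requires many simulated strategy interactions, and tensor completion lets most entries be inferred rather than sampled.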

Abstract (Chinese version, translated)

With the formulation and implementation of carbon-peak and carbon-neutrality policies, powering the grid with renewable energy has become the mainstream trend. However, the growing number of photovoltaic units in distribution networks has placed great active voltage-regulation pressure on distributed distribution systems, making traditional voltage-regulation schemes ill-suited to renewable-dominated grids. Intelligent control strategies based on multi-agent reinforcement learning can alleviate these problems through smart inverters and other smart building energy management systems (building microgrids). To obtain the best energy management strategy for building microgrids while satisfying occupants' comfort and energy demands, this paper proposes two large-scale multi-agent strategy evaluation methods, formulating the energy management problem as a general-sum game that optimizes the benefits at both the system and building-occupant levels. Although the α-rank algorithm can solve general-sum games and theoretically guarantee the reliability of strategy rankings, it is limited by the sampling complexity of strategy interactions and is hard to apply to practical power systems. By introducing tensor completion to extend α-rank, this paper proposes a new evaluation algorithm, TcEval, to reduce the sampling complexity of interactions. Furthermore, considering the noise prevalent in practical scenarios, a noise processing model based on domain knowledge is built to calculate strategy payoffs, and the TcEval-AS algorithm is proposed for noisy scenarios. Multiple energy management experiments based on real data show that the two proposed evaluation algorithms greatly reduce the sampling complexity of strategy evaluation compared with existing methods (RG-UCB and α-IG). Finally, the effectiveness of the proposed algorithms is verified with real data.


Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

  • Brookes DH, Listgarten J, 2018. Design by adaptive sampling. https://arxiv.org/pdf/1810.03714v4

  • Brookes DH, Park H, Listgarten J, 2019. Conditioning by adaptive sampling for robust design. Proc 36th Int Conf on Machine Learning, p.773–782.

  • Cai WQ, Kordabad AB, Gros S, 2023. Energy management in residential microgrid using model predictive control-based reinforcement learning and Shapley value. Eng Appl Artif Intell, 119:105793. https://doi.org/10.1016/j.engappai.2022.105793


  • Claessens BJ, Vrancx P, Ruelens F, 2018. Convolutional neural networks for automatic state-time feature extraction in reinforcement learning applied to residential load control. IEEE Trans Smart Grid, 9(4):3259–3269. https://doi.org/10.1109/TSG.2016.2629450


  • Czarnecki WM, Gidel G, Tracey B, et al., 2020. Real world games look like spinning tops. Proc 34th Int Conf on Neural Information Processing Systems, Article 1463.

  • Dong Q, Wu ZY, Lu J, et al., 2022. Existence and practice of gaming: thoughts on the development of multi-agent system gaming. Front Inform Technol Electron Eng, 23(7):995–1001. https://doi.org/10.1631/FITEE.2100593


  • Du YL, Yan X, Chen X, et al., 2021. Estimating α-rank from a few entries with low rank matrix completion. Proc 38th Int Conf on Machine Learning, p.2870–2879.

  • Lowe R, Wu Y, Tamar A, et al., 2017. Multi-agent actor-critic for mixed cooperative-competitive environments. Proc 31st Int Conf on Neural Information Processing Systems, p.6382–6393.

  • Muller P, Omidshafiei S, Rowland M, et al., 2020. A generalized training approach for multiagent learning. Proc 8th Int Conf on Learning Representations.

  • Omidshafiei S, Papadimitriou C, Piliouras G, et al., 2019. α-rank: multi-agent evaluation by evolution. Sci Rep, 9(1):9937. https://doi.org/10.1038/s41598-019-45619-9


  • Pigott A, Crozier C, Baker K, et al., 2022. GridLearn: multiagent reinforcement learning for grid-aware building energy management. Electr Power Syst Res, 213:108521. https://doi.org/10.1016/j.epsr.2022.108521


  • Rashid T, Zhang C, Ciosek K, 2021. Estimating α-rank by maximizing information gain. Proc AAAI Conf on Artificial Intelligence, p.5673–5681. https://doi.org/10.1609/aaai.v35i6.16712

  • Rowland M, Omidshafiei S, Tuyls K, et al., 2019. Multiagent evaluation under incomplete information. Proc 33rd Int Conf on Neural Information Processing Systems, Article 1101.

  • Shalev-Shwartz S, Ben-David S, 2014. Understanding Machine Learning: from Theory to Algorithms. Cambridge University Press, Cambridge, UK.


  • Signorino CS, Ritter JM, 1999. Tau-b or not tau-b: measuring the similarity of foreign policy positions. Int Stud Q, 43(1):115–144. https://doi.org/10.1111/0020-8833.00113


  • Silver D, Huang A, Maddison CJ, et al., 2016. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489. https://doi.org/10.1038/nature16961


  • Su WC, Wang JH, 2012. Energy management systems in microgrid operations. Electr J, 25(8):45–60. https://doi.org/10.1016/j.tej.2012.09.010


  • Tong Z, Li N, Zhang HM, et al., 2023. Dynamic user-centric multi-dimensional resource allocation for a wide-area coverage signaling cell based on DQN. Front Inform Technol Electron Eng, 24(1):154–163. https://doi.org/10.1631/FITEE.2200220


  • Tuyls K, Perolat J, Lanctot M, et al., 2018. A generalised method for empirical game theoretic analysis. Proc 17th Int Conf on Autonomous Agents and Multiagent Systems, p.77–85.

  • Vincent R, Ait-Ahmed M, Houari A, et al., 2020. Residential microgrid energy management considering flexibility services opportunities and forecast uncertainties. Int J Electr Power Energy Syst, 120:105981. https://doi.org/10.1016/j.ijepes.2020.105981


  • Williams CKI, Rasmussen CE, 1995. Gaussian processes for regression. Proc 8th Int Conf on Neural Information Processing Systems, p.514–520.

  • Xia D, Yuan M, Zhang CH, 2021. Statistically optimal and computationally efficient low rank tensor completion from noisy entries. Ann Stat, 49(1):76–99. https://doi.org/10.1214/20-AOS1942


  • Xu HC, Domínguez-García AD, Sauer PW, 2020. Optimal tap setting of voltage regulation transformers using batch reinforcement learning. IEEE Trans Power Syst, 35(3):1990–2001. https://doi.org/10.1109/TPWRS.2019.2948132


  • Zhang YY, Rao XP, Liu CY, et al., 2023. A cooperative EV charging scheduling strategy based on double deep Q-network and prioritized experience replay. Eng Appl Artif Intell, 118:105642. https://doi.org/10.1016/j.engappai.2022.105642


  • Zhao LY, Yang T, Li W, et al., 2022. Deep reinforcement learning-based joint load scheduling for household multi-energy system. Appl Energy, 324:119346. https://doi.org/10.1016/j.apenergy.2022.119346



Author information


Contributions

Yiyun SUN designed the research. Yiyun SUN and Meiqin LIU processed the data. Yiyun SUN drafted the paper. Yiyun SUN, Meiqin LIU, Senlin ZHANG, Ronghao ZHENG, Shanling DONG, and Xuguang LAN revised and finalized the paper.

Corresponding author

Correspondence to Meiqin Liu  (刘妹琴).

Ethics declarations

All the authors declare that they have no conflict of interest.

Additional information

Project supported by the National Key R&D Program of China (No. 2021ZD0112700), the Zhejiang Provincial Natural Science Foundation of China (No. LZ22F030006), and the Fundamental Research Funds for the Central Universities, China (No. xtr072022001)


About this article


Cite this article

Sun, Y., Zhang, S., Liu, M. et al. Multi-agent evaluation for energy management by practically scaling α-rank. Front Inform Technol Electron Eng 25, 1003–1016 (2024). https://doi.org/10.1631/FITEE.2300438



  • DOI: https://doi.org/10.1631/FITEE.2300438
