Energy management in microgrids using model-free deep reinforcement learning approach

dc.authorid: https://orcid.org/0009-0003-2205-2573
dc.authorid: https://orcid.org/0000-0001-7032-8018
dc.contributor.author: Talab, Odia A.
dc.contributor.author: Avci, Isa
dc.date.accessioned: 2025-02-06T08:56:23Z
dc.date.available: 2025-02-06T08:56:23Z
dc.date.issued: 10-01-2025
dc.department: Faculties, Faculty of Engineering, Department of Computer Engineering
dc.description.abstract: Electric power systems are undergoing rapid modernization driven by advancements in smart-grid technologies, and microgrids (MGs) play a crucial role in integrating renewable energy sources (RESs), such as wind and solar energy, into existing grids. MGs offer a flexible and efficient framework for accommodating dispersed energy resources. However, the intermittent nature of renewable sources, coupled with the rising demand for Electric Vehicles (EVs) and fast charging stations (FCSs), poses significant challenges to the stability and efficiency of microgrid (MG) operations. These challenges stem from uncertainties in both energy generation and fluctuating demand patterns, making efficient energy management in MGs a complex task. This study introduces a novel model-free strategy for real-time energy management in MGs, aimed at addressing uncertainties without the need for traditional uncertainty modeling techniques. Unlike conventional methods, the proposed approach enhances MG performance by minimizing power losses and operational costs. The problem is formulated as a Markov Decision Process (MDP) with well-defined objectives. To optimize decision-making, an actor-critic-based Deep Deterministic Policy Gradient (DDPG) algorithm is developed, leveraging reinforcement learning (RL) to adapt dynamically to changing system conditions. Comprehensive numerical simulations demonstrated the effectiveness of the proposed strategy. The results show a total cost of 51.8770 €ct/kWh, representing a reduction of 3.19% compared to the Dueling Deep Q Network (Dueling DQN) and 4% compared to the Deep Q Network (DQN). This highlights the robustness and scalability of the proposed model-free approach for modern MG energy management.
dc.identifier.citation: Talab, O.A., & Avci, I. (2025). Energy Management in Microgrids Using Model-Free Deep Reinforcement Learning Approach. IEEE Access, 13, 5871-5891.
dc.identifier.doi: 10.1109/access.2025.3525843
dc.identifier.endpage: 5891
dc.identifier.issn: 2169-3536
dc.identifier.scopus: 2-s2.0-85215428066
dc.identifier.scopusquality: Q1
dc.identifier.startpage: 5871
dc.identifier.uri: https://doi.org/10.1109/ACCESS.2025.3525843
dc.identifier.uri: https://hdl.handle.net/20.500.14619/15083
dc.identifier.volume: 13
dc.identifier.wos: WOS:001398098800023
dc.identifier.wosquality: Q2
dc.indekslendigikaynak: Scopus
dc.indekslendigikaynak: Web of Science
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.ispartof: IEEE Access
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/closedAccess
dc.subject: DDPG
dc.subject: energy management
dc.subject: EVs
dc.subject: FCSs
dc.subject: microgrid
dc.subject: RESs
dc.title: Energy management in microgrids using model-free deep reinforcement learning approach
dc.type: Article
oaire.citation.volume: 13
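The abstract formulates microgrid dispatch as an MDP solved by a DDPG agent. As a rough illustration of what such an MDP might look like, below is a toy environment sketch in Python. Every detail here is an illustrative assumption, not the paper's actual model: the state (solar output, load, battery state of charge), the action (battery charge/discharge setpoint), the flat tariff, and all numeric constants are invented for the sketch.

```python
import numpy as np

class ToyMicrogridEnv:
    """Toy MDP sketch: state = (solar output, load, battery SoC),
    action in [-1, 1] = battery charge (+) / discharge (-),
    reward = negative cost of power imported from the main grid.
    All profiles and constants are illustrative assumptions."""

    def __init__(self, horizon=24, seed=0):
        self.rng = np.random.default_rng(seed)
        self.horizon = horizon

    def reset(self):
        self.t = 0
        self.soc = 0.5                      # battery state of charge in [0, 1]
        self.state = self._observe()
        return self.state

    def _observe(self):
        pv = max(0.0, np.sin(np.pi * self.t / self.horizon))  # toy solar curve
        load = 0.6 + 0.2 * self.rng.random()                  # noisy demand
        return np.array([pv, load, self.soc])

    def step(self, action):
        action = float(np.clip(action, -1.0, 1.0))
        pv, load, _ = self.state
        self.soc = float(np.clip(self.soc + 0.1 * action, 0.0, 1.0))
        grid_import = max(load - pv + 0.1 * action, 0.0)      # net grid power
        reward = -0.2 * grid_import          # flat tariff of 0.2 (assumption)
        self.t += 1
        done = self.t >= self.horizon
        self.state = self._observe()
        return self.state, reward, done

# One episode with a random policy; the paper would train a DDPG actor
# (with a critic network and replay buffer) to choose the actions instead.
env = ToyMicrogridEnv()
state = env.reset()
total_reward, done = 0.0, False
while not done:
    state, r, done = env.step(env.rng.uniform(-1, 1))
    total_reward += r
```

In the actual study, the continuous action space is exactly why DDPG (an actor-critic method for continuous control) is chosen over DQN variants, which require discretized actions.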
