Energy-efficient deep Q-network: reinforcement learning for efficient routing protocol in wireless internet of things
Abstract
The internet of things (IoT) underpins pivotal real-world applications ranging from security systems to smart infrastructure and traffic management. However, contemporary IoT devices face significant challenges in battery longevity and energy efficiency, which limit network lifetime and sensor coverage. Many existing solutions, although promising on paper, are intricate and often impractical for real-world deployment. Addressing this gap, we introduce an energy-efficient routing protocol based on reinforcement learning (RL) tailored for wireless sensor networks (WSNs). The protocol uses RL to learn the optimal transmission route from the source to the sink node, taking into account the energy profile of each intermediary node. The RL agent is trained with a reward function that combines energy consumption and data transmission success. The model was compared against two prevalent routing protocols, LEACH and fuzzy C-means (FCM), for a comprehensive assessment. Simulation results show our protocol's superiority in terms of active node count, energy conservation, network lifetime, and data delivery efficiency.
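The abstract does not give implementation details of the learning scheme. As a rough illustration only, the sketch below shows how a tabular Q-learning agent could choose energy-aware next hops toward the sink using a reward that trades off transmission energy against delivery; the hyperparameters, reward weights, function names, and toy topology are assumptions for illustration, not taken from the paper.

```python
import random
from collections import defaultdict

# Illustrative tabular Q-learning for energy-aware next-hop selection.
# States are node IDs, actions are neighbouring node IDs; the reward
# penalises energy spent, favours energy-rich relays, and adds a bonus
# when the packet reaches the sink. All constants are assumed.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2    # assumed learning-rate, discount, exploration
W_ENERGY, W_DELIVERY = 1.0, 5.0          # assumed reward weights

def reward(tx_energy, residual_energy, reached_sink):
    """Combine transmission cost, relay residual energy, and delivery bonus."""
    r = -W_ENERGY * tx_energy + residual_energy
    if reached_sink:
        r += W_DELIVERY
    return r

def choose_next_hop(q, node, neighbours):
    """Epsilon-greedy choice among the current node's neighbours."""
    if random.random() < EPSILON:
        return random.choice(neighbours)
    return max(neighbours, key=lambda n: q[(node, n)])

def update(q, node, next_hop, r, next_neighbours):
    """Standard one-step Q-learning update."""
    best_next = max((q[(next_hop, n)] for n in next_neighbours), default=0.0)
    q[(node, next_hop)] += ALPHA * (r + GAMMA * best_next - q[(node, next_hop)])

# Minimal usage on a toy 4-node topology (node 3 is the sink).
if __name__ == "__main__":
    topology = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
    residual = {0: 1.0, 1: 0.8, 2: 0.6, 3: 1.0}
    q = defaultdict(float)
    for _ in range(500):
        node = 0
        while node != 3:
            nxt = choose_next_hop(q, node, topology[node])
            r = reward(tx_energy=0.1, residual_energy=residual[nxt],
                       reached_sink=(nxt == 3))
            update(q, node, nxt, r, topology[nxt] or [nxt])
            node = nxt
    print({k: round(v, 2) for k, v in q.items()})
```

A deep Q-network variant, as the title suggests, would replace the table `q` with a neural approximator over node and energy features, but the reward structure would play the same role.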
Keywords
Energy consumption; Energy-efficient routing; Network performance; Q-learning; Reinforcement learning
DOI: http://doi.org/10.11591/ijeecs.v33.i2.pp971-980