Twin Delayed DDPG based Dynamic Power Allocation for Mobility in IoRT
Abstract
The Internet of Robotic Things (IoRT) is a modern and fast-evolving technology employed in numerous socio-economic applications, connecting user equipment (UE) for communication and data transfer. To ensure the quality of service (QoS) in IoRT applications, radio resources, for example, transmit power allocation (PA), interference management, and throughput maximization, should be efficiently allocated among the UE. Traditionally, resource allocation has been formulated as optimization problems, which are then solved with mathematical optimization techniques. However, such optimization problems are generally nonconvex and NP-hard. This paper tackles one of the most crucial challenges in radio resource management, namely PA, i.e., the transmit power emitted by an antenna, under an interfering multiple access channel (IMAC) model. In addition, UE exhibit natural movement behavior that directly affects the channel conditions between the remote radio head (RRH) and the UE. We therefore consider two well-known UE mobility models: i) random walk and ii) modified Gauss-Markov (GM), which makes the simulation environment more realistic and complex. A data-driven, model-free, continuous-action deep reinforcement learning algorithm called twin delayed deep deterministic policy gradient (TD3) is proposed, which combines policy gradient, actor-critic, and double deep Q-learning (DDQL). It optimizes the PA for i) stationary UE, ii) UE moving according to the random walk model, and iii) UE moving according to the modified GM model. Simulation results show that the proposed TD3 method outperforms model-based techniques such as weighted minimum mean square error (WMMSE) and fractional programming (FP), as well as model-free algorithms such as deep Q-network (DQN) and DDPG, in terms of average sum-rate performance.
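For concreteness, the TD3 mechanics the abstract refers to (clipped double-Q targets, target policy smoothing, and delayed actor updates) can be sketched as below. This is a minimal illustrative sketch in PyTorch, not the paper's implementation; the network sizes, noise parameters, and the [0, 1] power normalization are assumptions made purely for the example.

```python
# Minimal TD3 update sketch for continuous power allocation (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    """Maps the channel-state observation to a normalized transmit-power action in [0, 1]."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Sigmoid(),  # rescaled to [0, P_max] by the environment
        )
    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Q(s, a) estimator; TD3 keeps two of these to curb overestimation bias."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def td3_update(actor, critic1, critic2, targets, optims, batch, step,
               gamma=0.99, tau=0.005, policy_noise=0.2, noise_clip=0.5, policy_delay=2):
    """One TD3 gradient step: clipped double-Q targets, smoothed target policy, delayed actor update."""
    actor_t, critic1_t, critic2_t = targets
    actor_opt, critic_opt = optims            # critic_opt covers parameters of both critics
    state, action, reward, next_state, done = batch

    with torch.no_grad():
        # Target policy smoothing: perturb the target action with clipped Gaussian noise.
        noise = (torch.randn_like(action) * policy_noise).clamp(-noise_clip, noise_clip)
        next_action = (actor_t(next_state) + noise).clamp(0.0, 1.0)
        # Clipped double-Q learning: bootstrap from the minimum of the twin target critics.
        target_q = torch.min(critic1_t(next_state, next_action),
                             critic2_t(next_state, next_action))
        target_q = reward + gamma * (1.0 - done) * target_q

    critic_loss = F.mse_loss(critic1(state, action), target_q) + \
                  F.mse_loss(critic2(state, action), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Delayed policy and target-network updates (every policy_delay critic steps).
    if step % policy_delay == 0:
        actor_loss = -critic1(state, actor(state)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
        for net, net_t in [(actor, actor_t), (critic1, critic1_t), (critic2, critic2_t)]:
            for p, p_t in zip(net.parameters(), net_t.parameters()):
                p_t.data.mul_(1 - tau).add_(tau * p.data)   # Polyak averaging
```

Taking the minimum of the two critics and delaying the actor and target updates are exactly the additions that distinguish TD3 from DDPG and mitigate Q-value overestimation in continuous-action settings such as PA.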
Keywords
IoRT, Power Allocation, Radio Resource Management, User Mobility, Deep Reinforcement Learning, Twin Delayed Deep Deterministic Policy Gradient
H. Kabir, M. Tham and Y. Chang, "Twin Delayed DDPG based Dynamic Power Allocation for Mobility in IoRT," in Journal of Communications Software and Systems, vol. 19, no. 1, pp. 19-29, February 2023, doi: https://doi.org/10.24138/jcomss-2022-0141
@article{kabir2023twindelayed,
  author  = {Homayun Kabir and Mau-Luen Tham and Yoong Choon Chang},
  title   = {Twin Delayed DDPG based Dynamic Power Allocation for Mobility in IoRT},
  journal = {Journal of Communications Software and Systems},
  month   = {2},
  year    = {2023},
  volume  = {19},
  number  = {1},
  pages   = {19--29},
  doi     = {10.24138/jcomss-2022-0141},
  url     = {https://doi.org/10.24138/jcomss-2022-0141}
}