IJCNIS Vol. 15, No. 4, 8 Aug. 2023
Keywords: Reinforcement Learning, Fifth Generation (5G), Internet of Things (IoT), Power Control, Device-to-Device (D2D)
Billions of interconnected devices enabled by the Internet of Things (IoT) are used in a wide range of applications, such as wearable devices, e-healthcare, agriculture, and transportation. In underlaid Device-to-Device (D2D) communication, devices establish a direct link and share information by reusing the spectrum of cellular users, which enhances spectral efficiency at low power consumption. However, reuse of the cellular spectrum by D2D users causes severe interference between them, which may degrade network performance. We therefore propose a Q-Learning based low-power selection scheme, built on multi-agent reinforcement learning, that mitigates this interference and thereby increases the capacity of the D2D network. To maximize capacity, the reward function is reformulated under a stochastic policy environment. Using this stochastic approach, we derive the proposed optimal low-power-consumption technique, which preserves the quality of service (QoS) of both cellular devices and D2D users for D2D communication in 5G networks and improves resource utilization. Numerical results confirm that the proposed scheme improves spectral efficiency and sum rate over the baseline Q-Learning approach by 14% and 12.65%, respectively.
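The scheme described above can be illustrated with a minimal multi-agent Q-learning sketch. This is not the authors' implementation: the power levels, channel gains, QoS threshold, and reward shaping below are all illustrative assumptions. Each D2D pair is an agent choosing a discrete transmit-power level; its reward is the pair's Shannon capacity, penalized when the aggregate D2D interference pushes the cellular user's SINR below an assumed QoS threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: discrete transmit-power levels (watts) per D2D pair.
POWER_LEVELS = np.array([1e-3, 5e-3, 10e-3, 20e-3])
N_AGENTS = 3
NOISE = 1e-9                # receiver noise power (assumed)
QOS_SINR_DB = 5.0           # assumed cellular QoS threshold
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Assumed static channel gains: g_d2d[i] is pair i's own link gain,
# g_cross[i] its interference gain toward the cellular receiver.
g_d2d = rng.uniform(1e-6, 1e-5, N_AGENTS)
g_cross = rng.uniform(1e-8, 1e-7, N_AGENTS)
cell_power, g_cell = 0.1, 1e-6   # cellular user's power and link gain

# One Q-table row per agent (single-state formulation for brevity).
Q = np.zeros((N_AGENTS, len(POWER_LEVELS)))

def step(actions):
    """Return per-agent rewards for the chosen power-level indices."""
    p = POWER_LEVELS[actions]
    # Cellular SINR under the aggregate D2D interference.
    cell_sinr = cell_power * g_cell / (NOISE + np.sum(p * g_cross))
    rewards = np.empty(N_AGENTS)
    for i in range(N_AGENTS):
        # D2D pair i is interfered by the cellular transmission.
        sinr_i = p[i] * g_d2d[i] / (NOISE + cell_power * g_cross[i])
        capacity = np.log2(1.0 + sinr_i)           # bits/s/Hz
        # Penalize every agent when the cellular QoS constraint is violated.
        penalty = 0.0 if 10 * np.log10(cell_sinr) >= QOS_SINR_DB else 5.0
        rewards[i] = capacity - penalty
    return rewards

for episode in range(2000):
    # Epsilon-greedy action (power level) selection per agent.
    actions = np.where(rng.random(N_AGENTS) < EPS,
                       rng.integers(len(POWER_LEVELS), size=N_AGENTS),
                       Q.argmax(axis=1))
    r = step(actions)
    for i in range(N_AGENTS):
        # Single-state Q-learning update.
        Q[i, actions[i]] += ALPHA * (r[i] + GAMMA * Q[i].max()
                                     - Q[i, actions[i]])

learned_powers = POWER_LEVELS[Q.argmax(axis=1)]
```

In the paper's full formulation the state would track channel and interference conditions and the reward follows the reformulated stochastic-policy objective; this sketch only shows the epsilon-greedy selection and Q-update mechanics on which that scheme rests.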
Chellarao Chowdary Mallipudi, Saurabh Chandra, Prateek Prakash, Rajeev Arya, Akhtar Husain, Shamimul Qamar, "Reinforcement Learning Based Efficient Power Control and Spectrum Utilization for D2D Communication in 5G Network", International Journal of Computer Network and Information Security (IJCNIS), Vol. 15, No. 4, pp. 13-24, 2023. DOI: 10.5815/ijcnis.2023.04.02