Next-generation offloading using hybrid deep learning network for adaptive mobile edge computing
Abstract
Offloading computation-intensive, time-sensitive mobile application tasks to distant cloud-based data centers has become a popular way to work around the resource limitations of mobile devices (MDs). However, deep reinforcement learning (DRL) techniques for offloading in mobile edge computing (MEC) environments struggle to adapt to new situations because of their low sample efficiency in each new context. To address these issues, a novel computational offloading in mobile edge computing (COOL-MEC) algorithm is proposed that combines the benefits of attention modules and bi-directional long short-term memory. The algorithm improves server resource utilization by lowering a combined cost of processing latency, processing energy consumption, and task throughput for latency-sensitive tasks. Experimental results show that the proposed COOL-MEC algorithm minimizes energy consumption: compared with the existing deep convolutional attention reinforcement learning with adaptive reward policy (DCARL-ARP) and DRL techniques, the energy consumption of COOL-MEC is reduced by 0.06% and 0.08%, respectively, and the average time per channel used for execution is reduced by 0.051% and 0.054%, respectively.
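The abstract describes an offloading policy built from a bi-directional LSTM combined with an attention module. The sketch below is a minimal, illustrative interpretation of such an architecture, not the authors' implementation: all dimensions, the observation features (e.g., task size, channel gain, queue length), and the binary local-vs-offload action space are assumptions made for illustration only.

```python
# Illustrative sketch only: an offloading policy network combining a
# bi-directional LSTM over a window of recent observations with a
# self-attention layer, in the spirit of the COOL-MEC description.
import torch
import torch.nn as nn


class AttentionBiLSTMOffloader(nn.Module):
    def __init__(self, obs_dim=8, hidden_dim=64, num_heads=4, num_actions=2):
        super().__init__()
        # Bi-directional LSTM captures temporal context of recent task and
        # channel observations in both directions.
        self.bilstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Self-attention weighs the time steps most relevant to the current
        # offloading decision.
        self.attention = nn.MultiheadAttention(2 * hidden_dim, num_heads,
                                               batch_first=True)
        # Head scoring the offloading actions (assumed: 0 = local, 1 = offload).
        self.head = nn.Linear(2 * hidden_dim, num_actions)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim)
        h, _ = self.bilstm(obs_seq)               # (batch, time, 2*hidden_dim)
        attn_out, _ = self.attention(h, h, h)     # self-attention over time
        return self.head(attn_out[:, -1, :])      # action scores for last step


if __name__ == "__main__":
    policy = AttentionBiLSTMOffloader()
    window = torch.randn(1, 10, 8)                # 10 recent observations
    print(policy(window))                         # offloading action scores
```

In a DRL setting, the action scores would feed a policy or Q-learning objective whose reward reflects the combined cost of latency, energy consumption, and throughput mentioned in the abstract; those reward weights are not specified here.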
Keywords
Average energy consumption; Computation offloading; Deep learning; Deep reinforcement learning; Long short-term memory; Mobile devices; Mobile edge computing
DOI: http://doi.org/10.11591/ijece.v15i2.pp1924-1932
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).