Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge Intelligence

Authors: Pengjin Wei, Kun Guo, Ye Li, Jue Wang, Wei Feng, Shi Jin, Ning Ge, Ying-Chang Liang
Journal: IEEE Access
Mobile edge computing (MEC) is considered a novel paradigm for computation-intensive and delay-sensitive tasks in fifth-generation (5G) networks and beyond. However, its uncertainty, i.e., the dynamics and randomness on the mobile device, wireless channel, and edge network sides, gives rise to high-dimensional, nonconvex, nonlinear, and NP-hard optimization problems. Thanks to evolved reinforcement learning (RL), which iteratively interacts with the dynamic and random environment, its…


Deep Reinforcement Learning for Energy-Efficient Computation Offloading in Mobile-Edge Computing
The objective is to minimize the energy consumption of the entire MEC system under a delay constraint and uncertain resource requirements of heterogeneous computation tasks; the problem is formulated as a mixed-integer nonlinear program (MINLP) and solved with a value iteration-based reinforcement learning (RL) method.
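To illustrate the value iteration idea behind such methods, the sketch below solves a toy offloading MDP; the states, costs, and dynamics are hypothetical stand-ins for illustration, not the MINLP model from the paper.

```python
# Value iteration on a toy offloading MDP (hypothetical states/costs,
# not the paper's MINLP formulation).
# States: queue length 0..4; actions: 0 = compute locally, 1 = offload.

N_STATES = 5
ACTIONS = (0, 1)
GAMMA = 0.9

def energy_cost(s, a):
    # assumed costs: local computing burns CPU energy per queued task,
    # offloading pays a fixed transmit overhead plus a smaller per-task cost
    return 2.0 * s if a == 0 else 1.0 + 0.5 * s

def next_state(s, a):
    # deterministic toy dynamics: offloading drains the queue faster
    return max(0, s - (2 if a == 1 else 1))

V = [0.0] * N_STATES
for _ in range(200):  # Bellman sweeps until (near) convergence
    V = [min(energy_cost(s, a) + GAMMA * V[next_state(s, a)] for a in ACTIONS)
         for s in range(N_STATES)]

policy = [min(ACTIONS, key=lambda a: energy_cost(s, a) + GAMMA * V[next_state(s, a)])
          for s in range(N_STATES)]
print(policy)  # under these assumed costs: offload whenever the queue is non-empty
```

Because cost minimization replaces reward maximization here, the Bellman backup uses `min` instead of `max`; the fixed point and greedy policy extraction are otherwise standard.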
Performance Optimization in Mobile-Edge Computing via Deep Reinforcement Learning
A deep Q-network-based strategic computation offloading algorithm is proposed that learns the optimal policy without a priori knowledge of the dynamic statistics and achieves a significant improvement in average cost over baseline policies.
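A DQN replaces a Q-table with a neural network, but the underlying model-free update is ordinary Q-learning. A minimal tabular sketch of that update on an assumed toy offloading problem (costs and task arrivals invented for illustration):

```python
import random

# Model-free Q-learning on an assumed toy offloading problem. A DQN fits
# this same temporal-difference target with a neural network; the point
# here is that no transition statistics are known in advance.

random.seed(0)
N_STATES, ACTIONS = 5, (0, 1)       # queue lengths 0..4; 0 = local, 1 = offload
GAMMA, ALPHA, EPS = 0.9, 0.05, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    cost = 2.0 * s if a == 0 else 1.0 + 0.5 * s   # assumed energy costs
    s_next = random.randint(0, N_STATES - 1)      # random task arrivals
    return cost, s_next

s = 0
for _ in range(50_000):
    # epsilon-greedy exploration over the two offloading actions
    a = random.choice(ACTIONS) if random.random() < EPS else min(ACTIONS, key=lambda x: Q[s][x])
    cost, s_next = step(s, a)
    # Q-learning update, learned purely from sampled interactions
    Q[s][a] += ALPHA * (cost + GAMMA * min(Q[s_next]) - Q[s][a])
    s = s_next

policy = [min(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```

With a long queue, local computation is expensive under these assumed costs, so the learned policy offloads in the high-backlog states.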
Partial Computation Offloading in NOMA-Assisted Mobile-Edge Computing Systems Using Deep Reinforcement Learning
A deep reinforcement learning algorithm named ACDQN is proposed that combines the advantages of the actor-critic and deep Q-network methods, offering low complexity and near-optimal performance in a NOMA-assisted MEC network.
Dynamic Pricing for Smart Mobile Edge Computing: A Reinforcement Learning Approach
This letter develops a policy gradient (PG)-based reinforcement learning (RL) algorithm that enables continuous pricing, an advancement over the conventional Q-learning algorithm, which supports only a discrete action space.
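The key point is that a Gaussian policy lets the gradient act directly on a continuous price. A REINFORCE sketch on an invented linear demand curve (the revenue model and learning rates are assumptions, not the letter's system model):

```python
import random

# REINFORCE with a Gaussian policy over a continuous price. The demand
# curve below is a hypothetical stand-in; revenue peaks at price = 5.

random.seed(1)
mu, sigma, lr = 1.0, 1.0, 0.005   # policy mean is learned; std kept fixed
baseline = 0.0                    # running-average baseline to cut variance

def revenue(price):
    # assumed linear demand: demand = max(0, 10 - price)
    return price * max(0.0, 10.0 - price)

for _ in range(20_000):
    price = random.gauss(mu, sigma)        # sample a continuous action
    r = revenue(price)
    baseline += 0.01 * (r - baseline)
    # grad of log N(a; mu, sigma) w.r.t. mu is (a - mu) / sigma^2
    mu += lr * (r - baseline) * (price - mu) / sigma**2

print(round(mu, 1))
```

A discrete Q-learner would have to grid the price axis in advance; the Gaussian policy instead concentrates its mean near the revenue-maximizing price.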
Smart Resource Allocation for Mobile Edge Computing: A Deep Reinforcement Learning Approach
A smart Deep Reinforcement Learning-based Resource Allocation (DRLRA) scheme is proposed that adaptively allocates computing and network resources, reducing the average service time and balancing resource usage in a varying MEC environment.
Multi-Agent Deep Reinforcement Learning for Computation Offloading and Interference Coordination in Small Cell Networks
A distributed multi-agent deep reinforcement learning (DRL) scheme is proposed to minimize the overall energy consumption while meeting latency requirements, along with a federated DRL scheme that requires small base station (SBS) agents to share only their model parameters rather than their local training data.
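The parameter-sharing idea can be sketched with FedAvg-style aggregation; the local loss and per-agent datasets below are invented for illustration, and local training is reduced to a single gradient step:

```python
# Federated parameter sharing sketch: each agent trains locally and
# uploads only its weights, which a server averages (FedAvg-style).
# Raw training data never leaves the agents.

def local_update(weights, data, lr=0.1):
    # toy quadratic loss: pull every weight toward the agent's local data mean
    mean = sum(data) / len(data)
    return [w - lr * (w - mean) for w in weights]

def federated_average(weight_sets):
    # server aggregates by element-wise averaging of the uploaded weights
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n for i in range(len(weight_sets[0]))]

global_w = [0.0, 0.0]
agent_data = [[1.0, 3.0], [5.0, 7.0], [2.0, 4.0]]   # private per-agent datasets

for _ in range(100):                                 # communication rounds
    local_ws = [local_update(list(global_w), d) for d in agent_data]
    global_w = federated_average(local_ws)

print([round(w, 2) for w in global_w])
```

The global weights converge to the average of the agents' local optima, which is exactly what makes the scheme communication-light: only a fixed-size parameter vector crosses the network each round.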
In-Edge AI: Intelligentizing Mobile Edge Computing, Caching and Communication by Federated Learning
The "In-Edge AI" framework is designed to intelligently exploit collaboration among devices and edge nodes, which exchange learning parameters for better model training and inference, thereby enabling dynamic system-level optimization and application-level enhancement while reducing unnecessary system communication load.
Deep Reinforcement Learning for Online Computation Offloading in Wireless Powered Mobile-Edge Computing Networks
A Deep Reinforcement learning-based Online Offloading (DROO) framework is proposed that implements a deep neural network as a scalable solution for learning binary offloading decisions from experience, eliminating the need to solve combinatorial optimization problems and thus greatly reducing computational complexity, especially in large networks.
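A central trick in this line of work is turning a relaxed DNN output into a small set of binary candidates instead of searching all 2^N combinations. A sketch of that decision module, with an assumed toy utility in place of the paper's rate model:

```python
# DROO-style quantization sketch: a relaxed offloading-probability vector
# (e.g. a DNN actor's output) is quantized into K candidate binary actions,
# and the best candidate under an evaluation utility is kept. The utility
# and channel gains below are hypothetical.

def quantize(relaxed, K):
    # threshold at 0.5 for the base action, then flip the entries closest
    # to 0.5 (the most "undecided" users) to generate K-1 alternatives
    base = [1 if p > 0.5 else 0 for p in relaxed]
    order = sorted(range(len(relaxed)), key=lambda i: abs(relaxed[i] - 0.5))
    candidates = [base]
    for i in order[:K - 1]:
        alt = list(base)
        alt[i] = 1 - alt[i]
        candidates.append(alt)
    return candidates

def utility(action, gains):
    # assumed utility: an offloaded user contributes its channel gain,
    # a local user contributes a flat 0.3
    return sum(g if a == 1 else 0.3 for a, g in zip(action, gains))

relaxed = [0.9, 0.55, 0.2, 0.45]   # relaxed decisions for 4 users
gains = [0.8, 0.25, 0.6, 0.35]
cands = quantize(relaxed, K=4)
best = max(cands, key=lambda x: utility(x, gains))
print(best)
```

Evaluating K candidates instead of 2^N combinations is what keeps the per-decision cost low as the network grows; note how flipping an "undecided" entry recovers user 2, whose high gain the thresholded base action missed.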
Distributed and Collective Deep Reinforcement Learning for Computation Offloading: A Practical Perspective
This work proposes a distributed and collective DRL algorithm called DC-DRL with several improvements, combining the advantages of deep neuroevolution and policy gradients to maximize the utilization of multiple environments and prevent premature convergence.
Optimized Computation Offloading Performance in Virtual Edge Computing Systems Via Deep Reinforcement Learning
This paper considers MEC for a representative mobile user in an ultra-dense sliced radio access network (RAN), where multiple base stations are available for computation offloading, and proposes a double deep Q-network (DQN)-based strategic computation offloading algorithm that learns the optimal policy without a priori knowledge of the network dynamics.
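What distinguishes the double DQN from the plain DQN above is how the learning target is formed: the online estimator selects the next action, while a slowly updated target estimator evaluates it, curbing overestimation. A tabular stand-in with hypothetical states and values:

```python
# Double DQN target computation (tabular stand-in for the two networks;
# the states, actions, and Q-values below are hypothetical).

GAMMA = 0.9
q_online = {("s1", "local"): 1.2, ("s1", "offload"): 2.0}  # selects actions
q_target = {("s1", "local"): 1.0, ("s1", "offload"): 1.5}  # evaluates them

def double_dqn_target(reward, s_next, actions):
    # action selection by the frequently updated online estimator ...
    best = max(actions, key=lambda a: q_online[(s_next, a)])
    # ... but value evaluation by the slowly updated target estimator
    return reward + GAMMA * q_target[(s_next, best)]

y = double_dqn_target(reward=0.5, s_next="s1", actions=("local", "offload"))
print(y)
```

A single-network target would both pick and score the same (possibly overestimated) entry; decoupling the two roles is the whole of the "double" modification.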