Corpus ID: 58004579

Vehicular Edge Computing via Deep Reinforcement Learning

@article{Qi2019VehicularEC,
  title={Vehicular Edge Computing via Deep Reinforcement Learning},
  author={Qi Qi and Zhanyu Ma},
  journal={ArXiv},
  year={2019},
  volume={abs/1901.04290}
}
The smart vehicles construct the Internet of Vehicles, which can execute various intelligent services. … Key Method: We formulate the offloading decision for the multiple tasks in a service as a long-term planning problem and explore recent deep reinforcement learning to obtain the optimal solution. When deciding where to execute the current task, the learned offloading knowledge takes the data dependencies of the following tasks into account.
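A minimal sketch of this formulation, not the paper's implementation: the state features, the three offloading targets, and the latency-style reward are assumptions, but the epsilon-greedy Q-network decision and the one-step target below illustrate how the value of the next task's state lets the decision for the current task account for the data dependencies of the tasks that follow (PyTorch).

import random
import torch
import torch.nn as nn

# Assumed discrete action set: where to run the current task of the service.
OFFLOAD_TARGETS = ["local", "edge", "cloud"]

class QNetwork(nn.Module):
    """Maps task/vehicle state features to a Q-value per offloading target."""
    def __init__(self, state_dim: int, n_actions: int = len(OFFLOAD_TARGETS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def choose_offloading_target(q_net: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice of the offloading target for the current task."""
    if random.random() < epsilon:
        return random.randrange(len(OFFLOAD_TARGETS))
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def td_target(q_net: QNetwork, reward: float, next_state: torch.Tensor,
              done: bool, gamma: float = 0.99) -> torch.Tensor:
    """One-step target: discounting the value of the next task's state is what
    makes the current decision sensitive to the following tasks' dependencies."""
    if done:
        return torch.tensor(reward)
    with torch.no_grad():
        return reward + gamma * q_net(next_state).max()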

Citations

Collaborative Data Scheduling for Vehicular Edge Computing via Deep Reinforcement Learning
TLDR
A unified framework with communication, computation, caching, and collaborative computing is formulated, and a collaborative data scheduling scheme is developed to minimize the system-wide data processing cost while satisfying the delay constraints of applications.
Task offloading algorithm of vehicle edge computing environment based on Dueling-DQN
TLDR
A semi-online task distribution and offloading algorithm based on Dueling-DQN is proposed for time-varying, complex vehicular environments; it can improve the efficiency and reduce the energy consumption of computation tasks to a certain extent.
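For context, the dueling architecture named above splits the Q-function into a state value and per-action advantages; the sketch below uses assumed layer sizes and a flat state vector of task and vehicle features (it is not the cited paper's network) and shows the standard aggregation Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).

import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value_head = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage_head = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.backbone(state)
        value = self.value_head(h)
        advantage = self.advantage_head(h)
        # Standard dueling aggregation: subtracting the mean advantage keeps
        # V and A identifiable while Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
        return value + advantage - advantage.mean(dim=-1, keepdim=True)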
Dynamic Scheduling for Stochastic Edge-Cloud Computing Environments Using A3C Learning and Residual Recurrent Neural Networks
TLDR
This work proposes an A3C-based real-time scheduler for stochastic Edge-Cloud environments that allows decentralized learning concurrently across multiple agents, and uses the R2N2 architecture to capture a large number of host and task parameters together with temporal patterns to provide efficient scheduling decisions.
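The A3C scheduler in this entry rests on the advantage actor-critic update; the sketch below uses a plain feed-forward body as a stand-in for the R2N2 architecture and hypothetical loss coefficients, so it illustrates the shared policy/value heads and the per-worker loss rather than the cited scheduler itself.

import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Shared body with a policy head (action logits) and a value head."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, states: torch.Tensor):
        h = self.body(states)
        return self.policy_head(h), self.value_head(h)

def actor_critic_loss(model: ActorCritic, states, actions, returns,
                      value_coef: float = 0.5, entropy_coef: float = 0.01):
    """Loss each worker minimizes on its rollout before pushing gradients to
    the shared parameters (the asynchronous machinery of A3C is omitted)."""
    logits, values = model(states)
    dist = torch.distributions.Categorical(logits=logits)
    advantages = returns - values.squeeze(-1)
    policy_loss = -(dist.log_prob(actions) * advantages.detach()).mean()
    value_loss = advantages.pow(2).mean()
    entropy = dist.entropy().mean()
    return policy_loss + value_coef * value_loss - entropy_coef * entropy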
Computation Offloading and Content Caching Delivery in Vehicular Edge Computing: A Survey
TLDR
An overview of VEC architectures is given, covering the types of layers, fog nodes, communication technologies, and vehicle applications used in data offloading and dissemination scenarios, and the mobility model used in the VEC scenario is discussed.
DeepEdge: A New QoE-Based Resource Allocation Framework Using Deep Reinforcement Learning for Future Heterogeneous Edge-IoT Applications
TLDR
This paper proposes a novel two-stage deep reinforcement learning (DRL) scheme that effectively allocates edge resources to serve IoT applications and maximize users' QoE, and develops a Q-value approximation approach to tackle the large-space problem of Edge-IoT.
Reinforcement Learning-Based Vehicle-Cell Association Algorithm for Highly Mobile Millimeter Wave Communication
TLDR
This paper proposes a low-complexity algorithm that approximates the solution of the vehicle-cell association optimization problem in millimeter wave (mmWave) communication networks and achieves up to 15% gains in sum rate and a 20% reduction in VUE outages.
Convergence of Edge Computing and Deep Learning: A Comprehensive Survey
TLDR
By consolidating information scattered across the communication, networking, and DL areas, this survey can help readers to understand the connections between enabling technologies while promoting further discussions on the fusion of edge intelligence and intelligent edge, i.e., Edge DL.
Green Internet of Vehicles: Architecture, Enabling Technologies, and Applications
TLDR
This work discusses 5G technology, mobile edge computing, and deep reinforcement learning in the green IoV, and how to minimize energy consumption and maximize resource utilization under the constraints of the existing environment and equipment.
Deep Reinforcement Learning for Autonomous Internet of Things: Model, Applications and Challenges
TLDR
A tutorial of DRL is provided, and a general model for the applications of RL/DRL in AIoT is proposed, where the existing works are classified and summarized under the umbrella of the proposed general DRL model.
...
...

References

SHOWING 1-10 OF 63 REFERENCES
Mobile-Edge Computing for Vehicular Networks: A Promising Network Paradigm with Predictive Off-Loading
TLDR
A cloud-based mobile-edge computing (MEC) off-loading framework in vehicular networks is proposed, where tasks are adaptively off-loaded to the MEC servers through direct uploading or predictive relay transmissions, which greatly reduces the cost of computation and improves task transmission efficiency.
User Scheduling and Resource Allocation in HetNets With Hybrid Energy Supply: An Actor-Critic Reinforcement Learning Approach
TLDR
This paper investigates the optimal policy for user scheduling and resource allocation in HetNets powered by hybrid energy with the purpose of maximizing energy efficiency of the overall network and demonstrates the convergence property of the proposed algorithm.
Vehicular Fog Computing: A Viewpoint of Vehicles as the Infrastructures
TLDR
An interesting relationship among the communication capability, connectivity, and mobility of vehicles is unveiled, and characteristics of parking behavior patterns are identified, which benefits the understanding of how to utilize vehicular resources.
A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning
  • Ning Liu, Zhe Li, Yanzhi Wang
  • 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), 2017
TLDR
The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with large state spaces, is adopted to solve the global-tier problem, and the proposed framework can achieve the best trade-off between latency and power/energy consumption in a server cluster.
A deep learning approach for optimizing content delivering in cache-enabled HetNet
TLDR
This paper trains the optimization algorithms through a deep neural network (DNN) in advance, instead of directly applying them in real-time caching or scheduling, which allows a significant complexity reduction in the delay-sensitive operation phase since the computational burden is shifted to the DNN training phase.
Optimal Schedule of Mobile Edge Computing for Internet of Things Using Partial Information
TLDR
This paper generates asymptotically optimal schedules that are tolerant to out-of-date network knowledge, thereby relieving the stringent requirements on feedback and dramatically reducing the amount of feedback at no cost to optimality.
Routing or Computing? The Paradigm Shift Towards Intelligent Computer Network Packet Transmission Based on Deep Learning
TLDR
Simulation results demonstrate that the proposal outperforms the benchmark method in terms of delay, throughput, and signaling overhead, and show how the uniquely characterized input and output traffic patterns can enhance the route computation of the deep-learning-based SDRs.
On the Serviceability of Mobile Vehicular Cloudlets in a Large-Scale Urban Environment
TLDR
The concept of serviceability is introduced to measure the ability of an MVC to provide cloud computing service, and the serviceability is found to be related to the delay tolerance of the undertaken computational task, which can be described by two characteristic parameters.
Experience-driven Networking: A Deep Reinforcement Learning based Approach
TLDR
A novel experience-driven approach that can learn to control a communication network well from its own experience rather than from an accurate mathematical model, just as a human learns a new skill (such as driving or swimming).
Distributed Multiuser Computation Offloading for Cloudlet-Based Mobile Cloud Computing: A Game-Theoretic Machine Learning Approach
TLDR
This paper proposes a fully distributed computation offloading (FDCO) algorithm based on machine learning technology and theoretically analyzes the performance of the proposed FDCO algorithm in terms of the number of beneficial cloudlet computing mobile devices and the system-wide execution cost.
...
...