An RSU-crossed dependent task offloading scheme for vehicular edge computing based on deep reinforcement learning Online publication date: Wed, 03-May-2023
by Xiang Bi; Jianing Shi; Benhong Zhang; Zengwei Lyu; Lingjie Huang
International Journal of Sensor Networks (IJSNET), Vol. 41, No. 4, 2023
Abstract: Various interdependent and computationally intensive on-vehicle tasks have put great pressure on the computing power of vehicles. Vehicular edge computing (VEC) is considered a promising paradigm for solving this problem. However, due to their high mobility, vehicles pass through multiple road-side units (RSUs) while a task is being computed, so coordinating the offloading decisions across RSUs is a challenge. In this study, we propose a dependent task offloading scheme that considers vehicle mobility, service availability, and task priority. To coordinate the offloading decisions among the RSUs, a Markov decision process (MDP) is carefully designed in which the action of each RSU is divided into three steps that separately decide whether, where, and how each task is offloaded. An advanced DDPG-based deep reinforcement learning (DRL) algorithm is then adopted to solve this problem. Simulation results show that the proposed scheme performs better in reducing task processing latency and consumption.
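The abstract's three-step action decomposition (whether, where, and how a task is offloaded) can be illustrated with a minimal sketch. This is not the paper's method: all names (`Task`, `OffloadAction`, `decide_offload`) and the greedy deadline-based policy are illustrative assumptions standing in for the learned DDPG policy.

```python
# Hypothetical sketch of a composite "whether / where / how" offloading
# action, as described in the abstract. The greedy rule below is a stand-in
# for the paper's learned DRL policy, not a reproduction of it.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Task:
    cycles: float      # CPU cycles required to process the task
    data_bits: float   # input data size in bits
    priority: int      # higher value = more urgent task

@dataclass
class OffloadAction:
    offload: bool              # step 1: whether to offload at all
    target_rsu: Optional[int]  # step 2: where (RSU index), None if local
    cpu_share: float           # step 3: how much edge CPU to allocate, in [0, 1]

def decide_offload(task: Task, local_cpu_hz: float,
                   rsu_cpu_hz: List[float], deadline_s: float) -> OffloadAction:
    """Greedy baseline: offload only if local execution would miss the
    deadline, then pick the fastest RSU and give higher-priority tasks
    a larger share of the edge CPU."""
    local_latency = task.cycles / local_cpu_hz
    if local_latency <= deadline_s or not rsu_cpu_hz:
        return OffloadAction(offload=False, target_rsu=None, cpu_share=0.0)
    best = max(range(len(rsu_cpu_hz)), key=lambda i: rsu_cpu_hz[i])
    share = min(1.0, 0.5 + 0.1 * task.priority)
    return OffloadAction(offload=True, target_rsu=best, cpu_share=share)
```

In the paper, an RSU's policy network would output this composite action for each dependent task; the sketch only shows how the three sub-decisions compose into one action.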