RESEARCH PRODUCT

Designing a multi-layer edge-computing platform for energy-efficient and delay-aware offloading in vehicular networks

Giuseppe Faraci, Christian Grasso, Giovanni Schembra, Fabio Busacca, Sergio Palazzo

subject

5G; Edge Computing; Markov Models; Reinforcement Learning; Vehicular Networks; Vehicular ad hoc network; Computer Networks and Communications; Computer science; Distributed computing; Load balancing (computing); Domain (software engineering); Server; Markov decision process; Efficient energy use

description

Vehicular networks are expected to support many time-critical services requiring huge amounts of computation resources with very low delay. However, such requirements may not be fully met by vehicle on-board devices due to their limited processing and storage capabilities. The solution provided by 5G is the application of the Multi-Access Edge Computing (MEC) paradigm, which represents a low-latency alternative to remote clouds. Accordingly, we envision a multi-layer job-offloading scheme based on three levels, i.e., the Vehicular Domain, the MEC Domain, and the Backhaul Network Domain. In this view, jobs can be offloaded from the Vehicular Domain to the MEC Domain, and even further offloaded between MEC Servers for load-balancing purposes. We also propose a framework based on a Markov Decision Process (MDP) to model the interactions among stakeholders working at the three different layers. Such an MDP model allows a Reinforcement Learning (RL) algorithm to make optimal decisions on both the number of jobs to offload between MEC Servers and the amount of computing power to allocate to each job. An extensive numerical analysis demonstrates the effectiveness of our algorithm in comparison with static policies that do not apply RL.
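The abstract describes an RL agent deciding, per MEC Server, how many jobs to offload to a neighboring server. The following is a minimal, self-contained sketch of that idea using tabular Q-learning; the state space, arrival process, cost weights, and all constants are illustrative assumptions, not the model or parameters from the paper.

```python
import random

# Hypothetical per-MEC-server decision problem (illustrative only):
#   state  = number of jobs queued at the local MEC Server (0..MAX_Q)
#   action = number of jobs to offload to a neighboring server (0..MAX_OFF)
#   reward = -(delay cost of local queue + energy cost of offloading)
MAX_Q, MAX_OFF = 10, 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration
DELAY_W, ENERGY_W = 1.0, 0.5        # assumed cost weights

def step(state, action):
    """Toy environment: offload, serve one job, then new jobs arrive."""
    offloaded = min(action, state)
    remaining = state - offloaded
    served = min(remaining, 1)               # local server processes one job per slot
    arrivals = random.randint(0, 3)          # assumed arrival process
    next_state = min(remaining - served + arrivals, MAX_Q)
    reward = -(DELAY_W * remaining + ENERGY_W * offloaded)
    return next_state, reward

def train(episodes=2000, horizon=50, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    random.seed(seed)
    q = [[0.0] * (MAX_OFF + 1) for _ in range(MAX_Q + 1)]
    for _ in range(episodes):
        s = random.randint(0, MAX_Q)
        for _ in range(horizon):
            if random.random() < EPS:
                a = random.randint(0, MAX_OFF)
            else:
                a = max(range(MAX_OFF + 1), key=lambda x: q[s][x])
            s2, r = step(s, a)
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy offloading policy extracted from the learned Q-table.
policy = [max(range(MAX_OFF + 1), key=lambda a: q[s][a]) for s in range(MAX_Q + 1)]
```

A static baseline (e.g., always offload a fixed number of jobs) would correspond to a constant `policy`, which is the kind of non-RL comparison the abstract mentions.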

https://doi.org/10.1016/j.comnet.2021.108330