Multi-Agent Deep Reinforcement Learning for Collaborative Computation Offloading in Mobile Edge Computing
In this work, we study collaborative computation offloading in mobile edge computing (MEC) to support computation-intensive applications. Mobile devices (MDs) can offload their computation to edge nodes (ENs), and we further leverage edge-to-edge offloading to enhance the MEC system's computing capability. Collaborative offloading, however, presents significant challenges, as it requires real-time, decentralized decision-making in a highly dynamic MEC environment. We design a queue-based multi-layer system model and formulate the joint offloading problem as a decentralized partially observable Markov decision process (Dec-POMDP), in which each MD and EN constructs and trains an offloading agent to achieve high performance and efficient resource utilization. To solve the formulated problem, we propose a multi-agent deep reinforcement learning (DRL)-based approach in which multiple agents collaborate to make distributed decisions toward a global objective in an uncertain MEC environment.
Mobile edge computing, Computation offloading, Dec-POMDP, Multi-agent deep reinforcement learning
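To make the Dec-POMDP formulation concrete, the following is a minimal, illustrative sketch, not the paper's implementation: each decentralized agent (an MD or EN) acts on its local observation only and is trained with an independent DQN-style update. The observation contents, action set (e.g., compute locally, offload to an EN, offload EN-to-EN), network size, and hyperparameters are all assumptions for illustration.

```python
# Illustrative sketch only (assumptions, not the paper's method): independent
# DQN-style agents for decentralized offloading under partial observability.
import random
import torch
import torch.nn as nn


class QNet(nn.Module):
    """Small MLP mapping a local observation to Q-values over offloading actions."""

    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class OffloadingAgent:
    """One decentralized agent (an MD or EN) acting on its local observation only."""

    def __init__(self, obs_dim: int, n_actions: int, eps: float = 0.1):
        self.q = QNet(obs_dim, n_actions)
        self.opt = torch.optim.Adam(self.q.parameters(), lr=1e-3)
        self.eps = eps
        self.n_actions = n_actions

    def act(self, obs: list[float]) -> int:
        """Epsilon-greedy action over the local offloading choices."""
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            return int(self.q(torch.tensor(obs, dtype=torch.float32)).argmax())

    def update(self, obs, action, reward, next_obs, gamma: float = 0.99):
        """One-step TD update; a full system would add replay buffers and target nets."""
        q = self.q(torch.tensor(obs, dtype=torch.float32))[action]
        with torch.no_grad():
            target = reward + gamma * self.q(
                torch.tensor(next_obs, dtype=torch.float32)
            ).max()
        loss = (q - target) ** 2
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()


if __name__ == "__main__":
    # Hypothetical observation: normalized local queue length, CPU load,
    # channel quality, and neighbor-EN queue estimate.
    agent = OffloadingAgent(obs_dim=4, n_actions=3)  # local / to-EN / EN-to-EN
    obs = [0.5, 0.2, 0.1, 0.8]
    a = agent.act(obs)
    agent.update(obs, a, reward=-0.3, next_obs=[0.4, 0.3, 0.1, 0.7])
```

In the full collaborative setting, every MD and EN would run such an agent over its local queue state; fully independent updates as shown here are the simplest baseline, and centralized training with decentralized execution is a common alternative for coordinating agents toward a global objective.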