Adaptive Hierarchical Reinforcement Learning for Decoupled Service Caching and Computation Offloading in Mobile Edge-Cloud Network

T. Sai Lalith Prasad, Kanneti Venkata Yeshwanth Reddy, Kummari Deepna, Yanala Deepika

Abstract

In modern mobile edge-cloud computing environments, efficient management of computation and caching resources is essential to meet the growing demands of latency-sensitive applications. Traditional deep reinforcement learning-based offloading frameworks focus mainly on single-agent optimization, which struggles to handle complex, coupled decisions under dynamic network conditions and often leads to high system overhead and poor scalability. To address this, the proposed work introduces an Adaptive Hierarchical Reinforcement Learning (HRL) framework that decouples decision making into two layers: a high-level agent coordinates global resource allocation between edge and cloud servers, while low-level agents manage local service caching and computation offloading tasks. Using multi-agent reinforcement learning algorithms such as MAPPO and MADDPG, the system enables cooperative and adaptive learning among distributed agents. This approach improves learning stability, reduces latency, and enhances energy efficiency, making it a more intelligent and scalable solution for modern mobile edge-cloud networks.
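The two-layer decomposition described above can be sketched in code. This is a minimal, hypothetical illustration only: the class names, the proportional-allocation policy, and the LFU-style cache are assumptions for exposition, not the paper's actual learned policies (which the abstract says are trained with MAPPO/MADDPG).

```python
import random

class HighLevelAgent:
    """Illustrative stand-in for the high-level agent that coordinates
    global resource allocation between edge and cloud servers."""

    def allocate(self, edge_load, cloud_load):
        # Toy policy: direct tasks to the edge in proportion to its
        # spare capacity relative to the cloud's spare capacity.
        spare_edge = max(0.0, 1.0 - edge_load)
        spare_cloud = max(0.0, 1.0 - cloud_load)
        total = (spare_edge + spare_cloud) or 1.0
        return spare_edge / total  # fraction of tasks sent to the edge


class LowLevelAgent:
    """Illustrative stand-in for a low-level agent that manages local
    service caching and computation offloading at one edge node."""

    def __init__(self, cache_size):
        self.cache = {}  # service_id -> hit count
        self.cache_size = cache_size

    def decide(self, task, edge_fraction):
        # Offload to the edge only if the required service is cached
        # locally and the high-level agent has budgeted edge capacity;
        # otherwise fall back to the cloud.
        if task["service"] in self.cache and random.random() < edge_fraction:
            self.cache[task["service"]] += 1
            return "edge"
        return "cloud"

    def update_cache(self, service):
        # Simple LFU-style eviction when the cache is full.
        if service not in self.cache and len(self.cache) >= self.cache_size:
            evict = min(self.cache, key=self.cache.get)
            del self.cache[evict]
        self.cache.setdefault(service, 0)
```

In the actual framework these hand-written rules would be replaced by policies learned cooperatively by the distributed agents; the sketch only shows how the decision responsibilities are split between the two layers.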
