Non-linear, Reinforcement Learning-based Algorithm for Real-Time MANET Configuration and Threat Prediction


K. Purnima, M. N. Giriprasad

Abstract

Mobile Ad Hoc Networks (MANETs) face dynamic topologies, high node mobility, and heightened vulnerability to attacks such as black-hole and distributed denial-of-service (DDoS) attacks. This study develops a non-linear reinforcement learning-based algorithm (DRL-MANET) for real-time network configuration and threat prediction, aiming to maintain efficient and secure operation under unpredictable conditions. The approach uses deep reinforcement learning (DRL) to optimize network decisions from real-time feedback, while an LSTM-based anomaly detection system identifies and mitigates threats by feeding its detection outputs into the decision-making process. Federated learning enables decentralized model training, with privacy preserved through differential privacy and blockchain mechanisms; hierarchical clustering and adaptive updates reduce computational overhead and support scalability. Simulation results show a packet delivery rate of 97.2%, a threat detection accuracy of 96.8%, and a 7% throughput reduction when scaling to 150 nodes. Compared with MA3DQN and EDRL, DRL-MANET achieves lower latency, faster recovery from node failures, and better resource management under high traffic, variable mobility, and evolving attack scenarios. The proposed algorithm offers a secure, scalable, and adaptable framework for managing dynamic MANET environments while respecting privacy and resource constraints.
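The feedback-driven configuration loop described above can be illustrated with a minimal sketch. The paper uses deep reinforcement learning; the tabular Q-learning stand-in below, with hypothetical discretized states ("low_load"/"high_load"), configuration actions ("short_route"/"stable_route"), and a toy reward, shows only the core idea of updating decisions from observed network feedback.

```python
import random

# Hypothetical discretized network states and configuration actions;
# the actual DRL-MANET model operates on richer, continuous observations.
STATES = ["low_load", "high_load"]
ACTIONS = ["short_route", "stable_route"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Q-table: expected long-run reward of each (state, action) pair
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state, rng):
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward(state, action):
    # Toy reward (assumption): stable routes pay off under high load,
    # short routes pay off under low load.
    return 1.0 if (state == "high_load") == (action == "stable_route") else 0.0

def update(state, action, next_state):
    # Standard Q-learning update from real-time feedback
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (
        reward(state, action) + GAMMA * best_next - Q[(state, action)]
    )

rng = random.Random(0)
state = "low_load"
for _ in range(2000):
    action = choose_action(state, rng)
    next_state = rng.choice(STATES)  # toy random environment transition
    update(state, action, next_state)
    state = next_state
```

After training, the learned Q-values favor stable routes under high load and short routes under low load, mirroring how the full DRL agent would adapt configuration choices to observed conditions.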
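The privacy-preserving federated aggregation can likewise be sketched in simplified form. The snippet below shows one federated-averaging round with per-client update clipping and Gaussian noise for differential privacy; the clip norm, noise scale, and example updates are illustrative assumptions, and the blockchain mechanisms mentioned in the abstract are not modeled here.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_std=0.01, seed=0):
    """Clip each client's update to clip_norm, average, then add Gaussian noise.

    Clipping bounds any single node's influence; the noise provides a
    differential-privacy-style guarantee on the aggregate.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        scale = min(1.0, clip_norm / max(norm, 1e-12))
        clipped.append(u * scale)
    mean = np.mean(clipped, axis=0)
    return mean + rng.normal(0.0, noise_std, size=mean.shape)

# Example: three nodes contribute local model updates without sharing raw data;
# the outlier update [5.0, 0.0] is clipped down before averaging.
updates = [np.array([0.2, -0.1]), np.array([5.0, 0.0]), np.array([0.1, 0.3])]
agg = dp_federated_average(updates)
```

Clipping keeps the second node's oversized update from dominating the aggregate, which is also a useful robustness property when some nodes may be compromised.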
