Federated Reinforcement Learning for Edge AI Decision-Making in 6G-Enabled V2X Systems
DOI:
https://doi.org/10.47363/JAICC/2025(4)472

Keywords:
6G, Vehicular Networks, V2X, Federated Reinforcement Learning, Edge Intelligence, Ultra‑Reliable Low‑Latency Communications, SUMO, ns‑3

Abstract
The evolution toward sixth‑generation (6G) networks introduces transformative capabilities for intelligent transportation, particularly through ultra‑reliable, low‑latency vehicle‑to‑everything (V2X) communication. As autonomous and connected vehicles generate vast amounts of data at the edge, conventional centralized learning approaches are increasingly constrained by privacy, bandwidth, and latency limitations. In this paper, we present a federated reinforcement learning (FRL) framework that enables distributed edge agents—such as vehicles and roadside units—to collaboratively learn real‑time decision policies for navigation, collision avoidance, and traffic optimization without sharing raw data. Our approach models the V2X environment as a decentralized multi‑agent Markov decision process (MDP) and introduces an adaptive aggregation mechanism that accounts for node mobility and communication variability. We implement and evaluate the framework in a co‑simulation environment that integrates SUMO for traffic dynamics and ns‑3 for network emulation. Experimental results demonstrate that our FRL method outperforms centralized baselines, reducing average decision latency by 32 percent while preserving data privacy and achieving robust convergence under intermittent connectivity. This work advances the deployment of edge AI in future vehicular ecosystems, providing a scalable, privacy‑preserving foundation for real‑time intelligence in 6G‑enabled V2X systems.
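To make the adaptive aggregation idea concrete, the sketch below shows one plausible form of mobility- and connectivity-aware weighting for federated updates. This is an illustrative assumption, not the paper's actual algorithm: the function name, the staleness/link-quality inputs, and the specific weighting rule (link quality divided by one plus update staleness) are all hypothetical choices made here for exposition.

```python
import numpy as np

def adaptive_aggregate(client_params, staleness, link_quality):
    """Hypothetical mobility-aware federated aggregation.

    Weights each edge agent's parameter vector by its link quality and
    down-weights stale updates (e.g. from fast-moving or intermittently
    connected vehicles), then returns the weighted average.
    """
    staleness = np.asarray(staleness, dtype=float)
    link_quality = np.asarray(link_quality, dtype=float)
    # Assumed weighting rule: better links count more, older updates less.
    weights = link_quality / (1.0 + staleness)
    weights /= weights.sum()
    stacked = np.stack(client_params)  # shape: (n_agents, n_params)
    return weights @ stacked           # weighted average of parameters

# Example: two agents with equal links and fresh updates -> plain mean.
agg = adaptive_aggregate([np.array([1.0, 1.0]), np.array([3.0, 3.0])],
                         staleness=[0, 0], link_quality=[1.0, 1.0])
```

In a full system, a scheme like this would run at the aggregation point (e.g. a roadside unit) each round, with staleness and link-quality estimates derived from the network layer; vanilla FedAvg is the special case where all weights are equal.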
License
Copyright (c) 2025 Journal of Artificial Intelligence & Cloud Computing

This work is licensed under a Creative Commons Attribution 4.0 International License.