Evolutionary reinforcement learning multi-agent system for intelligent traffic light control: a new approach and case study
Abstract
Due to the rapid growth of urban vehicle numbers, traffic congestion has become increasingly severe. Signalized intersections are used all over the world and are still installed in new construction. This paper proposes a self-adaptive approach, called the evolutionary reinforcement learning multi-agent system (ERL-MA), which combines computational intelligence and machine learning. The goal of this work is to build an intelligent agent capable of developing advanced skills to manage the traffic light control system at any type of junction, using two powerful tools: learning from encountered experience, and exploration through randomization. The ERL-MA is an independent multi-agent system composed of two layers: a modeling layer and a decision layer. The modeling layer represents the intersection using a generalized fuzzy graph technique. The decision layer uses two methods: a novel greedy genetic algorithm (NGGA) and Q-learning. For the Q-learning method, a multi Q-table strategy and a new reward formula are proposed. The experiments in this work rely on a real case study: a simulated one-hour scenario of the Pasubio area in Italy. The results show that the ERL-MA system achieves competitive performance compared to the urban traffic optimization by integrated automation (UTOPIA) system across different metrics.
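To make the Q-learning component of the decision layer concrete, the following is a minimal sketch of a multi Q-table learner with epsilon-greedy exploration. This is illustrative only: the paper's exact state encoding, multi Q-table partitioning, and proposed reward formula are not reproduced here, so the per-phase table layout, the queue-based reward, and all hyperparameters below are assumptions.

```python
# Illustrative sketch, NOT the paper's implementation: the multi Q-table
# partitioning, state encoding, reward (negative total queue length), and
# hyperparameters are assumed for demonstration purposes only.
import random
from collections import defaultdict


class MultiTableQLearner:
    """Maintains one Q-table per signal phase group (hypothetical
    structure standing in for the paper's multi Q-table strategy)."""

    def __init__(self, n_tables, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.tables = [defaultdict(float) for _ in range(n_tables)]
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, table_id, state):
        # Epsilon-greedy action selection: explore with probability
        # epsilon (the "randomization concept"), otherwise exploit.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        q = self.tables[table_id]
        return max(self.actions, key=lambda a: q[(state, a)])

    def update(self, table_id, state, action, reward, next_state):
        # Standard tabular Q-learning update applied to one table.
        q = self.tables[table_id]
        best_next = max(q[(next_state, a)] for a in self.actions)
        q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - q[(state, action)]
        )


def queue_reward(queue_lengths):
    # Assumed reward: negative sum of vehicle queue lengths, so that
    # shorter queues yield a higher reward.
    return -sum(queue_lengths)
```

In this sketch, each phase group learns independently in its own table; actions could be signal decisions such as extending or switching the current green phase.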
Keywords
adaptive traffic signal control; fuzzy modeling; genetic algorithm; Q-learning; reinforcement learning; traffic simulation;
DOI: http://doi.org/10.11591/ijece.v12i5.pp5519-5530
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).