Conflict-driven learning scheme for multi-agent based intrusion detection in internet of things
Abstract
This paper introduces an effective intrusion detection system (IDS) for the internet of things (IoT) that employs a conflict-driven learning model within a multi-agent architecture to enhance network security. The proposed IDS implements a double deep Q-network (DDQN) reinforcement learning algorithm with two specialized agents: the defender and the challenger. These agents engage in an antagonistic adaptation process, dynamically refining their strategies through continual interaction within a custom environment built with OpenAI Gym. The defender agent aims to identify and mitigate threats by matching the actions of the challenger agent, which simulates potential attacks in the environment. The study introduces a binary reward mechanism that encourages both agents to explore and exploit different actions and to discover new strategies in response to adversarial behavior. The results demonstrate the effectiveness of the proposed IDS in terms of a higher detection rate; the comparative analysis also validates the proposed scheme, which achieves an accuracy of approximately 96%, outperforming similar existing approaches.
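The two-agent, binary-reward interaction described above can be illustrated with a minimal sketch. This is not the paper's implementation: it uses tabular double Q-learning as a dependency-free stand-in for the neural DDQN, a toy environment class in place of the authors' custom OpenAI Gym environment, and a random challenger rather than a learning one. All class names, action counts, and hyperparameters are hypothetical.

```python
import random

class AttackDefenseEnv:
    """Toy stand-in for the custom Gym-style environment (illustrative only).
    The challenger picks an attack type; the defender must match it."""
    N_ACTIONS = 4  # hypothetical number of attack/defense types

    def reset(self):
        self.attack = random.randrange(self.N_ACTIONS)  # challenger's move
        return self.attack  # observation: current attack signature (simplified)

    def step(self, defense):
        # Binary reward: +1 if the defender matches the attack, -1 otherwise
        return 1 if defense == self.attack else -1

class DoubleQAgent:
    """Tabular double Q-learning, a simplified stand-in for neural DDQN."""
    def __init__(self, n_states, n_actions, eps=0.1, alpha=0.5):
        self.qa = [[0.0] * n_actions for _ in range(n_states)]
        self.qb = [[0.0] * n_actions for _ in range(n_states)]
        self.eps, self.alpha, self.n_actions = eps, alpha, n_actions

    def act(self, s):
        if random.random() < self.eps:  # epsilon-greedy exploration
            return random.randrange(self.n_actions)
        combined = [a + b for a, b in zip(self.qa[s], self.qb[s])]
        return combined.index(max(combined))

    def learn(self, s, a, r):
        # Randomly update one of the two tables toward the observed reward,
        # decoupling selection from evaluation as in double Q-learning
        table = self.qa if random.random() < 0.5 else self.qb
        table[s][a] += self.alpha * (r - table[s][a])

random.seed(0)
env = AttackDefenseEnv()
defender = DoubleQAgent(AttackDefenseEnv.N_ACTIONS, AttackDefenseEnv.N_ACTIONS)
for _ in range(2000):  # antagonistic interaction episodes
    s = env.reset()
    a = defender.act(s)
    defender.learn(s, a, env.step(a))

# Evaluate the defender's detection rate greedily (no exploration)
defender.eps = 0.0
hits = sum(env.step(defender.act(env.reset())) == 1 for _ in range(500))
print("detection rate:", hits / 500)
```

The sketch keeps only the core idea of the scheme: a binary reward that pushes the defender to mirror the challenger's actions, learned through repeated episodes of interaction.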
Keywords
Internet of things; Intrusion detection system; Multi-agent system; Reinforcement learning; Security
Full Text: PDF
DOI: http://doi.org/10.11591/ijece.v14i5.pp5543-5553
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).