Enhancing the stability of deep neural networks using a non-constant learning rate for data streams
Abstract
Data streams are the backbone of many real-world applications, and these applications are most effective when they use modern machine-learning techniques such as deep neural networks (DNNs). DNNs are highly sensitive to their parameter settings, the most prominent of which is the learning rate. Choosing an appropriate learning rate is critical because it controls the overall performance of the network. This paper presents a newly developed DNN model with a multi-layer perceptron (MLP) structure whose training is based on an optimal learning rate. The model consists of three hidden layers and, rather than keeping the learning rate fixed, uses a non-constant value that varies over time to reach an optimal learning rate that reduces the error at each iteration and increases model accuracy. This is achieved by deriving a new parameter that is added to or subtracted from the learning rate. The proposed model is evaluated on three streaming datasets: electricity, network security layer-knowledge discovery in database (NSL-KDD), and the human gait database (HuGaDB). The results show that the proposed model outperforms the constant-learning-rate model and previous models in terms of accuracy, achieving 88.16%, 98.67%, and 97.63% on the three datasets, respectively.
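The abstract does not spell out the update rule, but the mechanism it describes (a derived term added to or subtracted from the learning rate as training proceeds) can be illustrated with a minimal sketch. The sketch below is an assumption, not the authors' derivation: the class name AdaptiveLR, the fixed step delta, the clipping bounds, and the rule of adding delta when the per-iteration error falls and subtracting it when the error rises are all hypothetical choices.

```python
class AdaptiveLR:
    """Hypothetical non-constant learning-rate schedule.

    A small delta is added to the current rate when the training error
    improved over the previous iteration, and subtracted when it worsened.
    This is an illustrative guess at the paper's mechanism, not its
    actual derivation.
    """

    def __init__(self, lr=0.1, delta=0.01, lr_min=1e-4, lr_max=1.0):
        self.lr = lr            # current learning rate
        self.delta = delta      # assumed fixed adjustment step
        self.lr_min = lr_min    # lower clipping bound
        self.lr_max = lr_max    # upper clipping bound
        self.prev_error = None  # error from the previous iteration

    def update(self, error):
        # Raise the rate if the error fell, lower it if the error rose.
        if self.prev_error is not None:
            if error < self.prev_error:
                self.lr += self.delta
            else:
                self.lr -= self.delta
        self.prev_error = error
        # Clipping keeps the rate positive and bounded, so repeated
        # subtractions cannot drive it negative.
        self.lr = min(max(self.lr, self.lr_min), self.lr_max)
        return self.lr


# Demo: feed simulated per-iteration training errors.
sched = AdaptiveLR(lr=0.1, delta=0.01)
for err in [0.90, 0.70, 0.75, 0.60]:
    print(f"error={err:.2f} -> lr={sched.update(err):.3f}")
```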
Keywords
data stream; deep neural network; learning rate; machine learning; network performance
DOI: http://doi.org/10.11591/ijece.v13i2.pp2123-2130
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).