Gated recurrent unit decision model for device argumentation in ambient assisted living



INTRODUCTION
The aging population is growing rapidly worldwide, and elderly people living alone face various problems in performing daily activities and thus require continuous assistance. Caring for elderly individuals has become challenging due to the hectic nature of modern lifestyles among caretakers. Ambient assisted living (AAL) is a viable solution that utilizes information and communications technology (ICT) to help elderly and disabled individuals lead independent lives. AAL employs machine learning and ubiquitous networks to enhance the lives of elderly and disabled individuals, predicting daily activity behavior in real time [1].
A survey was conducted on research and skills related to AAL systems. The trends in AAL system development were analyzed from technological and methodological perspectives, and key issues for further investigation were identified [2]. A proposed internet of things (IoT) framework [3] combines a calibrated random forest with domain-based action rules for human activity recognition, addressing uncertainty effectively; it was tested in two cases. The CleFAR algorithm [4] outperformed several methods, achieving 91% accuracy in social IoT fall detection. A deep learning framework [5] detected suspicious activities in secure IoT-assisted living using convolutional and recurrent neural networks. An activity recognition system [6] combining video, inertial, and ambient sensor data showed promise for enhancing the lives of the elderly and disabled. An AAL service survey [7] revealed the need for a multi-sided market approach for older adults. Co-evolutionary hybrid intelligence research [8] discussed artificial intelligence (AI) techniques, industry applications, and AI ethics. The impact of symmetrical models on AAL systems was studied in [9]. A novel argumentation-based approach was proposed for resolving conflicting preferences in AAL settings, offering promise in addressing user-centric conflicts efficiently and effectively [10]. An insightful approach to predicting cardiovascular diseases using supervised learning techniques was addressed in [11]. The novel integration of transformers and a bidirectional gated recurrent unit (GRU) in [12] shows promise for complex, multi-resident activity recognition in AAL systems. Additionally, [13] comprehensively analyzes AI's potential in AAL, while [14] highlights IoT's role in enhancing elderly monitoring. A comprehensive and well-structured approach for leveraging AAL technologies was presented in [15]; it provides valuable insights and guidelines for making informed decisions when implementing such systems, and the framework's practicality and effectiveness make it a valuable resource for researchers and practitioners. To leverage technologies for elderly care, a comprehensive approach was offered in [16]. By integrating IoT, smart sensors, and GRU deep learning, it presented a promising framework that addresses the challenges of providing enhanced assistance to the elderly population.
An innovative approach to classifying user activities in AAL environments was presented in [17]. The proposed ensemble method demonstrates promising results in accurately identifying various activities, contributing to the advancement of assisted living technologies. A compelling clinical trial demonstrating the positive impact of ambient assisted living on the quality of life of elderly individuals in Chile was presented in [18]. The findings highlight the potential benefits of this technology in enhancing the well-being of older adults. A reliable social IoT alert system for AAL was presented in [19]. It effectively addresses the needs of elderly individuals by generating timely alerts, enhancing their safety and well-being, and demonstrates the potential of IoT technology in promoting independent living for seniors. An innovative approach for detecting and classifying postures using neural networks was addressed in [20]. The study offers valuable insights into improving the quality of life of individuals in ambient assisted living environments. An effective approach using a hidden Markov model to predict the dependency evolution of the elderly in assisted living environments was proposed in [21]. The method shows promise for early detection and intervention in elderly care. Valuable insights for designing ambient assisted living environments that promote independence and well-being for older adults and individuals with cognitive disabilities were offered in [22], highlighting the importance of architecture in supporting their unique needs. A k-nearest neighbors (KNN) approach for making device-related decisions in AAL environments was proposed in [23]. The model effectively addresses the challenge of device selection and enhances the quality of life of individuals in need of assistance. The activity recognition with ambient sensing (ARAS) dataset [24] contains human activity data from various homes, covering activities such as cooking, cleaning, and entertainment, collected using different sensors. The dataset includes information on data collection, the hardware and software used, privacy protection, statistical analysis, and comparison with other datasets. The continuously annotated signals of emotion (CASE) dataset [25] was created with continuous affect annotations and physiological signals (electrocardiogram (ECG) and electrodermal activity (EDA)), which can be used for emotion analysis research. The dataset includes information on data acquisition, participant numbers, signals, and correlations between physiological signals and affect annotations.
However, none of the cited works discussed device interactions and device argumentation for activity occurrence, which may lead to conflicting decisions in identifying the performed activity. Thus, to address the issue of device argumentation, a GRU deep learning technique is proposed to identify activities in AAL systems. A decision model is also proposed to determine the target activity during device argumentation for activity occurrence. Hence, the problem statement for the proposed work can be stated as follows: "The AAL environment comprises multiple heterogeneous devices equipped with different sensors to capture the daily activities of users. However, these sensors may interact and lead to conflicting decisions among devices about the occurrence of activities. To address this issue, a decision model is necessary to identify and predict the user activity during device argumentation and to help resolve any conflicts that arise." This work makes the following contributions: a GRU deep learning technique is used for the classification of sensor status values and user activities and is compared with other state-of-the-art techniques; a novel device argument identification (DeArId) algorithm is proposed to identify argumentation among devices for activity occurrence in AAL environments; and a gated recurrent unit decision (GRUDEC) algorithm, a modified GRU algorithm, is proposed for decision-making during device argumentation, resolving the arguments among devices and identifying the target activity.
The remainder of the paper is organized as follows: section 2 presents the proposed framework and proposed algorithms. The research method is stated in section 3. Section 4 presents the results and discussion. The conclusion and future work are discussed in section 5.

PROPOSED FRAMEWORK AND ALGORITHM
2.1. Proposed framework
The GRU decision model framework for the AAL environment is depicted in Figure 1. It comprises four main phases. In the first phase, data pre-processing, the dataset underwent exploratory data analysis (EDA), where missing values were filled and categorical features were converted to numeric values. The second phase, classification, uses GRU for two processes: first, classifying the status value of each sensor as low or high, and second, classifying 21 user activities based on sensor values, utilizing its update gate, reset gate, and activation unit layers. In the third phase, device argumentation identification, the objective is to detect arguments that arise between devices during user activities when the devices interact and dispute the probable activity being performed. Arguments are identified in two cases: different activities are identified for the same sensor values, or the same activities are identified for different sensor values. In the last phase, decision-making, conflicts raised between the devices regarding user activity occurrence are resolved by counting the devices involved in the argumentation and examining surrounding devices to determine the activity, which is identified based on the majority of devices. The results are evaluated through an experiment involving 15 elderly participants who provide feedback on the user activity performed, based on factors such as time, location, and exhibited emotions. The majority feedback consensus is considered the overall result and represents the activity performed. Algorithm 2 presents the GRUDEC algorithm for resolving device argumentation about the activity that occurred. It takes as inputs the sensors and their corresponding user activities, as well as the sensor values that result in argumentation. The algorithm considers the devices involved in the argumentation and the surrounding devices to make decisions, counts the activities identified by the devices, and outputs the activity with the majority count as the decision.
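As a concrete illustration of the update and reset gates used in the classification phase, the following is a minimal single-step GRU cell in NumPy. The weight shapes follow the standard GRU formulation; the random parameters, dimensions, and the five-step toy sequence are illustrative assumptions, not values from the paper.

```python
# Minimal single-step GRU cell, showing the update gate (z), reset gate (r),
# and candidate activation that the classification phase relies on.
# All parameter values here are random placeholders for a trained model.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step: x is the input vector, h_prev the previous hidden state."""
    z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde          # interpolated new state

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
# Parameter order: Wz, Uz, Wr, Ur, Wh, Uh (input weights alternate with recurrent weights).
params = [rng.standard_normal((n_hid, n_in)) if i % 2 == 0 else
          rng.standard_normal((n_hid, n_hid)) for i in range(6)]
h = np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):           # run over a 5-step toy sequence
    h = gru_step(x, h, *params)
print(h.shape)
```

The gating keeps the hidden state bounded and lets the model retain information across time steps, which is the property the paper exploits for sequential sensor data.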

Algorithm 2. GRUDEC algorithm
Require: K surrounding devices' sensor data
Ensure: Predicted activity label A
1: Initialize the GRU model with weights and biases
2: Initialize an empty list S to store hidden states
3: for k ← 1 to K do
4:    Obtain sensor data X_k from surrounding device k
5:    Obtain the previous hidden state h_(k-1) from device k
6:    Compute the current hidden state h_k using the GRU model and inputs X_k, h_(k-1)
7:    Append h_k to list S
8: end for
9: Concatenate all hidden states in S: H = concatenate(S)
10: Use H as input to a classifier to predict the activity label A
11: return A
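The steps of Algorithm 2 can be sketched as runnable Python. The GRU encoder is stubbed with a fixed tanh projection, and the linear classifier, dimensions, and random sensor data are illustrative assumptions; a trained GRU and classifier would replace them in practice.

```python
# Sketch of GRUDEC: gather a hidden state per surrounding device (steps 3-8),
# concatenate them (step 9), and classify the result (steps 10-11).
import numpy as np

rng = np.random.default_rng(1)
K, n_feat, n_hid, n_act = 4, 6, 5, 3             # devices, features, hidden size, activities
W_enc = rng.standard_normal((n_hid, n_feat))     # stand-in for the trained GRU encoder
W_cls = rng.standard_normal((n_act, K * n_hid))  # linear classifier over concatenated states

def grudec(device_data):
    """device_data: list of K per-device sensor vectors; returns an activity index."""
    states = [np.tanh(W_enc @ x) for x in device_data]  # per-device hidden state
    H = np.concatenate(states)                          # concatenate all hidden states
    scores = W_cls @ H                                  # classifier scores per activity
    return int(np.argmax(scores))                       # predicted activity label A

sensor_readings = [rng.standard_normal(n_feat) for _ in range(K)]
label = grudec(sensor_readings)
print(label)
```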

RESEARCH METHOD
3.1. System model
The system model S_M of an ambient assisted living environment E comprises heterogeneous devices D = {1, 2, 3, ..., n}, equipped with a set of sensors S = {1, 2, 3, ..., m} located at various locations l ∈ L, that allow users to perform various activities A = {a_1, a_2, ..., a_i} at a specific time t. The devices interact and exchange information I about the occurred activities. The system model can be expressed as (1):
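The entities of the system model can be sketched as a small data model; the class names, fields, and example values below are illustrative assumptions, not structures defined in the paper.

```python
# Minimal data-model sketch of S_M: devices with sensors at locations, plus
# timestamped records of the exchanged activity information I.
from dataclasses import dataclass

@dataclass
class Device:
    device_id: int
    sensors: list      # sensor identifiers attached to this device
    location: str      # l in L

@dataclass
class Observation:
    device_id: int
    activity: str      # a_i in A, as identified by this device
    time: float        # time t of the exchanged information

devices = [Device(1, ["PH1", "CS1"], "kitchen"),
           Device(2, ["DS1"], "bedroom")]
log = [Observation(1, "preparing lunch", 12.5),
       Observation(2, "napping", 13.0)]
print(len(devices), len(log))
```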

3.2. Problem formulation
Upon the occurrence of an activity a_i ∈ A, multiple devices D_i ∈ D converge, facilitating a dynamic interaction and exchange of information I at a specific time t. However, this interaction can lead to device argumentation A_r among the devices about the occurred activity, resulting in conflicting decisions. Therefore, a decision model D_M is needed to make the decision during device argumentation and identify the occurred activity. Hence, the objective function can be formulated as (2), subject to the constraint that N is the activity with the highest number of votes.

3.3. Proposed solution
From the objective function (2), to identify device argumentation, the various activities performed by users are first classified using a classifier algorithm. The classification process is represented by (3): this equation takes the extracted features from the sensor data and the exchanged activity information of device i at time t and predicts the corresponding activity label (Pr_AL) using the trained machine learning model (A_C). To identify argumentation between devices D_i and D_j at a specific time t, (4) is defined, where D_i and D_j are devices involved in an argumentation during an activity occurrence at time t. It returns a boolean value (true or false) indicating whether device argumentation is detected between D_i and D_j. The A_r equation checks for inconsistencies in the activities identified by D_i and D_j at the given time. If the predicted activities of the two devices differ (the sensor values are the same, but different activities are identified) while their actual activities are the same, it returns true, indicating the presence of device argumentation; otherwise, it returns false, indicating no argumentation. Therefore, an argument A_r with conditions C_i and C_j can be represented as (5). To resolve device argumentation and decide on the activity that occurred, we consider the surrounding devices and take the majority vote of the activities they identify. To identify the K surrounding devices, we use (6), which selects devices whose distance from D_i is less than or equal to the maximum distance threshold R. The activities recognized by the K surrounding devices are given by (7). Thus, the equation to resolve the argumentation and make a decision is defined as (8), where δ(Act_sur_D[k], a_i) is the Kronecker delta function, which equals 1 when Act_sur_D[k] = a_i and 0 otherwise. The activities of the surrounding Sur_D^K devices are counted, and the activity with the highest number of votes is returned as the target occurred activity.
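The surrounding-device selection and Kronecker-delta majority vote described above can be sketched as follows; the device positions, activity labels, and threshold R are made-up illustrative values, and Euclidean distance is assumed for the distance measure.

```python
# Sketch of equations (6)-(8): select devices within distance R of the arguing
# device, then count each candidate activity (the Kronecker-delta sum) and
# return the activity with the most votes.
import numpy as np

def surrounding_devices(pos_i, positions, R):
    """Equation (6): indices of devices within distance R of position pos_i."""
    return [k for k, p in enumerate(positions)
            if np.linalg.norm(np.array(p) - np.array(pos_i)) <= R]

def decide(activities, candidates):
    """Equation (8): argmax over candidates a_i of sum_k delta(act_k, a_i)."""
    votes = {a: sum(1 for act in activities if act == a) for a in candidates}
    return max(votes, key=votes.get)

positions = [(0, 0), (1, 0), (5, 5), (0, 1)]           # device locations
near = surrounding_devices((0, 0), positions, R=2.0)   # devices 0, 1, 3 qualify
acts = ["cooking", "cooking", "napping", "cooking"]    # per-device identified activities
print(decide([acts[k] for k in near], set(acts)))      # majority among nearby devices
```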

RESULTS AND DISCUSSION
4.1. Data preprocessing
This work utilizes a dataset that comprises the ARAS dataset [24], which captures user activities, and the CASE dataset [25], which captures user emotions. The dataset includes data from 20 sensors attached to various devices, location information, status values, and wearable sensors that detect the emotions exhibited by the user. A total of 21 activities are captured in the dataset. Missing values are filled with the mode value of each column, and the LabelEncoder method is used to convert categorical values to numeric values for each feature. The features of the dataset include age, sex, object, location, time, photocell (PH1), photocell (PH2), photocell (PH3), photocell (PH4), photocell (PH5), photocell (PH6), distance sensor (DS1), distance sensor (DS2), distance sensor (DS3), distance sensor (DS4), infrared receiver (IR1), contact sensor (CS1), contact sensor (CS2), contact sensor (CS3), distance sensor (SD1), distance sensor (SD2), temperature sensor (TS1), force sensor (FS1), force sensor (FS2), force sensor (FS3), status, Ecg, Bvp, Rsp, Gsr, Skt, Emg Coru, Emg Trap, Emg Zygo, and Emotion Exhibited. The target classes of the dataset include: going out, preparing breakfast, having breakfast, preparing lunch, having lunch, preparing dinner, having dinner, washing dishes, having snacks, sleeping, watching TV, studying, having a shower, toileting, napping, reading book, shaving, brushing teeth, talking on the phone, listening to music, and other.
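The mode-filling and LabelEncoder steps can be sketched on a toy frame; the column names and values below only mimic the dataset's shape and are not taken from it.

```python
# Sketch of the preprocessing described above: fill missing values with each
# column's mode, then convert categorical columns to numeric codes.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({
    "location": ["kitchen", None, "bedroom", "kitchen"],
    "PH1": [1.0, 0.0, None, 1.0],
    "activity": ["preparing lunch", "sleeping", "sleeping", None],
})

for col in df.columns:                            # fill gaps with the column mode
    df[col] = df[col].fillna(df[col].mode()[0])

for col in df.select_dtypes(include="object"):    # encode categorical features
    df[col] = LabelEncoder().fit_transform(df[col])

print(df.isna().sum().sum())                      # no missing values remain
```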

4.2. Experimental setup
The experimental setup used for implementing the proposed model employs a 7th-generation Intel CPU running at 2.40 GHz. The system is furnished with 4 GB of RAM and runs on the Windows operating system. Python 3.7 is the primary programming language utilized in this experiment, supplemented by key libraries such as scipy 1.6.3, pandas 1.2.3, numpy 1.20.1, and matplotlib 3.4.1. Furthermore, the experiment relies on scikit-learn version 0.24.2 to facilitate machine learning and data analysis tasks, and it utilizes PyQt5, a Python interface for the cross-platform Qt GUI library.

4.3. Model evaluation
An experiment was carried out to assess the performance of the decision-making algorithms using the datasets [24], [25]. The dataset was split into train and test sets with ratios of 90:10, 80:10, and 70:30, and the average performance scores were considered. The GRU algorithm was used to classify the sensor status values and user activities. The performance results of GRU were compared with the SVM and decision tree algorithms. The performance metrics of decision-making for device argumentation in AAL were also evaluated.
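The evaluation protocol (a train/test split followed by accuracy, precision, recall, and F1 scoring) can be sketched with scikit-learn; the decision tree classifier and the synthetic two-class data below stand in for the GRU model and the real dataset, and a single 90:10 split is shown.

```python
# Sketch of the evaluation loop: split, fit, predict, then compute the four
# metrics reported later. Data and classifier are illustrative stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 8))
y = (X[:, 0] > 0).astype(int)              # toy two-class "activity" labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

acc = accuracy_score(y_te, pred)
prec = precision_score(y_te, pred, average="macro")
rec = recall_score(y_te, pred, average="macro")
f1 = f1_score(y_te, pred, average="macro")
print(round(acc, 2))
```

Repeating this over the 90:10, 80:10, and 70:30 splits and averaging the scores reproduces the protocol described above.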

4.4. Results
The average performance results of classifying user activities using the GRU approach are shown in Figure 2. The combined average performance results of the GRU, SVM, and decision tree algorithms for classifying user activities are shown in Table 1. Table 2 shows the combined average performance results of the GRU, SVM, and decision tree algorithms for classifying each sensor status value. The average performance results of the GRUDEC algorithm for device argumentation during activity occurrence are shown in Figure 3. Table 3 shows the combined average performance results of the proposed GRUDEC, SVM, and decision tree for device argumentation during activity occurrence. The results demonstrate that, since the GRU approach can handle sequential data effectively, making it well suited to modeling temporal relationships and patterns in activity sequences, it achieves superior performance and outperforms the existing SVM and decision tree methods. Moreover, GRU can learn and maintain information over longer sequences thanks to its gating mechanism, which helps mitigate the vanishing gradient problem, whereas SVM and decision trees may struggle to capture such long-term dependencies.

CONCLUSION AND FUTURE WORK
This work proposes and discusses a GRU-based decision model framework with four phases: the initial data pre-processing phase fills in missing values and converts categorical values into numeric values; the classification phase classifies the status value of each sensor and the user activities; in the argumentation identification phase, a novel DeArId algorithm identifies arguments among the devices during user activity occurrence; and finally, in the decision-making phase, a GRUDEC algorithm is proposed to resolve the arguments that occur among the devices and identify the user activity. Performance metrics such as accuracy, precision, recall, and F1-score are used to evaluate the proposed algorithm. The GRU-based decision-making algorithm achieves 85.45% accuracy, 72.32% precision, 65.83% recall, and 60.22% F1-score in comparison with the existing algorithms. In future work, deep neural network algorithms and reinforcement learning algorithms can be used for predicting the activity and making decisions during device argumentation for the occurrence of user activity.

Figure 2. GRU user activity classification performance metric

Algorithm 1 proposes the DeArId algorithm, which takes sensor values and corresponding user activities as input. Specific sensor values and activities are examined to check for argumentation, and the algorithm outputs whether argumentation is present. Argumentation can occur in two cases: when sensor values are the same but different activities are captured, or when sensor values are different but the same activities are captured.
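The two argumentation cases checked by Algorithm 1 can be sketched as a small predicate; the sensor value vectors and activity names below are illustrative, not from the dataset.

```python
# Sketch of the DeArId check: an argument exists when sensor values match but
# activities differ, or when sensor values differ but the activity is the same.
def has_argument(values_i, values_j, act_i, act_j):
    """Return True when devices i and j dispute the occurred activity."""
    same_values = values_i == values_j
    same_activity = act_i == act_j
    return (same_values and not same_activity) or (not same_values and same_activity)

# Case 1: identical sensor readings, different identified activities.
print(has_argument([1, 0, 1], [1, 0, 1], "cooking", "washing dishes"))  # True
# Case 2: different sensor readings, same identified activity.
print(has_argument([1, 0, 1], [0, 0, 1], "napping", "napping"))         # True
# No argument: same readings and same activity.
print(has_argument([1, 0, 1], [1, 0, 1], "napping", "napping"))         # False
```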

Table 1. Average performance metric of different algorithms for user activity classification

Table 2. Combined average performance metric of each sensor status value classification

Table 3. Average performance metric comparison of different decision-making algorithms