Insights on critical energy efficiency approaches in internet-of-things applications

Received Jun 13, 2020; Revised Oct 22, 2020; Accepted Dec 19, 2020

Internet-of-things (IoT) is one of the proliferating technologies that results in a larger scale of connection among different computational devices. However, establishing such a connection requires a fault-tolerant routing scheme. The existing routing schemes establish communication but do not directly address various problems linked with energy consumption. Cross-layer-based schemes and optimization schemes are frequently used for improving energy efficiency performance in IoT. Therefore, this paper investigates the approaches where cross-layer-based schemes are used to retain energy efficiency among resource-constrained devices. The paper discusses the effectiveness of the approaches used to optimize network performance in IoT applications. The study outcome of this paper showcases various open issues that are required to be addressed effectively in order to improve the performance of applications associated with the IoT system.


INTRODUCTION
The use of internet-based devices is increasing rapidly, and these devices are interconnected to perform information sharing [1]. This device interconnectivity to the cyber-physical system is known as the internet-of-things (IoT), where large numbers of computing machines are connected [2]. These machines could be actuators, sensors, radio-frequency devices, and mechanical devices. An IoT device can transfer digital data over different communication medium ranges using specific protocols, independent of human intervention [3]. IoT devices comprise various associated and diversified technologies, e.g., wireless sensor networks, embedded systems, data analytics, automation, and machine learning [4]. IoT adoption is sought to be implemented over various applications, from smart home automation to critical information extraction and processing systems [5, 6]. It also plays a critical role in smart city systems, where massive numbers of heterogeneous devices are interconnected to form extensive device-to-device-based communication. The applications of IoT are planned considering resource-constrained nodes with limited processing capability [7]. The challenges encountered by IoT at present are: i) security concerns [8], ii) absence of a standard interoperable system [9], and iii) complex planning of business models [10].
Routing data effectively through these resource-constrained devices is a challenging problem. At present, IoT makes use of various communication protocols, viz. ZigBee, 6LoWPAN, 6TiSCH, IPv6 over G.9959, and IPv6 over Bluetooth low energy/NFC [11, 12]. There are also various routing protocols to carry out this task, viz. RPL [13], CORPL [14], CARP [15], and AODVv2 [16]. However, all these routing protocols are associated with beneficial features as well as open issues. The first problem is the need for an identity management system for large numbers of connected IoT devices.
The second problem is associated with scalability, owing to the highly restricted computational capability of IoT nodes. The third problem is concerned with the mobility management of the IoT nodes. These problems become more pronounced when complex data has to be routed. The data considered in the IoT environment is highly heterogeneous in nature, and this acts as an impediment to the routing policies evolved to date. Hence, the adoption of conventional routing schemes does not address such problems; instead, it overloads the IoT nodes to a considerable extent, causing transmission performance degradation and energy consumption.
Therefore, this paper discusses the nearest solutions to address such problems: cross-layer-based schemes and schemes based on iterative optimization. It is strongly believed that cross-layer schemes will endow the IoT system with better energy efficiency. However, developing an effective cross-layer system is a challenging task. This paper discusses the effectiveness of existing approaches towards energy efficiency in an IoT system. This paper is organized as follows: Section 1 discusses the background, the research problem, and the proposed solution. Section 2 discusses cross-layer-based approaches, while Section 3 discusses optimization-based approaches towards improving communication performance. Section 4 discusses open research issues, while Section 5 presents the conclusion of this paper.
Various researchers [17]-[19] have discussed studies toward energy efficiency in IoT. According to Mahmoud et al. [20], existing IoT models using cloud services have focused on energy efficiency; however, the existing systems are not found to support better service performance quality. One of the widely used deployments of IoT is in building management. The study carried out by Hannan et al. [21] has addressed energy management in building premises using the IoT concept. The study outcome shows that there is open scope for energy improvement via an optimized controller design. Further, Arshad et al. [22] highlighted the recent energy efficiency mechanisms, namely compressive sensing, network classification, selective sensing, context-based server allocation, and sleep scheduling. The studies mentioned above have created a background for the most recent investigation, finding that energy efficiency in IoT is still an open problem that is required to be solved.
The research problem. The significant research problems are as follows:
- Existing review works have emphasized identifying multiple approaches towards energy efficiency; however, an effective solution is yet to be found.
- Existing studies do not disclose the open research gaps required to be bridged for better performance and outcomes.
Therefore, the problem statement of the proposed study is "To make an exhaustive and smart study of existing approaches towards energy efficiency in IoT with precise identification of the research gap." The solution presents a compact discussion on energy-efficient techniques in IoT with an agenda to: i) highlight the significance of the cross-layer-based approach for facilitating better network performance; ii) carry out a discussion of recent research works in the area of cross-layer-based approaches for IoT; and iii) discuss different types of analytics in IoT. The manuscript aims to perform a detailed study of all the methodologies and approaches in current times. Finally, the manuscript's contribution is to offer an explicit discussion of the strengths and weaknesses of the implemented approaches so that a precise research gap can be identified. The next section discusses the cross-layer approach.

CROSS LAYER-BASED APPROACHES
The prominent target of the cross-layer-based approach is to solve the issues associated with the network layers in IoT. Figure 1 shows the IoT challenges, i.e., reliability, efficiency, security, and privacy [23]. All these problems are potentially connected, where some problems have a higher number of solutions investigated to date while other problems have received a smaller number of solutions. The problems concerning security and privacy are addressed more, while the problems associated with energy efficiency and reliability are significantly less addressed. Figure 2 shows the conventional architecture of IoT, which was developed to improve the veracity problem, the quality of service problem, and the security factor. The architecture usually consists of three layers (application layer, network layer, and perception layer) to five layers (business layer, application layer, service management layer, abstraction layer, and object layer). Figure 2 also shows that the adoption of the cross-layer approach assists in offering better flexibility in layer-based operation that cannot be obtained in the conventional IoT model. Data accessibility becomes easier, and it can also be customized as per demand [24]. The degree of interaction among the layers increases in cross-layer-based approaches. At present, various research works have already been carried out to prove that the adoption of cross-layer approaches in networks offers highly enhanced network performance.

Benefits of the cross-layer approach
The conventional layered structure of IoT suffers from various problems, as it is found not to cater to the demands of IoT applications in present times [26]. The biggest issue associated with adopting the conventional layered architecture is that it only offers supportability towards end-to-end communication across the given layers of the defined system. The interaction level is highly limited between adjacent layers, which results in increasing dependencies of resources over complex data processing [27]. There is no possibility of communication or sharing of information between non-adjacent layers in IoT's conventional layered model. Data extraction can be carried out by applying middleware against the incoming raw data obtained from the physical layer. This is one of the mechanisms to obtain better performance control in an IoT infrastructure. However, there are not many research attempts considering the usage of the cross-layer approach in IoT. The recent work of Choi et al. [28] has presented an approach considering the mechanism of sensing data for multiple IoT nodes, with more focus on addressing the problem of data access in large-scale IoT. Another cross-layer-based scheme was implemented by Hasan et al. [29], where the quality of service (QoS) factor was emphasized for an energy-efficient IoT framework to address the problems of routing in IoT. The study has implemented a mathematical model to facilitate better channel access, especially meant for handling the dynamics of large and dense traffic conditions. However, the heterogeneity concept has not been used in this mechanism. Such a problem was addressed in the work of Jiang et al. [30], where cross-technology has been adopted. However, the study does not ensure a higher degree of scalability improvement under denser traffic. This aspect is discussed in Jin et al. [31], where the authors have discussed the scheduling problem associated with the MAC layer in IoT. The study has used a proactive routing approach using the current IEEE 802.15.4 standard. Still, the study is found to be devoid of addressing excessive resource usage. The work carried out by Yang et al. [32] has presented another cross-layer-based approach where energy efficiency has been the prominent factor of investigation. The study outcome shows the model could effectively maintain the IoT device's positional information in the network layer, implement the MAC layer protocol in the data link layer, and manage the IoT interface from the physical layer. Although various other approaches exist [33]-[35], these are the most prominent ones. From the above research works, the observed pros are significant traffic control, comprehensive green efficiency, faster transmission rate, delay reduction, and better transmission efficiency. The cons of these works are that they are highly iterative, insert computational complexity, do not assure a higher degree of scalability, consider the network dynamics free from random events, and do not analyze energy performance deeply.
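To illustrate the kind of interaction described above, the following minimal sketch (in Python) shows how a network-layer next-hop decision could draw on physical-layer link quality, MAC-layer queue backlog, and residual node energy through a shared cross-layer context. The field names, normalization ranges, and weights are illustrative assumptions and do not reproduce any of the cited designs.

# Minimal cross-layer sketch (assumed field names and weights, not a cited design):
# the network layer picks a next hop using information exposed by other layers.
from dataclasses import dataclass

@dataclass
class NeighborState:
    node_id: int
    rssi_dbm: float           # physical-layer link quality
    queue_backlog: int        # MAC-layer pending frames
    residual_energy_j: float  # remaining node energy

def next_hop(neighbors, w_rssi=0.4, w_queue=0.3, w_energy=0.3):
    """Score each neighbor with a weighted cross-layer metric and return the best."""
    def score(n):
        rssi_norm = (n.rssi_dbm + 100) / 70.0            # map ~[-100,-30] dBm to [0,1]
        queue_norm = 1.0 - min(n.queue_backlog, 50) / 50.0
        energy_norm = min(n.residual_energy_j, 10.0) / 10.0
        return w_rssi * rssi_norm + w_queue * queue_norm + w_energy * energy_norm
    return max(neighbors, key=score)

if __name__ == "__main__":
    candidates = [
        NeighborState(1, rssi_dbm=-62, queue_backlog=4, residual_energy_j=7.5),
        NeighborState(2, rssi_dbm=-55, queue_backlog=30, residual_energy_j=2.0),
    ]
    print("chosen next hop:", next_hop(candidates).node_id)

The point of the sketch is only that a single decision can consume state from non-adjacent layers, which the strictly layered model forbids.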

OPTIMIZATION-BASED APPROACH
In the existing system, various research works have been carried out where machine learning-based approaches are utilized for improving the performance of IoT applications. This part of the paper discusses the 20 most relevant technical approaches where machine learning has been used for IoT from an implementation perspective. Most recently, the problems associated with the traffic bottleneck situation have been addressed by Sharma and Wang [36] with a Q-learning approach. The adoption of Q-learning has also been considered by Zhu et al. [37], mainly for scheduling transmission in machine-to-machine communication. The study has used deep learning mechanisms along with the Markov decision process to plan the transmission scheduling for a cognitive-based network. Machine learning has also been used to identify unstabilized links over a network with complex structures. Srinivasan and Guruswamy [38] have used machine learning with multiple operational stages for studying quality of service parameters. The authors have used random forest, multi-layered perceptron, and support vector machine classifiers for this purpose to gain higher accuracy. Similar applicability of multiple forms of machine learning was also reported in Li et al. [39], where the idea was to analyze the system statistics concerning security problems in IoT. The authors have used various parameters, e.g., disk utilization and CPU utilization cycles, for evaluating the probability of successful operation of the proposed system. The study outcome was found to offer better outlier detection over the anomalous behavior of an IoT node. Security problems have also been investigated using the machine learning approach by Kotenko et al. [40]. The authors have investigated various machine learning schemes for addressing classification-related sub-problems in security. A different security problem also appears in the shape of spammers in the existing IoT system integrated with the cloud. This security problem has been investigated by Qiu et al. [41] by developing a model that performs the identification of the spammer. The authors have used the Gaussian mixture approach and machine learning to investigate the degree of security threat to mobile networks.
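As an illustration of the Q-learning style of transmission scheduling referenced above, the following minimal Python sketch learns a tabular policy over a toy queue-state environment. The states, actions, rewards, and hyper-parameters are assumptions for exposition and are not taken from [36] or [37].

# Toy tabular Q-learning for a simplified transmit/defer decision (illustrative only).
import random

STATES = ["queue_low", "queue_high"]
ACTIONS = ["defer", "transmit"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy environment: transmitting when the queue is high gives the best reward."""
    if action == "transmit":
        reward = 1.0 if state == "queue_high" else -0.2   # wasted energy if queue is low
    else:
        reward = -0.5 if state == "queue_high" else 0.1   # backlog penalty vs. saved energy
    return reward, random.choice(STATES)

state = random.choice(STATES)
for _ in range(5000):
    action = (random.choice(ACTIONS) if random.random() < EPSILON
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    reward, nxt = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

print({k: round(v, 2) for k, v in Q.items()})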
The work carried out by Chatterjee et al. [42] has also used machine learning for enhancing the security features associated with authentication in IoT networks. The authors have used deep neural networks in order to facilitate a dynamic authentication system over wireless nodes. The study outcome has witnessed higher accuracy in the presence of different communication channel conditions. The work carried out by Park and Saad [43] has used a sequential learning model to support a resource-restricted communication system over IoT devices. The authors have developed a mechanism that allows the devices to perform learning operations over critical messages for supporting seamless communication of a sensitive nature. The authors have offered more importance to message prioritization in this work. A study considering message prioritization is also carried out by Inagaki et al. [44], where the authors have developed a machine learning-based model. Unlike existing approaches, the proposed learning mechanism uses feature selection mechanisms to have potential control over the transmission of the data considering the mobility factor. The problem associated with energy efficiency while performing learning operations over IoT applications is discussed by Chen et al. [45].
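The following minimal sketch illustrates the general idea of feature-selection-driven message prioritization; it assumes scikit-learn, uses synthetic features and a hypothetical labeling rule, and is not the actual pipeline of Inagaki et al. [44].

# Select the most informative message features, then classify transmission priority.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# synthetic features: [latency_sensitivity, payload_kb, node_speed_mps, battery_pct]
X = rng.random((200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0.9).astype(int)   # toy rule: latency + mobility drive priority

model = make_pipeline(SelectKBest(mutual_info_classif, k=2),
                      LogisticRegression())
model.fit(X, y)

new_msg = np.array([[0.8, 0.3, 0.6, 0.4]])
print("high priority" if model.predict(new_msg)[0] else "normal priority")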
The authors have discussed a hardware-based approach where digit recognition is carried out with a focus on energy efficiency. Cheng et al. [46] have carried out a study where reinforcement learning has been implemented. The authors have performed the allocation of the computing resources demanded towards the virtualized environment using task scheduling, followed by using the Markov decision method and a deep learning technique to perform learning operations over complex network structures. The work carried out by Jayasinghe et al. [47] has developed a model for investigating the trust factor numerically. The authors have used machine learning to classify the obtained features associated with trust, followed by integrating them to generate a cumulative trust for better decision-making. AlHajri et al. [48] have applied the machine learning approach to address the problems associated with classification related to the IoT ecosystem. The study outcome shows that the weighted k-nearest neighbor algorithm exhibited better performance than other machine learning methods. Machine learning is also used for the identification of the signal for a given spectrum. The work carried out by Li et al. [49] has developed machine learning algorithms in order to facilitate effective decision-making for spectrum sharing. The work carried out by Kulin et al. [50] has presented a comprehensive learning mechanism using deep learning neural networks. The study outcome exhibited better signal quality. There are various variants of learning methods used in IoT applications. One such unique variant, called learning automata, has reportedly been used by Di et al. [51]. The study has presented a mechanism that can investigate and solve access problems due to a massive traffic system. Zhang et al. [52] have focused on using active learning mechanisms for improving the performance of feature learning connected with the big data of industrial IoT applications. The study presents a mechanism for resisting the over-fitting problem. The system uses a higher entropy value to select the best sample to perform crowdsourcing. The work carried out by Siryani et al. [53] has used machine learning for enhancing an IoT application associated with the smart metering system. The predictive modeling is carried out using a Bayesian network, where a better decision support system is presented. Yang et al. [54] have developed a prototype of smart wearable devices to enhance classification accuracy, using principal component analysis to reduce high-dimensional data.
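To make the last two techniques above concrete, the following minimal scikit-learn sketch combines PCA-based dimensionality reduction with a distance-weighted k-nearest neighbor classifier on synthetic data; the data, dimensions, and parameters are assumptions and do not reproduce the cited experiments.

# Distance-weighted k-NN after PCA dimensionality reduction (illustrative pipeline).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))                # 20-dimensional synthetic sensor features
y = (X[:, :3].sum(axis=1) > 0).astype(int)    # toy device-class label

clf = make_pipeline(PCA(n_components=5),      # reduce high-dimensional data first
                    KNeighborsClassifier(n_neighbors=5, weights="distance"))
print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())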
From the above research, it can be seen that various research-based approaches have evolved to develop analytical solutions. All the current approaches have used various variants of machine learning with a unique focus on an explicit problem associated with the application, or to cater to an application's demands. However, all the approaches are witnessed to possess a certain level of problems that do not permit the application to perform a full-fledged analytical operation when exposed to different implementation scenarios.

OPEN RESEARCH ISSUES
The prior sections have discussed various aspects of the implementation of cross-layer and machine learning approaches used in IoT. Irrespective of the different scales of effectiveness of the present state of predictive solutions, various open issues are required to be highlighted and discussed in this section:

Critical problem explored
A closer look into the existing mechanisms shows that there has been a lesser degree of advancement using the cross-layer-based approach towards achieving energy efficiency. It is already known that IoT nodes are usually powered by rechargeable batteries or by energy scavenging. The various reasons for energy dissipation while performing routing operations are the primary usage of low-power radio links, the vulnerable wireless communication medium, and interference. This is because the operations of the IoT nodes are significantly affected by the operational condition. The majority of the problems arise when the transmission distance between two nodes is very small. Apart from this, it has been noted that if the transmission range is kept constant, excessive energy is drained. This phenomenon also causes retransmissions by the IoT nodes over links that are sometimes not proven to be viable. In such a case, the existing solutions permit the IoT nodes to optimize the energy over each unit hop, causing more energy dissipation. Such problems can be handled if the cross-layer approach is developed in the correct fashion. One possible mechanism for performing enhancement/optimization of the cross-layer approach in IoT is shown in Figure 3. Hence, the theoretical evaluation suggests that the adoption of a cross-layer-based approach can significantly minimize energy consumption.
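The dependence of energy drainage on transmission distance can be illustrated with the commonly used first-order radio model; the following minimal sketch uses typical textbook constants (assumed here, not taken from the reviewed works) to show how a fixed transmission range wastes energy when the actual hop distance is short.

# First-order radio model: transmit energy grows with the square of the distance
# under free-space path loss, so a fixed long range is wasteful for short hops.
E_ELEC = 50e-9       # J/bit, electronics energy (assumed typical value)
EPS_AMP = 100e-12    # J/bit/m^2, free-space amplifier energy (assumed typical value)

def tx_energy(bits, distance_m):
    """Energy to transmit `bits` over `distance_m` (path-loss exponent 2)."""
    return bits * (E_ELEC + EPS_AMP * distance_m ** 2)

packet_bits = 2000
print("adaptive range (20 m): %.2e J" % tx_energy(packet_bits, 20))
print("fixed range    (80 m): %.2e J" % tx_energy(packet_bits, 80))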
This adoption assists in processing the routing operation, feature extraction, and MAC scheduling. Further improvement can be carried out using a machine learning scheme too. Although this process seems to be theoretically correct, and various authors have attempted to prove it, as seen from the discussion carried out in the prior sections, the approaches are associated with various pitfalls. From the discussion of the existing system, it was noticed that the usage of single-layer-based approaches is usually discouraged. Hence, the adoption of multi-layer-based approaches is more preferred. The existing system has mainly used the MAC layer, transport layer, and physical layer in its implementations, which is believed to minimize the energy drainage in an IoT application. However, these cross-layered schemes cannot offer a balance between data transmission performance and network lifetime. Therefore, there is greater scope for improvement of cross-layer-based approaches in the existing system. While performing the integration of various cross-layer protocols, it is essential to perform proper slot selection to increase the degree of utilization of the resources. Although there are studies associated with resource allocation using cross-layer and machine learning approaches in IoT, the functionality to perform energy-efficient communication among the IoT nodes is not discussed. Centralized modeling is required to optimize the network layer's correction mechanism to improve the routing operation through a cross-layered approach. Unfortunately, no such scheme has been presented to be used in such a manner in the network layer. Apart from this, there is also a more significant set of problems associated with using a machine learning approach for improved communication in IoT. In this aspect, better performance can only be obtained when a multi-objective function is achieved with fewer iterations, as shown in the sketch below. The significance of cross-layer design is greater for constructing a better form of multi-objective design optimization. The existing systems using both cross-layer and machine learning-based approaches were developed without realizing that the design also needs to serve various services, where every user has a unique quality of experience as per their privilege.
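As a simple illustration of the scalarized multi-objective view argued for above, the following minimal sketch combines normalized energy, delay, and loss objectives into a single weighted route cost; the weights, normalization bounds, and candidate routes are illustrative assumptions, not a scheme from the surveyed literature.

# Weighted-sum scalarization of a multi-objective route cost (lower is better).
def route_cost(energy_j, delay_ms, loss_rate, w=(0.5, 0.3, 0.2)):
    """Each term is normalized to [0, 1] before weighting."""
    return (w[0] * min(energy_j / 1.0, 1.0)
            + w[1] * min(delay_ms / 200.0, 1.0)
            + w[2] * min(loss_rate, 1.0))

candidates = {
    "route_A": dict(energy_j=0.40, delay_ms=60, loss_rate=0.05),
    "route_B": dict(energy_j=0.25, delay_ms=150, loss_rate=0.02),
}
best = min(candidates, key=lambda r: route_cost(**candidates[r]))
print("selected:", best)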

General problem explored
Apart from the critical problems, some generalized issues are associated with adopting the existing approaches in IoT for data communication performance. The following are the open-ended generalized problems:
- Data complexity is one of the significant problems and is still a significant concern. The role of inter-domain routing comes into play for this problem, whereas existing routing schemes have less supportability of an inter-domain routing scheme over their gateway system.
- Less effective optimization is another open problem. The adoption of supervised learning is witnessed in the majority of the existing systems for upgrading IoT's routing performance. A supervised learning scheme always demands that the data be pre-labeled before training. However, it is quite challenging to perform data labeling due to the variability problems associated with the data generated in the IoT ecosystem. Moreover, labeling might not be sufficient, and it is also an expensive process.
- The existing system also suffers from complexity associated with model development. Ensuring seamless connectivity between the cloud system and all resource-constrained IoT devices is sometimes challenging. Moreover, routing operation through an edge-based architecture has not been considered.
- There is always an open security threat while performing routing operations in the IoT environment. There are some excellent encryption and security-based solutions for this to date. However, the bigger problem is that when such encryption standards are used over resource-constrained nodes, the process is highly iterative and computationally expensive. Such security algorithms run by default in all the IoT nodes, which eventually affects the routing performance if the traffic is dynamically ordered and the IoT environment is dense.
- There is a lack of consideration of the cost-effectiveness associated with the routing strategies. The majority of the routing strategies are pre-defined and focused on a data transmission scheme where the cross-layer design contributes to the transport and physical layers. However, an effective routing strategy needs to interact with the network layer more, which is less seen in existing approaches.
Hence, it can be seen that the existing cross-layer-based approaches are significantly fewer in number, while the usage of optimization involves iterative operation. The frequently used schemes are found to address the mitigation of energy problems in resource-constrained IoT nodes to a lesser extent. Hence, this offers an excellent future scope of research.

CONCLUSION
This paper has discussed the approaches frequently used and preferred by existing researchers for improving energy efficiency in IoT applications. After reviewing the approaches, the following conclusions have been made: i) the adoption of cross-layer-based approaches is more dominant over the transport and physical layers and less over the network layer; ii) the testing environments of the existing models do not consider resource constraints, which renders the existing approaches impractical; iii) the optimization-based approach mainly uses machine learning, which is highly iterative and has higher dependencies on the training data dimension; iv) scheduling is one of the significant mechanisms that is progressive and non-iterative and can contribute to a greater extent towards energy efficiency; however, existing scheduling approaches are developed without considering the constraints associated with IoT nodes and their surrounding demands; v) an IoT network consists of different nodes with different physical properties, for which reason the allocation of a generalized channel capacity will have an adverse effect on the energy factor as well as data transmission performance. Therefore, future work will be carried out to construct analytical modeling of a cross-layer approach where the network layer will be considered a prominent point of implementation along with the other associated layers. Emphasis will be given towards developing a mathematical model where various intrinsic as well as extrinsic constraints of IoT are considered so that effective routing strategies can be formulated.