Reliable and efficient webserver management for task scheduling in edge-cloud platform

ABSTRACT


INTRODUCTION
Cloud webservice providers (CWPs) receive task execution data in the form of a directed acyclic graph (DAG), known as a workflow. Workflow scheduling in the cloud has been researched for a long time [1]-[3]. However, a major problem in workflow scheduling is providing better performance, reliability, energy efficiency, and fault tolerance in a large computing environment. Some frameworks automatically assign the required resources, energy constraints, and performance for the tasks in the workflow for execution. Various researchers have proposed methods and technologies that reduce the rate of energy consumption in the cloud environment and provide better performance in the cloud web server. However, these methods incur a large communication cost between the web servers. Furthermore, they provide inadequate results, and energy consumption cannot be decreased because of the high utilization of memory and routing. The communication cost between the webservers is high under the existing methods, as these methods depend on a traditional model that is not reasonable when applied to a hybrid cloud environment [3]-[5], for example, the edge-cloud server shown in Figure 1. This motivates the proposed work to design a workload scheduling model for the edge-cloud platform that brings good trade-offs between reducing energy and cost and meets the reliability requirements of internet of things (IoT) workflow applications. In meeting these research issues, this paper presents reliable and efficient web server resource management for workload execution in the hybrid cloud platform. Reliable and efficient webserver management (REWM) is designed to reduce energy consumption, reduce task failure, minimize delay, and provide high reliability. The research significance of REWM
for task scheduling in the edge-cloud platform is as follows: i) the proposed work presents a novel workload management technique adopting an edge-cloud platform to provide efficiency and reliability; ii) the proposed REWM reduces task failures and provides fault-tolerant task offloading with minimal energy consumption; iii) the proposed REWM reduces energy consumption and cost while meeting efficiency and reliability constraints; and iv) the experimental outcome shows a significant reduction in energy and cost in comparison with existing workload management methodologies.
The paper is organized as follows. Section 2 presents a literature survey of existing methods for workload scheduling. Section 3 provides the methodology of the model. Section 4 presents the results of the model using the epigenomics and SIPHT data sets, and finally, section 5 gives the conclusion and future work for the model.

LITERATURE SURVEY
In [3], a method using the heterogeneous-earliest-finish-time (HEFT) algorithm, fuzzy dominance sort based heterogeneous earliest-finish-time (FDHEFT), was proposed to optimize the cost and makespan of workload execution in a heterogeneous cloud. This method uses fuzzy dominance regulation for the execution of the workload. Its performance was evaluated using both real-time and synthetic workloads, showing better makespan and cost trade-offs when contrasted with traditional workload execution models. In [4], a technique, DCOH, was proposed for the hybrid cloud environment to optimize cost while meeting the deadline of the given task. The attained results show that the technique can efficiently balance the trade-off between makespan performance and cost; moreover, by reducing energy consumption, the cost of workload execution can be reduced. In [5], an energy-aware method was proposed to execute DAG workloads with deadline constraints while reducing energy consumption in hybrid clouds; this model reduces the computational overhead of an energy-aware processor. In [6], a technique was proposed to reduce cost and attain a better trade-off between energy and cost using an energy-aware scheduling method. This technique comprises the following phases: slack resource optimization to save energy, idle virtual machine (VM) resource reuse policies, VM selection, and task merging, and it attains good performance when compared with existing workflow models. In [7], a model was presented to optimize cost and energy using a scheduling method to execute large volumes of scientific data from IoT devices in a cloud network. This model improves performance, reduces energy consumption, and decreases the cost of task execution within a given deadline. In [8], a model was proposed to address the problem of reliability and to reduce the
consumption of energy in workload scheduling while providing better quality-of-service (QoS) guarantees. The experimental results of the model showed better efficiency and reliability for workload scheduling when compared with other models. Nowadays, many algorithms, such as genetic algorithms and swarm optimization, are used to solve multi-objective workload scheduling (MOWS) problems in the cloud environment and to execute real-time workloads [9], [10], as well as to optimize energy-aware multi-objective functions [11]; energy minimized scheduling (EMS) has also been presented [12]. Other algorithms, such as reinforcement learning (RL), have also been used to solve MOWS problems in the cloud network [13]-[15]. In [16], an improved fuzzy logic rule with GT was presented to balance and control the load between the physical machines. In [17], a Q-learning model was presented to optimize the deadline and balance load for a given task, using a weighted-objective function to schedule the workload. In [18], a technique, RADAR, was presented to allocate resources to a given task in the cloud environment. This technique handles unpredicted failures and dynamic resource management considering the workload conditions, and it decreases the execution cost, time, and service level agreement (SLA) violations when compared with traditional workload execution techniques. In [19], a model was proposed for workflows consisting of composite tasks (cWFS). In this model, a nested particle swarm optimization (N-PSO) method is used to execute the inner and outer population tasks. As this method is slow, a faster version of N-PSO was used that executes the tasks within a given deadline. In [20], an algorithm for task scheduling named QL-HEFT was given, which uses the Q-learning method and the HEFT method to decrease the makespan during task execution. This
algorithm first ranks the given tasks by priority, then sorts them based on Q-learning, and then assigns optimal resource utilization to each task so that it can be executed within a given deadline. In [21], a scheduling method named endpoint-communication contention-aware list-scheduling heuristic (ELSH) was proposed, which decreases the makespan of the workflow. In [22], an algorithm, DMWHDBS, was presented that executes tasks within the given deadline and is also cost-efficient. In this model, a judgment mechanism provides a success rate for the scheduling of tasks in a multi-workflow environment. In [23], a resource allocation technique for multi-cloud scheduling was presented, which executes the workflow, provides better performance, and reduces the execution cost. In [24], a fault-tolerant workflow-scheduling model for the multi-cloud was designed, which reduces execution cost and provides better reliability. The authors also gave a billing method for the resources utilized during workflow execution and, finally, an algorithm that reduces cost and time while providing a solution for fault-tolerant workflow scheduling. A comparative study of the two most recent workload scheduling methods is provided in Table 1. The proposed REWM work focuses on addressing the limitations of these existing models, as given in Table 1.

METHOD
System and workflow execution on hybrid cloud environment
In this model, a hybrid cloud environment is considered for the execution of the scientific workflow. Suppose two sensor devices are connected, both running the DAG application. The workflow is illustrated in Figure 1. The sensor devices are linked to an edge server that computes various operations, and the edge server is linked to the cloud data center. The cloud data center comprises different hosts and a number of virtual machines. This model presumes that, for the computation and communication process, there is a stable connection between the sensor devices and the edge server. The server is capable of computing scientific workflows and executing them within a given deadline.
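The topology above can be sketched as a simple data structure. All class and field names below are assumptions for illustration; the paper does not prescribe any particular encoding of the platform.

```python
# Illustrative sketch of the hybrid edge-cloud topology of Figure 1:
# sensor devices -> edge server -> cloud data center (hosts with VMs).
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    vms: list = field(default_factory=list)  # virtual machines on this host

@dataclass
class HybridPlatform:
    sensors: list     # IoT sensor devices running the DAG application
    edge_server: str  # edge server linked to the sensors
    cloud_hosts: list # cloud data center hosts, each carrying VMs

platform = HybridPlatform(
    sensors=["sensor-1", "sensor-2"],
    edge_server="edge-0",
    cloud_hosts=[Host("host-1", ["vm-1", "vm-2"]), Host("host-2", ["vm-3"])])
```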

Workload execution model for hybrid cloud environment
The workload is executed either in the edge server or, when the sub-tasks are offloaded, in the cloud network. The delay induced to execute a sub-task of the workload on the edge server's processing element is defined using (1). Further, the energy consumed to execute a given task of the workload in the edge server is represented using (2).
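A minimal sketch of what (1) and (2) typically compute, under the common assumption that execution delay is CPU cycles divided by processor frequency and that energy is active power times delay. The paper's exact formulas are not reproduced here; all names are illustrative.

```python
# Hedged sketch of the edge-server delay and energy model behind (1)-(2),
# assuming delay = cycles / frequency and energy = power * delay.

def edge_execution_delay(task_cycles: float, freq_hz: float) -> float:
    """Delay (seconds) to run a sub-task on an edge processing element."""
    return task_cycles / freq_hz

def edge_execution_energy(task_cycles: float, freq_hz: float,
                          active_power_w: float) -> float:
    """Energy (joules) consumed executing the sub-task on the edge server."""
    return active_power_w * edge_execution_delay(task_cycles, freq_hz)

# Example: a 2e9-cycle sub-task on a 1 GHz element drawing 5 W.
delay = edge_execution_delay(2e9, 1e9)
energy = edge_execution_energy(2e9, 1e9, 5.0)
```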
In (2), the energy consumed by the processing element within a given deadline is represented by the corresponding energy term. In the same way, the delay to execute the offloaded workload in the cloud network is calculated. The computation capacity in the cloud is given using (3).
The delay to complete the execution of the workload comprises execution and communication delays. Hence, the total delay to execute the complete workload in the cloud is given using (4). Moreover, the delay for executing the workload locally in the cloud or the edge server is calculated using (5).
In (5), the corresponding delay term is zero when k = 0. The failure of a sub-task of the workload is modeled using a Poisson distribution on the processing element, as given in (6). The efficiency of the processing element for executing the DAG workload is given using (7) [19]. In (7), the sub-task index is defined using yjk. The sub-task yjk can be described using (8).
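Under a Poisson failure model, the standard consequence is that a sub-task running for time t on an element with failure rate lam completes fault-free with probability exp(-lam*t), and a DAG completes fault-free with the product of its sub-task reliabilities (assuming independent failures). The sketch below illustrates that standard model; it is not the paper's exact formulation of (6)-(7).

```python
import math

# Hedged sketch of a Poisson task-failure model: transient faults arrive
# at rate lam (failures/second) on a processing element.

def task_reliability(lam: float, exec_delay: float) -> float:
    """Probability a sub-task finishes fault-free on the element."""
    return math.exp(-lam * exec_delay)

def workflow_reliability(per_task: list) -> float:
    """Reliability of a DAG workflow as the product of its sub-task
    reliabilities, assuming independent failures."""
    r = 1.0
    for p in per_task:
        r *= p
    return r
```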
In this model, the energy consumed in executing a workload task is reduced by decreasing the delay and increasing the processing efficiency. The energy consumed to execute the workload in the edge server is calculated using (9). The main optimization problem is then denoted using (10). In (10), the given constraints must be satisfied by the objective function to achieve an effective workload execution result. In (11), the constraint requires the overall execution delay of the workload to be less than the delay bound. In (12), the performance efficiency of the workload must be greater than the specified performance efficiency bound. Constraint (13) specifies that each sub-task can be executed either in the cloud network or in the edge server.
Constraints (14) and (15) specify that subsequent sub-tasks have to wait until the previous sub-task has completely executed; in (14) and (15), the execution start time of each sub-task is initialized accordingly. This model's main aim is to reduce the execution delay of the task and to improve the performance efficiency, subject to the constraints given in (11) to (16). In this model, the resource that consumes the least energy is used to execute the workload in the hybrid cloud environment. In the initial step, the sub-tasks are ordered to generate a proper sequence set. In the next step, the delay bounds and processing efficiency of each sub-task on the different processing elements in the cloud network and edge server are obtained. Finally, resources are allocated to each sub-task by finding an appropriate processing element that incurs less energy overhead and also satisfies the bounds for the execution of that sub-task.

Task ordering webserver management
Figure 1 shows the existence of dependencies among preceding and subsequent tasks. As a result, only after completion of the preceding task does the webserver assign resources to subsequent tasks. On the other hand, if a task does not have any dependencies, the webserver allocates resources in parallel. Taking Figure 1 as an example, task t1 is executed first and t2 waits for the resource from the webserver to start execution, while in the meantime task t3 is executed in parallel to minimize the delay. As a result, the web server needs to decide the order in which it executes the workload so that parallel efficiency can be maximized and delay can be reduced. In this work, the task's ascendant ordering outcome is used to measure task selectivity using (17).
where the mean delay induced to execute the task on the different webservers in the graph is computed using (18). In this work, a delay is induced for communicating the offloaded tasks' outcomes toward the edge server. Therefore, the corresponding parameter defines the delay induced for communicating from the edge server to the target webserver, considering m = 0.
This communication delay is zero when m = 0. If m ≠ 0, then before starting workload execution, all the tasks are arranged in descending order of their ordering outcomes. If two tasks are to be allocated and the ascendant ordering assures that the first task's outcome exceeds the second's, then the first task has higher selectivity in comparison with the second. Further, it is important to note that when a task is about to be processed, it must wait till its preceding tasks are completed, meeting the bounds defined in (14) and (15).
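The selectivity ordering above can be sketched with an upward-rank computation of the kind used by HEFT-style schedulers: a task's rank is its mean execution delay across web servers plus the largest (communication delay + rank) over its successors, and tasks are dispatched in descending rank order. This is a hedged analogue of (17)-(18), not the paper's exact formulas; the DAG encoding is illustrative.

```python
# Hedged sketch of task-selectivity ordering via HEFT-style upward rank.

def upward_rank(dag, mean_delay, comm_delay):
    """dag: {task: [successors]}, mean_delay: {task: float},
    comm_delay: {(task, succ): float}. Returns {task: rank}."""
    rank = {}
    def r(t):
        if t in rank:
            return rank[t]
        best = 0.0
        for s in dag.get(t, []):
            best = max(best, comm_delay.get((t, s), 0.0) + r(s))
        rank[t] = mean_delay[t] + best
        return rank[t]
    for t in dag:
        r(t)
    return rank

# t1 precedes t2 and t3 (as in the Figure 1 example); higher rank means
# higher selectivity, so t1 is allocated before its successors.
dag = {"t1": ["t2", "t3"], "t2": [], "t3": []}
ranks = upward_rank(dag, {"t1": 2.0, "t2": 3.0, "t3": 1.0},
                    {("t1", "t2"): 1.0, ("t1", "t3"): 0.5})
order = sorted(dag, key=lambda t: -ranks[t])
```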

Reliable and efficient workload execution webserver management
The proposed work is focused on establishing the most reliable webserver, one that reduces delay and assures fault-tolerance of workload H. Let the assignment order of H be given, with tasks allocated one at a time in that order. The task set that has already been processed consists of the first k-1 tasks in this order, and the tasks that must still be allocated to the web server are the remaining ones. During allocation of the current task, the reliability (i.e., fault-tolerance) outcome considered must be that of the task itself. Therefore, the present fault-tolerance of H is obtained through (21).
In the proposed algorithm, we adopt an effective soft-computing-based search strategy that presumes every unallocated task is allocated to either the edge server or the cloud webserver with the maximum fault-tolerance outcome, and later establishes the obtainable yjk to assure (23), minimizing the size of the solution space. Further, the work aims to meet efficiency requirements by obtaining the required webserver with minimal workload execution time. In this work, the fastest initialization time (FIT) and the recent completion time (RCT) are used to reduce the task's execution time and sustain the workload delay prerequisites. First, the FIT of the incoming task Uinc on a different individual web server uk is given using (26).
Then, we can get another task's FIT on a different web server using (27). The RCT of the departing task is given using (28). In the meantime, this work assumes the other task's RCT on a different web server, which is given using (29). Then, this work obtains every task's minimized execution delay, rather than the overall delay prerequisite of the workload, which is given using (30).
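The FIT/RCT bookkeeping of (26)-(30) can be sketched as follows, taking FIT of a task on a web server as the earliest moment when the server is free and all predecessors have completed, and RCT as FIT plus the task's execution delay on that server. The field names are illustrative assumptions, not the paper's notation.

```python
# Hedged sketch of fastest initialization time (FIT) and recent
# completion time (RCT) on a single web server.

def fit(server_free_at: float, pred_rcts: list) -> float:
    """Fastest initialization time: the server must be idle and every
    predecessor sub-task must already have completed."""
    return max([server_free_at] + pred_rcts)

def rct(server_free_at: float, pred_rcts: list,
        exec_delay: float) -> float:
    """Recent completion time: start as early as possible, then run."""
    return fit(server_free_at, pred_rcts) + exec_delay

# A task whose two predecessors finish at t=4 and t=6, on a server idle
# from t=5, with a 3-second execution delay.
start = fit(5.0, [4.0, 6.0])
finish = rct(5.0, [4.0, 6.0], 3.0)
```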

The proposed REWM methodology is given in algorithm 1, which is focused on reducing delay and providing fault-tolerance assurance with high reliability while meeting the QoS constraints of each task, and assures the constraints are bounded with minimal energy dissipation. The proposed methodology is designed considering the following three phases. First, use the task's ascendant ordering outcome to generate the assignment order. Second, use (23) and (29) to obtain the fault-tolerance and efficiency constraints of every individual task on the different web servers in the edge server or cloud. Lastly, when allocating a task, establish a web server with the minimal energy cost that assures the task's bounds. Hence, this model can reduce both energy consumption and cost when compared with the existing web server management for task scheduling, as discussed below in the results section.
Algorithm 1. Reliable and efficient webserver management (REWM)
Step 1. Start.
Step 2. Deploy the edge-cloud platform with physical machines and virtual machines.
Step 3. The user submits a workflow task with its deadline requirement to the edge-cloud resource provider.
Step 4. The resource provider first arranges the tasks according to their selectivity, in ascendant order.
Step 5. The resource provider uses (23) and (29) to obtain the fault-tolerance and efficiency constraints, respectively.
Step 6. The resource provider finds an edge server that reduces energy while meeting the deadline prerequisite, using (10) and (23), respectively.
Step 7. If the resource provider does not find any such edge server, the task is offloaded to the cloud platform.
Step 8. The resource provider executes the task in the cloud platform, minimizing energy while meeting the efficiency and reliability constraints.
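The decision flow of Algorithm 1 can be sketched end to end as follows. Servers and tasks are plain dictionaries, and the two feasibility checks stand in for the bounds of (10), (23), and (29); this is an illustration of the flow under those assumptions, not the paper's exact implementation.

```python
# Hedged end-to-end sketch of the REWM decision flow (Algorithm 1).

def rewm_schedule(tasks, edge_servers, cloud_servers):
    """tasks: list of {'name', 'delay_bound', 'rel_bound'}, already in
    selectivity order (Step 4). Each server entry gives that server's
    per-task {'name', 'delay', 'reliability', 'energy'}.
    Returns {task name: chosen server name}."""
    plan = {}
    for task in tasks:
        def feasible(s):  # Step 5: efficiency + reliability bounds
            return (s['delay'] <= task['delay_bound'] and
                    s['reliability'] >= task['rel_bound'])
        edge_ok = [s for s in edge_servers if feasible(s)]
        if edge_ok:  # Step 6: cheapest-energy feasible edge server
            pick = min(edge_ok, key=lambda s: s['energy'])
        else:        # Steps 7-8: offload, minimize energy in the cloud
            pick = min((s for s in cloud_servers if feasible(s)),
                       key=lambda s: s['energy'])
        plan[task['name']] = pick['name']
    return plan

# A task with a 5 s delay bound: the only edge server is too slow (6 s),
# so the task is offloaded to the cloud server that meets both bounds.
plan = rewm_schedule(
    [{'name': 't1', 'delay_bound': 5, 'rel_bound': 0.9}],
    [{'name': 'e1', 'delay': 6, 'reliability': 0.99, 'energy': 1}],
    [{'name': 'c1', 'delay': 4, 'reliability': 0.95, 'energy': 3}])
```

Note that the cloud fallback raises an error when no cloud server satisfies the bounds; a production scheduler would reject or re-queue such a task instead.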

RESULTS AND DISCUSSION
In this section, experiments have been conducted to evaluate the performance of the proposed REWM model against the existing reliability-aware cost-efficient scientific (RACES) workflow scheduling strategy on the multi-cloud systems model [24]. Experiments have been conducted on the epigenomics and SIPHT scientific workflows [25], [26], which are discussed in the next sub-section. Energy consumption, computational cost, and reliability are the three factors considered to evaluate the performance of the model. The IoT-edge cloud server environment has been modeled using the SENSORIA simulator, the cloud environment is modeled using CloudSim, and the two are combined through an object-oriented programming language to build a hybrid cloud environment. The experiments have been conducted on an Intel i5 processor with NVIDIA graphics and 8 GB of RAM.

Energy consumption performance
In this section, experiments have been conducted on both the epigenomics and SIPHT workflows by varying the size of the workload from 30 to 1,000, and the energy consumed for both scientific workflows has been evaluated using the proposed REWM and the existing RACES model. From Figure 2 it can be seen that as the size of the epigenomics workload increases, the energy required for execution also increases slightly. However, using the REWM model, a significant reduction in energy consumption can be seen in comparison to the existing RACES model, showing that REWM scales from smaller workloads to significantly larger ones; on average, REWM improves energy efficiency by 16.63% over RACES. Further, from Figure 3 it can be seen that the SIPHT workload shows the same trend: energy grows slightly with workload size, and REWM again consumes significantly less energy than RACES across workload sizes, with an average energy efficiency improvement of 6.009%.

Computational cost
In this section, experiments have been conducted on both the epigenomics and SIPHT workflows by varying the size of the workload from 30 to 1,000, and the computational cost for both scientific workflows has been evaluated using the proposed REWM and the existing RACES model. The cost is measured based on the total time spent on the respective server type on the Azure cloud, as defined using (31).

total cost = X * T
where X defines the cost (measured in dollars) per second of the respective instance type and T defines the total time spent on it. More detail can be obtained from [24], [27]. From Figure 4 it can be seen that as the size of the epigenomics workload increases, the cost required for execution also increases slightly. However, using the REWM model, a significant reduction in computational cost can be seen in comparison to the existing RACES model, showing that REWM scales from smaller workloads to significantly larger ones; on average, REWM reduces cost by 4.67% compared with RACES. Further, from Figure 5 it can be seen that the SIPHT workload shows the same trend: cost grows slightly with workload size, and REWM again incurs significantly lower computational cost than RACES across workload sizes, with an average cost reduction of 6.44%.
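The billing model of (31) amounts to summing rate-times-time over the instance types a workflow occupies. The sketch below uses made-up placeholder rates, not real Azure prices.

```python
# Hedged sketch of the billing model in (31): total cost = X * T, where
# X is the per-second rate of the instance type and T the occupancy time.

def total_cost(rate_per_second: float, seconds_used: float) -> float:
    return rate_per_second * seconds_used

# Summing over heterogeneous instance types used by one workflow:
usage = [(0.00010, 1200.0), (0.00025, 300.0)]  # (X in $/s, T in s)
bill = sum(total_cost(x, t) for x, t in usage)
```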

Reliability
In this section, experiments have been conducted on both the epigenomics and SIPHT workflows by varying the size of the workload from 30 to 1,000, and the reliability of the model on both scientific workflows has been evaluated using the proposed REWM and the existing RACES model. From Figure 6 it can be seen that as the size of the epigenomics workload increases, the reliability of the REWM model increases in comparison with the existing RACES model, showing that REWM remains highly reliable from smaller workloads to significantly larger ones, with an average reliability improvement of 0.065%. Further, from Figure 7 it can be seen that the SIPHT workload shows the same trend, with an average reliability improvement of 0.115% over RACES. Across various studies, it can be seen that little work has been done on workflow scheduling problems to reduce cost and energy consumption in a heterogeneous cloud network. Existing workload scheduling models have failed to achieve a good trade-off when meeting the energy constraint and the task deadline. In this model, we have presented an efficient method that provides good trade-offs between the energy constraint and the task deadline in an edge-cloud environment for provisioning complex IoT workflows. REWM computes the delay for executing in the edge server and the task failure rate for IoT workflow execution. Then, the benefit
(i.e., the performance efficiency) of minimizing delay and failure rate by offloading execution to the cloud platform is estimated. Finally, energy is optimized to meet the task delay and processing efficiency requirements. In this way, REWM achieves much superior throughput, energy efficiency, and cost reduction for executing workflows on a heterogeneous platform, along with an increase in reliability, when compared with RACES and EMS, as proven through the experimental study on provisioning IoT workflows.

Figure 1. Heterogeneous web server architecture for workload execution

Table 1. Comparative study