An energy optimization with improved QoS approach for adaptive cloud resources

Received Apr 13, 2019 Revised Mar 6, 2020 Accepted Mar 18, 2020

In recent times, the utilization of cloud computing VMs has increased enormously in our day-to-day life due to the widespread use of digital applications, network appliances, portable gadgets, and information devices. Numerous schemes, such as multimedia signal processing methods, can be implemented on these cloud computing VMs. Efficient performance of these VMs therefore becomes an obligatory constraint, particularly for multimedia signal processing. However, high energy consumption and reduced efficiency of cloud computing VMs are key issues faced by cloud computing organizations. Therefore, we introduce a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing devices, which efficiently reduces energy consumption while completing operations in considerably less time. We demonstrate an efficient resource allocation and utilization technique that optimizes the model by reducing its different costs, and an efficient energy optimization technique that reduces task loads. Our experimental outcomes show the superiority of the proposed ACRR model over state-of-the-art techniques in terms of average run time, power consumption, and average power required.


INTRODUCTION
Due to the ever-increasing demand for and popularity of cloud computing applications, various companies have shifted their focus to cloud computing to decrease costs and make better use of resources; hence it is referred to as a next-generation computing application. Cloud computing is a novel computational model that provides on-demand resources, including network, storage, and data, to subscribers. The cloud computing model combines hardware device locations and various software resources over the cloud network to decrease management costs. Cloud computing is a highly emergent technology that offers large storage capacity and instant scalability, and works on a pay-per-use principle under which subscribers pay only for the period they use the service [1]. Cloud computing applications are divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Virtualization is the most essential technique for cloud computing applications and is used to decrease resource utilization. Moreover, virtualization makes it possible to activate numerous virtual machines (VMs) on one physical machine by allocating to each the resources that belong to the underlying hardware [2].

RELATED WORK
In recent years, the demand for cloud computing applications has grown immensely. To handle this high demand from clients, excessive resources of different types are needed. All these resources consume large amounts of electricity, so power consumption is high. According to a 2013 study in the United States of America, information cloud processing centers consumed 91 billion kWh of electricity, almost the same as the combined annual output of 34 thermal power plants of 500 MW generation capacity each. This energy would have been sufficient to power all of New York City for two years, and by 2020 the consumption was projected to rise to 140 billion kWh, an enormous amount almost equal to the combined output of 50 thermal power plants [17]. Hence, energy consumption in cloud data centers and computing processors has grown drastically. Controlling power consumption in data centers and embedded processors is therefore a vital and critical requirement that needs urgent attention. Thus, this section reviews extensive research work on energy-balanced scheduling algorithms and their connection with various embedded devices. In [18], the power consumption and performance of information processing centers in China are measured. The authors conclude that the power consumption of these centers is very high, and various techniques are introduced to reduce power consumption and enhance performance. In [19], an energy consumption model based on the server's maximum power and degree of CPU utilization is presented to predict the total power of the present server. In [20], a mobile cloud computing prototype based on dynamic energy-aware cloudlets is introduced to reduce energy consumption during wireless communication, with simulation results based on practical experiments. However, the execution time of this technique is very high, which may degrade its performance.
In [21], an efficient resource allocation model for the cloud environment is introduced, along with a review of existing scheduling and energy consumption strategies. To offer better resources in a cloud environment and improve the relationship with users, resource scheduling is an extremely essential topic, as it can improve the performance of cloud computing VMs.
In [22], a novel energy-aware VM scheduling technique is introduced, in which both network components and resources are considered to provide efficient scheduling. VM placement and VM migration are the two essential scheduling steps used to achieve this objective. The technique helps to reduce both energy consumption and network traffic. In [23], an energy-aware resource scheduling technique based on DVFS-enabled networked information processing centers is presented for cloud computing VMs. Two types of energy, computing energy and communication energy, are optimized to reduce overall energy consumption while respecting SLA constraints; however, this technique is difficult to implement in real time. In [24], an efficient cost minimization and resource utilization approach for cloud computing devices using stable parallel applications is introduced. This approach decreases cost by choosing devices that follow the principle of least resource utilization, but the difficulty lies in maintaining the trade-off between performance and energy consumption. In [25], a precise scheduling algorithm that relies upon DVFS-enabled network processing centers is introduced. It achieves efficient scheduling for cloud computing VMs; however, it introduces an optimization problem as well.
In the above works, different researchers have utilized various energy consumption and scheduling techniques. However, only a few scheduling techniques are known to work in real-time applications, owing to problems in the techniques of [18, 20, 24, 25] such as lack of balance between performance and power consumption, optimization complexity, and long run times. Therefore, to address these issues, we introduce a novel dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing devices, which efficiently reduces energy consumption while performing operations in very little time. This technique is thus highly effective at establishing a trade-off between performance and energy consumption.

PROPOSED ENERGY BALANCED SCHEDULING ARCHITECTURE
This section defines the proposed architecture and its various modules, and describes the optimization of computational and re-configuration costs in information processing centers. Figure 1 demonstrates the proposed architecture. Here, we introduce a novel adaptive cloud resource re-configurability (ACRR) technique for cloud computing VMs. The proposed technique works on the principle of parallel computing: it can handle numerous cloud computing VMs, all controlled by a central resource handler. Every cloud computing VM finishes its currently allocated task as a self-governing processor by managing its own memory and resources. A message-passing method is used for intra-cluster interaction. Whenever a new task is assigned, the central resource handler simultaneously starts resource distribution and admission control. Three vital components of the proposed technique help to achieve better resource utilization from an infrastructure perspective: information storage, a switched local area network (LAN), and the virtual machine controller (VMC), as demonstrated in Figure 1. Whenever a new task is assigned, its arrival time and its size in bits are recorded. The total processing time of the assigned task must be less than or equal to the estimated employed time, which is essential for any technique adopted in real-time scenarios. The proposed technique works with some essential parameters that are necessary in real-time scenarios: the processing task size, the maximum allowed delay in seconds, and the task granularity, which gives the maximum number of tasks (much greater than one) that can be grouped into the assigned work. We assume that the maximum number of VMs that can be utilized for the assigned tasks by the proposed method is also much greater than one, as presented in Figure 1.
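As a minimal illustration of the admission control performed by the central resource handler, the following Python sketch (all class and parameter names are hypothetical, and the paper's actual implementation is in Java) accepts a job only if it could finish within its maximum allowed delay even in the best case, i.e., split across all available VMs running at their peak rate:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A job submitted to the central resource handler (size in bits)."""
    size_bits: int
    max_delay_s: float  # maximum allowed delay for the job, in seconds

@dataclass
class ResourceHandler:
    """Hypothetical central resource handler: admission control + queueing."""
    num_vms: int          # maximum number of VMs available to the scheduler
    peak_rate_bps: float  # maximum operating rate of one VM, in bits/second
    queue: list = field(default_factory=list)

    def admit(self, task: Task) -> bool:
        # Admit only if the job can finish within its deadline even in the
        # best case: split across all VMs, each running at the peak rate.
        best_case_time = task.size_bits / (self.num_vms * self.peak_rate_bps)
        if best_case_time <= task.max_delay_s:
            self.queue.append(task)
            return True
        return False

handler = ResourceHandler(num_vms=4, peak_rate_bps=1e6)
print(handler.admit(Task(size_bits=2_000_000, max_delay_s=1.0)))  # True
print(handler.admit(Task(size_bits=2_000_000, max_delay_s=0.1)))  # False
```

Rejecting infeasible jobs at admission time keeps the later rate-selection step from being asked to meet a deadline that no frequency setting can satisfy.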
Our scheduling technique works on the principle that every VM can be treated as a virtual server that processes data at a given rate in bits per second. This operating rate can be scaled in parallel at execution time depending on the task size in bits. We assume that every task's operating rate lies in the interval from zero up to the maximum permissible operating rate.
Moreover, the task size does not affect the estimated time to complete an assigned task on a VM; this time is fixed in advance, in seconds, so that the model can be adopted in real-time scenarios. Furthermore, a VM can handle background task-loads alongside the presently assigned task. These background task-loads come from operating system (OS) programs and are assumed to be stored in the VM's basic memory; thus, they require only computing cost and do not incur interaction cost. The utilization parameter can then be expressed as in (1), which indicates that the dynamic elements of the computing energy are the most essential part of decreasing the computational cost. Let the total energy consumed by a VM, in joules, to finish a single task within the estimated time interval at a given operating rate be as defined in (2); the corresponding dimensionless ratio represents the total energy consumption of the concerned VM. For instance, the DVFS-based CPU analytical form can be described by the following equation, which can also be used to compute the relative energy cost incurred by the concerned VM for the completion of a task.
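The paper's own equations (1) and (2) are referenced by number above; as an illustration only, the standard DVFS model from the literature (an assumption here, not necessarily the authors' exact analytical form) relates dynamic power to the effective switched capacitance $C_{\mathrm{eff}}$, supply voltage $V$, and clock frequency $f$:

```latex
P_{\mathrm{dyn}} = C_{\mathrm{eff}}\, V^{2} f, \qquad V \propto f
\;\Rightarrow\; P_{\mathrm{dyn}} \propto f^{3}
```

so a fixed workload of $W$ cycles, executed in time $t = W/f$, costs energy

```latex
\mathcal{E}(f) = P_{\mathrm{dyn}} \cdot \frac{W}{f} \;\propto\; f^{2},
```

which is why running a task more slowly, whenever the deadline permits, reduces the relative energy cost discussed above.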
3.1. Modelling for task-load reduction using the proposed technique
In this section, modelling for task-load reduction is discussed. Let the number of non-overlapping tasks that can be executed in parallel be the minimum of the task granularity and the number of available VMs. A task of a given size is assigned to a computing VM, and the processing time of different tasks does not depend on the task length. The processing rate, in bits per second, is therefore defined by (4), which shows that the maximum permitted length for a task is the product of the estimated employed time and the maximum operating rate. The total size of a job, in bits, is distributed by the task scheduler shown in Figure 1 into tasks of positive size. To reduce task loads, we divide the total job size into parallel tasks whose sizes sum exactly to the job size.
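The task-load reduction step above, splitting a job into n non-overlapping parallel tasks whose sizes sum to the job size, can be sketched as follows (a hypothetical helper that assumes a near-even split; the paper's scheduler may distribute sizes differently):

```python
def split_job(total_bits: int, granularity_m: int, num_vms_M: int) -> list[int]:
    """Split a job of `total_bits` into n = min(m, M) non-overlapping
    parallel tasks whose sizes sum exactly to the job size."""
    n = min(granularity_m, num_vms_M)
    base, rem = divmod(total_bits, n)
    # Spread the remainder one bit at a time so task sizes differ by at most 1.
    return [base + (1 if i < rem else 0) for i in range(n)]

sizes = split_job(total_bits=10_000, granularity_m=8, num_vms_M=3)
print(sizes)       # [3334, 3333, 3333] — n is capped by the 3 available VMs
print(sum(sizes))  # 10000 — the split conserves the total job size
```

Capping n at the number of VMs reflects the constraint that no more tasks can run in parallel than there are processors to host them.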

Optimization of reconfiguration cost using proposed technique
This section provides detailed modelling for the optimization of the reconfiguration cost. The VM module controller performs two key operations: balancing the task loads and controlling the virtual machines. To control the virtualization layer demonstrated in Figure 1, the virtual machine controller (VMC) is required; it achieves the final mapping of VM resources onto the numerous computing VMs. The VMs' characteristic parameters can be described by (9), where all these parameters are stated by the virtualization layer and then transmitted to the VMC, as demonstrated in Figure 1. The operating rate can be scaled up or down using an efficient frequency-scaling scheme controlled by the VMC. The energy consumed while switching from operating frequency f1 to frequency f2, in joules, depends mainly on the technique used and on the CPUs present in the workstation. This switching-cost function has the following properties: it depends on the frequency gap |f1 - f2|, it becomes zero at f1 = f2, it is non-decreasing in the frequency gap |f1 - f2|, and it is jointly convex in f1 and f2. Our model has the characteristics shown in (10), where the coefficient represents the reconfiguration cost per unit of frequency switching and is bounded to at most a few hundred units per squared frequency step. In our model, for every job the size remains the same over the respective operating time, and no fluctuations occur in the task-loads during task execution. Various tasks can be executed in parallel at run time, because the time overhead induced by the frequency-scaling technique is very small for DVFS-enabled architectures. The formulation above assumes that the utilization parameter is continuous-valued and therefore requires continuous computational rates.
In practice, the VMC may offer CPUs that expose only a finite set of discrete computational rates, as in (7). The optimality loss between the continuous and discrete DVFS-enabled formulations can be eliminated by (8), where the discrete value set represents the frequency set shown in (7). A virtual power consumption curve can be formed by piecewise linear interpolation over the permitted operating points, as in (9), with the corresponding vertex points defined accordingly. This construction maintains continuity and can be used for the provisioning of resources. It implies that, with the help of the interpolated curve, the average energy cost of DVFS-enabled techniques stays within the estimated time duration. Here, every configuration depends on the CPU type, memory size, and cost per unit time, and the cost depends on the type of configuration. The internal cost of the VMC is assumed to be zero in all information cloud centers.
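The switching-cost properties and the discrete frequency set can be illustrated with the following sketch. A quadratic cost is assumed for illustration, since it satisfies all the stated properties (zero at equal frequencies, non-decreasing and jointly convex in the gap); the coefficient `k_e` and the frequency values are hypothetical:

```python
def reconfig_cost(f1: float, f2: float, k_e: float) -> float:
    """Switching cost in joules: zero at f1 == f2, non-decreasing and
    jointly convex in the gap |f1 - f2|. A quadratic form (assumed here)
    has exactly the properties listed in the text."""
    return k_e * (f1 - f2) ** 2

def quantize_rate(f: float, freq_set: list[float]) -> float:
    """Map a continuous computational rate onto the nearest frequency in
    the finite set actually exposed by a DVFS-enabled CPU."""
    return min(freq_set, key=lambda q: abs(q - f))

freqs = [0.8, 1.2, 1.6, 2.0]             # GHz, an assumed discrete set
print(reconfig_cost(1.2, 1.2, k_e=0.5))  # 0.0 — no switch, no cost
print(quantize_rate(1.45, freqs))        # 1.6 — nearest available step
```

Quantizing the continuous solution onto the discrete set is what the piecewise linear interpolation of the power curve compensates for, so that the average energy cost still behaves as in the continuous model.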

Modelling of efficient resource allocation
In this section, modelling for efficient resource allocation is presented. The VMC offers two types of service: load balancing and sharing of computational resources. Precisely, these services fine-tune the computation rate and the task size of each of the DVFS-enabled cloud computing VMs demonstrated in Figure 1. The main objective is to minimize the total computational energy in joules, as defined in (11); this energy depends on the run time per job in seconds and the operating time per task needed by the VMs demonstrated in Figure 1, where all the links are operated adaptively by the switching unit. The entire computational overhead of each link can be expressed as in (12), and the condition on the total run time per job that the solution of the optimization problem must satisfy is given in (13). The total computational energy optimization problem can then be expressed as in (14), subject to non-negative task sizes, where the first term of (14) represents the computational energy and the second term represents the re-configuration energy incurred jointly by the computing VMs. Moreover, in (14), one variable represents the present computation rate and another the required computation rate; the former remains constant while a task is processed and reflects the present state of the VM, whereas the latter is variable and changes. Finally, (15) gives the condition under which the assigned task must be executed within the allowed time in seconds, and (16) the condition under which the assigned job must be divided into parallel tasks.
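As a toy illustration of this objective, computational energy plus re-configuration energy minimized subject to the deadline constraint, the following sketch assumes quadratic computing and switching energy models (assumptions for illustration only) and uses a brute-force search over an assumed discrete rate set; the full formulation in (14) to (18) is a coupled optimization across VMs, not this simple per-task search:

```python
def total_energy(f: float, f_prev: float,
                 e_max: float, f_max: float, k_e: float) -> float:
    """Computing energy (quadratic in the rate, per the standard DVFS
    model) plus the re-configuration energy paid to move from f_prev to f.
    All coefficients are illustrative assumptions."""
    computing = e_max * (f / f_max) ** 2
    reconfig = k_e * (f - f_prev) ** 2
    return computing + reconfig

def best_rate(task_bits: int, deadline_s: float, f_prev: float,
              freq_set: list[float], e_max: float, f_max: float,
              k_e: float) -> float:
    """Pick the feasible discrete rate (task finishes within the deadline)
    that minimizes total energy; a tiny grid search stands in for the
    solver used by the full ACRR formulation."""
    feasible = [f for f in freq_set if task_bits / f <= deadline_s]
    return min(feasible,
               key=lambda f: total_energy(f, f_prev, e_max, f_max, k_e))

freqs = [0.5e6, 1.0e6, 2.0e6]  # bits/s, an assumed discrete rate set
f = best_rate(task_bits=800_000, deadline_s=1.0, f_prev=1.0e6,
              freq_set=freqs, e_max=10.0, f_max=2.0e6, k_e=1e-12)
print(f)  # 1000000.0 — fast enough for the deadline, cheapest in total
```

Note how the re-configuration term biases the choice toward the VM's current rate: jumping to the top frequency would meet the deadline too, but pays both a higher quadratic computing cost and a switching penalty.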

Solution of the optimization problem using ACRR
This section provides the solution for handling the optimization problem using our efficient scheduling architecture ACRR. First, let the time delay incurred while switching frequencies, which is induced by DVFS-enabled techniques, remain constant. The switching delay between the various frequencies can then be captured by a non-negative function, and the solution of the optimization problem stated in (15) and (18) can be derived. This non-negative function has two characteristics:
a. It is non-decreasing throughout the frequency gap.
b. Its product with the time duration remains convex in the operating rate.
To handle the optimization problem, note that the first term in (14) is not convex; in fact, the problem in (14) to (18) is a loosely coupled optimization problem in which the per-VM rate parameters represent the computational sub-problems. The solution of the combined computational and re-configuration optimization problem in (15) to (18) is then derived, yielding the corresponding solution set of optimal rates and task sizes. Finally, we present our proposed model in an efficient algorithmic form, as follows:

PERFORMANCE EVALUATION
Nowadays, the demand for cloud computing devices has risen sharply due to the extensive use of information devices, digital instruments, network appliances, and portable gadgets. Multimedia signal processing is a well-known technique that can be utilized on these cloud computing devices, so the performance of these devices must be superior, given their extensive demand in day-to-day life. However, high energy consumption in these devices can disturb their performance. This section therefore discusses the balance between performance and power consumption. To achieve these objectives, we have introduced a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for heterogeneous computing devices, which efficiently reduces energy consumption while providing superior performance. The run time is evaluated for job counts of 30, 50, 100, and 1000. Graphical representations of the outcomes are presented in terms of execution time, number of tasks, and energy consumption. The run time and total power consumed are evaluated using the parameters in Table 1, as demonstrated in the following section. Our proposed model is tested in the Eclipse Neon.3 editor, and the code is written in Java.

Comparative study
In this modern era, computing devices have come to dominate the market in different fields such as medicine, healthcare solutions, trading, and software companies. Future expertise is clearly in favor of these cloud computing devices because of their extensive requirements. However, the efficiency of these devices may be reduced by high energy consumption and the lack of efficient resource utilization techniques. These issues can be resolved using efficient task scheduling. Therefore, to allocate resources properly and schedule all tasks efficiently, thereby overcoming the power consumption problem, we have presented a novel dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique. A precise task scheduling technique can enhance system throughput, improve interactions with subscribers, offer better resource utilization, and handle multiple tasks at a time. The results are demonstrated against other state-of-the-art techniques in terms of energy consumption, run time, power sum, and average power, as shown in Table 1.

Graphical representation
This section presents the graphical representation of the evaluated outcomes. Figure 2 shows the run-time comparison of our proposed technique with the DVFS technique on a scientific workload for job counts of 30, 50, 100, and 1000. Figure 3 shows the corresponding power-sum comparison, Figure 4 the average-power-required comparison, and Figure 5 the power-consumption comparison, all on the same scientific workload and job counts. Finally, Figure 6 shows the average run-time comparison of our proposed technique with the DVFS technique.

CONCLUSION
Controlling high energy consumption and allocating task loads for every cloud computing VM is very important. Therefore, to balance power consumption and performance for computing processors in a cloud environment, we have introduced a novel dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing VMs. Efficient modelling of the costs of the model, including the computation cost and the re-configuration cost, is presented, and the performance of the model is enhanced by reducing these costs. Furthermore, modelling for efficient resource allocation and utilization is presented, and task loads are reduced, which is a great challenge for other state-of-the-art techniques. Numerous resources can be efficiently allocated in an adaptive fashion at the same time using this technique. The experimental results are presented in terms of run time, reduction in energy consumption, and average power required for cloud computing VMs.