A NURBS-optimized dRRM solution in a mono-channel condition for IEEE 802.11 enterprise Wlan networks

Dynamic Radio Resource Management, RRM, is an essential design block in the functional architecture of any Wifi controller in IEEE 802.11 indoor dense enterprise Wlans. In a mono-channel condition, it helps tackle the co-channel interference problem and enrich the end-to-end Wifi client experience. In this work, we present our dRRM solution, WLCx, and demonstrate its performance over related-work and vendor approaches. Our solution is built on a novel and realistic per-Beam coverage representation approach. Unlike the other RRM solutions, WLCx is dynamic: even the system calculation parameters are processed. This processing comes at a price in terms of processing time. To overcome this limitation, we constructed and implemented a NURBS surface-based optimization of our RRM solution. Our NURBS optimized WLCx solution, N-WLCx, achieves a 92.58% time reduction in comparison with the basic WLCx. Furthermore, our optimization could easily be extended to enhance other RRM solutions, from both vendors and research.


INTRODUCTION
The Wifi controller is the central component of an enterprise Wlan network architecture. All network access points get their radio configuration from this controller, especially what radio/channel to use and at what transmit power. The controller plays another important role in Wlan integration with the other parts of the enterprise network: the Lan, Local Area Network, the Wan, Wide Area Network, and the Dcn, Datacenter Network, where application servers are located. Looking closely at the controller functional architecture, it is the RRM block that processes the radio plan. RRM controls access points' transmit parameters so as to minimize interference and optimize spectrum utilization. But how does RRM decide on what channel an access point should use, and at what transmit power? To build an efficient radio plan that maximizes the network capacity, the controller needs data from access points, Wifi clients, wired network devices, and servers. This data pertains to the quality of the radio interface and the client's overall experience when accessing services. But this information is not sufficient to hint at the whole coverage quality, such as the interference, at any point of the coverage area; it is limited to the coverage points where access points and clients are located. To overcome this limitation, either we place sensors everywhere, which is not economically feasible in an enterprise network, or we model the coverage area. Vendors do coverage modeling in a lab context to provide strict recommendations that customers should follow to build their networks. This approach works in common situations but requires a lot of engineering effort and monitoring to maintain the network in an optimal condition. In some situations, it may simply not work, or it may falsify the transmit opportunity estimation. For the rest of this work, this approach is referenced as static RRM, or sRRM.
The third alternative is to allow the controller to do more complex real-time processing, with few or no preconfigured settings, to find the optimum RRM configuration to apply network-wide. This approach is the focus of this study and will be referenced as dynamic RRM, or dRRM. A controller that supports dRRM does not rely on any preconfigured settings, in hardware or software, to decide how to modify the radio plan to meet its utility function. In dRRM, even the system parameters are processed to optimize the network capacity, which differs from sRRM. But the advantage of dRRM comes at a high price in terms of time and system resource consumption to process the whole network coverage and adapt to changes. The aim of this study is to reduce the required processing time of dRRM, as described in this work [1].
In this work, we present our dRRM optimization solution algorithm, N-WLCx, which is based on concepts generally encountered in the CAGD, Computer Aided Graphical Design, field: Bézier curves and NURBS surfaces. Our solution approach is built on a novel and realistic per-Beam coverage representation that is different from the research models: per-Range and per-Zone. In these works [1,2], we detail our coverage representation approach and demonstrate how it generalizes the other common and advanced literature approaches. In this study, we show that our optimization achieves a 92.58% time reduction by processing only 6.5% of the available coverage points on average. This result is more significant than the 79.99% time reduction we achieved in this work [3] and its extension [4]. In Section 2, we present how related work, researchers and vendors, processes RRM. In Section 3, we present our dRRM solution and compare it to vendors' solutions in processing the radio coverage. Before the problem statement, in Section 5, and the presentation of our solution, in Section 6, we introduce, in Section 4, some important facts about Wlan network design, coverage representation models and NURBS surfaces. Section 7 is dedicated to the simulation and evaluation of our optimization. In the conclusion, we recall our achievements and outline future work. This paper is an extension of the work originally presented at the 2017 15th International Conference on Wired/Wireless Internet Communications (IFIP WWIC) [5]. In this extended version, in addition to the original version, we evaluate our solution results in depth, in terms of processing time and accuracy of results, visually and statistically. We also explore the effect of modifying the number of control points in a very large coverage area. We significantly enhance the NTO-CP algorithm Part 2 procedure and clarify the purpose of the coverage area zoning.

DRRM RELATED WORK
In this section, we discuss RRM approaches from research and from leading vendors of the Wifi market, such as Cisco and Aruba-HPE, that pertain to enterprise Wlan networks. We are interested in algorithms that modify the transmit power of APs in order to maximize the network capacity or optimize radio resource utilization. An algorithm is different from another when the variables used are different. In this preliminary work, we discuss a mono-channel condition.

In research
The first category of approaches concentrates on lower-layer constraints: co-channel interference, physical interface and MAC performance. The authors in these works [6][7][8] modeled the coverage area per-range: transmit, interference, and no-talk ranges, using a circular or disk pattern. The way this model represents the coverage is common but may not hint at some opportunities to transmit, as discussed in this work [2]. The author in this work [9] focused instead on the interaction that an AP may have with its neighboring APs. The result is a per-zone, Voronoi zone, negotiated coverage pattern. This model is difficult to put into practice technologically and economically, as discussed in [1,2]. Both models, per-zone and per-range, do not take upper-layer constraints into account.
Another set of similar works tackles the issue from a power-saving perspective. The authors in [10] build their on-demand Wlan approach on the observation of idle APs that have no clients associated to them. The Wlan controller manages whether an AP is activated. In this work [11], the authors build a radio environment map to dynamically allocate the spectrum among stations. This map considers the station locations and power models to minimize the outage probability and reduce service blocking to users.
A third category of approaches tackles the issue from an upper-layer perspective for applications such as FTP and HTTP. This work [12], as an example, presented an interesting idea to find a suitable power, or RRM, scheme that may optimize the application performance. It is a per-experience approach that requires a huge amount of data to be put into practice, and it is very dependent on the behavior of the coexisting applications. Another challenge is to determine when the physical layer is responsible for the observed performance. Works like [13] use concepts from Game theory, a powerful tool, to model the interactions between APs. These concepts are applied to the user's perception of the QoS it receives. The same limitation of the previously cited work applies to this one as well. A fourth category tackles the problem from an inter-protocol cooperation point of view, as in this example [14]. Making the protocols aware of each other is a good strategy to find an optimum inter-protocol negotiated power scheme that optimizes the performance of each of them individually. It is an idealistic scheme, difficult to put into practice technologically and economically with regard to vendors' offerings. Let us imagine the integration of a Wifi and a Bluetooth network. The impact of a Wifi AP on a Bluetooth network is very important, but not the opposite. Then, as an example, it is necessary to find a way to provide the network controller (for both Wifi and Bluetooth) with feedback so it can adjust the Wifi power scheme to allow an optimum Bluetooth operation. This would require important data transfers (and power) from the Bluetooth network to the controller, which is, by design of Bluetooth devices, very difficult to implement.

A fifth category of approaches, such as [15][16][17][18][19], concentrates on the environmental variables that may affect the phenomena under study. In this work [15], the RRM policy is issued by a learner repeatedly to train the general RRM model. In [19], the authors apply deep learning principles to the stations' power scheme. The outcome of these methods depends heavily on the quality of the training step.

Vendor solutions
The approach, or theoretical background, behind the vendor implementations is generally hidden for commercial purposes; only the settings (recommendations) are provided by those vendors. The Cisco TPC, Transmit Power Control, algorithm, which is a part of Cisco RRM, processes, at each AP, the desired transmit power hysteresis, Tx_{Hysteresis,Current}, which is equal to the sum of the current transmit power (initially at maximum), Tx_{Current}, and the difference between the power threshold, Tx_{Thresh}, and RSSI_{3rd}, the RSSI reported by the third neighbor. If the difference between the processed power and the current one is at least the hysteresis threshold, Tx_{Hysteresis,Thresh}, of 6 dBm, then the current power must be reduced by 3 dB (by half). The algorithm then waits 10 minutes before re-attempting another calculation. Details about this implementation are given in [20].
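As a minimal sketch of the TPC decision rule described above (our reading of [20]; the function name, parameter names and the minimum-power floor are illustrative, not Cisco source code):

```python
# Sketch of a Cisco-style TPC step: compute the desired power from the
# threshold and the third-neighbor RSSI, and only reduce the current
# power when the gap exceeds the 6 dB hysteresis, in 3 dB steps.
def tpc_step(tx_current_dbm, tx_thresh_dbm, rssi_3rd_dbm,
             hysteresis_db=6, step_db=3, tx_min_dbm=2):
    """Return the transmit power after one TPC evaluation."""
    tx_desired = tx_current_dbm + (tx_thresh_dbm - rssi_3rd_dbm)
    if tx_current_dbm - tx_desired >= hysteresis_db:
        # Reduce by one step only; re-evaluation happens 10 minutes later.
        return max(tx_current_dbm - step_db, tx_min_dbm)
    return tx_current_dbm
```

For example, with a current power of 17 dBm, a threshold of -70 dBm and a third-neighbor RSSI of -60 dBm, the desired power is 7 dBm, the 10 dB gap exceeds the hysteresis, and the AP steps down to 14 dBm.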
Aruba-HPE adopts another strategy. The ARM, Adaptive Radio Management, algorithm maintains two measures for every channel: a coverage index, cov_idx, and an interference index, ifer_idx. The decision to increase or decrease the transmit power level on a given channel is based on the processed coverage index as compared to the "ideal" coverage index, noted cov_{idx,ideal}, and the "acceptable" coverage index, cov_{idx,acceptable}. As a general rule, the current coverage index should be greater than cov_{idx,acceptable} and equivalent to cov_{idx,ideal}. The coverage index, cov_idx, corresponds to the sum of two variables, x and y: x is the weighted average of all other APs' SNR as measured by the current AP, and y is the weighted average of the x variables processed by other APs from the same vendor and on the same channel. The same applies to the ifer_idx processing. Details of this calculation are in [21].
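The coverage index arithmetic described above can be sketched as follows; the weighting values are illustrative assumptions, since Aruba does not publish its actual weights:

```python
# Sketch of the ARM coverage index: cov_idx = x + y, where x is a weighted
# average of neighbor SNRs measured locally and y is a weighted average of
# the x values reported by same-vendor, same-channel peers.
def weighted_avg(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def coverage_index(other_ap_snrs, snr_weights, peer_x_values, peer_weights):
    x = weighted_avg(other_ap_snrs, snr_weights)
    y = weighted_avg(peer_x_values, peer_weights)
    return x + y
```

The controller would then raise power while cov_idx stays below cov_{idx,acceptable} and lower it once it exceeds cov_{idx,ideal}.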
Fortinet Auto Power, which is a part of the ARRP, Automatic Radio Resource Provisioning, solution, works by automatically reducing the transmit power if the transmit channel is not clear. From the corresponding documentation [22], it is an alternative to manually limiting the number of neighbors per channel (to less than 20) by adjusting the transmit power level.

OUR WLCX DRRM SOLUTION
Our WLCx dynamic RRM solution is based on the per-Beam coverage representation we discuss in the upcoming section. Our solution is "dynamic" because even the parameter values are processed, especially the optimum number of supported directions per AP in the case of the WLC2 variant. The workflow in Figure 1 describes how our solution works.
Our solution runs three algorithms: TDD (Discovery), TDM (Map) and TDO (Opportunity). After initialization, TDD optimizes the number of supported directions per AP by reducing the power level and doubling the initial number of directions until all neighbors are discovered and at most one neighbor is discovered per AP direction. Based on information from TDD, TDM categorizes the coverage area points into categories that hint at how these points appear on AP directions. Each category is assigned a cost that hints at its probability of getting a fair transmit opportunity. TDO's aim is to process each coverage area point's opportunity to transmit, taking into account data from TDM and the SLA (upper-layer input). We simulate, using Matlab 2019a, two variants of our WLCx solution: WLC1 and WLC2. In WLC1, all APs share the same optimal number of supported directions and transmit at the same power level. In WLC2, APs process the same optimal number of supported directions but may use a different transmit power level per AP. In the same simulation, we compare both WLCx variants to a vendor implementation: Cisco. We evaluate the models based on their performance at processing the coverage and the time this processing takes.
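The TDD loop described above can be sketched as follows; `discover`, the power step, and the bounds are illustrative stand-ins for the real over-the-air discovery step:

```python
# Sketch of the TDD idea: reduce power and double the number of directions
# until every direction sees at most one neighbor (or limits are reached).
# discover(n_dirs, power) must return the neighbor count seen per direction.
def tdd(discover, n_dirs=4, power=20, max_dirs=64, min_power=2, power_step=3):
    while True:
        neighbors_per_dir = discover(n_dirs, power)
        if max(neighbors_per_dir) <= 1 or n_dirs >= max_dirs or power <= min_power:
            return n_dirs, power
        n_dirs *= 2
        power -= power_step
```

With a toy `discover` that spreads six neighbors uniformly over the directions, four directions see up to two neighbors each, so TDD doubles to eight directions and settles there.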
The coverage processing performance, Pr(), of a given model, m, is calculated in (1). I(), H() and O() are the model's processed interference, number of coverage holes and transmit opportunity, respectively. The performance calculation in (1) is the weighted sum of the relative interference, opportunity and coverage holes in each model. The weights K_1, K_2 and K_3 hint at how important the processing of interference, opportunity or holes is to the performance of a given model. For the rest of our study, we consider all variables to be of equal importance, with equal weights. The diagram in Figure 2 shows the performance of the models after 10 iterations of the same simulation. Each simulation corresponds to a random distribution of a set of 30 APs and 100 WDs. We verify that our WLC2 solution variant performs better than Cisco and WLC1. The Cisco model performance is comparable to WLC1. The processing time of the models is represented in Figure 3. The models have a comparable processing time over a large number of iterations of the same simulation. In work [2], we discuss our WLC2 dRRM solution.
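A minimal illustrative sketch of a weighted-sum performance measure in the spirit of (1); the sign conventions and normalization here are assumptions for illustration and the exact form is given in (1):

```python
# Weighted sum over relative interference I, coverage holes H and
# opportunity O, with equal weights K1 = K2 = K3. Lower interference and
# fewer holes are treated as better, higher opportunity as better.
def performance(I, H, O, k1=1/3, k2=1/3, k3=1/3):
    return k1 * (1 - I) + k2 * (1 - H) + k3 * O
```

A model with no interference, no holes and full opportunity scores 1.0 under this sketch.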
For further details about our solution, refer to [1], which is an extension of the previous work.

THEORETICAL BACKGROUND
Before we dive into the description of the problem, let us recall some facts about Wlan enterprise network architecture design, the importance of coverage representation for radio planning, and the NURBS surface concepts that are the foundation of our NURBS optimized WLCx dRRM solution.

Wlan Enterprise networks
In a standalone AP-based Wifi architecture, the network capacity does not scale with dense, frequently changing radio environments comprising large numbers of Wifi clients and APs. To optimize the network capacity, some kind of coordination and control, distributed or centralized, is needed. In UWA, Unified Wifi Architecture, a WLC, Wireless LAN Controller, acts as a repository of AP intelligence, runs routines to plan radio usage, provides an interface to the wired network, etc., and guarantees conformance to policies, QoS and Security, domain-wide, including the LAN, MAN, WAN, and DCN network parts. A typical enterprise Wlan architecture is given in Figure 4. Two market-leading implementations of such WLCs are the Cisco 8540 Wireless Controller and the Aruba 7280 Mobility Controller. The rest of our study focuses on the Cisco implementation. In Figure 4, APs are located closest to the Wifi clients, WDs. All APs are connected to the LAN and are associated, via VPNs, Virtual Private Networks, or tunnels, to the controller, WLC, located at the Datacenter, in a Hub and Spoke architecture. Depending on the network size and requirements, the controller may be located at the same location as the APs. To build an association, an AP should be able to reach the controller via the MAN, WAN or internet. After an AP's successful association to the controller, WDs start their association process, which includes authentication, to the Wlan. After successful association, WDs are able to access network resources behind the controller or, in some configurations, behind the APs (FlexConnect or Local Switched mode).
The WLC receives information about the network from three sources: the wired path toward the datacenter, the radio interface counters of each associated AP, and OTA, Over-The-Air, AP-to-AP wireless messages over a dedicated low-speed radio. In the case of Cisco, two protocols are available for exchanging data between APs, and between APs and the WLC: (a) CAPWAP, Control and Provisioning of Wireless Access Points, is used by APs to build a protocol association to the RF group leader WLC and for control and data exchange. (b) NDP, the Neighbor Discovery Protocol, allows APs to send OTA messages and exchange standard and some proprietary control and management information.
In addition to these protocols, Cisco APs embed a set of on-chip features such as CLIENTLINK and CLEANAIR. CLEANAIR enables the APs to measure real-time radio characteristics and send them to the controller via the already established CAPWAP tunnels. Cisco appliances such as Cisco Prime Infrastructure (CPI) and the Mobility Services Engine (MSE), shown in Figure 4, extend the capability of this feature to process analytics on Wifi client presence, interfering device management and heatmap processing. CLIENTLINK version 4.0 is Cisco's AP-level implementation of MU-MIMO IEEE 802.11ac beamforming. It works independently of CLEANAIR after the assessment of the channel quality. In this scheme, an AP sends a special sounding signal to all its associated WDs, which report their signal measurement back to this AP. Based on this feedback, the AP, and not the controller, decides how much steering toward a specific WD is needed to optimize the energy radiation.

Coverage representation and processing
We categorize related work's coverage representation models into three categories: Range-based, Zone-based and Beam-based. In the upcoming subsections, we describe each of them and discuss their limitations. In the Range-based category of models, it is common to represent an AP's wireless coverage as a transmission, interference or no-talk range. The processing of these ranges is based on the estimation of the distance between the AP and a receiving point P (AP or WD). Further, this category of coverage representation models considers that an AP's coverage pattern is omnidirectional, with the geometric shape of a circle or a disk, centered at the AP, as in Figure 5. In this scheme, the interference, for example, at any given point is approximated by the weighted intersection of all interfering devices' patterns at this point.
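The Range-based membership test described above reduces to pure distance thresholds; a minimal sketch (the radii and names are illustrative):

```python
# Range-based classification: a point's relation to an AP is decided only
# by its distance to the AP against the three concentric radii.
import math

def range_class(ap, p, r_tx, r_interf, r_notalk):
    d = math.dist(ap, p)
    if d <= r_tx:
        return "transmission"
    if d <= r_interf:
        return "interference"
    if d <= r_notalk:
        return "no-talk"
    return "out-of-range"
```

This is exactly the limitation discussed below: the classification cannot account for obstacles or directional effects, only for distance.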

Figure 5. A per-Range model coverage pattern
In the Zone-based category of models, an AP's coverage is not only a function of its transmission characteristics (channel, power level, etc.) but depends also on the neighboring APs. As a result, the transmission shape is no longer a solid circle but a convex polygon with straight sides. Each straight side defines a borderline that separates two neighboring APs' transmission ranges. The stronger an AP's transmit power, the farther away the borderline with its neighboring APs. Further, it is important to note that a point in the transmission zone of one AP cannot be in another AP's transmission zone. An example of a Zone-based AP wireless coverage is represented in Figure 6. In this scheme, the interference caused by the transmission ranges in the previous model is totally cancelled; only interference caused by the other ranges, interference and no-talk, is still present. The previous two models, Range- and Zone-based, come with these limitations: (a) both models consider that the strength of interference is only inversely (or inverse-quadratically) proportional to the distance of an AP from its interfering neighbors, (b) both models would interpret an increase in a transmission power level as an expanded reach in all directions, uniformly in the case of Range-based models but depending on neighboring APs in the case of Zone-based ones, (c) a point cannot be in the transmission ranges of two different APs at the same time in Zone-based models, (d) both models would falsely interpret obstacles to signal propagation, as a weaker signal from an AP in the context of indoor Wlans does not necessarily mean that this AP is out of reach, and (e) alternatively, a stronger signal from an AP does not necessarily mean that this AP is within reach: it may be guided or boosted under some conditions.
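A minimal sketch of the Zone-based (Voronoi-like) membership rule described above; the power-minus-distance score is an illustrative assumption that makes the borderline move away from the stronger AP, and with equal powers it reduces to nearest-AP assignment:

```python
# Zone-based ownership: each point belongs to exactly one AP's transmission
# zone, the one with the strongest claim (power offset by distance).
import math

def zone_of(point, aps):
    """aps: list of (x, y, tx_power); returns the index of the owning AP."""
    return max(range(len(aps)),
               key=lambda i: aps[i][2] - math.dist(point, aps[i][:2]))
```

Raising an AP's tx_power in this sketch pushes its borderlines outward, illustrating limitation (b), and no point can ever belong to two zones, illustrating limitation (c).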
The consequence of these limitations, for any Range- or Zone-based representation model of coverage and regardless of the RRM solution built upon it, is to falsify the transmit opportunity processing and misinterpret some phenomena encountered in the specific context of indoor enterprise Wlans.
To overcome the limitations of the previous models, our Beam-based coverage representation defines, for each AP, a number of directions over which it may transmit. Depending on the number of directions, their order and their transmit power levels, an AP may be able to mimic a Range- or Zone-based scheme. Figure 7 shows a per-Beam coverage pattern example. In this pattern, the APs have an equal number of directions, equal to eight, that are uniformly distributed and of equivalent transmit power. In works [1,2], we discussed in detail how the per-Zone and per-Range representation models are generalized by the per-Beam representation and how our representation model could solve the previous models' limitations, through per-direction transmit power control, coverage hole reduction, obstacle detection, client localization and transmit opportunity maximization.
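The per-Beam geometry with uniformly distributed directions can be sketched as follows; the sector-indexing scheme is an illustrative reading of the representation, not our full implementation:

```python
# Per-Beam representation: an AP exposes n_dirs uniformly spread angular
# sectors, each potentially with its own transmit power; a point is served
# by the beam whose sector contains it.
import math

def beam_index(ap, point, n_dirs):
    angle = math.atan2(point[1] - ap[1], point[0] - ap[0]) % (2 * math.pi)
    return int(angle // (2 * math.pi / n_dirs))
```

Because each sector carries its own power level, lowering one beam's power shrinks reach in that direction only, which is what the Range- and Zone-based models cannot express.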

NURBS surfaces
The processing of coverage may consume considerable time and system resources. In this work, we propose an approach to alleviate this processing using NURBS surfaces, which are a generalization of Bézier curves and B-Spline surfaces. For complete details about these concepts, please refer to this book [23]. A NURBS surface is described in (2).
The NURBS surface S() in (2) is associated with two degrees, q and p, that correspond to the number of control polygons and the number of control points. B_{i,j} is the B-Spline surface given in (3).

In [24,25], NURBS surfaces are built using B-Splines, which are an application of Bézier curves; it is required that the control points or polygons be of the same number. Using B-Splines to process NURBS surfaces introduces the utilization of knots. The nodal vector defined by these knots subdivides the parametric space into the corresponding points t_0, t_1, ..., t_{m+p+1} and t_0, t_1, ..., t_{n+q+1}. The first set of points corresponds to the control polygons and the second set to the control points. A control point or control polygon becomes active when the parameter enters the corresponding parametric interval. The drawing of a NURBS surface depends on some important properties: (a) the positioning and concentration of the control points, (b) the weights associated with these control points, and (c) the nodal vector. In the upcoming sections, we show how our solution uses these important NURBS surface concepts to optimize the coverage processing.
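As a concrete illustration of the recursion behind B_{i,j} and the weighted surface S(), a minimal Cox-de Boor evaluation sketch; the degrees, knot vectors and toy control net are illustrative, and [23] gives the full treatment:

```python
# Cox-de Boor recursion for a single B-spline basis function, then a
# rational (NURBS) surface point as a weighted sum over the control net.
def bspline_basis(i, p, u, knots):
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, v, P, W, p, q, U, V):
    """Evaluate S(u, v) for scalar control values P with weights W."""
    num = den = 0.0
    for i in range(len(P)):
        for j in range(len(P[0])):
            b = bspline_basis(i, p, u, U) * bspline_basis(j, q, v, V) * W[i][j]
            num += b * P[i][j]
            den += b
    return num / den if den else 0.0
```

With degree 1, unit weights and a 2x2 net, the evaluation reduces to bilinear interpolation of the control values, which makes the role of control point concentration easy to see.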

PROBLEM STATEMENT: TIME PROCESSING OF COVERAGE
Coverage processing includes the calculation of interference, opportunity and coverage holes, as per our Beam-based representation model, which is a generalization of the previous-work models such as the Range- and Zone-based representation models.
For the problem description, let us define: P_i, a coverage point; L_{j,k}, direction number k of AP_j; C_i, the sensitivity of point P_i at reception; C_{i,1}, the transmission range of the AP to which P_i is associated; C_{j,2}, the interference range of AP_j; C_{j,3}, the no-talk range of AP_j.
We show in (4) the interference I_B() that is calculated by WLC2, our WLCx dRRM solution variant, using the Beam-based representation model. The interference processed by this model at a point P_i corresponds to the sum of the intersections, Sc(), of all APs' beam patterns with C_i, and of their interference and no-talk ranges with C_{i,1}, the transmission range of the AP to which the point P_i is associated.
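The interference sum just described can be sketched as follows; `Sc` is abstracted as a callable and the dictionary field names are illustrative:

```python
# Sketch of the Beam-based interference accumulation in the spirit of (4):
# at point P_i, add Sc() of every AP's beam pattern against C_i, plus the
# AP's interference and no-talk range overlaps against C_{i,1}.
def interference_B(point_i, aps, Sc):
    total = 0.0
    for ap in aps:
        for beam in ap["beams"]:
            total += Sc(beam, point_i["C"])
        total += Sc(ap["interf_range"], point_i["C1"])
        total += Sc(ap["notalk_range"], point_i["C1"])
    return total
```

Note that the cost of this loop scales with the number of APs times the number of directions, which is exactly the term the optimization in the next sections targets.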
For the opportunity calculation, let us define: s_{1,i}, the passive survey result at a coverage point P_i; s_{2,i}, the active survey result at a coverage point P_i.
In (5), we give the opportunity calculated by the WLC2 model using our Beam-based representation model, O_B(). The opportunity is inversely proportional to the interference calculation and also reflects the result of surveys on the active and passive network paths, s_{1,i} and s_{2,i}. Passive surveys allow the controller to gather statistics and metrics from the network devices and attached interfaces on the network path between the client and the server, such as the number of transmit errors, the number of lost packets, etc.; they are generally available via protocols such as SNMP, the Simple Network Management Protocol. Active surveys instead craft traffic patterns and actively simulate the traffic between the client and the server, using protocols such as UDP or TCP, and report measurements such as delay, jitter, etc. to the controller.

The last element to include in the coverage processing is the number of detected coverage holes, given in (6). Coverage holes are evaluated at every coverage point P_i and correspond to points where the signal is insufficient to communicate accurately with their APs of association, or with the access network if they are not already associated. holeThresh_i is another variable that is tied to the sensitivity of point P_i at reception.
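The hole detection described above amounts to a per-point threshold test; a minimal sketch (the tuple layout is illustrative):

```python
# Sketch of the coverage hole count in the spirit of (6): a point P_i is a
# hole when its best received signal falls below its reception threshold.
def count_holes(points):
    """points: iterable of (best_signal_dbm, hole_thresh_dbm) pairs."""
    return sum(1 for signal, thresh in points if signal < thresh)
```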
The processing of the coverage, done in (4), (5) and (6), is part of the general processing of our dRRM solution variants, WLC1 and WLC2, described in the Figure 1 workflow. We give in (7) the necessary time to process a coverage and to process changes to this coverage. In (7), we neglected, for simplification, the time necessary to process the optimal number of directions supported by the APs and the corresponding transmit power levels. M is the number of APs and any monitoring device. T_discovery is the time necessary to run TDD and build a neighborship map. N is the number of coverage points where the coverage must be calculated. d is the processed optimum number of directions supported by the APs. T_interference corresponds to the time necessary to process coverage. We consider the T_interference, T_opportunity and T_holes times to be equivalent.
In Figure 8, we plot the processing time results of the models with and without control: simplistic (Range-based), idealistic (Zone-based), WLC1, WLC2 (dRRM) and Cisco (sRRM). We notice that, in general, without-control models perform better than with-control models, due to the control part added to the processing. The processing times of the with-control models are equivalent to each other but huge in comparison with the without-control models. sRRM and dRRM solutions have advantages over each other and over the without-control model approaches, but they require important processing time and resources, which is not suitable in the context of indoor dense enterprise Wlans. In the next section, we propose an optimization for with-control RRM solutions that is based on concepts from the CAGD field: NURBS surfaces. To stick with the aim of this work, we apply this optimization to the example of our dRRM WLC2 solution, but it is easily applicable to the other approaches.

NURBS OPTIMIZED WLC2 SOLUTION: N-WLC2
The workflow in Figure 9 describes how our solution processes the coverage. After the initialization, our solution model runs the NTO-CP function to discover the "effective" control points and optimize the knots number. Then, it runs NTO-CH, which is responsible for change processing. The upcoming subsections detail the functioning of our optimization solution. The aim of the NTO-CP algorithm is to reduce the number of control points and still obtain the same coverage calculation results. It also optimizes the knots number corresponding to the variables u and v in (2). NTO-CP processing is done at system initialization, for any newly added control point, or at a large periodic time interval, to guarantee that the A set is up-to-date.

For the description of this algorithm, let us define: A, the set of P_{i,j} control points; A_ineff, the set of ineffective control points, which have no control over the transmission opportunity of the other nodes but still monitor the radio interface; ERR, the difference between the S() calculation and the corresponding reported measure; A_i, the set of P_j points that are affected by the maximum weighting of point P_i; w_{avg,reported}, the average of the reported measures of P_i as seen by all the other nodes; P_{0,pseudo}, the nearest point of A − A_ineff to the processed pseudo-node; Z_{0,pseudo}, the set of central zone control points covered by the pseudo control point at the maximum weight. At first, as described in Algorithm 1, the A set is initialized to correspond to the mobility devices, APs and WDs, and all devices that have the ability to report raw radio data measurements. These devices are also the main source of co-channel interference. We reorder the A set by increasing power levels. We increase the transmit power level of the first node to the maximum and calculate S() at all the other A set nodes. If the reported measures, after and before the weight change, are the same, and if S() at these points is the same, then we move this node from the A set to the A_ineff set. If the S() calculation is not equal to the corresponding reported measure, we set ERR to this difference. ERR hints at the difference between the analytically processed coverage and the reported measure at the radio interface. Furthermore, all P_j nodes that are affected by the maximum weighting of P_i are put in the A_i set.
The processing of the effective control points requires a one-by-one node weighting at the maximum level, and the measurement of its effect on the other control points. This weighting may correspond to an increase of the transmit power level, a higher QoS classification, or any other variable that can impact the transmit opportunity.

Algorithm 1 NTO-CP, Part 1: effective control points
procedure PART1(A)
    for i ← 0, |A|, i++ do
        w_i ← w_max
        A_i ← ∅
        for j ≠ i ← 0, |A|, j++ do
            if w_{j,after} ≠ w_{j,before} or S_{j,after} ≠ S_{j,before} then
                A_i ← A_i ∪ {P_j}
            end if
            if S_{j,after} ≠ measure_{j,reported} then
                ERR ← S_{j,after} − measure_{j,reported}
            end if
        end for
        if A_i = ∅ then
            A ← A − {P_i}; A_ineff ← A_ineff ∪ {P_i}
        end if
    end for
end procedure

After processing the control points, NTO-CP processes the control zones as per Algorithm 2. We divide the coverage area into a maximum of four zones: one central and three suburban. In each region, we elect a zone control point that matches these two criteria: it covers the whole corresponding zone, and it is the farthest point from the central zone or the central zone control point. We set these four zone control points' transmit power level to the maximum and turn the other control points to the monitoring state, that is, the lowest transmit power level. The election proceeds similarly for P_{1,pseudo}, P_{2,pseudo} and P_{3,pseudo}.

We initialize next, per Algorithm 3, the knots number to match the control points number. If the reported measures at these points are the same as the calculated ones, we keep the current number; otherwise, we double it until the acceptable hysteresis is satisfied in the corresponding zone. In the upcoming subsection, we describe how our solution reacts to a change that may affect the coverage area. Not all changes are equal in scope: they may affect a zone, multiple zones or the entire network. A change may correspond to a newly reported RSSI, a measured SNR or any other relevant variable that impacts the coverage area. The NTO-CH Algorithm 4 describes how a change is handled by our solution. The procedure tries to scope the change impact so that only the pertaining control point sets are processed to reflect the new change. The notion of zone used in NTO-CH is different from NTO-CP, as the purpose is different: the idea here is to find an optimized number of zones that hints at the impact of a given change, not to optimize the coverage processing. The NTO-CH algorithm categorizes the coverage points into three classes: C_1, C_2 and C_3. C_1 points have a higher impact on the coverage area than C_2 and C_3 class points. A coverage point that belongs to C_1 has an important impact on its neighborship and corresponds by itself to an entire impact zone. C_2 impact zones include many adjacent coverage points of lower impact, together equivalent to a C_1 impact zone. The determination of the C_3 impact zones follows the same logic of collecting many adjacent class C_3 coverage points to form a C_2 class equivalent zone.
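The NTO-CH impact classification described above can be sketched as follows; the scalar impact metric and the thresholds are illustrative assumptions, since the real procedure also groups adjacent points into zones:

```python
# Sketch of the NTO-CH class assignment: a point is C1 when its impact
# alone reaches the zone threshold; lower-impact points fall into C2 or
# C3 and are later grouped with neighbors into equivalent impact zones.
def classify_change(impact, zone_threshold, c2_floor):
    if impact >= zone_threshold:
        return "C1"
    if impact >= c2_floor:
        return "C2"
    return "C3"
```

Scoping reprocessing to the affected class zones, instead of the whole coverage, is what keeps the change-handling cost proportional to η in (8).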

Time
The total required coverage processing time corresponds to one initial calculation of the coverage and k − 1 changes processing. This time includes the effective control points processing time, the optimization of the knots number time, and the changes processing time. The processing of the effective control points is unique to this method and requires running S(), M * (M − 1) times. The optimum knots number processing time corresponds to S() calculations at every zone control point, and multiple iterations of the same calculation, until the required accuracy is achieved. The necessary time to process knots is equal to α µ * (M − β). α, µ, and β, are the number of iterations, the number of zones and the number of ineffective controls points respectively. We give in (8) the necessary time to process our NURBS optimized WLC2 solution. η is a value that represents the scope of the change.
For the remaining of this work, we apply these numerical simplifications: α = 1, µ = 4, η = 0.25. α = 1, that corresponds to one iteration, is sufficient for an acceptable accuracy in comparison with the other algorithms. Also, at initialization, the number of knots is set to a high level. µ = 4 is more to allow parallel processing when computing the zones and may correspond to non-overlapping channels. η = 0.25 supposes that most changes affect only specific zones and do not span multiple zones.

EVALUATION
In this section we evaluate our NURBS optimized WLC2 solution, N-WLC2, against WLC2 our dRRM basic solution variant. We describe the process of our simulation, the effect of modifying the number of control points, the criteria we adopted to check the accuracy of our results, and the processing time of the models with and without optimization.

Simulation
We simulate all the models in Matlab 2019a version using a 32-Giga RAM 8-Core AMD processor SSD disk and Windows 10 Pro operating system. For this test, we simulate a random network of 30 APs. The Figure 10 shows an example of the distribution of APs and WDs, when the number of the control points is equal to 32 points. The points in red (*), correspond to the control points, WDs, where the coverage calculations are done. They are uniformly distributed: 32 points in each dimension axis of a 2D Cartesian plan. The total number of the coverage area points, including the control points, is equal to 128 * 128 = 16, 384 points. In Figure 11, we show our reference heatmap that represents the coverage calculations result of WLC2 without optimization. These calculations have been done for all the 16,384 coverage area points.  Figure 12, we show the visual effect of modifying the number of the control points. We notice how comparable are first, second and third subplots. As a first conclusion, it is, visually, enough to process the coverage at only 6.25% of the total number of the coverage area points to get the same result.
In Table 1, we show the mean, median and standard deviation of the difference between WLC2 and N-WLC2 calculations. We check that the mean and median are slightly different from zero when the control points number is : 128, 64, 32 or 16 points per axis. The standard deviation is getting higher for the lower values of the number of the control points number. At this stage, we could state that statistical results: mean, median and standard deviation of 2.51, 2.33 and 24.55 units, respectively, correspond to a visually   In Figure 14, we plot the coverage required processing time as a function of the number of the control points. We notice that the processing time decreases exponentially with the number of the control points. When the number of the control points is equal to 32 points per axis, the relatively required processing time reduction is almost 93.75%.

Accuracy of results
We visually observed that using only 32 control points per axis is sufficient to get an accurate estimation of the coverage area heatmap. We quantified these observations, using statistical variables : mean, median and standard deviation. We've seen that a visually acceptable result may correspond to a mean, median and standard deviation almost equal to 2.51, 2.33 and 24.55 units, respectively. To confirm our observation, we redo the previous simulation multiple times. In Table 2, we show the results of 10 iterations of the same simulation. We check that the mean and median values when the number of control points is equal to 32 points, is almost constant. The standard deviation is varying between 15 and 25 units.

Processing time
In Figure 15, we plot N-WLC2 processing time results for 10 iterations of the same simulation. We notice that in general, N-WLC2 time is very negligible in comparison with WLC2 time. The N-WLC2 optimization reduces the required coverage relative processing time, in average, by almost 92.58%.

CONCLUSION
In this work, we've presented our WLC2 dRRM solution in comparison with the idealistic (Zonebased), the simplistic (Range-based) and the vendors' sRRM category of models. We've shown that our solution performs better than vendors' sRRM solution in a simulated controller-based Wifi environment. But the basic variant of our dRRM solution: WLC2, requires relatively important processing time than the reference models. The N-WLC2 optimization, solved this limitation and allowed us to achieve an average of 92.58% relative time reduction when processing only 6.5% of the total number of the available coverage area points. The accuracy of the results was evaluated both visually and statistically among a large set of patterns.
Our N-WLC2 optimization approach does not depend on the Beam-based coverage representation model approach we adopted for the simulation of the coverage area. But the calculations of the coverage, at the control points, are done using the basic variant of our dRRM solution, WLC2. In this preliminary work [4], we's explored the possibility to optimize the prediction of the coverage area measurements based on environmental variables rather than on an analytical interpretation of the phenomena under study. But the result is not as much important, only 79.99% time reduction. In further work, we explore the possibility to introduce an hybrid approach in predicting the coverage either by adapting the basic machine learning algorithm or using their deep learning counterparts.