Enhancing three variants of harmony search algorithm for continuous optimization problems

Faculty of Computing, College of Computing and Applied Sciences, Universiti Malaysia Pahang, Pahang, Malaysia Deanship of Information and Communication Technology, Imam Abdulrahaman Bin Faisal University, Dammam, Saudi Arabia Computer Department, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman bin Faisal University, Dammam, Saudi Arabia


INTRODUCTION
Optimization algorithms were invented to find the fittest element from a group of choices subject to specific constraints. The use of optimization algorithms to solve real-world problems began early in this century [1,2]. A well-known family of optimization algorithms, the metaheuristic algorithms, has been able to solve different types of problems in different domains [3][4][5][6]. Metaheuristic algorithms are used in so many fields because they can find near-optimal solutions with fast search speed and are flexible enough to suit different types of problems [7][8][9], which is very important in modern life, especially in software development [10]. Effective metaheuristic algorithms in the literature include simulated annealing [11], particle swarm optimization [12], the harmony search algorithm [13], firefly [14], and cuckoo search [15].
The effectiveness of metaheuristic algorithms relies on balancing exploration (global search) and exploitation (local search) during a search. Exploration is the ability of an algorithm to rapidly investigate uncovered areas of a large search space. Exploitation uses the information already gained to guide the search toward its goal. Overall, algorithm performance improves when a balance between exploration and exploitation is achieved [1]. Geem et al. [13] created the harmony search algorithm (HS) by mimicking the creation of a new musical tune, and many researchers have used it to solve different types of problems in areas such as engineering [16], computer science [17], and many other fields [18].
Although HS has strong exploration, it suffers from weak exploitation. The weak exploitation stems from its slow convergence rate: HS is able to discover the solution space through its exploration process, but has difficulty finding the global optimum within that space through its exploitation process. To fix this problem and improve HS performance, researchers have proposed different variants of HS in the literature, adopting techniques such as chaotic maps [19], hybrid algorithms [1], and opposition-based learning (OBL) [20].
Although many variants have been introduced in the literature to improve the overall performance of HS, some continue to suffer from a weak exploitation process, while others improve the convergence rate but tend to fall into local optima after removing some of the HS parameters. Overall, most of these variants are unable to provide sufficient results across different types of problems. In this work, we introduce hybrid algorithms that combine HS variants with an improved version of OBL to enhance the performance of these variants. OBL is an effective technique created by Tizhoosh [21] to enhance optimization algorithms; here we adopt an improved version of OBL that utilizes randomness to create a new candidate solution. The improved OBL (IOBL) is used in the HS update process.
In the following section, we provide a brief description of HS and the variants that we hybridize with IOBL. To verify the effectiveness of the IOBL technique, we then apply the proposed hybrid algorithms to the standard benchmark functions characterized in Table 1. As the results show, the new hybrid algorithms significantly improve the performance of all the HS variants, as IOBL increases the convergence speed and enhances the variants' exploitation. The rest of the paper is organized as follows: part 2 presents the original structure of HS and some of its variants, part 3 presents the proposed hybrid algorithms, part 4 presents and discusses the obtained results, and the final part provides the conclusion and future work.

ORIGINAL HS STRUCTURE AND SOME VARIANTS
First, we describe the standard HS algorithm and its main components; after that, we describe the other variants used in this work and how they differ from the standard HS.

Standard structure of HS as described by its author
The HS simulates the process a musician follows to create a new harmonious tune. HS tunes a new candidate value toward the global optimum, much as a musician tunes notes to create a pleasing tone. The standard HS algorithm has the following phases, described as pseudocode in Figure 1:
a. In the first phase, HS sets the static parameter values: bandwidth (BW), pitch adjustment rate (PAR), harmony memory consideration rate (HMCR), and harmony memory size (HMS).
b. In the second phase, the algorithm creates a new population randomly inside the harmony memory (HM), using (1), where LB_j and UB_j are the lower and upper bounds of dimension j and r is a stochastic value between 0 and 1:
x_i^j = LB_j + r × (UB_j − LB_j) (1)
c. In the third phase, the algorithm improvises a new solution based on its parameters (BW, PAR, and HMCR). In this phase the algorithm has two choices based on HMCR, as follows:
- If (R1 > HMCR), a stochastic value is generated as in (2), where R1 is a stochastic value between 0 and 1:
x_new^j = LB_j + r × (UB_j − LB_j) (2)
- If (R1 ≤ HMCR), the algorithm picks a random value from the HM, and if (R2 ≤ PAR) the chosen value is tuned as in (3):
x_new^j = x_new^j ± r × BW (3)
d. In the fourth phase, the new improvised value replaces the worst one in the HM if it has a superior objective function value.
e. Finally, the improvisation process of the HS algorithm ends once the algorithm reaches a stopping criterion such as the maximum number of iterations.
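The phases above can be condensed into a minimal working sketch. This is an illustrative implementation of the standard HS loop, not the authors' code; the parameter values, the per-dimension handling of a box-constrained continuous problem, and the clipping to bounds are our assumptions.

```python
import random

def harmony_search(f, lb, ub, dim=2, hms=5, hmcr=0.9, par=0.3, bw=0.01, max_iter=2000):
    """Minimal standard HS sketch: minimize f over [lb, ub]^dim."""
    # Phases 1-2: set parameters and fill the harmony memory randomly
    hm = [[lb + random.random() * (ub - lb) for _ in range(dim)] for _ in range(hms)]
    fit = [f(x) for x in hm]
    for _ in range(max_iter):
        # Phase 3: improvise a new harmony, dimension by dimension
        new = []
        for j in range(dim):
            if random.random() < hmcr:                 # memory consideration
                v = hm[random.randrange(hms)][j]
                if random.random() < par:              # pitch adjustment by +/- r*BW
                    v += (2 * random.random() - 1) * bw
            else:                                      # random selection
                v = lb + random.random() * (ub - lb)
            new.append(min(max(v, lb), ub))            # keep within bounds
        # Phase 4: replace the worst harmony if the new one is better
        worst = max(range(hms), key=lambda i: fit[i])
        fn = f(new)
        if fn < fit[worst]:
            hm[worst], fit[worst] = new, fn
    # Phase 5: stopping criterion reached; return the best harmony found
    best = min(range(hms), key=lambda i: fit[i])
    return hm[best], fit[best]

# usage: minimize the 2-D sphere function over [-5, 5]^2
x, fx = harmony_search(lambda x: sum(v * v for v in x), -5.0, 5.0)
```

Note that improvisation works per dimension, so a new harmony can mix coordinates taken from different stored solutions.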

HS variants
The HS algorithm has advantages such as flexibility and ease of implementation, and many researchers therefore use it to solve several kinds of complex problems. Like other metaheuristic algorithms, HS has weaknesses, such as its weak exploitation process and the difficulty of tuning its parameters. To improve HS performance and address its limitations, several HS variants and hybridization approaches have been introduced in the literature.
However, these variants and hybridizations sometimes fall into local optima or still have a slow convergence rate. In this article, we aim to enhance the performance of these variants by improving their convergence speed. To do that, we present a new technique based on opposition-based learning to enhance the performance of three recent variants of HS.
In this part, we describe the three variants that are enhanced in this work:
a. The first variant, the improved harmony search (IHS), was introduced in 2007 [22]. It aims to improve the original HS by solving its parameter-tuning problem: two parameters (PAR and BW) are updated through the iterations using specific functions. IHS provides decent results compared to the standard HS but still has weak exploitation.
b. The second variant is the exploratory power of the harmony search (EHS) [23]. The authors analyzed HS and proposed a variant that is similar to the original except for a new BW modification process, which improved the overall performance of the algorithm; in some cases, however, EHS still has a slow convergence rate.
c. The third variant is the improved global-best harmony search algorithm (IGHS) [24]. It differs from the original HS by focusing on exploration at the beginning of the search and on exploitation at the end. The authors used standard OBL only in the initialization process. The overall results were better than previous HS variants, but convergence is still slow in some cases.
The HS variants introduced in the literature show some improvement in algorithm performance, but they all share the same updating process as in Figure 1, step 4, which can be improved by adopting OBL or other techniques. In this work, we implement a new improved OBL technique on the aforementioned variants to increase their convergence rate and improve the overall results.
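For concreteness, the dynamic parameter update that IHS [22] introduces can be sketched as follows. This is a sketch of the commonly cited IHS scheme (PAR grows linearly with the iteration count while BW decays exponentially); the boundary values par_min, par_max, bw_min, and bw_max are illustrative assumptions, not values taken from this paper.

```python
import math

def ihs_parameters(t, max_iter, par_min=0.01, par_max=0.99, bw_min=1e-4, bw_max=1.0):
    """IHS-style dynamic parameters at iteration t (0-based).

    PAR increases linearly from par_min to par_max, strengthening
    exploitation late in the search; BW decays exponentially from
    bw_max down to bw_min, shrinking the pitch-adjustment step.
    """
    par = par_min + (par_max - par_min) * t / max_iter
    bw = bw_max * math.exp(math.log(bw_min / bw_max) * t / max_iter)
    return par, bw

p0, b0 = ihs_parameters(0, 1000)     # start of search: small PAR, large BW
p1, b1 = ihs_parameters(1000, 1000)  # end of search: large PAR, small BW
```

The effect is that early iterations take large, rarely applied pitch adjustments (exploration), while late iterations take small, frequent ones (exploitation).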

PROPOSED ALGORITHMS
To overcome the weak exploitation of HS, many researchers have proposed different variants. The modifications cover different parts of HS, such as initialization, improvisation, or parameter selection, yet all these variants keep the same updating procedure as the original HS. This work proposes new hybrid algorithms that equip three variants of HS (IHS, EHS, and IGHS) with a new updating procedure, based on an improved OBL, to enhance convergence speed and avoid falling into local optima.
The following section presents the new improved opposition-based learning technique (IOBL), which we use as part of the updating process of the hybrid algorithms. The goal of using IOBL is to improve the local search process of the three described variants. All the variants are compared before and after the use of IOBL in the evaluation part.

IOBL structure
The first OBL was created by Tizhoosh [21]; after that, different variants of it were developed and used in different research areas [24][25][26]. The original OBL was able to enhance the performance of different optimization algorithms, including HS [27]. The current study presents an improved version of the original OBL that includes randomness in the process, which enhances the diversity of the solutions and provides better performance than the original OBL for continuous optimization problems. The improved opposition is applied in the HS updating phase to increase HS exploitation, as Figure 2 presents. In Figure 2, x is the result obtained from the improvisation process, r is a stochastic number between 0 and 2, D is the number of dimensions, and x̄ stands for the improved value using OBL.
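A minimal sketch of how such a randomized opposite might plug into the HS update step is given below. The exact IOBL formula is the one in Figure 2 of the paper; here we assume the form x̄_j = r × (LB_j + UB_j) − x_j with r drawn uniformly from (0, 2), which reduces to the standard OBL mapping x̄_j = LB_j + UB_j − x_j when r = 1. Both this assumed form and the clipping back into bounds are our assumptions, not the authors' definition.

```python
import random

def iobl_opposite(x, lb, ub):
    """Randomized opposite of solution x (assumed form; see lead-in).

    Standard OBL maps x_j to lb + ub - x_j; here a random factor
    r ~ U(0, 2) perturbs the opposite point to diversify the search.
    Values are clipped back into [lb, ub].
    """
    r = random.uniform(0.0, 2.0)
    return [min(max(r * (lb + ub) - v, lb), ub) for v in x]

def iobl_update(f, hm, fit, new, lb, ub):
    """HS update step with IOBL: keep the better of the improvised
    solution and its randomized opposite, then replace the worst
    harmony in memory if the survivor improves on it."""
    opp = iobl_opposite(new, lb, ub)
    fn, fo = f(new), f(opp)
    cand, fc = (new, fn) if fn <= fo else (opp, fo)
    worst = max(range(len(hm)), key=lambda i: fit[i])
    if fc < fit[worst]:
        hm[worst], fit[worst] = cand, fc
    return hm, fit
```

Evaluating both the improvised solution and its opposite each iteration doubles the chance of landing near the optimum at the cost of one extra function evaluation, which is the mechanism behind the faster convergence reported below.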

RESULTS AND DISCUSSION
To assess the performance of the proposed hybrid algorithms, we compare the HS variants before and after adding IOBL in the updating part. The evaluation uses 15 benchmark functions on which each algorithm searches for the global optimum; we then compare each variant against its enhanced version based on convergence speed. The HS variants are implemented exactly as described by their authors, except for two parameters that we fix (HMS = 5 and a fixed maximum number of function evaluations). Table 1 describes the benchmark functions used in this paper, their optimal values, and the range of each function.

Comparison results before and after adding IOBL
All the results of the three variants and their hybrid versions with IOBL are presented in Table 2. In this table, we provide the average result and the time consumed to find the global optimum for each HS variant against its hybrid version with IOBL.
a. The column (IHS) in Table 2 presents the results of the first variant, and the next column (Hybrid-IHS) presents the results of its hybrid; the hybrid outperformed IHS on all functions except F5. The reason behind this improvement is the use of IOBL, as it increases the diversity of the proposed solutions and increased the convergence rate of IHS. For F5, the original IHS provided better results, as this function requires more focus on exploration, which is the opposite of the IOBL role.
b. The column (EHS) in Table 2 presents the results of the second variant, and the next column presents the results of (Hybrid-EHS). The Hybrid-EHS results show significant enhancement except for F5, similar to the previous case.
c. The column (IGHS) in Table 2 presents the results of the third variant, and the next column (Hybrid-IGHS) presents the results of hybridizing IGHS with IOBL; the results obtained by the hybrid algorithm are better than the original IGHS for all functions except F7.
According to Table 2, the hybrid algorithms provide better performance in most cases with lower running time, thanks to the use of IOBL, which enhances the exploitation process of the HS variants. The hybrid of IGHS with IOBL provides the best overall results compared to the other variants and their hybrids.

Convergence rate before and after adding IOBL
In this part, we compare the HS variants before and after adding IOBL to their structure; the following graphs present the convergence rate of each variant. All the variants were applied to the same objective functions (numbers 1 and 6), with 100 iterations. As Figures 3 to 8 show, IOBL enhanced the convergence speed of all HS variants. Figures 3 and 4 compare the original IHS and its new variant IHS-IOBL on objective functions 1 and 6: the convergence rate increases and the algorithm reaches the global optimum in a smaller number of iterations. Figures 5 and 6 present the results of applying EHS and its new variant EHS-IOBL; the convergence speed increases markedly after utilizing IOBL, a clear improvement over the original EHS. Figures 7 and 8 compare IGHS and its new variant IGHS-IOBL: in Figure 7 the algorithm performance improves slightly, while Figure 8 shows a large improvement in the convergence rate. Overall, these graphs show how much the original variants' performance is enhanced after adopting IOBL, with the convergence rate increased for all the variants.

CONCLUSION
HS is a well-known metaheuristic with advantages such as simplicity and ease of application to different problems. Like other metaheuristics, however, it has weaknesses such as a slow convergence rate, which gives the algorithm a weak exploitation process. Many variants have been introduced in the literature to address these problems, and they have enhanced the algorithm's performance, yet most of them still have an insufficient convergence rate. In this work, we implemented an improved opposition-based learning technique in the updating phase of three recent HS variants to enhance the overall algorithm performance by improving the exploitation process. The proposed hybrid algorithms were evaluated against their original counterparts on the benchmark functions in Table 1. Moreover, a convergence rate analysis was conducted to present the enhancement obtained using IOBL. The hybrid HS variants provided better results than the original variants, with higher convergence speed and lower running time. Overall, the IGHS variant with IOBL obtained the best results in the evaluation compared to the others. For future work, the enhanced variants can be used to solve real-world optimization problems, and the IOBL technique can be applied to other metaheuristics to increase their convergence rate and improve their overall performance.