Population-based optimization algorithm improvement using predictive particles

A new efficient improvement, called Predictive Particle Modification (PPM), is proposed in this paper. This modification makes each particle examine its nearby area before moving toward the best solution of the group, and it can be applied to any population-based algorithm. The basic philosophy of PPM is explained in detail. To evaluate its performance, PPM is applied to the Particle Swarm Optimization (PSO) and Teaching Learning Based Optimization (TLBO) algorithms and tested on 23 standard benchmark functions. The effectiveness of these modifications is compared with that of the unmodified population optimization algorithms in terms of best solution, average solution, and convergence rate.


INTRODUCTION
Recently, many metaheuristic optimization algorithms have been developed. These include Particle Swarm Optimization (PSO) [1][2][3][4][5], the Genetic Algorithm (GA) [6][7][8][9], Differential Evolution (DE) [10], Ant Colony (AC) [11], the Gravitational Search Algorithm (GSA) [12], the Sine Cosine Algorithm (SCA) [13][14][15], the hybrid PSOGSA algorithm [16], adaptive SCA integrated with particle swarm [17], and Teaching Learning Based Optimization (TLBO) [18][19][20]. They all share the same goal: finding the global optimum. To achieve this, a heuristic algorithm should be equipped with two main characteristics, exploration and exploitation. Exploration is the ability to search all parts of the space, whereas exploitation is the ability to converge to the best solution. The goal of every metaheuristic optimization algorithm is to balance exploitation and exploration in order to find the global optimum. According to [21], exploitation and exploration in evolutionary computing are not clearly defined due to the lack of a generally accepted perception. On the other hand, strengthening one ability weakens the other, and vice versa. Because of the above-mentioned points, the existing metaheuristic optimization algorithms are capable of solving only a finite set of problems; it has been proved that no algorithm can perform generally enough to solve all optimization problems [22]. Many hybrid optimization algorithms have been proposed to balance the overall exploration and exploitation abilities.
In this study, the proposed modification increases exploration and makes each particle look at the surrounding space before it is affected by the best solution. The proposed modification can be applied to any population optimization algorithm. PSO is one of the most widely used population algorithms due to its simplicity, convergence speed, and ability to search for the global optimum. TLBO is a recent, efficient optimization method that combines teaching and learning phases. For the reasons listed above, these two algorithms are used to demonstrate the proposed modification.

THE STANDARD PARTICLE SWARM OPTIMIZATION
In PSO, the velocity of each particle is updated as:

v_i^(t+1) = w × v_i^t + c_1 × rand × (pbest_i − x_i^t) + c_2 × rand × (gbest − x_i^t) (1)

where v_i^(t+1) is the velocity of particle i at iteration t, w is a weighting function, c_j is a weighting factor, rand is a random number between 0 and 1, x_i^t is the current position of particle i at iteration t, pbest_i is the pbest of agent i at iteration t, and gbest is the best solution so far.
The first part of (1) provides the exploration ability of PSO. The second and third parts, c_1 × rand × (pbest_i − x_i^t) and c_2 × rand × (gbest − x_i^t), represent the private thinking and the collaboration of the particles, respectively [23, 24]. PSO is initialized by randomly placing the particles in the problem space. In each iteration, the particle velocities are calculated using (1). After calculating the velocities, the position of each particle is updated as:

x_i^(t+1) = x_i^t + v_i^(t+1) (2)

This process continues until an end criterion is met.
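As a concrete illustration, equations (1) and (2) can be sketched in a few lines. The sphere test function and all parameter values below are illustrative assumptions, not values taken from this paper:

```python
import numpy as np

# Minimal PSO sketch following equations (1) and (2).
# w, c1, c2, the swarm size, and the sphere function are illustrative assumptions.
def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))   # random initial positions
    v = np.zeros((n_particles, dim))              # velocities initialized to zero
    pbest = x.copy()                              # personal best positions
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()        # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Equation (1): inertia + private thinking + collaboration
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Equation (2): position update
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

sphere = lambda p: float(np.sum(p ** 2))
best_x, best_f = pso(sphere)
```

On the sphere function this sketch converges quickly toward the origin, illustrating how the gbest term pulls the whole swarm toward the best solution found so far.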

PSO Exploration Problem
The first part of (1) provides the PSO exploration ability. When the algorithm starts, the velocity is initialized to zero. Thus, from Equation (1), the Global Best Particle (GBP) (i.e., P1 in Figure 1(a)) remains in its place until the global best solution is replaced by a new particle. This means the global best particle cannot explore its nearby area because it is not excited by any other particle. In addition, particles arriving from other places (P2 to P5) at the location of the global best solution with a certain velocity after a number of iterations may be damped before reaching the optimal solution, as shown in Figure 1(b). This phenomenon is treated using PPM in Section 2.
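The stationary-GBP issue follows directly from equation (1): for the global best particle, x_i = pbest_i = gbest, so with zero initial velocity the update vanishes. A minimal numerical check (all values here are illustrative assumptions):

```python
import numpy as np

# For the global best particle, x == pbest == gbest, so with zero initial
# velocity equation (1) yields a zero update: the GBP cannot move.
# w, c1, c2, and the position are illustrative assumptions.
w, c1, c2 = 0.7, 1.5, 1.5
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0])        # position of the global best particle
pbest, gbest = x.copy(), x.copy()
v = np.zeros_like(x)             # velocity initialized to zero

# Equation (1): both attraction terms are zero because pbest - x = gbest - x = 0
v_new = w * v + c1 * rng.random(2) * (pbest - x) + c2 * rng.random(2) * (gbest - x)
x_new = x + v_new                # identical to x: the GBP stays in place
```

The new position equals the old one, confirming that the GBP stays frozen until some other particle finds a better solution.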

THE STANDARD TEACHING LEARNING BASED OPTIMIZATION
The TLBO method is based on the effect of a teacher on learners. The teacher is considered the globally best learned person, who shares his knowledge with the learners. The process of TLBO is divided into two phases: the first phase is the 'Teacher Phase' and the second phase is the 'Learner Phase'. The 'Teacher Phase' means learning from the teacher, and the 'Learner Phase' means learning through interaction between learners. TLBO is modeled as follows [18]:

Teacher Phase
A learner learns from the teacher by moving its mean toward the teacher value. The learner modification is expressed as:

X_i^(t+1) = X_i^t + rand × (X_teacher^t − T_F × Mean^t) (3)

where T_F = round[1 + rand(0, 1)] is the teaching factor, Mean^t is the mean of the learners, and X_teacher^t is the global best learner (the teacher) at iteration t.

Learner Phase
A learner learns something new if another learner has better knowledge. The learner modification is expressed as:

X_i^(t+1) = X_i^t + rand × (X_i^t − X_j^t) if f(X_i^t) < f(X_j^t)
X_i^(t+1) = X_i^t + rand × (X_j^t − X_i^t) otherwise (4)

where X_j^t is a randomly selected learner with j ≠ i.
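The two TLBO phases can be sketched as below, assuming the usual greedy acceptance rule from [18]; the sphere function and all parameter values are illustrative assumptions:

```python
import numpy as np

# Minimal TLBO sketch with teacher and learner phases.
# Population size, iteration count, and the sphere function are illustrative assumptions.
def tlbo(f, dim=2, n_learners=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_learners, dim))
    F = np.array([f(p) for p in X])
    for _ in range(iters):
        teacher = X[F.argmin()].copy()           # best learner acts as teacher
        mean = X.mean(axis=0)                    # mean of the class
        for i in range(n_learners):
            # Teacher phase: move the learner toward the teacher
            Tf = np.round(1 + rng.random())      # teaching factor, 1 or 2
            new = X[i] + rng.random(dim) * (teacher - Tf * mean)
            fn = f(new)
            if fn < F[i]:                        # greedy acceptance
                X[i], F[i] = new, fn
            # Learner phase: interact with a random other learner j
            j = rng.integers(n_learners)
            while j == i:
                j = rng.integers(n_learners)
            if F[i] < F[j]:
                new = X[i] + rng.random(dim) * (X[i] - X[j])
            else:
                new = X[i] + rng.random(dim) * (X[j] - X[i])
            fn = f(new)
            if fn < F[i]:
                X[i], F[i] = new, fn
    return X[F.argmin()], float(F.min())

sphere = lambda p: float(np.sum(p ** 2))
best_x, best_f = tlbo(sphere)
```

Note that TLBO needs no algorithm-specific parameters beyond population size and iteration count, which is one reason it is attractive for hybridization.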

PREDICTIVE PARTICLE
The main idea of PPM is that, in each iteration, each particle should look at its nearby area and check whether it contains a value better than the GBP. If a better value is found, it becomes the GBP. PPM can remedy the non-excited GBP (P1 in Figure 1(a)) without waiting for excitation from another particle. In addition, it improves the vision of the particle before it moves toward the GBP, and it prevents the particle from jumping over a narrow region and leaving the global solution behind.
Consider the initial values of the particles P1 to P5 shown in Figure 2. In the next iteration, these particles will move toward P1 (as it is the GBP at this moment) and take the positions P1′ and P2′ to P5′. Moreover, P3 may jump to P3′ without converging to gbest, especially when the fitness function has a narrow region with a very deep value. In addition, P1 stays in its position as it is the GBP. These phenomena can be treated if each particle tries to find a better solution (target) in its nearby area before moving toward the GBP, as shown in Figure 3. This can be done using the numerical gradient with a definite target. Assume the fitness function F is a linear function near the particle position; in matrix form:

F(X) = ∇F^T X + b (5)

Using the numerical gradient method, the particle is moved along the gradient direction:

X_new = X_old + R × ∇F (6)

where X_new is the new position of the particle in column form, X_old is the current position, R is the step size, and each component ∂F/∂x_j of ∇F is calculated numerically by changing only x_j. Evaluating (5) at X_new and X_old and subtracting:

F_new − F_old = ∇F^T (X_new − X_old) (7)

where F_old is the current fitness value and F_new is the new fitness value. Substituting (6) into (7), if F_i is the current fitness value of the particle and F_t is the target fitness of the particle (less than the gbest value), the required step size is

R = (F_t − F_i) / (∇F^T ∇F) (8)

It is convenient to divide the search into N steps as follows: for each step,

R = ((F_t − F_i) / N) / (∇F^T ∇F) (9)

where F_i and ∇F are re-evaluated at each step. The complete PPM algorithm, executed before moving toward the GBP, is shown in Table 1. In addition, the Modified PSO (MPSO) and Modified TLBO (MTLBO) are shown in Table 2 and Table 3, respectively.

Table 2. Modified PSO
For each particle
    Initialize the particle
End
Choose the particle with the best fitness value of all the particles as the gbest
Do
    For each particle
        Update the particle velocity and position according to
            v_i^(t+1) = w × v_i^t + c_2 × rand × (gbest − x_i^t)
            x_i^(t+1) = x_i^t + v_i^(t+1)
        Apply the gradient algorithm as shown in Table 1
    End
    For each particle
        Calculate the fitness value
        If the fitness value is better than the best fitness value (pbest) in history
            Set the current value as the new pbest
    End
    Choose the particle with the best fitness value of all the particles as the gbest
While maximum iterations or minimum error criteria is not attained

Table 3. Modified TLBO
For each particle
    Initialize the particle
End
Choose the particle with the best fitness value of all the particles as the gbest
Do
    1) Teacher phase
        T_F = round[1 + rand(0, 1)]
        Difference = rand × (gbest − T_F × Mean)
        For i = 1 : N
            X_i^(t+1) = X_i^t + Difference
            Apply the gradient algorithm as shown in Table 1 for X_i^(t+1)
        End
    Choose the particle with the best fitness value of all the particles as the gbest
While maximum iterations or minimum error criteria is not attained
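The predictive look-around step can be sketched as follows, assuming a forward-difference numerical gradient and an N-step targeted update. The helper names (`predictive_step`, `f_target`, `n_steps`, `eps`) and the sphere function are illustrative assumptions, not identifiers from this paper:

```python
import numpy as np

# Forward-difference numerical gradient: each component is obtained
# by perturbing only one coordinate, as in the derivation above.
def numerical_gradient(f, x, eps=1e-6):
    g = np.zeros_like(x)
    for j in range(len(x)):
        e = np.zeros_like(x)
        e[j] = eps
        g[j] = (f(x + e) - f(x)) / eps
    return g

# Targeted gradient step divided into n_steps, using the linear model
# F_new - F_old = grad^T (X_new - X_old) to size each step (R formula).
def predictive_step(f, x, f_target, n_steps=5):
    x = x.astype(float).copy()
    for _ in range(n_steps):
        fx = f(x)
        g = numerical_gradient(f, x)
        gg = float(g @ g)
        if gg < 1e-12:               # flat region: nothing to predict
            break
        # Step size R moving the fitness a fraction of the way to the target
        R = ((f_target - fx) / n_steps) / gg
        x = x + R * g
    return x

sphere = lambda p: float(np.sum(p ** 2))
x0 = np.array([2.0, 1.0])
x1 = predictive_step(sphere, x0, f_target=0.0)
```

Each step moves the particle downhill toward the target fitness, so the particle "sees" its nearby area and improves locally before being attracted toward the GBP.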

EXPERIMENTAL RESULTS AND DISCUSSION
The standard PSO, PSOGSA, SCA, TLBO, MPSO, and MTLBO, with the parameters in Table 4 [25][26][27][28], were executed for 30 independent runs over each benchmark function for statistical analysis. The results are shown in Table 5.
Figure 11. Convergence rate curves for F22 to F23

CONCLUSION
In this paper, the proposed PPM gives each particle a powerful look at its nearby area before it is attracted toward the global best. Merging PPM with population algorithms therefore improves the exploration quality while maintaining fast convergence. The modification was tested on standard mathematical benchmark functions, and the results demonstrated improvement in solution quality and convergence rate.