Implementation of the Fruit Fly Optimization Algorithm (FOA) in Python (with source code)

1. Implementation idea of the fruit fly optimization algorithm

The Fruit Fly Optimization Algorithm (FOA) is a swarm intelligence optimization algorithm proposed by Pan Wenchao in 2011. It simulates the way fruit flies forage in nature: the position of the food corresponds to the optimal solution of the optimization problem, the current position of each fruit fly corresponds to a candidate (random) solution, and by repeatedly moving their positions the flies eventually reach the food, i.e., the optimal solution. The idea behind the algorithm is relatively simple: the fly's foraging behavior serves as the position-update rule for individuals during the iterative training process. The principle of this update is as follows:

Predation

Fruit flies have a keen sense of smell and can fly toward places where companions gather or where food is located by following odors. The fruit fly optimization algorithm models this movement as an attraction toward the position of the best individual in the population; at the same time, to account for the uncertainty of flight, a random perturbation is added. The specific calculation formula is as follows:

x_new = x_best + 2 * rand() - 1

Here, x_new is the position after the move, x_best is the best position found by the population so far, and rand() is a random value between 0 and 1, so the perturbation 2 * rand() - 1 lies in [-1, 1].
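
As a minimal sketch of this update (the helper name move_fly is only illustrative, and the position is assumed to be a NumPy array, consistent with the full source code in Section 3):

import numpy as np

def move_fly(x_best):
    # x_new = x_best + 2 * rand() - 1: a random offset drawn from [-1, 1]
    # is added to the best position found so far; the same scalar offset
    # is shared by every dimension, matching the source code below
    return x_best + 2 * np.random.rand() - 1

For example, move_fly(np.zeros(20)) produces a new candidate position in which every coordinate has been shifted by the same random amount.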

2. Algorithm steps

The steps for solving an optimization problem with the fruit fly optimization algorithm can be summarized as follows:

  1. Treat the position of each individual in the population as a candidate solution of the problem to be optimized, and randomly initialize the positions of all individuals within the valid range of solutions;
  2. Calculate the fitness value of each individual according to the problem to be optimized, find the current best position by comparison, and save it;
  3. Update the position of each individual in the population in turn using the position-update rule;
  4. Recalculate the fitness value of each updated position, find the current best position by comparison, and save it;
  5. Repeat steps 3 and 4 until the maximum number of iterations is reached, then output the historically best position, which is the optimal solution found by the algorithm (a generic sketch of this loop is shown below).
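
The sketch below illustrates these five steps as a compact, generic loop. It is only an outline: the objective function sphere, the bounds, and the default parameters are placeholders chosen for the example, while the full implementation for Schwefel's P2.22 follows in Section 3.

import numpy as np

def sphere(x):
    # placeholder objective for this sketch: f(x) = sum(x_i^2), minimum 0 at the origin
    return np.sum(x ** 2)

def foa(fitness, dim, low, high, sizepop=30, iteration=100):
    # step 1: randomly initialize the positions of all individuals
    pop = np.random.uniform(low, high, size=(sizepop, dim))
    # step 2: evaluate the fitness values and record the current best position
    fit = np.array([fitness(ind) for ind in pop])
    best_pos, best_fit = pop[fit.argmin()].copy(), fit.min()
    for _ in range(iteration):
        # step 3: move every individual around the best position found so far
        for j in range(sizepop):
            pop[j] = best_pos + 2 * np.random.rand() - 1
        # step 4: re-evaluate the fitness values and update the historical best
        fit = np.array([fitness(ind) for ind in pop])
        if fit.min() < best_fit:
            best_fit = fit.min()
            best_pos = pop[fit.argmin()].copy()
    # step 5: after the last iteration, return the historically best position
    return best_pos, best_fit

best_pos, best_fit = foa(sphere, dim=5, low=-10, high=10)
print(best_fit)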

3. Example

Problem to be solved:
Schwefel's P2.22 function (the sum of the absolute values of the variables plus their product); each variable takes values in [-10, 10], the ideal optimal value within this range is 0, and the dimension of the search space is set to 20.
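
For reference, a compact vectorized version of this objective (equivalent to the loop-based function in the source code below; the name schwefel_p222 is only illustrative) could look like this:

import numpy as np

def schwefel_p222(x):
    # Schwefel P2.22: sum of the absolute values plus their product
    ax = np.abs(np.asarray(x, dtype=float))
    return ax.sum() + ax.prod()

For a 20-dimensional zero vector, schwefel_p222(np.zeros(20)) returns 0.0, which is the ideal optimum quoted above.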

Implementation source code:

import numpy as np
import matplotlib.pyplot as plt

#The problem to be solved (Schwefel P2.22), to be minimized
def function(x):
    y1 = 0  # sum of the absolute values
    y2 = 1  # product of the absolute values
    for i in range(len(x)):
        y1 = y1 + abs(x[i])
        y2 = y2 * abs(x[i])
    y = abs(0 - y1 - y2)  # equals y1 + y2, since both terms are non-negative
    return y


sizepop = 30     #population size
dimension = 20   #spatial dimension of the solution
iteration = 100  #number of iterations
rangelow = -10   #lower bound of the solution
rangehigh = 10   #upper bound of the solution

#pop stores the position of each individual in the population; pop_fitness stores the corresponding fitness values
pop = np.zeros((sizepop,dimension))
pop_fitness = np.zeros(sizepop)

#Initialize the population individuals and calculate the corresponding fitness value
for j in range(sizepop):
    pop[j] = np.random.uniform(low=rangelow, high=rangehigh,size=(1, dimension))
    pop_fitness[j] = function(pop[j])

#allbestpop and allbestfit store the historically best position of the population and its corresponding fitness value
allbestpop,allbestfit = pop[pop_fitness.argmin()].copy(),pop_fitness.min()

#his_bestfit records the historically best fitness value of the population at each iteration
his_bestfit=np.zeros(iteration)

#start training
for i in range(iteration):
    print("The iteration is:", i + 1)
    #Each individual moves toward the place with the highest odor concentration (the position with the best fitness)
    for j in range(sizepop):
        pop[j] = allbestpop + 2 * np.random.rand() - 1  #x_new = x_best + 2*rand() - 1, random offset in [-1, 1]
        pop_fitness[j] = function(pop[j])
    # Update the historical optimal position information and fitness value of the population
    if pop_fitness.min() < allbestfit:
        allbestfit = pop_fitness.min()
        allbestpop = pop[pop_fitness.argmin()].copy()
    # Store the historical optimal fitness value of the population under the current iteration and output
    his_bestfit[i] = allbestfit
    print("The best fitness is:", allbestfit)

print("After iteration, the best pop is:",allbestpop)
print("After iteration, the best fitness is:","%e"%allbestfit)

#Output the mean and standard deviation of the fitness values of the population after training
mean = np.mean(pop_fitness)
std = np.std(pop_fitness)
print("After iteration, the mean fitness of the swarm is:","%e"%mean)
print("After iteration, the std fitness of the swarm is:","%e"%std)

#plot the result
fig=plt.figure(figsize=(12, 10), dpi=300)
plt.rcParams['font.sans-serif']=['Arial Unicode MS']
plt.title('The change of optimal fitness value',fontdict={'weight':'normal','size': 30})
x=range(1,iteration+1,1)
plt.plot(x,his_bestfit,color="red",label="FOA",linewidth=3.0, linestyle="-")
plt.tick_params(labelsize=25)
plt.xlim(0,iteration+1)
plt.yscale("log")
plt.xlabel("iterations",fontdict={'weight':'normal','size': 30})
plt.ylabel("fitness value",fontdict={'weight':'normal','size': 30})
plt.xticks(range(0,iteration+1,10))
plt.savefig("FOA.png")
plt.show()

The horizontal axis in the figure is the number of iterations, and the vertical axis is the optimal fitness value.

