Parallel Implementation of PSO Algorithm

Particle Swarm Optimization (PSO) is a widely used optimization algorithm that "optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality" (Wikipedia). Just like birds seeking food, each particle's position is influenced both by its own best-known position and by the best positions found by the other particles. The algorithm is described in detail on Wikipedia, so I give the pseudo-code directly below.

As an example, consider finding the local maximum of y(x) = -(x-1)^3 + x^2 - 3 on the interval [0.0, 3.0], which is about 0.1126 at x ≈ 2.215. It is easy to apply PSO to this problem. The figure of y = y(x) is shown below.

Figure: y(x) = -(x-1)^3 + x^2 - 3

The MATLAB code is quite simple,

x = 0:0.1:3;
y = -(x - 1).^3 + x.^2 - 3;
plot(x, y)

We can initialize some particles at random positions and update their positions according to the best solutions found so far by each particle and by the whole swarm, until the local maximum is reached. Here is the pseudo-code.

# Initialization of the swarm
FOREACH particle in Swarm DO
    randomize particle.position, particle.velocity
    calculate particle.fitness
DONE
swarmBestFitness = best(particle.fitness)
# Iteration of the swarm
FOREACH generation DO
    FOREACH particle in Swarm DO
        FOREACH dimension in vDimensions DO
            update particle.velocity  // PSO formula
            update particle.position
        DONE
        calculate particle.fitness
        update particle.bestFitness
    DONE
    swarmBestFitness = best(particle.bestFitness)
DONE

Parallelization

It is clear that the particles in the swarm are independent of one another, so each can be computed and updated on its own. After each generation, only the swarm's best fitness and the corresponding position need to be collected and broadcast before the next generation is computed.

Some scalability issues should be noted. The first is the dimensionality of the objective function. A clean solution is to define a struct containing an argument array as the particle's position. The second is the communication volume as the swarm grows. Suppose there are N particles and the dimension is D. Each generation we need to broadcast (D+1)*(N-1) values (D*(N-1) positions plus (N-1) particle best fitnesses), so the communication complexity is O(ND).

The algorithm was implemented as a one-dimensional solver with MPI, written in C++, and tested on my laptop. Because of the randomization, the result differs from run to run.

Best solution (with MPI):
#CURRENT#(Position.X = 2.215059, Velocity.X = -0.021011, Fitness = 0.112612), #BEST#(Position.X = 2.215059, Fitness = 0.112612)

Best solution (without MPI):
#CURRENT#(Position.X = 2.214772, Velocity.X = 0.043791, Fitness = 0.112611), #BEST#(Position.X = 2.214772, Fitness = 0.112611)

The code is uploaded to GitHub (in a subdirectory named PSO) together with a README and a Makefile.

In my original work, I used PSO to design antennas automatically, as shown in the diagram below. It is a two-dimensional problem with N*N particles (N*N grid cells). The fitness is calculated by CST Microwave Studio and then fed back to PSO.

Figure: Antenna designed with PSO
