Application of PSO for Optimization of Power Systems under Uncertainty


Doctoral Thesis / Dissertation, 2009

159 Pages, Grade: 1.0


Excerpt


Table of Contents

Acknowledgement

Abstract

1 Introduction
1.1 Motivation
1.2 Objective
1.3 Outline

2 Particle Swarm Optimization
2.1 Introduction
2.2 Simple PSO Algorithm
2.3 Algorithm Settings
2.3.1 Population Size
2.3.2 Maximum Velocity
2.3.3 Acceleration coefficients
2.3.4 Inertia weight
2.3.5 Constriction factor
2.3.6 Neighbourhood Topology
2.4 PSO Versions
2.4.1 Binary PSO
2.4.2 Discrete PSO
2.4.3 Adaptive PSO
2.4.4 Multiobjective PSO
2.5 Applications
2.6 Novel Particle Swarm Optimization Algorithm
2.6.1 Introduction
2.6.2 Who can benefit?
2.6.3 The solution strategy
2.6.4 Description of the Proposed Approach
2.6.5 Performance of APSO
2.7 Summary

3 Constrained Optimization
3.1 Introduction
3.2 Penalty Functions
3.2.1 Static Penalty
3.2.2 Dynamic Penalty
3.2.3 Adaptive Penalty Function
3.2.4 Self Learning Penalty Function (AP)
3.3 Novel Approach (Rule Based Penalty Approach, RBP)
3.3.1 Why do we need a fitness function?
3.3.2 The Idea
3.3.3 Who decides the camouflages?
3.3.4 The Procedure
3.3.5 The Benefits
3.4 Test Problems
3.5 Summary

4 Unit Commitment Problem
4.1 Introduction
4.2 Problem Formulation
4.3 Particle Formulation
4.3.1 Binary Coding
4.3.2 Integer Coding
4.4 Binary Programming
4.4.1 Repair Strategy
4.4.2 Mutation Operator
4.4.3 Generation
4.5 APSO Approach
4.5.1 Swarm Initialization
4.5.2 Velocity Update
4.5.3 Additional Constraints
4.6 Special Convergence Operators
4.6.1 Demand Equalizer
4.6.2 Reserve Manager
4.7 Numerical Example
4.8 Conclusion

5 Optimization under Uncertainty
5.1 Introduction
5.2 Stochastic programming
5.2.1 Types of SP with recourse
5.2.2 Advantages and disadvantages of SP
5.3 Uncertainty Modeling
5.4 Scenario reduction algorithm
5.5 Applications

6 Selected Applications of PSO in Power Systems
6.1 Introduction
6.2 Reactive Power Management in Offshore Wind Farms
6.2.1 Offshore Wind Farm
6.2.2 Reactive Power Capability of Wind Energy System
6.2.3 Grid Requirements
6.2.4 Reactive Power Dispatch Problem
6.2.5 Solution Procedure
6.2.6 Test Network
6.2.7 Results
6.2.8 Conclusion
6.3 Optimal Operation of a Wind-Thermal Power System
6.3.1 Introduction
6.3.2 Wind Power Forecast Methodology
6.3.3 Uncertainty Modeling
6.3.4 Problem Formulation
6.3.5 Case Study
6.3.6 Conclusion
6.4 Hybrid Power Systems for Residential Loads
6.4.1 Introduction
6.4.2 Problem formulation
6.4.3 Scenarios
6.4.4 Solution procedure
6.4.5 Numerical results
6.4.6 Conclusion

7 Conclusions
7.1 Summary
7.1.1 Optimization algorithm
7.1.2 Power system operation
7.1.3 Uncertainty modelling
7.1.4 Optimization under uncertainty

References

Resume

List of Publications

Acknowledgement

I would like to express my sincere gratitude to my advisor Prof. Dr.-Ing. habil. Istvan Erlich, whose encouragement, supervision and support from the initial to the concluding level enabled me to produce this thesis. His perpetual energy and enthusiasm in research motivated me to do this Ph.D. I could not have imagined having a better advisor and mentor for my Ph.D. study. I am privileged and proud to have worked under his esteemed supervision. I sincerely dedicate this thesis to him.

I especially want to thank Prof. Dr.-Ing. habil. Gerhard Krost for his valuable guidance during my research. His great efforts to explain things clearly and simply have helped a lot of young researchers, including me, to successfully carry out my study at the university. I would also like to express my sincere thanks to my co-advisor Dr. Ganesh Kumar Venayagamoorthy for his valuable feedback, comments and suggestions on my dissertation.

My heartiest thanks to Hannelore Treutler for helping me with the administrative issues, especially with my visa, and to Ralf Dominik for providing all the software resources. I never started my day without the warm greetings from Rüdiger Reißig. Special thanks to him for making my stay at ‘Fachgebiet Elektrische Anlagen und Netze’ the most memorable part of my life.

My deepest appreciation to my colleagues Koch, Jens, Shewarega, Befekadu, Teeuwsen, Feltes, Robert, Nakawirow, Hashmani, Ayman, Stark, Wilch and Zamri for providing a stimulating and fun environment at the institute. I am indebted to my friends Jai, Naveen, Kiran, Sagar, Murali, Anka Rao and Subash who were always with me in bliss and despair.

Most importantly, I wish to thank my parents, Chalapathi Rao and Chandra Kumari, my sisters Malathi and Srilatha, and my brothers-in-law Siva Kumar and Dorababu for helping me get through the tough times, and for all the emotional support, love and caring they provided me throughout my life.

Finally, I would like to thank all people who have helped and inspired me during my doctoral study.

Abstract

The primary objective of this dissertation is to develop a black-box optimization tool. The algorithm should be able to solve complex nonlinear, multimodal, discontinuous and mixed-integer power system optimization problems without any model reduction. Although there are many computational intelligence (CI) based algorithms which can handle these problems, they require intense human intervention in the form of parameter tuning, selection of a suitable algorithm for a given problem, and so on. The idea here is to develop an algorithm that works relatively well on a variety of problems with minimum human effort. An adaptive particle swarm optimization (PSO) algorithm is presented in this thesis. The algorithm has special features such as an adaptive swarm size, parameter-free update strategies, progressive neighbourhood topologies and a self-learning, parameter-free penalty approach.

The most significant optimization task in power system operation is the scheduling of the various generation resources (unit commitment, UC). The current practice in UC modelling is the binary approach, which results in a high-dimensional problem. This in turn leads to increased computational effort and decreased efficiency of the algorithm. A duty-cycle based modelling proposed in this thesis results in an 80 percent reduction of the problem dimension. The stringent uptime and downtime requirements are also included in the modelling, so that the search process mostly starts in the feasible solution space. From the investigations on a benchmark problem, it was found that the new modelling yields high-quality solutions along with improved convergence.

The final focus of this thesis is to investigate the impact of the unpredictable nature of demand and renewable generation on power system operation. These quantities should be treated as stochastic processes evolving over time. A new PSO-based uncertainty modelling technique is used to remove the restrictions imposed by the conventional modelling algorithms. The stochastic models are able to incorporate the information regarding the uncertainties and to generate day-ahead UC schedules that are optimal not just for the forecasted scenario of demand and renewable infeed but for the whole set of possible scenarios. These models will assist the operator in planning the operation of the power system considering the stochastic nature of the uncertainties. The power system can therefore optimally handle a high penetration of renewable generation and provide economic operation while maintaining the same reliability as before the introduction of the uncertainty.

Chapter 1 Introduction

1.1 Motivation

Optimization problems in power systems are usually large, complex and highly nonlinear. They are often formulated as discontinuous and multi-modal problems and have to be solved in continuous, discrete, combinatorial or mixed-integer parameter spaces. In some instances the optimization problem cannot be mathematically modelled and no information about gradients or derivatives is available apart from the objective function value. Gradient-based optimization methods are efficient for continuous and uni-modal problems but cannot handle complex multi-modal problems. These techniques have a deterministic search procedure and are highly dependent on the starting operating point. Moreover, in order to simulate realistic power systems, the optimization problems need to be solved without considerable reduction or approximation of the models. In practical applications, easy adaptation and extension of the optimization algorithm are of particular importance. This is made possible by the evolution of new robust computational intelligence tools which use only evaluations of the objective and constraint functions and impose no requirements on the characteristics of these functions.

Evolutionary algorithms (EAs) are population-based stochastic global search algorithms. They provide robust solutions to highly complex optimization problems in engineering with minimum human effort. They ignite the evolutionary process with a population of random individuals (arbitrary operating points) representing potential solutions to the given problem. The quality of the individuals is evaluated by a fitness function. Each evolutionary algorithm has its own distinctive search procedure to refine the solutions during the iterative process. The credibility of an algorithm depends on its global exploration of the search space and its local exploitation of the optimal solution space. EAs like Particle Swarm Optimization, Ant Colony Optimization, Genetic Algorithms, Harmony Search, the Bees Algorithm, Tabu Search, Simulated Annealing and Diffusion Search have been successfully applied to effectively solve large-scale nonlinear optimization problems.

Despite the huge success of global search algorithms in tracking good solutions to real-world applications, they are subject to rigorous parameter tuning. These parameters, which influence the search procedure, are problem dependent. Most EAs also have the demerit of premature convergence, so the converged solution may not even be a local minimum. The selection of the optimal population size influences the quality of the solution and the computational time. Constrained optimization problems require the selection of an appropriate penalty function and well-tuned penalty coefficients. The algorithms are therefore problem dependent and require an intense trial-and-error procedure to track the optimal parameters of the algorithm for a specific application.

The above-mentioned demerits of evolutionary algorithms have challenged researchers to develop new variants to improve the performance of EAs. For this reason, much research is underway aiming to develop a robust algorithm applicable to a wide variety of optimization problems with less human intervention and a reduced burden of parameter tuning. In this thesis, a parameter-free particle swarm optimization algorithm capable of handling complex optimization problems in power systems is presented. PSO, developed by Kennedy and Eberhart, simulates the social behavior of fish and birds. The individuals or particles are associated with a certain velocity to explore the search space. The magnitude and direction of a particle’s velocity are guided by its own previous flying experience and by the performance of the best particle in the swarm. PSO has high priority among the class of EAs because of its simple evolutionary procedure, few algorithm-dependent parameters, fast convergence and easy adaptability to different problems.

Most decisions in real-time optimization have to be made under uncertainty, i.e. the data regarding all the variables associated with the optimization models are not known with certainty. Solving the optimization models by fixing the uncertain variables at their nominal values may provide poor solutions, which are not robust to perturbations in the uncertain variables. The stochastic nature of the uncertainties can be described by statistical information such as the mean and standard deviation obtained from historical data. The stochasticity of the random variables is described by probability distributions. These continuous distributions are approximated by discrete probability distributions. A large number of scenarios resulting from these distributions is then used to represent the randomness of the uncertainties. A stochastic programming approach can be used to incorporate these scenarios in the optimization model. In these models decisions are made before the uncertainty is disclosed, and corrective actions are taken after the uncertainty is revealed. Since stochastic models generate optimal decisions by minimizing the future consequences, they result in reliable and robust optimal solutions. Solving the optimization model with this huge set of scenarios is computationally expensive, so the bulky scenario set should be reduced with minimum loss of information. Conventional scenario reduction techniques use heuristic approaches performing one-to-one comparisons in order to find the scenarios to be deleted, which limits their use for extremely large numbers of initial scenarios. This thesis proposes the use of computational intelligence in the scenario reduction algorithm so as to relax the restrictions posed by the currently available reduction techniques and to reduce the modelling error. Solving the stochastic model involves optimization over a set of scenarios, which drastically increases the dimension of the control variables. Therefore an efficient algorithm is required to solve these high-dimensional models. Particle swarm optimization algorithms can be a good choice to handle these high-dimensional multi-modal problems.

1.2 Objective

The main contributions of this thesis are to develop a parameter-free particle swarm optimization algorithm and to provide robust solutions to various issues in power system planning and operation. The research described herein addresses the following objectives:

1. Develop a robust global search algorithm: Real-time power system optimization problems should be solved without considerable model reduction. This is made possible with global search algorithms. A new algorithm is therefore necessary to overcome the major drawbacks of evolutionary algorithms and to provide robust solutions in any complex search environment.
2. Penalty functions: The most efficient and easiest way to handle constraints in optimization problems is the use of penalty functions. The direction of the search process, and thus the quality of the optimal solution, is heavily influenced by these functions. A suitable penalty function has to be chosen in order to solve a particular problem, and these penalty functions are associated with numerous user-defined coefficients which have to be rigorously tuned to suit the given problem. Therefore, the objective is to develop a general parameter-free penalty technique which can handle most optimization problems without any human intervention.
3. Unit commitment problem: UC is the most significant optimization task in the operation of power systems. The complexity of the UC problem grows exponentially with the number of generating units, and the currently available UC algorithms suffer from the curse of dimensionality. The increased problem size adversely affects the computational time and the quality of the solutions. A new UC problem formulation is therefore necessary to reduce the problem size.
4. Reduce the uncertainty modelling error: Many decisions in power system planning and operation have to be made without complete information about the uncertainties prevailing in the optimization model, such as generator outages, electrical and thermal load, and renewable generation fluctuations. These uncertainties should therefore be modelled in a form suitable for use in the model. The evolution of uncertainties is usually represented by a scenario tree. Computational intelligence techniques have to be introduced in scenario generation and reduction for better uncertainty modelling.
5. Optimization under uncertainty: Planning and operation of the power system often require decisions to be made in the presence of uncertain information. All the uncertainties should be considered in the decision-making process. Therefore, suitable optimization models have to be developed which result in cost-optimal and reliable operation of the power system.

1.3 Outline

Chapter 2 describes the fundamentals of the particle swarm optimization algorithm. Developments by various other authors are explained through the PSO variants. The chapter then proposes the unconstrained adaptive particle swarm optimization algorithm. The performance of the algorithm is tested on various benchmark problems, and the results obtained through the probabilistic comparison with other PSO variants are listed in this chapter.

Chapter 3 presents the basic concepts of constrained optimization. A complete description of the various penalty techniques is provided. The chapter then states the main contributions to constrained optimization. First, the improvements made to the self-learning penalty function approach are presented. Second, the newly proposed rule-based penalty technique is introduced. Third, the new penalty techniques are compared with other approaches. The results are compared using statistical measures and also through the non-parametric Mann-Whitney U hypothesis test.

The most significant optimization task in power system operation, the unit commitment problem (UCP), is described in Chapter 4. This chapter presents the newly proposed duty-cycle based UCP with the advanced convergence operators. A standard ten-machine system is used to check the performance of this new approach.

Chapter 5 deals with optimization under uncertainty. Uncertainty modelling using scenario analysis is explained in detail. The new swarm-intelligence based scenario reduction algorithm is presented in this chapter. The chapter also provides a rich literature review on the applications of stochastic programming.

Chapter 6 presents three applications of APSO in power systems. The first application deals with reactive power management in offshore wind farms, the second describes a two-stage stochastic programming approach for the stochastic UCP applied to a wind-thermal power system, and the final application is a multi-stage stochastic programming model describing the optimal operation of a residential hybrid power system with unpredictable PV generation and load.

Chapter 2 Particle Swarm Optimization

2.1 Introduction

Particle swarm optimization is an efficient evolutionary computational technique developed by Kennedy and Eberhart [1], [2]. Unlike other population-based search algorithms, PSO tracks the optimal solution not by survival of the fittest but by a process motivated by the personal and social behaviour of a flock of birds. Unlike conventional optimization algorithms, PSO needs neither gradient information of the objective function nor an appropriate operating point for initiating the search process.

PSO has been successfully applied to solve complex global optimization problems. Its simple evolutionary process, few problem-dependent parameters and fast convergence make it a leading competitor among the evolutionary algorithms. Its solution methodology is hardly affected by the dimension, complexity and nonlinearity of the problem. PSO is therefore a smart alternative for solving large-scale power system optimization problems such as optimal power flow, power system planning and operation, reactive power management, controller parameter estimation, etc.

PSO performs the search process with a population of particles called a swarm. A particle is characterised by a D-dimensional vector representing its position in the search space. The position vector represents a potential solution to the optimization problem. During the evolutionary process, the particles traverse the solution space with a certain velocity. Each particle is associated with a fitness value evaluated using the objective function at the particle’s current position. Each particle memorizes the individual best position encountered during its exploration, and the swarm remembers the position of the best performer in the population. At each iteration the particles update their positions by adding a certain velocity. The velocity of each particle is influenced by its previous velocity, the distance from its individual best position (cognitive term) and the distance from the best particle in the swarm (social term). A weighted combination of these three components gives the new velocity.

A particle therefore draws on its previous flying experience to control the speed and direction of its journey. Apart from its own performance, the particle also interacts with its neighbours and shares information regarding previous experiences. The particle utilizes this social information to build its future search trajectory. During the iterative procedure the particles update their velocities so as to move stochastically towards their local and global best positions. The swarm therefore tracks the optimal solution through cooperation and competition among its particles.

Consider a swarm of 5 particles in a D-dimensional search space, designed to minimize a function f(x). A particle i is defined by its current position xi, velocity vi and personal best position xpi as shown below:

illustration not visible in this excerpt

The global best position in the swarm is considered to be xg. The standard PSO update equations for each particle in the swarm at iteration t are as given below:

illustration not visible in this excerpt

where xg ∈ {xp1, xp2, …, xp5} such that:

illustration not visible in this excerpt

The randomness in the search procedure is introduced by two independent uniform random sequences, r1 and r2, in the range (0,1). The weighting coefficients c1 and c2 are the acceleration coefficients which control the influence of the cognitive and social terms on the particle’s velocity. The inertia weight w regulates the global and local exploration capability of the particles.
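The update equations themselves are not reproduced in this excerpt. Based on the description above, the standard update rules usually take the following form (a reconstruction, not a verbatim copy of equations (2.2)-(2.4) of the thesis):

v_i(t+1) = w · v_i(t) + c_1 r_1 (x_pi − x_i(t)) + c_2 r_2 (x_g − x_i(t))        (cf. 2.2)
x_i(t+1) = x_i(t) + v_i(t+1)                                                    (cf. 2.3)
x_g ∈ {x_p1, …, x_p5}  with  f(x_g) = min_i f(x_pi)                             (cf. 2.4)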

2.2 Simple PSO Algorithm

Step 1: Initialize a random population of particles. Each dimension of the position vector xi is generated by a random distribution on the interval [xmin, xmax]. Each coordinate of the velocity vector vi is similarly initialized on the interval [vmin, vmax], where vmax = k·xmax and vmin = k·xmin with 0.1 < k < 1.0. The individual best position vector xpi replicates the initial position vector xi.

Step 2: Evaluate the fitness f(xi) of each particle.

Step 3: Update the local best position xpi as shown below:

illustration not visible in this excerpt

The global best xg is estimated by (2.4).

Step 4: Update the velocity and position of the particles according to equations (2.2) and (2.3) respectively.

Step 5: Repeat steps 2, 3 and 4 until the stopping criterion is met. The exit condition is usually a maximum number of function evaluations, a maximum number of iterations or a tolerance value pertaining to the fitness value.
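The five steps can be condensed into a few lines of code. The following Python sketch only illustrates the simple PSO loop under the standard update rules; the parameter values, the clipping of positions to the bounds and the example objective are assumptions of this illustration, not the settings used in the thesis.

import numpy as np

def simple_pso(f, dim, x_min, x_max, n_particles=30, n_iter=200,
               w=0.7, c1=1.5, c2=1.5, k=0.5):
    # Step 1: random initial positions, velocities and personal bests
    x = np.random.uniform(x_min, x_max, (n_particles, dim))
    v = np.random.uniform(k * x_min, k * x_max, (n_particles, dim))
    xp = x.copy()                                  # personal best positions
    fp = np.apply_along_axis(f, 1, x)              # personal best fitness values
    for _ in range(n_iter):                        # Step 5: repeat until the stopping criterion
        fx = np.apply_along_axis(f, 1, x)          # Step 2: evaluate the fitness
        better = fx < fp                           # Step 3: update the personal bests ...
        xp[better], fp[better] = x[better], fx[better]
        g = xp[np.argmin(fp)]                      # ... and the global best
        r1 = np.random.rand(n_particles, dim)      # Step 4: velocity and position update
        r2 = np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (xp - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, x_min, x_max)
    return xp[np.argmin(fp)], fp.min()

# example: 10-dimensional sphere function on [-100, 100]^10
best_x, best_f = simple_pso(lambda z: np.sum(z ** 2), dim=10, x_min=-100.0, x_max=100.0)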

2.3 Algorithm Settings

Like any other evolutionary algorithm, the performance of PSO is also influenced by several settings like parameters and topology. The selection of these settings depends on the optimization problem and greatly affects the convergence behaviour and the optimal solution.

2.3.1 Population Size

Eberhart and Kennedy [3] have observed that PSO requires a smaller population compared to other evolutionary algorithms such as the genetic algorithm. Complex multimodal problems require large swarms to fruitfully exploit several promising areas (local minima) and provide a reliable global solution; a smaller population in this case would end up in a local minimum. A larger swarm for simple problems, on the other hand, would be computationally expensive. The choice of the swarm size should therefore be a good balance between computational time and reliability [4].

2.3.2 Maximum Velocity

This parameter limits the maximum exploration capability of the particles [5]. A large value of vmax may lead to an explosion of the search: the particles may unnecessarily exploit the infeasible space and the algorithm may not find a feasible solution. A smaller value of vmax, on the other hand, will not facilitate global exploration and the particles may converge prematurely on a local minimum. Moreover, the inertia weight controls the global exploration directly, whereas vmax can only influence the exploration indirectly. Capping the maximum velocity should therefore preferably be carried out through the inertia weight rather than through vmax.

2.3.3 Acceleration coefficients

The acceleration coefficients govern the relative velocity of the particle towards its local and global best positions. These parameters have to be tuned based on the complexity of the problem. A suitable constriction factor calculated from these parameters will ensure cyclic behaviour of the particles.

2.3.4 Inertia weight

The choice of vmax to curtail the maximum allowable velocity of a particle is also problem dependent, and there is no generic rule to either estimate or control this parameter. This disadvantage has been overcome by using the inertia weight [6]. This parameter is designed such that the particles achieve a good balance between global and local exploration. Therefore a linearly decreasing function of the iterations between 0.9 and 0.4 is chosen. An initially larger inertia weight allows the particles to search new promising areas efficiently, and a lower value towards termination facilitates fine tuning of the optimal solution.

illustration not visible in this excerpt
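The formula itself is not visible in this excerpt; the linearly decreasing schedule described above is commonly written as

w(t) = w_max − (w_max − w_min) · t / t_max,   with w_max = 0.9 and w_min = 0.4,

where t is the current iteration and t_max the maximum number of iterations (a standard reconstruction, not necessarily the exact expression used in the thesis).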

2.3.5 Constriction factor

The constriction-factor based PSO algorithm proposed by Clerc can converge without using vmax. The particle’s oscillations can be effectively damped using this factor. This phenomenon is well explained in [7], [8] using an eigenvalue analysis that treats the velocity and position update equations as state-space equations. The velocity update equation with the constriction factor can be expressed as follows:
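The equation is not reproduced in this excerpt; Clerc’s widely used formulation, which is presumably what is meant here, reads

v_i(t+1) = χ [ v_i(t) + c_1 r_1 (x_pi − x_i(t)) + c_2 r_2 (x_g − x_i(t)) ],
χ = 2 / | 2 − φ − sqrt(φ² − 4φ) |,   with φ = c_1 + c_2 and φ > 4.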

The constriction factor allows the particles to cycle around randomly defined regions around xp and xg. Regardless of the distance between the two regions, the constriction factor ensures that PSO can efficiently exploit the defined space and finally converge.

2.3.6 Neighbourhood Topology

In the standard PSO algorithm described by equations (2.2) and (2.3), each particle is completely informed about the performance of all other particles in the swarm and is attracted towards the global best performer. Another class of algorithms uses topologies in which each particle has access only to a certain set of neighbours, such as the ring and wheel topologies [9], [10]. Several other network structures have been proposed to solve different sets of problems. The choice of the topology is problem dependent and there is no general network which is suitable for all classes of optimization problems. The stability and convergence properties of the PSO algorithm are proved in [11].

2.4 PSO Versions

2.4.1 Binary PSO

The standard PSO was designed to handle real-valued vector spaces. However, PSO can easily be adapted to handle binary spaces. There are several applications, such as the unit commitment and lot sizing problems, where the control variables can be either YES or NO, represented by the binary bits 1 and 0 respectively. In binary PSO, the particle is modelled to represent the binary bits. The velocity vid represents the probability of flipping the bit xid. The velocity update equation (2.2) remains unchanged except that xp and xg are vectors of binary bits. The velocity is transformed into a probability using the sigmoid function [12].

The maximum allowable velocity vmax is used to control the mutation of the binary bits and is usually set to 4.0. Curtailing the velocity to [-4.0, 4.0] means that the probability s(vid) is limited to the range 0.018 to 0.982. Unlike real-coded PSO, where a high vmax allows increased exploration, in binary PSO a high vmax allows a lower rate of altering the binary bits. The modified position update equation can be expressed as:

illustration not visible in this excerpt
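The position update equation is not shown in this excerpt; a minimal Python sketch of the sigmoid-based bit update described above (variable names are illustrative) is:

import numpy as np

def binary_pso_position_update(v, v_max=4.0):
    # clip the velocity; with v_max = 4.0 the flip probability stays roughly within [0.018, 0.982]
    v = np.clip(v, -v_max, v_max)
    s = 1.0 / (1.0 + np.exp(-v))                         # sigmoid: velocity -> probability
    return (np.random.rand(*v.shape) < s).astype(int)    # new binary position vector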

2.4.2 Discrete PSO

PSO can easily be adapted to solve mixed-integer optimization problems [13]-[15]. The particle can be modelled to accommodate binary, integer and continuous control variables. The evolution and update strategy is very similar to the standard PSO except that, after the update process, the position xid corresponding to the integer variables is rounded to the nearest integer. The rounding can be a deterministic or a probabilistic process. In deterministic rounding, xid is rounded based on its distance to its adjoining integers, whereas in probabilistic rounding a probability function governs the rounding process, as illustrated below.
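The following sketch illustrates the general idea of the two rounding schemes (an illustration, not the exact procedures of [13]-[15]):

import numpy as np

def round_deterministic(x):
    # round to the adjoining integer that is closest to x
    return np.rint(x).astype(int)

def round_probabilistic(x):
    # round up with a probability equal to the fractional part of x, down otherwise
    frac = x - np.floor(x)
    return (np.floor(x) + (np.random.rand(*np.shape(x)) < frac)).astype(int)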

2.4.3 Adaptive PSO

The particles’ flying trajectories and convergence rate can be optimally defined by rigorously tuning the inertia weight, acceleration coefficients and constriction factor. The amplitude and damping rate of the particles’ oscillations during the iterative process can be controlled by these parameters. However, a fixed set of these parameters for the entire search process may lead to poor results. For example, if the population of particles is concentrated near a local minimum, the velocity guided by the particle’s own performance and the global best particle is not sufficient to support exploration. The particles then converge on the best position discovered so far by the swarm, which might not even be a local minimum. Such situations can be avoided by energizing the particles, giving them enough velocity to explore. Several adaptive strategies have been proposed to dynamically adjust the particles’ trajectories during the evolutionary process and improve the solution. Three such variants adopting fuzzy control for dynamically adjusting the velocity and inertia weight are described below.

The parameters can be dynamically adjusted during the iterative process according to the requirements of the particles. The fuzzy logic controller is a tool that generates directive control actions from a given set of inputs through a strong knowledge base built from simple logical operators. The knowledge base is built on the experience gained during trial-and-error parameter tuning.

2.4.3.1 Fuzzy Inertia weight

Shi and Eberhart [16] proposed a fuzzy controller to estimate the changes in the inertia weight. The inputs to the fuzzy controller are the current inertia weight and the normalized fitness corresponding to the global best position. Three membership functions are used to define how the input space is mapped to the three fuzzy sets (low, medium, high). The output of the controller predicts the new inertia weight.

The results indicate that the fuzzy control is very effective for unimodal problems. The controller suggests a larger inertia weight until the particle traces a good solution space. Once the particle finds the optimal space, the controller automatically reduces the inertia weight to fine-tune the solution. Adjusting the inertia weight based on the particles’ performance rather than on the iteration number removes unwanted iterative steps and hence ensures faster convergence. However, for multimodal problems the controller may identify a solution space close to a local minimum but has no information on whether the traced local minimum is also the global optimum. The particles may therefore be trapped in a local minimum.

2.4.3.2 Fuzzy Velocity

In [17], the authors proposed an adaptive fuzzy control to dynamically adjust the velocity of the particles. The particles’ velocities are scrutinized at every iteration, and the poorly performing particles are motivated to search better and compete with the best particles. If the velocity is less than a predefined threshold, the particle is given a sudden boost determined by a set of fuzzy rules.

Here vc is the threshold velocity and p is the scaling factor which determines the magnitude of the turbulence. A large vc rejuvenates the particles to discover new promising areas, whereas a smaller value allows a local fine search. A small value of p increases the amplitude of the particle’s oscillations, which might help the particle to escape even a strong local minimum. The entire search process is therefore divided into three stages. In the first stage, vc and p are set to large and small values respectively to ensure that the particles identify the best local minimum. vc and p are set to medium values in the second stage, and in the final stage vc is small and p is large. In the second and third stages, the swarm exploits the discovered optimal space. These characteristics are used to frame the rules of a fuzzy controller. At every iteration, the controller monitors the particle’s personal performance and velocity to decide the threshold and scaling factor.

2.4.3.3 Adaptive Constriction factor

In this algorithm, the particle’s dynamics and convergence are controlled by optimally adjusting the constriction factor during the search process. In the standard PSO, the constriction factor is constant throughout the search process. In [18], the authors proposed an adaptive constriction factor to enhance the performance of PSO on complex multi-modal problems. The influence of the social and cognitive parameters on the particle’s velocity can be controlled by carefully regulating the constriction factor. This adaptive policy results in faster convergence. A special index called the location related ratio (LR) is defined for each dimension of the particle. LR is a normalized distance of the particle from a core position (the average performance of all the particles in the swarm). The particles try to adjust their velocity based on their average distance from the other particles in the swarm. This avoids premature convergence and also regulates the velocity for effective global exploration and local exploitation.

2.4.4 Multiobjective PSO

Multiobjective optimization problems have two or more objectives to be optimized simultaneously. The Pareto front concept describes the optimal trade-off possibilities between the objectives: a solution on the Pareto front cannot improve any objective without degrading at least one of the other objectives. Algorithms for multiobjective optimization problems have to identify the true Pareto front. Evolutionary algorithms, and especially PSO, can effectively explore different parts of the Pareto front simultaneously. In the standard PSO, however, all the particles are attracted towards the global best particle in the swarm, and hence PSO cannot find multiple points on the Pareto front. The global best and the local best have to be intelligently selected to let the swarm discover different regions of the front.

Hu and Eberhart [19] proposed a dynamic neighbourhood PSO where each particle selects a set of neighbours based on one of the objectives. A local best is then selected from this neighbourhood based on the other objective. This local performer replaces the global best in the velocity update equations. Although this method is applicable to a wide variety of multiobjective problems, it is strictly restricted to a two-dimensional objective space.

In [20], [21] Coello suggested another approach to incorporate a Pareto ranking scheme in PSO. The idea is to archive the non-dominated solutions found by the particles in the past and use this information in the global guidance mechanism of PSO to promote convergence towards the most optimal non-dominated solutions. Each particle chooses a different leader, based on the historical data stored in the repository, to guide its search trajectory. Furthermore, the already explored objective space is partitioned into hypercubes. Each hypercube is assigned a fitness based on the number of particles in it. A particle in a less crowded hypercube is more likely to be selected as the leader of a particle. This facilitates the swarm exploring underrepresented areas of the Pareto front.

2.5 Applications

Particle swarm optimization outperforms other evolutionary algorithms due to its simple search process, few tuneable parameters and its ability to rapidly discover good solutions. PSO can be an effective tool when the optimization problem cannot be mathematically modelled, when no expert knowledge is available to exactly define the problem space, or when the objective function is complex, nonlinear and high-dimensional. These features enable PSO to be applied to a variety of applications ranging from engineering design and process optimization to service-oriented applications in finance, healthcare, bioinformatics and entertainment. In the medical field, PSO is used for edge detection in medical images, to optimize drug formulas, and to identify the right sample in medical diagnostic tests involving huge sample sets. With the rise of AI-based computer games, PSO is used to generate smarter computer players, to create artificial human-like interaction with a human player by strategically adapting the AI, and to improve video and sound quality. In robotics, PSO can be used to generate fast motion trajectories for robots with a high degree of freedom. In dynamic environments, robots store information about the changing topologies and obstructions, and PSO is used to decide the shortest path in the ever-changing environment. It can also be used in machine-learning applications, including classification and prediction. In the financial sector, PSO is used for optimal asset allocation, index tracking, risk analysis and the search for optimal trading indicators and rules for the stock market using high-frequency data. Finance models and bidding strategies can also be improved. PSO has also been used to model immune systems and ecological and social systems.

In power systems, PSO has been successfully applied to optimal power flow, the unit commitment problem, economic dispatch, reactive power and voltage control, generation expansion planning, reliability assessment, controller design, machine modelling, neural network training and forecast models [22], [23].

2.6 Novel Particle Swarm Optimization Algorithm

2.6.1 Introduction

The advent of new evolutionary global search algorithms has almost filled the vacuum left by conventional optimization techniques in solving large complex problems. These stochastic algorithms are capable of solving a wide variety of real-world problems which may or may not have an explicit description. However, they too have some limitations. They have several problem-dependent parameters which need to be rigorously tuned to obtain good solutions. For example, the population size depends on the complexity of the problem; there is no standard fixed swarm size which suits a range of problems. The iterative search process is meant to ensure that all particles converge at the global optimum, and extra energy in the form of iterations is devoted to improving the badly performing particles. What is the contribution of these bad particles to searching the optimal space? The particles in the swarm may have different exploration capabilities, so how fair is it to have the same flying strategies for both good and bad performing particles? The bad particles may call for additional iterations to converge. There are several adaptive algorithms which adapt the inertia weight and the cognitive and social parameters during the iterative search process. It is common practice to use a high inertia weight to rejuvenate the particles; this helps the particles to explore new areas efficiently. What will happen to the swarm if no good region is discovered during this stage? The particles will be exhausted and may only facilitate a local search, and the swarm may end up with no good solution. The selection of the local and global best parameters also involves computational effort. The user should either have expert knowledge of the problem being solved or repeat the optimization process several times to decide on the right set of these parameters. The standard PSO velocity update strategy demands that all particles in the swarm follow the global best performer. The best performer might get caught in a local minimum, all the particles might follow its path, and the whole swarm might therefore converge prematurely on this local minimum. There are many such challenging issues which have to be addressed to take evolutionary algorithms to the next level. Most of these issues are addressed in the proposed version of PSO.

2.6.2 Who can benefit?

Almost any optimization problem can be solved by the proposed algorithm without much human intervention. One only has to define the right set of decision variables, represented as a particle, and a suitable measure (fitness function) to compare the relative performance of the particles. The ability to discover good solutions at a rapid pace is the key feature behind its success. The algorithm can easily be adapted to solve any new application. The new version of PSO is a good alternative to other optimization techniques when:

(1) Search space is complex, nonlinear, non-differentiable and multimodal
(2) Search space is widely spread
(3) No expert knowledge or mathematical analysis of the problem is available
(4) Problem space or objective function cannot be approximated
(5) The application can return only numerical evaluations of the objective function and constraint violations, and no other information is available.

2.6.3 The solution strategy

The motive behind the new version of PSO is to develop a black-box optimization tool. The algorithm should involve minimum human assistance and should provide good solutions to a wide variety of real-time applications. PSO should be usable like a built-in function call. The function should only require inputs such as the problem dimension, the lower and upper bounds of the decision variables, the number of equality and inequality constraints and a stopping criterion. This function call is associated with an external function which takes an instance of the decision variables and returns the corresponding fitness value and constraint violations. No other information about the problem is exchanged. The particles should be able to adjust their flight, and the swarm should maintain a suitable population diversity, so that the search process tunes itself automatically and discovers good solutions.
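The intended usage pattern can be illustrated as follows. The function name, signature and the example problem below are hypothetical and serve only to show the black-box interface described above:

def fitness_and_violations(x):
    """User-supplied evaluation of one candidate solution x: returns the
    objective value and the list of constraint violations, nothing else."""
    cost = sum(xi ** 2 for xi in x)            # illustrative objective
    violations = [max(0.0, 1.0 - sum(x))]      # illustrative inequality constraint sum(x) >= 1
    return cost, violations

# The optimizer is then invoked like a built-in function call (illustrative signature):
# best_x, best_cost = apso_minimize(fitness_and_violations, dim=10,
#                                   lower=[-5.0] * 10, upper=[5.0] * 10,
#                                   n_eq=0, n_ineq=1, max_evaluations=60000)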

The above requirements are fulfilled by the new version of PSO called self-adaptive particle swarm optimization, hereafter referred to as APSO. It is a parameter-free optimization tool. The particles of this algorithm have the capability to modify their search strategy based on their personal performances. The particles and the swarm adapt to the situation to find the global optimal solution. The algorithm is free from the burden of selecting the most appropriate swarm size. It is inspired by nomad communities: nomads are groups of people who move from place to place, following the seasonal availability of resources, in search of a better living. The algorithm simulates the moving strategy of different-sized groups of nomads called "Tribes". The basic structure of the algorithm is derived from the TRIBE-PSO introduced by Maurice Clerc [24].

2.6.3.1 Swarm Evolution

The search process is ignited by an initial set of NT tribes, each consisting of a fixed number of particles NP. Each particle is associated with a certain velocity and fitness. Each particle memorizes its previous two performances and also its best performance. At the end of NG generations, the tribes are evaluated. A particle is judged based on its two previous performances, each of which can be an improvement (+), status quo (=) or a deterioration (-). A bad particle is one which deteriorates or shows no progress (- -, - =, = =, = -, + -). A good particle, on the other hand, is one whose performances are improvements (= +, + +, + =). A TRIBE is likewise labelled good or bad depending on whether the majority of its particles are good or bad. At the first iteration, the previous two performances of the particles are initialized to their current position. If a TRIBE happens to be a bad performer, this indicates that its current information about the search space is not sufficient to find a good solution. The tribe then adds more information by generating a new tribe with NP particles: two thirds of the new particles are randomly generated, while the remaining third are generated in close proximity to the best particle of the current TRIBE. A second bad TRIBE will add another NP particles to the newly generated TRIBE. A good TRIBE, in contrast, has a majority of good particles, which means that it already has enough information about a good solution. If this TRIBE has more than one particle, its worst performing particle is identified. This particle may itself be good, but its close associates in the same TRIBE are better and it carries the same or less information than the rest, so there is no risk in deleting it. The good TRIBE therefore eliminates its least performing particle. The updated swarm is again allowed to explore for NG iterations, and this process continues until the stopping criterion is reached. The evolution ensures that new particles are born only when they are required and that particles which do not contribute to the search process are eliminated. The swarm always consists of potential particles enthusiastic enough to search for the optimal solution, which substantially helps the algorithm to find the solution within few iterations.
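The bookkeeping described above can be sketched in a few lines; the labels '+', '=' and '-' encode improvement, status quo and deterioration of the last two evaluations (illustrative code, not the thesis implementation):

def particle_status(last_two):
    """last_two: the particle's previous two performance labels, e.g. ('+', '=')."""
    good = {('=', '+'), ('+', '+'), ('+', '=')}
    return 'good' if tuple(last_two) in good else 'bad'

def tribe_status(particle_histories):
    """A tribe is good if the majority of its particles are good."""
    n_good = sum(particle_status(h) == 'good' for h in particle_histories)
    return 'good' if n_good > len(particle_histories) - n_good else 'bad'

# A bad tribe spawns a new tribe of NP particles (2/3 random, 1/3 near its best particle);
# a good tribe with more than one particle removes its worst member.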

2.6.3.2 Flying Strategies

Different particles in the TRIBEs show different levels of performance. Before each TRIBE evaluation, the particles are given enough time to explore. During the evaluation, the particles are compared based on their performance and the least performing particles may be eliminated. But are all the particles given a fair chance to improve? Both good and bad particles are allowed to explore for the same amount of time. Since all particles have the same flying strategy, the good particles will always perform better and the bad particles will never reach the standards of the good ones. In order to remove this bias, different particles are given different flying strategies. Based on their previous performances, the particles automatically select the right flying strategy. The particles are categorized into three groups. A worst particle is one whose performances are deteriorations (- -, = -). Bad particles and good particles comprise the combinations (- =, = =, + -) and (+ =, = +, - +, + +) respectively. The worst particles follow a random search strategy, the bad particles prefer the pivot strategy and the good particles follow the Gaussian update strategy. These strategies are explained below.

Random Strategy

The flight of the particle depends entirely on either the local best or the global best. One of two procedures is chosen at random in this strategy. In the first procedure, the distances of the local best p and of the global best g from the current position are calculated. The larger of the two distances serves as the radius of a hypersphere generated around the local best. A position vector is randomly generated on this hypersphere (Hp) and, when added to the current position, yields the updated position vector.

illustration not visible in this excerpt

Similarly, the second procedure identifies the maximum of the two distances from the global best to the local best and to the current position. With this as the radius and the global best position as the centre, a hypersphere (Hg) is generated. The new updated position is randomly generated on this hypersphere as shown below:

illustration not visible in this excerpt

Pivot Strategy

In this strategy the particles are forced to follow the direction of their individual best performance p and the best informer of the particle g. Two hyperspheres, Hp and Hg, with p and g as centres and radius equal to the distance between them, are defined. A point is randomly chosen in each of these hyperspheres. A weighted combination of these points gives the new position vector xk+1 of the particle.

illustration not visible in this excerpt

Here c1 and c2 are the weight factors. These values are calculated using the objective function values f corresponding to the local and global best.
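A sketch of the pivot update under one common interpretation is shown below; the fitness-proportional weights and the uniform sampling inside the hyperspheres are assumptions of this illustration (positive fitness values and a minimization problem are assumed), since the exact formulas are not visible in this excerpt:

import numpy as np

def sample_in_hypersphere(center, radius):
    # uniform random point inside a hypersphere with the given center and radius
    direction = np.random.randn(center.size)
    direction /= np.linalg.norm(direction)
    return center + radius * np.random.rand() ** (1.0 / center.size) * direction

def pivot_update(p, g, f_p, f_g):
    radius = np.linalg.norm(g - p)          # both hyperspheres use the distance between p and g
    c1 = f_g / (f_p + f_g)                  # the better (smaller) objective value gets the larger weight
    c2 = f_p / (f_p + f_g)
    return c1 * sample_in_hypersphere(p, radius) + c2 * sample_in_hypersphere(g, radius)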

Gaussian Update Strategy

A set of Gaussian distributions governs the cognitive and social contributions to the particle’s velocity. The velocity of particle i in dimension d at iteration k+1 depends on its previous velocity (vidk), its previous best performance (xpd) and the performance of the best informer (xgd), as shown below.

illustration not visible in this excerpt

Here gauss_rand(μ, σ) generates normally distributed numbers with mean μ and standard deviation σ. During the early stages of the search process the distance of the particle from its local best and global best will be quite large. A Gaussian distribution generated with this distance as the mean and half the distance as the standard deviation will have a wide spread, and the velocities generated from it will be high enough to provide global exploration. When the search process is in its final stages, δp and δg are small and the Gaussian distributions have a very small spread, hence providing only a local search. The Gaussian distributions therefore provide both global and local exploration around p and g. This approach eliminates the use of acceleration coefficients; the update strategy is thus totally independent of tuning parameters.
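A sketch of the Gaussian update strategy, with gauss_rand(μ, σ) realized through numpy, is given below; treating each dimension independently and keeping the sign of the distances are assumptions of this illustration:

import numpy as np

def gaussian_velocity_update(v_prev, x, x_p, x_g):
    """Velocity built from Gaussian distributions whose spread shrinks as the
    particle approaches its personal best x_p and its best informer x_g."""
    delta_p = np.abs(x_p - x)                              # per-dimension distance to personal best
    delta_g = np.abs(x_g - x)                              # per-dimension distance to best informer
    cognitive = np.random.normal(delta_p, delta_p / 2.0) * np.sign(x_p - x)
    social = np.random.normal(delta_g, delta_g / 2.0) * np.sign(x_g - x)
    return v_prev + cognitive + social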

2.6.3.3 Neighborhood Topologies

The standard PSO version employs a star topology where each particle is directly connected to the global best performer of the swarm. When the global best performer gets caught in a local minimum, all particles are naturally attracted to it. Since each particle is directly connected to the global best, information is rapidly propagated and the whole swarm may prematurely converge on this local minimum. In order to avoid such untoward convergence, several information or neighborhood topologies have been suggested. In the APSO algorithm each particle has an evolving neighborhood. The flight of the particle is consistently monitored and guided by these neighbors. The global best performer is no longer common to all the particles; each particle selects its own global best performer from its neighborhood and not from the whole swarm. The neighborhood of a particle in a TRIBE consists of all its contemporaries in that particular TRIBE and also its parent particle (the particle from which it was born). Initially all particles have random parents. When a particle dies due to poor performance, its presence in all neighborhoods is replaced by its best performer. The information regarding the global best solution discovered by the swarm is still propagated to all particles, but it takes several iterations to spread. During this process the particles have enough time to explore and can easily escape a local minimum suggested by the global performer. However, this comes at the cost of convergence speed: a swarm with this topology is able to avoid premature convergence, but its convergence is slower than that of a swarm with a ring topology.

2.6.4 Description of the Proposed Approach

The optimization process starts with NT tribes and eventually evolves to explore the entire problem space. Each particle in a tribe is assisted by a set of associates in its neighborhood. Each tribe tries to locate a minimum and in the process also communicates with the other tribes to discover the global solution. The algorithm consists of two iterative loops. One loop controls the ultimate termination of the search process. The second loop allows the swarm to explore and exchange information among the neighbors before the tribes are finally evaluated; this loop therefore controls the evolutionary process. A fixed number of generations, NG (= 10), is set as the termination criterion for this loop. The algorithm is explained in the following steps:

illustration not visible in this excerpt

2.6.5 Performance of APSO

APSO was developed to solve large power system optimization problems. However, it is computationally difficult to validate the algorithm using these applications. Therefore a set of standard test problems commonly used in global optimization is used to investigate the effectiveness of the proposed algorithm. The performance of the proposed adaptive PSO algorithm (APSO) is compared with two other PSO variants (NPSO and SPSO). The PSO algorithm with fixed neighborhood topologies is referred to as NPSO (standard PSO 2006); in this algorithm each particle has a fixed neighborhood size. The second variant is the basic PSO algorithm with a star neighborhood, i.e. each particle is connected to every other particle in the swarm. The parameters for NPSO and SPSO are optimally tuned to obtain quality solutions. Particles which violate the bounds are placed on the boundary and their corresponding velocity is reset to zero. The acceleration coefficients are set to 1.5 each and the inertia weight is a linearly decreasing function from 0.8 to 0.2. A population size of 200 was used in all simulations.

2.6.5.1 Test Functions

The performance of APSO was investigated on six benchmark functions. These six functions cover a wide variety of complexities in unconstrained global optimization problems. The functions are all high-dimensional problems. All of them are formulated as minimization problems and have a unique global optimal solution. The functions are given by:

1. Sphere Function

illustration not visible in this excerpt

2. Step Function

illustration not visible in this excerpt

3. Rosenbrock Function

illustration not visible in this excerpt

4. Rastrigin Function

illustration not visible in this excerpt

5. Ackley’s Function

illustration not visible in this excerpt

6. Griewank Function

illustration not visible in this excerpt

The Sphere and Rosenbrock functions are unimodal problems with continuous (smooth) landscapes. The Step function is also a unimodal problem but has a discontinuous (flat) landscape; no gradient information is available for this function, and an algorithm may therefore converge on any of the flat surfaces. The Rastrigin, Ackley and Griewank functions are multimodal problems whose number of local minima increases exponentially with the problem dimension. The complex nature of these functions is shown by the 3-D views and contour plots in Figs. 2.1-2.4. The objective of these optimization problems is to find the global optimal solution defined as:

illustration not visible in this excerpt
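For reference, commonly used definitions of these six benchmarks are given below in Python; the exact formulations and search ranges used in the thesis are not visible in this excerpt, so these serve only as illustrations:

import numpy as np

def sphere(x):     return np.sum(x ** 2)
def step(x):       return np.sum(np.floor(x + 0.5) ** 2)
def rosenbrock(x): return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)
def rastrigin(x):  return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0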

The performance is analyzed in terms of convergence rate, convergence reliability and the quality of the final solution. The convergence rate indicates the reduction of the distance to the final global solution over time. Convergence reliability shows the ability of the algorithm to repeatedly reach final solutions within the f-tolerance. The dimension of all problems is set to 20. Each function minimization problem is simulated 250 times, and the resulting best final solution, worst solution, and the mean and standard deviation over all simulations are reported in Table 2.1. The f-tolerance is set to 10^-4. The algorithm terminates when the target value or the maximum number of function evaluations (60,000) is reached. The function evaluations required by the three PSO algorithms for obtaining the best and worst solutions, along with the mean evaluations required to converge over the 250 simulations, are listed in Table 2.2. It is clear from the results that all the algorithms performed well on the Sphere and Step functions. For the Rosenbrock function, APSO produced far better results than the other two algorithms. SPSO was able to obtain the best single solution of 0.0545, but the average and standard deviation of its optimal solutions over the simulations were far worse than those obtained by APSO. NPSO was the best performer on the Griewank function with a mean value of 0.0053 and a standard deviation of 0.0090. The tabulated results alone, however, do not give a clear picture of the relative performance of the algorithms; moreover, the results obtained over the 250 simulations might have occurred by chance. The results should therefore be standardized: the solutions obtained during the various simulations should be regarded as samples from a distribution, and another simulation of the algorithm on the considered test function should produce a result drawn from this distribution.

Performing a new simulation is then similar to sampling this distribution. The algorithms can only be compared when the simulation results follow similar probability distributions. A Lilliefors test is performed on the results to verify the goodness of fit to a normal distribution; the results of all algorithms are observed to fit a normal distribution. Algorithm A is considered to perform better than algorithm B if a solution obtained by A is more likely to be better than a solution obtained by B.

The results obtained by algorithm A are represented by a random variable X whose probability distribution is D_A. Similarly, the results of algorithm B are represented by a random variable Y with distribution D_B. A new simulation of an algorithm is then equivalent to drawing a random number from the corresponding distribution.

illustration not visible in this excerpt

Fig. 2.1 3-D view and contour plots of the sphere function

illustration not visible in this excerpt

Fig. 2.2 3-D view and contour plots of the step function

illustration not visible in this excerpt

Fig. 2.3 3-D view and contour plots of the Ackley function

illustration not visible in this excerpt

In order to compare the two algorithms we have to integrate the above probability over all possible values of random variable X, weighted by its probability density. The resulting formula is as follows:
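The formula itself is not reproduced in this excerpt. In its standard form, with f_X the probability density of X (algorithm A) and F_Y the cumulative distribution function of Y (algorithm B), the comparison probability for a minimization problem can be written as

P(\text{A better than B}) = P(X < Y) = \int_{-\infty}^{\infty} f_X(x)\,\bigl(1 - F_Y(x)\bigr)\,\mathrm{d}x .

Since all results were found to fit normal distributions, this integral also has the closed form P(X < Y) = \Phi\bigl((\mu_Y - \mu_X)/\sqrt{\sigma_X^2 + \sigma_Y^2}\bigr), where \Phi is the standard normal cumulative distribution function; whether the thesis evaluates the integral in this way or numerically is not visible in the excerpt.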

The probabilities obtained by comparing APSO with NPSO and SPSO on the different benchmark problems are listed in Table 2.3. The second column compares the performance of APSO with NPSO. APSO performs better than NPSO for the Rosenbrock and Rastrigin functions, whereas NPSO performs better for the Griewank and Ackley functions. The average probability of APSO compared to NPSO is

Table 2.3 Probability comparison of APSO with the other two PSO variants

illustration not visible in this excerpt

given in the last row. This probability indicates that APSO performs better than NPSO by nearly 59% over all considered test functions. Similarly, APSO outperforms SPSO by 60%.
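Under the normality assumption used above, the pairwise probabilities in Table 2.3 could be reproduced from the fitted means and standard deviations as in the following sketch; the numbers shown are illustrative only, not values from the tables.

from math import sqrt
from scipy.stats import norm

def prob_a_better_than_b(mean_a, std_a, mean_b, std_b):
    """P(X < Y) for independent normal fits X ~ N(mean_a, std_a^2) and
    Y ~ N(mean_b, std_b^2); for minimization this is the probability that a
    fresh run of algorithm A beats a fresh run of algorithm B."""
    return norm.cdf((mean_b - mean_a) / sqrt(std_a ** 2 + std_b ** 2))

# Illustrative values only (not taken from Table 2.1 or 2.3):
print(prob_a_better_than_b(0.010, 0.002, 0.020, 0.005))  # approx. 0.97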

The effectiveness of the algorithm with increasing problem dimension is examined for one of the test problems (the Ackley function). The average function value and the average number of function evaluations for the algorithms are shown in Fig. 2.5 and 2.6. The results indicate that, as the problem dimension increases, the solution complexity grows, the quality of the solutions degrades and the algorithms require more function evaluations to converge. However, this increase is far more pronounced for NPSO and SPSO than for APSO. The quality of the average function value for APSO is similar to that of NPSO, but the number of function evaluations increases drastically for NPSO compared to APSO. The convergence properties of the algorithms for the Griewank function are shown in Fig. 2.7. It is evident that APSO converges much faster than the other two variants. The convergence

illustration not visible in this excerpt

Fig. 2.6 Performance of the algorithms with the increase in problem size

illustration not visible in this excerpt

Fig. 2.7 Convergence of the algorithms for Griewank function

illustration not visible in this excerpt

illustration not visible in this excerpt

Fig. 2.9 Evolution of the swarm for different test problems

illustration not visible in this excerpt

Table 2.2 Performance comparison of the three PSO variants with respect to function evaluations

illustration not visible in this excerpt

Fig. 2.12 Average swarm size required for different dimensionality of Ackley test function

properties of APSO with increasing problem dimension are shown in Fig. 2.8. The problem dimension has a considerable impact on the convergence properties of the algorithm. The evolution of the swarm (swarm size) for all test functions is shown in Fig. 2.9. The swarm adopts different evolutionary strategies for different problems. For the Ackley function, the swarm generates many tribes in the initial stages, whereas very few particles are born during the final stages of the search process. For the Sphere, Step, Griewank and Rosenbrock functions, the swarm generates numerous particles during the intermediate phases of the optimization. A markedly different evolution is evident for the Rastrigin function, where the swarm evolves continuously at all stages of the search process. As shown in Fig. 2.10, an extremely large swarm of approximately 250-270 particles is required to solve the Rosenbrock function, whereas the interquartile range of the swarm size for the Rastrigin function is 210-230. The remaining test functions require only approximately 30 particles to obtain the optimal solution (Fig. 2.11). The increase in swarm size with problem dimensionality for the Ackley function is shown in Fig. 2.12. As the dimension of the problem space increases, the complexity increases exponentially, and the optimization therefore requires more particles to assist the search process. The adaptive swarm thus acknowledges the complexity of the solution space and automatically modifies its search technique accordingly to obtain the global optimal solution.

2.7 Summary

A new PSO variant called APSO was presented in this chapter. The algorithm has been tested on various standard unconstrained test functions, and the results were analyzed with respect to solution quality and reliability. The solutions were compared probabilistically with those of two other PSO variants. The investigation showed that APSO outperforms the other versions of PSO, and the convergence is also improved with the new approach. The solution strategies adopted for unimodal and multimodal problems demonstrate the exploration and exploitation capabilities of the algorithm. In addition to these performance improvements, the algorithm was executed on all test functions without any parameter tuning. Although some performance was sacrificed on certain functions, the overall performance was considerably better than that of the other PSO variants.

Excerpt out of 159 pages

Details

Title: Application of PSO for Optimization of Power Systems under Uncertainty
College: University of Duisburg-Essen (Institute of Electrical Power Systems)
Course: Electrical Engineering
Grade: 1.0
Author: Dr. Venkata Swaroop Pappala
Year: 2009
Pages: 159
Catalog Number: V153461
ISBN (eBook): 9783640660483
ISBN (Book): 9783640661039
File size: 2426 KB
Language: English
Notes: The primary objective of this dissertation was to develop a black-box optimization tool. The algorithm should be able to solve complex nonlinear, multimodal, discontinuous and mixed-integer power system optimization problems without any model reduction. A duty-cycle-based unit commitment model proposed in this thesis results in an 80% reduction in the problem dimension. The final focus of this thesis was to investigate the impact of the unpredictable nature of demand and renewable generation on power system operation.
Keywords: Application, Optimization, Power, Systems, Uncertainty
Quote paper: Dr. Venkata Swaroop Pappala (Author), 2009, Application of PSO for Optimization of Power Systems under Uncertainty, Munich, GRIN Verlag, https://www.grin.com/document/153461
