中南大学学报(英文版)

J. Cent. South Univ. Technol. (2011) 18: 1502-1508

DOI: 10.1007/s11771-011-0866-4

Three-dimensional multi-constraint route planning of unmanned aerial vehicle low-altitude penetration based on coevolutionary multi-agent genetic algorithm

PENG Zhi-hong(彭志红)1, 2, WU Jin-ping(吴金平)1, 2, CHEN Jie(陈杰)1, 2

1. School of Automation, Beijing Institute of Technology, Beijing 100081, China;

2. Key Laboratory of Complex System Intelligent Control and Decision of Ministry of Education,

Beijing Institute of Technology, Beijing 100081, China

© Central South University Press and Springer-Verlag Berlin Heidelberg 2011

Abstract:

To address the problems of premature convergence and slow convergence in three-dimensional (3D) route planning for unmanned aerial vehicle (UAV) low-altitude penetration, a novel route planning method was proposed. First, a coevolutionary multi-agent genetic algorithm (CE-MAGA) was formed by introducing a coevolutionary mechanism into the multi-agent genetic algorithm (MAGA), an efficient global optimization algorithm. A dynamic route representation was also adopted to improve the flight route accuracy. Moreover, an efficient constraint handling method was used to simplify the treatment of multiple constraints and reduce the time cost of the planning computation. Simulation and the corresponding analysis show that the planning results of CE-MAGA achieve better terrain following, terrain avoidance and threat avoidance (TF/TA2) performance and lower route costs than existing algorithms. In addition, feasible flight routes can be acquired within 2 s, and the convergence of the whole evolutionary process is very fast.

Key words:

unmanned aerial vehicle (UAV), low-altitude penetration, three-dimensional (3D) route planning, coevolutionary multi-agent genetic algorithm (CE-MAGA)

1 Introduction

Low-altitude penetration plays an important role in modern warfare [1], but its high risk and operational difficulty have limited its implementation. In this context, the advantages of unmanned aerial vehicles (UAVs), such as low energy consumption, good mobility and the absence of risk to a pilot, stand out, and research on UAV low-altitude penetration has attracted more and more attention. In this area, three-dimensional (3D) route planning is one of the key technologies. Unfortunately, UAV low-altitude penetration imposes many stringent requirements: flying at as low an altitude as possible [2]; good terrain following, terrain avoidance and threat avoidance (TF/TA2) abilities [3]; keeping the voyage as short as possible while satisfying all of the motion constraints; and coping with the complicated terrain and threats of the battlefield. Therefore, 3D route planning for UAV low-altitude penetration is generally regarded as a complex multi-objective, multi-constraint optimization problem with a huge search space, in which all kinds of constraints are coupled together.

Genetic algorithms (GAs) have proved effective for path optimization of ground mobile robots [4-5]. However, because of the complexity of 3D route planning, a traditional GA cannot retain both strong global search ability and a fast convergence rate. Some researchers have focused on improving the coding mode [6] or on using improved operators [7], but these partial improvements are not fully effective. VACHER et al [8] instead improved the evolutionary mechanism of GA by integrating the search style of GA with a multi-agent system (MAS). The global search ability and convergence rate of the algorithm can be improved by fusing the agents' abilities of perceiving and reacting to the environment, as well as self-learning, into the search style of GA. As a development of this research, ZHONG et al [9] proposed the multi-agent genetic algorithm (MAGA) and proved that it is much more efficient for numerical optimization than traditional GA. Nevertheless, when MAGA is applied to UAV 3D route planning, local optima still cannot be avoided completely, since the role of the evolutionary operators is limited and the diversity of the evolutionary populations cannot be guaranteed. In order to prevent premature convergence, CAI and PENG [10] used a coevolutionary mechanism to improve traditional GA; the improved algorithm was applied to path planning of ground mobile robots and local optima were avoided effectively. In this work, to increase the diversity of the evolutionary populations and thus avoid premature convergence, a coevolutionary mechanism is introduced into MAGA to form an improved algorithm, named the coevolutionary multi-agent genetic algorithm (CE-MAGA).

In the field of UAV 3D route planning, the B-Spline curve is usually adopted to represent the flight routes of UAVs, since such a representation efficiently shrinks the planning space and consequently reduces the time cost of the planning computation. To some extent, the results of B-Spline based route planning are satisfactory [6, 11]. However, due to the limited number of control points, the route accuracy of the planning results is far from satisfactory, so the terrain following ability of the UAV cannot be guaranteed. Therefore, a sequence of route points was used instead of a B-Spline curve to represent flight routes in this work. In this representation, the number of route points is allowed to change dynamically, and measures were taken to keep the number of route points large enough to improve the flight route accuracy. The corresponding coding mode and evolutionary operators were re-designed as well.

Furthermore, in order to acquire truly feasible flight routes, the motion constraints of UAVs must be taken into account by transforming them into mathematical constraints of the optimization problem. A widely used treatment of multiple constraints is the penalty function method, which has been proved effective in a certain sense [12]. However, the adjustment of the penalty parameters is very complicated, and both infeasible and feasible flight routes have to be evaluated by the same rule, which is unnecessary and adds computational burden. In order to simplify the treatment of multiple constraints and reduce the time cost of the planning computation, an improved penalty function method was proposed here for constraint handling.

2 Mathematical modeling

2.1 Planning space representation

As UAV 3D route planning is considered, a state in the planning space corresponds to a 3D flight route. Thus, representing the planning space mainly amounts to representing 3D flight routes. Whatever representation is used, it should be conducive to the planning computation. Taking the characteristics of MAGA (as an evolutionary algorithm) into account, using a sequence of route points to represent 3D flight routes is simple and rational, since any state in the planning space can be represented, and any represented state can be moved to any other state in the planning space through a finite number of evolutionary operations.

In a sequence of route points {S, P1, …, Pn-1, G} (n=1, 2, …), S and G are fixed route points which represent the source and goal points, respectively, and P1, …, Pn-1 are intermediate route points which can move arbitrarily. Every route point is uniquely determined by its spatial coordinates (xi, yi, hi) (i=0, …, n), and every two adjacent points are connected by a straight line segment. The number of route points is allowed to change dynamically. In this way, a proper flight route accuracy can be acquired, which is closely related to the terrain following ability and the flight altitude requirements of the UAV.
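For illustration, a minimal sketch of this representation follows; the coordinate values and bounds are hypothetical, and the way-points are stored simply as Python tuples.

```python
import math

# A flight route is a variable-length list of (x, y, h) way-points; the first
# and last entries (source S and goal G) stay fixed during planning, while the
# intermediate points may move and their number may change.
route = [(11.0, 1.0, 0.3),       # S (illustrative coordinates, km)
         (20.0, 15.0, 0.4),
         (35.0, 30.0, 0.5),
         (49.0, 49.0, 0.3)]      # G

# Adjacent points are joined by straight segments; li is the i-th segment length.
seg_lengths = [math.dist(p, q) for p, q in zip(route[:-1], route[1:])]
```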

2.2 Cost function

Suppose that the terrain of the task environment and the information of the threat regions are known beforehand, and the source and goal points are also given in advance. Then, the main task is to obtain a high-quality flight route between the source and goal points for UAV low-altitude penetration, where 'high-quality' simply means low-cost. For this multi-objective optimization problem, three kinds of costs, namely the voyage cost CL, the altitude cost CH, and the threat cost CT, are considered, and the cost function is the weighted sum of these three costs:

$C = w_L C_L + w_H C_H + w_T C_T$                                    (1)

where wL, wH and wT are weight coefficients. CL, CH and CT are determined by

$C_L = \sum_{i=1}^{n} l_i^2, \quad C_H = \sum_{i=1}^{n} \frac{h_{i-1}+h_i}{2}, \quad C_T = \sum_{i=1}^{n}\sum_{j=1}^{N_T} C_{T,ij}$                                 (2)

where li is the length of the i-th route segment. Here, CL is set to be the square sum of all li rather than their direct sum, since such treatment is helpful for improving the flight route accuracy and thus enhancing the terrain following ability of the UAV.

Remark: The object of the planning computation is to minimize the cost value. For the same voyage, adding route points does not change the direct sum of the segment lengths, but the square sum becomes smaller as the number of route points increases. Hence, under the premise of satisfying all the motion constraints, such a cost function keeps the number of route points as large as possible, which guarantees high flight route accuracy.

hi represents the altitude of the i-th route point (h0 and hn are the altitudes of S and G, respectively). The altitude cost of every route segment is determined by the altitudes of its end points.

For the threat cost CT, NT is the number of the threats and CT,ij is the part of the threat cost of the i-th route segment caused by the j-th threat, given by [13]

                                         (3)

where Kj and Rj are the strength and action radius of the j-th threat, respectively, and Rij represents the average distance between the i-th route segment and the j-th threat. The distance from a representative point on the route segment (such as its mid-point) to the threat is taken as this average distance. By controlling the threat cost defined here, the survival probability of the UAV can be increased effectively.
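To make the cost evaluation concrete, a minimal Python sketch follows. The square-sum voyage cost and the weighted sum of Eq. (1) follow the text; the per-segment altitude term, the mid-point distance and the inverse-fourth-power threat model inside the action radius are illustrative assumptions (the exact form of Eq. (3) is not reproduced here), and the threat tuples and numerical values are hypothetical.

```python
import math

def seg_len(p, q):
    return math.dist(p, q)  # Euclidean length of one route segment

def route_cost(route, threats, w_l=1.0, w_h=1.0, w_t=1.0):
    """Weighted cost C = wL*CL + wH*CH + wT*CT (Eq. (1)).

    CL is the square sum of segment lengths (per the paper); the altitude term
    (mean of end-point altitudes) and the threat model below are assumptions.
    """
    c_l = c_h = c_t = 0.0
    for p, q in zip(route[:-1], route[1:]):
        l = seg_len(p, q)
        c_l += l * l                        # square sum favours many short segments
        c_h += 0.5 * (p[2] + q[2])          # assumed altitude cost of a segment
        mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)   # mid-point as "average distance" proxy
        for (tx, ty, k_j, r_j) in threats:             # (centre x, centre y, strength Kj, radius Rj)
            r_ij = math.hypot(mid[0] - tx, mid[1] - ty)
            if r_ij < r_j:                             # assumed threat model in the spirit of Ref. [13]
                c_t += k_j / max(r_ij, 1e-6) ** 4
    return w_l * c_l + w_h * c_h + w_t * c_t

demo_route = [(11.0, 1.0, 0.3), (25.0, 18.0, 0.4), (49.0, 49.0, 0.3)]
demo_threats = [(25.0, 25.0, 10.0, 8.0)]     # one hypothetical threat
print(route_cost(demo_route, demo_threats))
```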

2.3 Constraint handling

Here, the UAVs are considered to be limited by four types of motion constraints [14]: the minimum route segment length, the maximum bending angle, the maximum ascent/diving angle, and the minimum flight altitude. Flight routes which satisfy all the constraints are feasible; conversely, flight routes that violate any constraint are infeasible. Constraint handling is intended both to determine the feasibility of flight routes and to evaluate the quality of infeasible flight routes.

In order to further improve the penalty function method, the concept of constraint violation (CV) [15] is introduced in this work. CV is the violation degree of infeasible flight routes against all the constraints, and the smaller the CV value, the higher the quality of routes, and vice versa. Different from the traditional penalty function method, the quality of infeasible flight routes is just evaluated according to their CV value without computing their cost value. Thus, the cumbersome adjustment of penalty parameters is avoided, and the computational amount is reduced effectively.
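A sketch of the CV computation under this scheme is given below; the concrete limit values and the simplified altitude check (an absolute floor rather than height above terrain) are assumptions for illustration.

```python
import math

def constraint_violation(route, l_min=1.0, ang_max=math.radians(60),
                         climb_max=math.radians(30), h_min=0.05):
    """Count the route segments that violate any motion constraint (the CV value).

    The four checks mirror Section 2.3; the limits and the absolute altitude
    floor are illustrative assumptions.
    """
    cv = 0
    for i in range(1, len(route)):
        p, q = route[i - 1], route[i]
        dx, dy, dh = q[0] - p[0], q[1] - p[1], q[2] - p[2]
        horiz = math.hypot(dx, dy)
        bad = math.hypot(horiz, dh) < l_min                        # minimum route segment length
        bad |= abs(math.atan2(dh, max(horiz, 1e-9))) > climb_max   # maximum ascent/diving angle
        bad |= q[2] < h_min                                        # minimum flight altitude
        if i >= 2:                                                 # maximum bending angle needs three points
            r = route[i - 2]
            v1 = (p[0] - r[0], p[1] - r[1])
            v2 = (dx, dy)
            n1, n2 = math.hypot(*v1), math.hypot(*v2)
            if n1 > 0 and n2 > 0:
                cos_a = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
                bad |= math.acos(cos_a) > ang_max
        cv += int(bad)
    return cv
```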

3 CE-MAGA for 3D route planning

According to the definition in Ref. [16], an agent can be any entity with the features of perceiving and reacting to its local environment and with a certain self-learning ability. The basic principle of MAGA is to integrate the advantages of MAS with the search style of GA; in this way, the global search ability of the algorithm can be enhanced and its convergence rate raised. In MAGA, the evolutionary population is treated as an MAS by giving the individuals in the population the features of agents. Considering that local optima cannot be avoided completely when applying MAGA to UAV 3D route planning, CE-MAGA was proposed by introducing a coevolutionary mechanism into MAGA. In CE-MAGA, several small-scale evolutionary populations are initialized. Each population forms an agent lattice L, as shown in Fig.1 [9]. The size of L is Lsize×Lsize. A lattice-point represents an agent (i.e. an evolutionary individual), which evolves under the action of the evolutionary operators. An agent evolves within its own agent lattice and also cooperates with the agents in other agent lattices.

Fig.1 Model of agent lattice

3.1 Coding and initialization

A real coding with variable length was adopted to approximate the original form of a 3D flight route and to let the number of route points change dynamically. As shown in Fig.2, every agent corresponds to a flight route of the UAV. Each node of an agent stores the spatial coordinates of a route point together with a pointer to the next node.

Fig.2 Coding structure of agent

The initial agent lattices are randomly generated, that is, both the node number and the node coordinates of all the agents are generated randomly; the maximum node number is a pre-set parameter. It is worth mentioning that the head node and the tail node of every agent remain unchanged, since they correspond to the source and goal points of the flight route, respectively.
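A possible initialization routine is sketched below; the lattice is a plain nested Python list, and the node-count limit and coordinate bounds are illustrative parameters rather than values from the paper.

```python
import random

def init_lattice(l_size, source, goal, n_max=20, area=50.0, h_max=2.0):
    """Build an Lsize x Lsize agent lattice of randomly generated routes.

    Head and tail nodes (source and goal) are fixed; the node count and the
    intermediate coordinates are random. Bounds and n_max are illustrative.
    """
    def random_agent():
        n_mid = random.randint(1, n_max - 1)
        mids = [(random.uniform(0, area), random.uniform(0, area),
                 random.uniform(0.0, h_max)) for _ in range(n_mid)]
        return [source] + mids + [goal]

    return [[random_agent() for _ in range(l_size)] for _ in range(l_size)]

lattice = init_lattice(5, source=(11.0, 1.0, 0.3), goal=(49.0, 49.0, 0.3))
```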

3.2 Energy function

The energy function of the agents in CE-MAGA is similar to the fitness function of the individuals in traditional GA. The objective of agent evolution is to maximize the agent's energy through the evolutionary operations; that is, an agent with high energy corresponds to a flight route of high quality. Accordingly, the following principles are formulated for flight route evaluation [15]:

1) For feasible flight routes, the smaller the cost value, the higher the quality of routes, and vice versa.

2) For infeasible flight routes, the smaller the CV value, the higher the quality of routes, and vice versa.

3) For feasible and infeasible flight routes, the quality of feasible flight routes is always higher than that of infeasible flight routes.

According to the above three principles, the feasible and infeasible flight routes can be evaluated uniformly and the energy function of agents can be defined as

$\mathrm{Eng}(L_{i,j}) = \begin{cases} 1/C, & \text{if the corresponding flight route is feasible} \\ -cv, & \text{otherwise} \end{cases}$                             (4)

where cv is the value of CV, i.e. the number of route segments which violate any of the constraints. If the corresponding flight route of an agent is feasible, the energy of the agent is the reciprocal of the cost value, which is always positive. If it is infeasible, the cost value does not need to be computed and the energy of the agent is simply set to the negative of the CV value, which is always negative. Hence, the above energy function ensures that the flight route corresponding to an agent with higher energy is always better than that of an agent with lower energy.
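The energy computation of Eq. (4) can then be sketched as follows, reusing cost and CV functions such as the route_cost and constraint_violation sketches above (the injected function names are illustrative).

```python
def energy(route, threats, cost_fn, cv_fn):
    """Agent energy per Eq. (4): 1/cost for feasible routes, -CV otherwise."""
    cv = cv_fn(route)
    if cv == 0:                                # feasible: every constraint is satisfied
        return 1.0 / cost_fn(route, threats)   # higher energy means lower cost
    return -float(cv)                          # infeasible: the cost is never computed
```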

3.3 Evolutionary operators for agents

There are four evolutionary operators for agents in typical MAGA: neighborhood competition operator, neighborhood crossover operator, mutation operator and self-learning operator. In CE-MAGA, the crossover operator was re-designed to achieve the aim of cooperation among the populations.

3.3.1 Neighborhood competition operator

An agent can only perceive and react within its neighborhood. According to the model of the agent lattice (see Fig.1), the neighborhood of an agent is defined as the set of all agents that are directly connected to it. Let Mi,j denote the optimal agent (i.e. the agent with the highest energy) in the neighborhood of an agent Li,j. When the neighborhood competition operator is performed on Li,j, Li,j competes with Mi,j. If Li,j wins the competition, i.e. Eng(Li,j)>Eng(Mi,j), it continues to live in L; otherwise, the agent dies, and its lattice-point is occupied with probability Po by a modified copy of Mi,j. That is, if Rand(0,1)<Po, a new agent Ni,j is generated by disturbing the coordinates of all the intermediate nodes of Mi,j with a certain amplitude, and it occupies the empty lattice-point. Otherwise, the empty lattice-point is occupied by a completely new agent N′i,j, generated randomly in the same way as in initialization. Here, Rand(·,·) is a uniform random number generator, and Po is the occupation probability.
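A possible implementation of the neighborhood competition operator is sketched below; the wrap-around (torus) neighborhood, the occupation probability and the disturbance amplitude are assumptions made for illustration.

```python
import copy
import random

def neighbours(lattice, i, j):
    """Agents directly connected to (i, j); a wrap-around lattice is assumed."""
    n = len(lattice)
    return [lattice[(i - 1) % n][j], lattice[(i + 1) % n][j],
            lattice[i][(j - 1) % n], lattice[i][(j + 1) % n]]

def neighbourhood_competition(lattice, i, j, energy_fn, new_agent_fn,
                              p_o=0.8, amp=1.0):
    """If L[i][j] loses to its best neighbour M, replace it either by a
    disturbed copy of M (with probability Po) or by a fresh random agent."""
    agent = lattice[i][j]
    best = max(neighbours(lattice, i, j), key=energy_fn)
    if energy_fn(agent) > energy_fn(best):
        return                                       # the agent survives unchanged
    if random.random() < p_o:
        child = copy.deepcopy(best)
        for k in range(1, len(child) - 1):           # disturb intermediate nodes only
            x, y, h = child[k]
            child[k] = (x + random.uniform(-amp, amp),
                        y + random.uniform(-amp, amp),
                        max(0.0, h + random.uniform(-amp, amp) * 0.1))
        lattice[i][j] = child
    else:
        lattice[i][j] = new_agent_fn()               # completely new random agent
```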

3.3.2 Crossover operator

The crossover operator is mainly used to realize the cooperation among agents; in addition, CE-MAGA requires the agents in different agent lattices to cooperate with each other so as to realize coevolution. Here, the number of agent lattices is set to 2, and these two agent lattices, denoted by L and L′, respectively, have the same size. When the crossover operator is performed on an agent Li,j in L, Li,j is crossed with L′i,j, which is located at the same lattice-point of L′, and two new agents are generated to replace Li,j and L′i,j. Multi-point crossover is used: the flight route of each agent is split into several parts according to randomly determined crossover positions, and these parts are then regrouped to form two new flight routes (i.e. two new agents).
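The following sketch illustrates one way to realize such a multi-point crossover on variable-length routes; the number of crossover positions is an illustrative parameter.

```python
import random

def crossover(route_a, route_b, n_cuts=2):
    """Multi-point crossover of two variable-length routes (a sketch).

    Each route is split at randomly chosen interior positions and the pieces
    are regrouped alternately; the fixed source and goal points are preserved
    because both routes share the same endpoints.
    """
    k = min(n_cuts, len(route_a) - 1, len(route_b) - 1)
    cuts_a = sorted(random.sample(range(1, len(route_a)), k))
    cuts_b = sorted(random.sample(range(1, len(route_b)), k))
    parts_a = [route_a[i:j] for i, j in zip([0] + cuts_a, cuts_a + [len(route_a)])]
    parts_b = [route_b[i:j] for i, j in zip([0] + cuts_b, cuts_b + [len(route_b)])]
    child_a, child_b = [], []
    for idx, (pa, pb) in enumerate(zip(parts_a, parts_b)):
        child_a += pa if idx % 2 == 0 else pb        # alternate pieces between parents
        child_b += pb if idx % 2 == 0 else pa
    return child_a, child_b
```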

3.3.3 Mutation operator

As 3D route planning is considered, the mutation operator is set to have three different mutation types: disturbance, insertion and deletion. When the mutation operator is performed on an agent Li,j, it will randomly choose one of these three mutation types according to different probabilities. The two-dimensional (2D) schematic diagram of these three mutation types is shown in Fig.3.

Fig.3 Different mutation types: (a) Disturbance; (b) Insertion; (c) Deletion

Disturbance means disturbing the coordinates of a randomly chosen intermediate node of Li,j with a certain amplitude. If the corresponding flight route of Li,j is feasible, a small disturbance amplitude is chosen in order to improve the quality of the route; otherwise, a larger amplitude is used so that a feasible flight route can be acquired more quickly. Insertion means inserting a randomly generated node between two randomly chosen adjacent nodes of Li,j. Generally, if the corresponding flight route of Li,j violates the minimum flight altitude constraint, insertion may help to improve the quality of the route. Deletion simply chooses an intermediate node of Li,j at random and deletes it. It mainly suits the situation where the corresponding flight route of Li,j is infeasible, because an infeasible flight route may evolve into a feasible one as some route points are deleted.
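The three mutation types can be sketched as follows; the selection probabilities, disturbance amplitude and coordinate bounds are illustrative and not taken from the paper.

```python
import random

def mutate(route, p_disturb=0.5, p_insert=0.3, amp=1.0, area=50.0, h_max=2.0):
    """Apply one of the three mutation types (disturbance / insertion / deletion)
    to a copy of the route; all parameter values are illustrative."""
    child = list(route)
    r = random.random()
    if r < p_disturb and len(child) > 2:             # disturbance of one intermediate node
        k = random.randrange(1, len(child) - 1)
        x, y, h = child[k]
        child[k] = (x + random.uniform(-amp, amp),
                    y + random.uniform(-amp, amp),
                    max(0.0, h + random.uniform(-amp, amp) * 0.1))
    elif r < p_disturb + p_insert:                   # insertion of a new random node
        k = random.randrange(1, len(child))
        child.insert(k, (random.uniform(0, area), random.uniform(0, area),
                         random.uniform(0.0, h_max)))
    elif len(child) > 2:                             # deletion of one intermediate node
        del child[random.randrange(1, len(child) - 1)]
    return child
```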

3.3.4 Self-learning operator

When the self-learning operator is performed on an agent Li,j, another small-scale agent lattice represented by Ls is generated by Li,j at first. The size of Ls is Ls,size×Ls,size and it is determined by

                                      (5)

where Ni′,j′ is generated by randomly disturbing all the intermediate nodes of Li,j with a certain amplitude. The self-learning operator is essentially a small-scale MAGA, and its flow is similar to that of CE-MAGA (see Algorithm 1). The main difference is that, since only one small-scale agent lattice is generated for self-learning, there is no cooperation between different populations and the crossover operator is not needed. When the evolutionary process of the self-learning operator is finished, the energy of the optimal agent in Ls is never lower than that of Li,j. Hence, the energy of the agent lattice can be increased by replacing Li,j with the optimal agent in Ls.
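A simplified sketch of the self-learning step is given below; it compresses the small-scale MAGA into a disturb-and-keep-if-better loop over the Ls agents, so the lattice size, generation count and amplitude are illustrative assumptions rather than the paper's exact procedure.

```python
import copy
import random

def self_learn(agent, energy_fn, ls_size=3, gens=20, amp=0.5):
    """Small-scale self-learning: build Ls from disturbed copies of the agent
    (plus the agent itself), improve them briefly, and return the best member."""
    def disturbed(a):
        child = copy.deepcopy(a)
        for k in range(1, len(child) - 1):
            x, y, h = child[k]
            child[k] = (x + random.uniform(-amp, amp),
                        y + random.uniform(-amp, amp),
                        max(0.0, h + random.uniform(-amp, amp) * 0.1))
        return child

    pool = [copy.deepcopy(agent)] + [disturbed(agent) for _ in range(ls_size * ls_size - 1)]
    for _ in range(gens):                       # brief evolution: disturb and keep improvements
        for idx in range(len(pool)):
            cand = disturbed(pool[idx])
            if energy_fn(cand) > energy_fn(pool[idx]):
                pool[idx] = cand
    return max(pool, key=energy_fn)             # never worse than the original agent
```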

3.4 Algorithm flow

In order to speed up the convergence of the algorithm, an elitist strategy is adopted to prevent the optimal agent from degenerating. Let G be the maximum number of generations, let (iopt, jopt) and (i′opt, j′opt) record the current locations of the optimal agents in L and L′, respectively, and let t denote the current generation. The details of the algorithm flow are given in Algorithm 1, where Pc is the crossover probability, and Pm and P′m are the mutation probabilities for the agents in L and L′, respectively.

Algorithm 1: CE-MAGA for UAV 3D route planning

Step 1: Initialize the agent lattice L and L′. t←1.

Step 2: Perform the neighborhood competition operator on each agent in each agent lattice. Update (iopt, jopt) and (i′opt, j′opt).

Step 3: For each agent Li,j in L, if Rand(0,1)<Pc and neither Li,j nor L′i,j is the optimal agent in its agent lattice, perform the crossover operator. Update (iopt, jopt) and (i′opt, j′opt).

Step 4: For each agent Li,j in L (respectively L′i,j in L′) except the optimal one, if Rand(0,1)<Pm (respectively P′m), perform the mutation operator. Update (iopt, jopt) and (i′opt, j′opt).

Step 5: Perform the self-learning operator on the optimal agent in each agent lattice.

Step 6: If t<G, set t←t+1 and go to Step 2; otherwise, stop and output the optimal agent as the planning result.
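A compact skeleton of Algorithm 1 follows, with the operators injected as callables (for instance, partially applied versions of the sketches above). The parameter values are illustrative, and for brevity the skeleton omits the protection of the optimal agents from crossover and mutation prescribed in Steps 3 and 4.

```python
import random

def ce_maga(init_lattice_fn, energy_fn, compete_fn, crossover_fn, mutate_fn,
            self_learn_fn, g_max=200, p_c=0.8, p_m=0.1, p_m2=0.3):
    """Skeleton of Algorithm 1: two cooperating lattices L and L' evolved for
    g_max generations with an elitist record of the best agent found so far."""
    lat, lat2 = init_lattice_fn(), init_lattice_fn()
    n = len(lat)
    best = None
    for _ in range(g_max):
        for grid in (lat, lat2):                               # Step 2: neighborhood competition
            for i in range(n):
                for j in range(n):
                    compete_fn(grid, i, j)
        for i in range(n):                                     # Step 3: cooperative crossover between L and L'
            for j in range(n):
                if random.random() < p_c:
                    lat[i][j], lat2[i][j] = crossover_fn(lat[i][j], lat2[i][j])
        for grid, pm in ((lat, p_m), (lat2, p_m2)):            # Step 4: mutation
            for i in range(n):
                for j in range(n):
                    if random.random() < pm:
                        grid[i][j] = mutate_fn(grid[i][j])
        for grid in (lat, lat2):                               # Step 5: self-learning on each lattice's best agent
            i, j = max(((a, b) for a in range(n) for b in range(n)),
                       key=lambda ab: energy_fn(grid[ab[0]][ab[1]]))
            grid[i][j] = self_learn_fn(grid[i][j])
            if best is None or energy_fn(grid[i][j]) > energy_fn(best):
                best = grid[i][j]                              # Step 6 bookkeeping: keep the optimal agent
    return best
```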

4 Simulation and analysis

Simulations were performed on a PC (Pentium(R), 2.6 GHz) running Windows XP, with the algorithm implemented in the Visual C++ 6.0 environment; graphic display was realized in Matlab 7.5. The size of the task area was set to 50 km×50 km, and the source and goal points of the UAV low-altitude penetration were located at (11, 1) and (49, 49), respectively. The simulation parameters are given in Table 1.

Table 1 Simulation parameters and their values

The simulation result is shown in Fig.4. As expected, the obtained 3D flight route is highly efficient for UAV low-altitude penetration, since the TF/TA2 ability of the UAV is guaranteed effectively while the voyage and altitude costs remain relatively low.

Fig.4 Optimal 3D flight route

The main feature distinguishing CE-MAGA from MAGA is that the former uses a coevolutionary mechanism to enhance its global search ability and thereby avoids premature convergence effectively. The horizontal projections of the optimal flight routes acquired by typical MAGA and by CE-MAGA within 60 s are shown in Fig.5. Clearly, MAGA falls into a local optimum: part of its planned route passes through two threat regions, and its total voyage is relatively long.

Fig.5 Horizontal projections of route planning results of MAGA and CE-MAGA in contour map

During the evolutionary process of CE-MAGA, the distributed optimization mechanism retains the diversity of each population, while the global search ability is guaranteed by the cooperation between the two populations. The energy curves of the optimal agent in each agent lattice are shown in Fig.6, from which it can be judged that the aim of coevolution is achieved well: thanks to the cooperation between the two populations, the energy of the agents in both agent lattices increases jointly. In addition, since agents with positive energy correspond to feasible flight routes, it can be concluded from Fig.6 that feasible flight routes are acquired within 2 s.

Fig.6 Energy curves of optimal agent in each agent lattice

For comparison, simulation times of 15, 30, 45 and 60 s were used for CE-MAGA, GA and B-Spline based route planning. The planning results are presented in Table 2. CE-MAGA can acquire high-quality flight routes within 15 s. Compared with traditional GA, the route planning results of CE-MAGA are obviously better in all respects, including average flight altitude, total voyage and threat cost.

Compared with B-Spline based route planning, the method proposed in this work has a much larger search space, yet its convergence rate is no lower. At the same time, the planning results of CE-MAGA are obviously better than those of B-Spline based route planning in terms of altitude cost (see Table 2) and terrain following performance (see Fig.7).

Table 2 Comparison of planning results of CE-MAGA, GA, and B-Spline based route planning

Fig.7 Vertical projections of route planning results of B-Spline based route planning (a) and CE-MAGA (b)

5 Conclusions

1) Compared with other existing route planning algorithms, CE-MAGA is highly efficient and has strong global search ability. The optimal 3D flight route shows good TF/TA2 performance, and the route costs are relatively low. Moreover, such high-quality flight routes can be obtained within 15 s, while local optima are avoided effectively.

2) The proposed route representation is effective for improving the route accuracy. Thanks to this dynamic route representation, the terrain-following ability of the UAV is improved greatly and the flight altitude is effectively lowered. Thus, it is much more suitable for 3D route planning of UAV low-altitude penetration than the B-Spline curve.

3) The CV based constraint handling method is efficient and simple. It helps acquire truly feasible flight routes within 2 s and speeds up the convergence of the whole evolutionary process effectively. At the same time, no penalty parameters are needed, so their cumbersome adjustment is avoided.

References

[1] YE Wen, ZHU Ai-hong, FAN Hong-da. Survey of route planning of low altitude penetration [J]. Journal of System Simulation, 2007, 19(10): 2357-2361. (in Chinese)

[2] LIU Li-feng, ZHANG Shu-qing. Voronoi diagram and GIS-based 3D path planning [C]// Proceedings of the 17th International Conference on Geoinformatics. New York: IEEE, 2009: 1100-1104.

[3] ASSEO S J. Terrain following/terrain avoidance path optimization using the method of steepest descent [C]// Proceedings of the IEEE 1988 National Aerospace and Electronics Conference. Dayton: IEEE, 1988: 1128-1136.

[4] ZHOU Xiao-bing, CAI Zi-xing, SUN Guo-rong. Non-smooth environment modeling and global path planning for mobile robots [J]. Journal of Central South University of Technology, 2003, 10(3): 248-254.

[5] CAI Zi-xing, PENG Zhi-hong. The application of a novel path encoding mechanism in path planning for a mobile robot [J]. Robot, 2001, 23(3): 230-233. (in Chinese)

[6] MITTAL S, DEB K. Three-dimensional offline path planning for UAVs using multiobjective evolutionary algorithms [C]// Proceedings of IEEE Congress on Evolutionary Computation. Singapore: IEEE, 2007: 3195-3202.

[7] HASIRCIOGLU I, TOPCUOGLU H R, ERMIS M. 3-D path planning for the navigation of unmanned aerial vehicles by using evolutionary algorithms [C]// Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation. New York: Association for Computing Machinery, 2008: 1499-1506.

[8] VACHER J P, GALINHO T, LESAGE F, CARDON A. Genetic algorithms in a multi-agent system [C]// Proceedings of IEEE International Joint Symposia on Intelligence and Systems. Rockville: IEEE, 1998: 17-26.

[9] ZHONG Wei-cai, LIU Jing, XUE Ming-zhi, JIAO Li-cheng. A multiagent genetic algorithm for global numerical optimization [J]. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2004, 34(2): 1128-1141.

[10] CAI Zi-xing, PENG Zhi-hong. Cooperative coevolutionary adaptive genetic algorithm in path planning of cooperative multi-mobile robot systems [J]. Journal of Intelligent and Robotic Systems, 2002, 33(1): 61-71.

[11] NIKOLOS I K, VALAVANIS K P, TSOURVELOUDIS N C, KOSTARAS A N. Evolutionary algorithm based offline/online path planner for UAV navigation [J]. IEEE Transactions on Systems, Man and Cybernetics-Part B: Cybernetics, 2003, 33(6): 898-912.

[12] PEHLIVANOGLU Y V, BAYSAL O, HACIOGLU A. Path planning for autonomous UAV via vibrational genetic algorithm [J]. Aircraft Engineering and Aerospace Technology, 2007, 79(4): 352-359.

[13] BEARD R W, MCLAIN T W, GOODRICH M A, ANDERSON E P. Coordinated target assignment and intercept for unmanned air vehicles [J]. IEEE Transactions on Robotics and Automation, 2002, 18(6): 911-922.

[14] ZHENG Chang-wen, YAN Ping, DING Ming-yue, SUN Fu-chun. Route planning for air vehicles [M]. Beijing: National Defense Industry Press, 2008: 32-34. (in Chinese)

[15] DEB K. An efficient constraint handling method for genetic algorithms [J]. Computer Methods in Applied Mechanics and Engineering, 2000, 186(2/3/4): 311-338.

[16] RUSSELL S, NORVIG P. Artificial intelligence: A modern approach [M]. Englewood Cliffs: Prentice Hall, 1995: 31-52.

(Edited by YANG Bing)

Foundation item: Project(60925011) supported by the National Natural Science Foundation for Distinguished Young Scholars of China; Project (9140A06040510BQXXXX) supported by Advanced Research Foundation of General Armament Department, China

Received date: 2010-12-10; Accepted date: 2011-03-11

Corresponding author: PENG Zhi-hong, Associate Professor, PhD; Tel: +86-10-68912464; E-mail: peng@bit.edu.cn
