Robot SLAM with Ad hoc wireless network adapted to search and rescue environments
J. Cent. South Univ. (2018) 25: 3033-3051
DOI: https://doi.org/10.1007/s11771-018-3972-8
WANG Hong-ling(王洪玲)1, ZHANG Cheng-jin(张承进)1, 2, SONG Yong(宋勇)2, PANG Bao(庞豹)1
1. School of Control Science and Engineering, Shandong University, Ji’nan 250101, China;
2. School of Mechanical, Electrical and Information Engineering, Shandong University at Weihai, Weihai 264209, China
Central South University Press and Springer-Verlag GmbH Germany, part of Springer Nature 2018
Abstract: An innovative multi-robot simultaneous localization and mapping (SLAM) approach is proposed based on a mobile Ad hoc local wireless sensor network (Ad-WSN). Multiple followed-robots equipped with wireless link RS232/485 modules act as mobile nodes, carrying various on-board sensors, Tp-link wireless local area network cards, and Tp-link wireless routers. The master robot, with an embedded industrial PC and a complete robot control system, autonomously performs the SLAM task by exchanging information with the followed-robots over this self-organizing mobile wireless network. The PC on the remote console can monitor the multi-robot SLAM on-site and provide direct motion control of the robots. This mobile Ad-WSN compensates for the absence of the usual GPS signals when robots perform SLAM tasks in search and rescue environments. In post-disaster areas, the network is usually absent or unreliable and the site is cluttered with obstacles. To adapt to such harsh situations, the proposed self-organizing mobile Ad-WSN enables robots to complete the SLAM process while improving object-of-interest identification and exploration area coverage. Localization and mapping information can be exchanged freely among the multiple robots and the remote PC control center via this mobile Ad-WSN. Therefore, the autonomous master robot runs SLAM algorithms while exchanging information with the multiple followed-robots and the remote PC control center via this local WSN environment. Simulations and experiments validate the improved performances of exploration area coverage, object marking, and loop closure, which are adapted to cluttered post-disaster search and rescue environments.
Key words: search and rescue environments; local Ad-WSN; robot simultaneous localization and mapping; distributed particle filter algorithms; coverage area exploration
Cite this article as: WANG Hong-ling, ZHANG Cheng-jin, SONG Yong, PANG Bao. Robot SLAM with Ad hoc wireless network adapted to search and rescue environments [J]. Journal of Central South University, 2018, 25(12): 3033–3051. DOI: https://doi.org/10.1007/s11771-018-3972-8.
1 Introduction
A mobile robot localizes both itself and the significant targets within a cluttered search and rescue (SAR) exploration area and simultaneously builds a map of the post-disaster SAR area by updating the current metric map. Simultaneous localization and mapping (SLAM) is defined as the process in which a mobile robot builds a map of the environment to be explored and simultaneously uses this map to calculate its own location [1]. SLAM methods have now reached considerable maturity at the theoretical and conceptual level for ordinary ground environments. The prospective challenges will focus on how to implement large-scale SLAM in increasingly unstructured SAR scenarios. In particular, it is much more difficult to perform the SLAM process in GPS-denied environments, such as urban canyons or underground mine post-disaster SAR scenarios [2]. Many parts of an SAR area are dangerous and physically challenging, and network communications there are limited.
The SLAM ability of robots has become the bottleneck of applying mobile robots to SAR scenarios as assistants for SAR teams. The data association performance of SLAM strongly depends on signal communication; however, an SAR area usually lacks network coverage. Hence, the construction of a local wireless Ad hoc network is indispensable. FERWORN et al [3] established a line of communication between mobile robots and trapped victims through a hybrid wireless network. ALZAQ et al [4] described how mobile nodes explore the source node by using the received signal strength indicator (RSSI) in a large-scale outdoor area, and REID et al [5] indicated that a multi-robot team performed exploration and mapping for a large-scale environment where GPS information is not always available, by applying wireless sensor networks (WSNs) to robot SLAM tasks. To improve robot SLAM performance in SAR scenarios, the addition of an intelligent co-operative network enables multiple robots to communicate with each other [6]. In the study conducted by HUR et al [7], the integration of a WSN and inertial measurement units (IMUs) led to the implementation of mobile robot localization. The mobile robot’s position was modified according to the motion model and the robot perceived the surroundings against the stored metric map. TUNA et al [8] described that the multiple robots in their study cooperatively performed SLAM and communicated over WSNs in SAR post-disaster scenarios; the mobile robots deployed the WSNs autonomously. WICHMANN et al [9] showed the development of a wireless sensor and robot network with a teleoperation control center. In their study, all sensor node information was communicated to the PC control center and between robots for cooperative SLAM in WSNs [10–12]; these WSN nodes collected environmental data and conveyed them to the remote PC control center.
Wireless communication between the mobile robots and the PC control center, as well as among the robots themselves, is important in the SAR SLAM process. The multiple wireless sensor and robot systems detect and identify the positions of significant objects. The multi-robots and the remote PC control center exchange information via this mobile Ad hoc wireless sensor network (Ad-WSN) to implement cooperative SLAM. LA [13] introduced a dynamic sensing framework that enables scalar field mapping with wireless mobile nodes. Such a cooperative WSN system steers the mobile sensor nodes via on-line sensing performance feedback. Based on wireless ZigBee technology, FERNANDES et al [14] implemented and validated the communication functions of Ad hoc wireless communication during the multi-robot localization and mapping process. They designed a robot operating system (ROS) driver, which enables a mobile robot to receive and send ZigBee wireless messages. HIROMOTO et al [15] demonstrated the application of a mobile Ad hoc wireless network in a nuclear power plant, i.e., vital data transfer between sensor nodes and the PC control center, providing wireless connections for the deployed sensors. In the SLAM process, location and pose estimation, as well as control of the mobile robot itself, are necessary [16]. Using odometry and IMUs, a mobile robot robustly executes relative localization against environmental changes, but the accumulated error in localization and mapping increases [17]. Landmark-based localization identifies the mobile robot’s current position and eliminates the accumulated error. A combination of these two localization methods can improve accuracy and robustness.
A globally consistent map is formed by calculating the local displacements between the correspondences of the observed RFID tags and the estimated positions. An integration of sensory data and wireless pyroelectric infrared rays, implemented as a wireless networking ZigBee on-chip system, was used for mobile robot localization, and WSNs were applied to robot SLAM in harsh SAR environments where GPS information is not always available. RSSI is defined as the voltage measured from the RSS indicator, where the measurements can be obtained by the receiver during communication [4, 5, 18]. KLEINER et al [19] proposed radio frequency identification device (RFID)-SLAM, which can close large loops within a few seconds because several RFIDs are actively deployed by robots at adequate locations. During autonomous coverage exploration of the SAR area, the multiple wireless sensor and robot systems detect and identify the positions of significant objects, and the mobile robot produces a map of the terrain and marks the victims’ locations. However, this RFID technology strongly depends on the radio frequency transmitter’s infrastructure.
A particle filter (PF), an extended Kalman filter (EKF) and visual feature-based map matching are common algorithms used in the SLAM process. A combination of map-based SLAM and PF algorithms shows more advantages in adapting to a cluttered SAR environment. Bayes filter and EKF-based algorithms are employed to estimate the robots’ and wireless nodes’ locations online [20]. The data for multi-robot SLAM are obtained by detecting the features extracted from the robot WSNs. In a topological or graph-based map, a vertex represents a specific place or node, and an edge shows the traversability between two connected nodes. In a 2-D landmark-based SLAM, the landmarks’ coordinates are stored in the Cartesian coordinate system, and the landmark and robot locations are described by the covariance matrix associated with the map. ELIAZAR et al [21] proposed a distributed PF algorithm for SLAM with multiple hypotheses on the grid maps’ occupancy rate, calculating the robot’s position using each grid map and its corresponding occupied particles. However, the drift inherent in sampling causes errors in the robot’s position estimation that standard PFs cannot compensate for.
A multi-robot module was designed for autonomous exploration and locomotion; such modular robotic applications include SAR operations [22]. The cooperation of multiple robotic wireless networks was characterized by various dynamic sensors, where all strategies were decided locally by each robot using information from its neighbors; this is an integrated framework of heterogeneous cooperative objects for mobile network deployment and operations [23, 24]. However, a mobile Ad hoc network (MANET) suffers from frequent network partitioning due to its dynamically changing topology [25]. Therefore, additional relay nodes (RNs) are employed to maintain network connectivity; they need to move along with the mobile network in order to re-establish the topology as soon as possible. HUA et al [26] proposed a SLAM algorithm for dynamic environments based on the landscape theory of aggregation, which enables mobile robots to deal with the dynamically changing relation between static objects and dynamic environments. A previous study [27] presented a pruned dynamic Bayesian network approach and compared the two methods of PF and dynamic arithmetic circuits. Motivated by these meaningful studies and the practical requirement of SAR SLAM to adapt to post-disaster scenarios, we propose an innovative multi-robot SLAM based on a mobile Ad hoc local wireless network. In this SLAM, multiple robots act as mobile wireless nodes with on-board relayed routers and wireless cards, and messages are exchanged between the connected nodes. The novel robotic experiment platform includes a Pioneer LX robot, Amigo robots, and a LUKER-II crawler robot. The Pioneer LX is a fully autonomous robot with an embedded industrial PC (IPC) and a complete robot control system, and mainly works in indoor environments. The Amigo robots belong to the Adept MobileRobots platform with a balanced drive system (two-wheel differential with casters), reversible DC motors, motor-control and drive electronics, high-resolution motion encoders, and battery power, all of which are managed by an on-board controller and mobile-robot software. The LUKER-II crawler robot is a self-developed SAR robot platform used in the exploration of post-disaster environments, and consists of a mobile robot body and a remote console; the remote console provides a human-robot interface that displays information including the scene image, environment information, the robot’s own state, and control commands. The PC on the remote console can provide direct motion control of the robot, additional pan-tilt-zoom control, and additional robot arm control.
2 Ad hoc wireless SLAM network structure and deployment strategies
2.1 Mobile wireless local network structure
The robotic modules are designed as mobile nodes of a mobile wireless local network and explore the unstructured environment with their on-board sensors; each mobile module is equipped with multiple sensors to obtain information autonomously. According to RUSSO et al [22], an autonomous robot depends on its full mobility and complete control system to explore its surroundings; during autonomous area exploration, the robot moves around and scans the environment to gain information. The mobile multi-robot modules perceive and scan the surroundings, and then convey the obtained information to the autonomous master robot for performing SLAM and to the remote PC control center for monitoring and management via this mobile self-organizing WSN. Figure 1 displays a diagram of the mobile wireless components: an autonomous master robot with an IPC and multiple followed-robots with on-board multi-sensors. The control strategies are sent from the master robot IPC via this mobile wireless network.
As shown in Figures 2(a)-(d), the self-organizing mobile wireless network comprises the following elementary components:
1) Tp-link wireless network router on-board multiple robots and remote PC control center.
2) Wireless link module (RS232/485) installed on mobile robots (see Figure 2(b), Amigo robots equipped with wireless link RS232/485 module).
3) Tp-link local area network cards equipped on multiple followed-robots.
4) The remote PC console center (shown in Figure 2(d), which should be located outside the SAR scene).
5) Fully autonomous master robot (Pioneer LX robot).
6) Multiple followed-robots (Amigo robots).
The Pioneer LX robot can complete mapping and localization with its embedded IPC; therefore, the on-site SLAM process can be monitored from the remote PC control center via this wireless Ad hoc network, and the remote PC console can direct the robot to explore and map the cluttered SAR area by logging in to the mobile robot’s embedded IPC from the remote PC control center.
Figure 1 Structural diagram of mobile wireless network for SAR SLAM
Figure 2 Wireless Ad hoc network comprising following elementary components:
The IPC embedded in Pioneer LX is the Intel D252 64-bit Dual Core 1.8 GHz Atom Integrated Graphics Processing Unit, which is available for the Windows or Linux operating system. The designed wireless Ad hoc network does not depend on the infrastructure, and is self-organizing, self-configuring, self-discovering and self-healing.
Each robot-equipped wireless module is connected to the PC control center via this local Ad hoc network through its wireless port IP address. Figure 3 shows the autonomous master robot, i.e., the Pioneer LX, and the followed multi-robots forming a mobile wireless network. The network components include the autonomous master robot Pioneer LX equipped with an IPC, multiple Amigo robots, and a remote PC control center. In addition, the wireless network routers are relayed and multi-hop-distributed; therefore, a sufficient number of wireless routers can be deployed to strengthen the wireless signal exchange. This wireless Ad hoc network frame includes the following three simple functionality modules:
1) Mobile robots equipped with IPC. This part runs a laser range finder (LRF) program associated with the SLAM algorithm.
2) Wireless relayed routers. These multi-hop routers connect the robot’s IPC to the remote PC control center and to the surrounding unstructured environments.
3) The PC control center. This part surveils and manages the SLAM process.
The SLAM processing is conducted on the IPC embedded in the robot, which reduces the delay of information transmission; however, the data from the various sensors still need to be conveyed to the remote PC control center for supervision. The designed local wireless Ad hoc network is applied to the SLAM process to ensure that data are exchanged freely without loss of information in real cluttered SAR environments. In addition, the Ad hoc network is local and self-organizing, and therefore stable, safe, and not easily disturbed by noisy signals; this Ad hoc network is also self-discovering and self-configuring, and hence can adapt to the harsh conditions of SAR scenarios and reconfigure the routers’ connectivity in real time.
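As a concrete illustration of how these three modules might exchange data over the local Ad hoc network, the following minimal sketch shows a followed-robot node pushing one set of sensor readings to the master robot's IPC and to the remote PC control center. The IP addresses, port number, and message fields are illustrative assumptions, not the configuration used in the paper.

```python
import json
import socket
import time

# Illustrative addresses only: the real Ad hoc network assigns IPs to the
# Tp-link wireless ports of each robot and of the remote PC control center.
MASTER_IPC_ADDR = ("192.168.1.10", 9000)       # assumed master robot IPC
CONTROL_CENTER_ADDR = ("192.168.1.100", 9000)  # assumed remote PC console

def make_sensor_message(robot_id, lrf_scan, sonar, pose):
    """Pack one followed-robot's readings into a JSON datagram."""
    return json.dumps({
        "robot_id": robot_id,
        "timestamp": time.time(),
        "lrf": lrf_scan,   # list of range readings over the 180-degree field
        "sonar": sonar,    # sonar localization readings
        "pose": pose,      # (x, y, theta) odometry estimate
    }).encode("utf-8")

def push_readings(robot_id, lrf_scan, sonar, pose):
    """Send the same datagram to the master IPC (for SLAM) and the PC console (for monitoring)."""
    msg = make_sensor_message(robot_id, lrf_scan, sonar, pose)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, MASTER_IPC_ADDR)
        sock.sendto(msg, CONTROL_CENTER_ADDR)

if __name__ == "__main__":
    # One fake reading from followed-robot 2.
    push_readings(2, lrf_scan=[1.2, 1.3, 0.9], sonar=[0.8, 0.7], pose=(0.5, 1.0, 0.1))
```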
Figure 3 Mobile Ad hoc wireless network deployment strategies
2.2 Deployment strategies for multi-robot SLAM wireless network
In SAR post-disaster environments, there is generally a lack of network coverage for SLAM information communication among the multiple robots and the remote PC control center. In this case, constructing a wireless Ad hoc network is helpful for mobile robot SLAM. Since post-disaster environments are often destroyed and hazardous to human SAR teams, exploration by mobile robots is attractive. In post-disaster scenarios such as earthquakes, hurricanes, and tornadoes, the wireless communication nodes should be deployed autonomously to cover the SAR areas without relying on infrastructure. BEZZO et al [23] designed a heterogeneous wireless network consisting of unmanned aerial and ground mobile robots. According to Ref. [28], cooperative wireless interaction networks among robots integrate a variety of sensor data in one framework. Each robot makes exploration decisions locally by receiving information from its neighboring mobile robots and the autonomous master robot. The robots exchange LRF readings and orientation data, and image frames are transmitted to the autonomous master robot and conveyed to the surveillance remote PC console center. The mobile Ad hoc wireless network connects the autonomous master robot, the multiple followed-robots and the remote PC control center for surveillance, and the mobile robots for exploration of SAR environments.
The integration of wireless sensor nodes and mobile robots can improve SLAM performance in terms of perception coverage, object identification, and loop-closure exploration. Figure 3 shows the deployment strategies of the mobile wireless components, which incorporate an autonomous master robot, followed multi-robots, Tp-link wireless modules, the SLAM program, and control algorithms. As shown in Figure 3, from left to right, the mobile robots are equipped with a visual system and a sonar localization module for detecting SAR objects of interest (OOI). Then, according to the flowchart of the top line, the obtained image sequence is conveyed to the autonomous master robot IPC, where image processing is conducted (running the LS ellipse curve-fitting algorithm shown in Eq. (32) of Section 6); as a result, OOI features are extracted and transmitted to the master robot for building 3D features of the SAR objects on a 2D global map. According to the flowchart of the lower line, positioning information is transmitted through the mobile robots carrying the wireless RS232/485 module, Tp-link wireless cards, and Tp-link routers (used for multi-hop and relay signals), and the localization signal is then combined with the LRF scans (running the integrated probabilistic DP filter algorithm displayed in Eqs. (14)-(29) and the SAR SLAM program shown in Algorithm 1); as a result, the localization information is transmitted to the autonomous master robot for producing a consistent global map of the SAR environment. All information and the on-site SLAM process are exchanged with the remote PC control center, as shown at the top of Figure 3.
Wireless sensor nodes can be deployed by mobile robots in SAR post-disaster scenarios where infrastructure is absent: mobile robots can carry the sensor nodes and drop them at the assigned places, or the mobile robots can themselves act as wireless sensor nodes. According to SHIH et al [24], the sensor nodes are deployed in the following ways:
1) Mobile robots carry sensors and deploy them randomly within the appointed SAR area.
2) Multi-robots integrated with various sensors act as mobile nodes and perform self-deployment by moving to the assigned locations.
The autonomous deployment generally equalizes the assignment. Assuming the SAR area A is to be covered, the algorithm outputs a series of coordinates as the location assignment:
La={(xi, yi, θi)}, i=1, 2, …, N (1)
where La represents the assigned locations; (xi, yi, θi) indicates the coordinates and orientation of the ith deployed position; and N is the number of wireless sensor nodes. These coordinates are assigned to mobile robots as they need to cover the visiting waypoints, and then, the multi-robots perform deployment tasks within this SAR environment. The SLAM control strategies used in the mobile wireless network are shown in Figure 4 and the state variable sets are defined as
S={S1, S2, …, Sl} (2)
R={r1, r2, …, rl} (3)
where S represents the set of global mapping tasks, Si indicates the sub-map built by each mobile robot, R represents the set of multi-robots, rj is each mobile robot, and i≠j (i, j=1, 2, …, l). S is divided into l independent sub-sets:
S=s1∪s2∪…∪sl (4)
where si indicates the corresponding sub-map of each robot ri. The set of control strategies can be expressed as:
(5)
where cs(i) is the set of series of costs, i=1, 2, …, n.
(6)
where cm(s) indicates the control strategies’ minimum cost of robot rm and c*m(S) is the optimal control strategy:
(7)
where the resulting set represents the series of optimal SLAM strategies.
Ad hoc wireless networks are generally multi-hop or relayed and arbitrarily deployed; therefore, the communication between the robots and the wireless mobile nodes does not rely on static infrastructure, which is difficult to install in SAR post-disaster environments.
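As a concrete illustration of the location assignment in Eq. (1), the sketch below spreads N node locations over a rectangular SAR area and hands each robot the waypoints it must visit. The rectangular area, grid layout, and round-robin assignment are simplifying assumptions for illustration, not the deployment planner described above.

```python
import math
from itertools import cycle

def assign_locations(width, height, n_nodes):
    """Return La = [(x_i, y_i, theta_i), ...] spread over a width x height SAR area.
    A simple grid layout stands in for whatever deployment planner is actually used."""
    cols = math.ceil(math.sqrt(n_nodes))
    rows = math.ceil(n_nodes / cols)
    locations = []
    for i in range(n_nodes):
        r, c = divmod(i, cols)
        x = (c + 0.5) * width / cols
        y = (r + 0.5) * height / rows
        theta = 0.0  # deployed node orientation; arbitrary in this sketch
        locations.append((x, y, theta))
    return locations

def assign_to_robots(locations, robot_ids):
    """Round-robin the waypoints in La to the followed-robots that will visit them."""
    tasks = {rid: [] for rid in robot_ids}
    for loc, rid in zip(locations, cycle(robot_ids)):
        tasks[rid].append(loc)
    return tasks

if __name__ == "__main__":
    La = assign_locations(width=20.0, height=10.0, n_nodes=6)
    print(assign_to_robots(La, robot_ids=["amigo_1", "amigo_2", "amigo_3"]))
```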
3 Probabilistic DP algorithms of mobile robot SLAM
3.1 SLAM process of mobile multi-robot wireless network
An innovative mobile Ad hoc wireless network is developed for adaptation to the SAR environment; this network is autonomous and self-organized by the multiple robots themselves. A mobile self-organizing Ad hoc wireless frame is established using multiple robots; the robots with on-board multi-sensors act as mobile network nodes. A probabilistic distributed particle filter (DPF)-SLAM is proposed for implementing the SLAM process via the designed mobile Ad hoc wireless network.
Landmark-based probabilistic distributed particle (DP) filter algorithms can localize the robot pose with only two landmarks: one landmark indicates the mobile robot’s orientation and the other represents its location. A mobile network model was developed for relay node movement control, which helped airborne nodes maintain connectivity, and the landscape theory was used to build consistent maps in uncertain environments [25, 26]. The particles are distributed randomly within the exploration area, and the mobile robot pose is determined by the importance weight factors of the selected particles. Autonomous mapping means that, starting from an arbitrary initial point, the robot should be able to autonomously explore the SAR area with its on-board sensors. According to the information obtained from these sensors, the robot builds a map of the SAR scenario and localizes itself relative to this map. The local wireless network should be flexible and reliable for sensor data identification and communication. For performing SAR SLAM tasks, the robot’s observation state vector is defined as
X=(x, y, θ, ẋ, ẏ, θ̇) (8)
where x and y represent the x-axis and y-axis directional values, respectively; θ is the deviation angle away from the x-axis; ẋ, ẏ and θ̇ indicate their respective velocities; and X is the robot observation vector in the Cartesian coordinate system.
The proposed SLAM process employs LRF and multi-sensors deployed in the mobile wireless network to localize and build the environmental map. The kth sub-map’s position and angle are expressed as
Mk=(xk, yk, θk), k=1, 2, …, l (9)
where xk and yk represent the Cartesian coordinates of the kth sub-map, and θk is the orientation angle. The set of all sub-maps is represented as
M={M1, M2, …, Ml} (10)
where Mk represents the kth sub-map coordinates, and then, the displacement of two consecutive adjacent sub-maps can be expressed as
ξk=Mk–Mk–1=(Δxk,k–1, Δyk,k–1, Δθk,k–1) (11)
where ξk represents the distance between Mk and Mk–1, Δxk,k–1 and Δyk,k–1 represent the displacements in the x- and y-axis directions, respectively. Δθk,k–1 is the deviation angle between two sub-maps. According to point cloud matching, the LS distance of global scans is represented as
(12)
where xt, yt represent the x- and y-axis directional values, respectively, and θt is the angle deviated from the x-axis; St is the global scan vector in the Cartesian coordinate system. The displacement between two consecutive scans is expressed as
Δxt=x20–x10, Δyt=y20–y10, Δθt=θ20–θ10 (13)
where x10 and x20 indicate the x-axis values of the two consecutive scans; y10 and y20 indicate the y-axis values of the two consecutive scans; θ10 and θ20 represent the deviation angles of the two consecutive scans from the x-axis; Δxt and Δyt represent the displacements of the two consecutive scans in the x- and y-axis directions; and Δθt is the deviation angle between the two consecutive scans.
A globally consistent map is obtained by calculating the estimated displacements corresponding to the observed obstacles. To provide a correct topological map of the exploration area, the robot must recognize previously visited locations and visit the as-yet unexplored areas. In addition, the robots’ exploration requires consistent map building and localization within the cluttered SAR environment.
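Because the displacement bookkeeping of Eqs. (11) and (13) amounts to subtracting consecutive pose estimates and wrapping the heading difference, it can be sketched in a few lines; the pose tuples below are assumed inputs from odometry or scan matching rather than the paper's actual data structures.

```python
import math

def displacement(pose_prev, pose_curr):
    """Displacement between two consecutive sub-maps or scans:
    returns (dx, dy, dtheta) with the heading difference wrapped to (-pi, pi]."""
    x0, y0, th0 = pose_prev
    x1, y1, th1 = pose_curr
    dx, dy = x1 - x0, y1 - y0
    dtheta = math.atan2(math.sin(th1 - th0), math.cos(th1 - th0))
    return dx, dy, dtheta

def accumulate(poses):
    """Chain the per-step displacements along an exploration trajectory."""
    return [displacement(a, b) for a, b in zip(poses, poses[1:])]

if __name__ == "__main__":
    trajectory = [(0.0, 0.0, 0.0), (1.0, 0.2, 0.1), (2.1, 0.5, 0.35)]
    print(accumulate(trajectory))
```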
3.2 Probabilistic DP filter SLAM algorithms
WANG et al [29] indicated that each particle’s action can be regarded as a possible practical action of the mobile robot, and after one such action is completed, the difference in position relative to the other particles can be calculated. In practice, the motion is usually sampled according to the calibrated odometer with a small amount of Gaussian noise. WANG et al [30] considered an EKF-SLAM framework in which the mobile robot explores the post-disaster area while localizing itself and mapping the surroundings. The local sensor observations Ω form the sub-map, where ψ is the connector of the algorithm and is expressed as
(14)
where i, j represent the indices of two connected observations, i.e., the ith observation set is interrelated with the jth observation set; Δ is the position estimate state difference and CΔ represents the estimate covariance matrix. The integrated DP filters calculate the mobile robot’s exploration trajectories by choosing the position state particles. Each particle has an appointed set of extended Kalman filters, and the landmark position and orientation are estimated as follows:
1) From Eq. (3), the set of mobile robot exploration trajectories is expressed as
(15)
where ri=(xi, yi, θi) is the state vector of each time ti, and combined with Eq. (4), the position set s of SAR objects can be represented as
(16)
The sub-map is formed by each robot exploration trajectory, and lj=(xj, yj) indicates the object position.
2) The measurement function is predicted from the observations. The filter position state vector is represented as
(17)
where si is the ith object position, Csi is the corresponding covariance matrix, Cz is the observation z=d distance covariance matrix, and the distance estimate is expressed as
(18)
3) The weight factor update for each state particle is based on the observation distance. The updated error is ei, and the corresponding covariance matrix is Cei, which are expressed as follows:
(19)
(20)
where the Jacobian matrix is given by
(21)
where x=dx–lx and y=dy–ly. The Jacobian function represents an optimal linear estimate of the distance. It is assumed that
(22)
The covariance matrix of each particle is updated as follows:
(23)
(24)
From Eq. (20), it can be obtained that
(25)
For a certain mobile robot exploration system, K=Cei–Cz is a constant matrix and the updated weight factor of each state particle is calculated as
(26)
According to the proposed probabilistic integrated DP filter algorithm, setting z=d within the mobile wireless network exploration area and assuming nd to be a set in this area, the set of sub-map data Mi stored in the master robot can be expressed as
(27)
where the total number of observation states is determined by the set nd.
The probabilistic DP filter SLAM maintains a joint distribution on the grid maps and the robot’s pose according to a PF. The grid maps are global grid-like occupancy arrays, and each grid map is relative to a robot location [21]. The distributed PF stores a separated grid map copy for each particle. A higher-accuracy SLAM would result in an inefficient algorithm that requires much more storage space for the finer grid maps. Therefore, it is better to maintain balance between the SLAM accuracy and efficiency. The distributed particle method updates the current metric map and estimates the robot pose and location according to the records of incremental particle changes; the robot pose at time n is represented as
ρn=(xn, yn, θn) (28)
and the belief of the robot location is a probability density function, which is defined as
(29)
where (xn, yn) are the Cartesian coordinates of the position and θn is its orientation. The lattice maps are marked at time n by a 2D matrix with Nx rows and Ny columns. The pixel values in mn indicate whether each cell is free, occupied, or unexplored. The location state Sn of the robot at time n comprises ρn, θn and mn; the robot’s measurements at time n consist of odometry values, which estimate the three parameters of the robot pose, and laser range finder estimates over the 180° field of view in front of the robot.
The DP filter produces a series of particles to estimate bel(Sn), and each particle contains an estimated robot pose and a local grid map. When the robot collects a set of new measurements, the DP-SLAM approach assigns each particle an importance or weight factor. Generally, the more a particle supports the current measurements, the higher its importance factor. Hence, the current grid map is updated and the new location belief is approximated by resampling the particle set according to the weight factors. The higher a particle’s importance factor, the more likely it is to be selected. When such a resampling process is repeated by calculating the occupancy rate of the grid maps, the same particle can be selected more than once.
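To make the per-particle weighting and resampling concrete, the sketch below keeps a set of particles, each holding a pose and its own occupancy-grid copy, weights them against a placeholder measurement likelihood, and resamples in proportion to weight so that strong particles may be drawn more than once. The Gaussian range likelihood and the dictionary grid are illustrative assumptions standing in for the DP filter weight of Eq. (26) and the paper's grid maps.

```python
import copy
import math
import random

class Particle:
    """One DP-filter hypothesis: a pose and its own occupancy-grid copy."""
    def __init__(self, pose, grid):
        self.pose = pose      # (x, y, theta)
        self.grid = grid      # per-particle occupancy map (a dict in this sketch)
        self.weight = 1.0

def likelihood(expected_range, measured_range, sigma=0.2):
    """Placeholder Gaussian range likelihood used as the importance weight."""
    d = measured_range - expected_range
    return math.exp(-0.5 * (d / sigma) ** 2)

def update_weights(particles, expected_ranges, measured_range):
    """Weight each particle by how well it predicts the new measurement, then normalize."""
    for p, expected in zip(particles, expected_ranges):
        p.weight *= likelihood(expected, measured_range)
    total = sum(p.weight for p in particles) or 1.0
    for p in particles:
        p.weight /= total

def resample(particles):
    """Draw particles in proportion to weight; the same particle may be picked more than once."""
    weights = [p.weight for p in particles]
    drawn = random.choices(particles, weights=weights, k=len(particles))
    return [copy.deepcopy(p) for p in drawn]  # deep copy so each keeps a separate grid

if __name__ == "__main__":
    ps = [Particle((random.gauss(0, 0.1), 0.0, 0.0), grid={}) for _ in range(100)]
    update_weights(ps, expected_ranges=[1.0 + p.pose[0] for p in ps], measured_range=1.05)
    ps = resample(ps)
    print(len(ps), "particles after resampling")
```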
4 Area coverage and loop-closure performances of mobile SLAM wireless network
In the wireless network frame, a map of the SAR environment is generated by the robot IPC; using this map, the location of the robot can be perceived easily and the current metric map can be updated continuously. Figures 4 and 5 display the coverage and loop-closure performances for exploring the cluttered SAR area, and the accuracy is evaluated using the trajectory curves. However, when the GPS signal is unavailable or lost at different moments, the increased estimation error of location and mapping affects the SLAM implementation [31]. A local wireless Ad hoc network helps to implement and improve the performance of the SLAM process despite the harsh environment.
According to Ref. [27], PF algorithms need appropriate samples because estimating the location from a PF tends to have high variance for small sample sets. The dataset of robot poses of different particles in Figure 6 is partly listed in Table 1. An analysis of the absolute trajectory errors (ATEs) and SLAM accuracy is shown in Figure 6. In our test, the robot pose is recorded every ten seconds. The mobile robot exploration ATE can be calculated as follows:
(30)
where N represents the number of observations; xi and yi indicate the robot’s absolute position at the ith step in the Cartesian coordinate system; and x̂i and ŷi represent the robot’s estimated pose at the ith step. The exploration area coverage is evaluated using the performance index KQ, which is proportional to the SAR area and inversely proportional to the exploration distance:
Figure 4 Scan coverage of SLAM area in wireless Ad hoc network environment:
(31)
where A represents the SAR area and li indicates the mobile robot exploration distance.
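Assuming the ATE of Eq. (30) is the root-mean-square position error between recorded and estimated poses, and that KQ in Eq. (31) is the ratio of the SAR area to the total distance travelled, both indices can be computed as follows; these normalizations match the description above but may differ from the authors' exact formulas.

```python
import math

def absolute_trajectory_error(true_poses, est_poses):
    """RMS position error between recorded poses (x_i, y_i) and estimates (x̂_i, ŷ_i)."""
    n = len(true_poses)
    sq = sum((x - xe) ** 2 + (y - ye) ** 2
             for (x, y), (xe, ye) in zip(true_poses, est_poses))
    return math.sqrt(sq / n)

def coverage_index(area, path_points):
    """K_Q: SAR area divided by the total exploration distance travelled."""
    dist = sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(path_points, path_points[1:]))
    return area / dist if dist > 0 else float("inf")

if __name__ == "__main__":
    truth = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.1)]
    estimate = [(0.05, 0.02), (1.03, -0.04), (1.96, 0.15)]
    print(absolute_trajectory_error(truth, estimate))
    print(coverage_index(area=50.0, path_points=truth))
```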
The wireless sensor nodes are deployed randomly in the exploration area, and the robot itself acts as a mobile node. The mobile robot drives to the deployed nodes depending on the RSSI, and the acquired signals are used simultaneously for SLAM and for evaluating the SAR environment. The robot then obtains sufficient information to determine the most probable positions of the wireless sensor nodes.
Figure 5 Loop closure of exploration area during performance of SLAM task by robot in an Ad hoc wireless network:
In the SLAM process studied by CHEN [32], information from multiple heterogeneous sensors is fused with an EKF, which is a nonlinear filter. However, the EKF has very strict limitations on the number of particles and the stability of feature points due to its high computational complexity. In contrast, selecting some corners or straight lines with the scale-invariant feature transform enables the algorithm to improve the convergence performance. From a comparison of the above methods, the combination of vision-based localization and LRF-based mapping has more advantages for robot SLAM in a harsh SAR environment. The SLAM program employs the LRF scanning and sonar localization devices equipped on the robot.
Figure 6 Robot trajectory with various poses and particles (Blue line represents x-coordinate trajectory, green line shows y-coordinate trajectory, and red line indicates orientations or angles)
Table 1 Dataset of robot poses in SLAM process
According to Figure 7, the mobile robots perform the SLAM task in cluttered SAR post-disaster environments where the local wireless Ad hoc network is deployed. The programs are encoded and compiled in the Visual Studio (VS) system installed on the PC control center and the IPC. The Pioneer LX robot, with its embedded IPC, and the Amigo robots run the SLAM program and communicate with the remote PC control center through the mobile wireless Ad hoc network. The mobile robots can map the explored SAR post-disaster environment and locate themselves simultaneously. Figure 7 displays the program procedure flowchart for robot SLAM. Sensor information such as LRF measurements, sonar localization, and gyro signals is exchanged among the autonomous master robot, the multiple followed-robots, and the remote PC control center via the self-organized wireless Ad hoc network.
Figure 7 Flowchart of SLAM program performed through mobile local wireless Ad hoc network
5 Ad hoc mobile wireless network SLAM simulations
The developed mobile wireless network does not need to rely on infrastructure, and has the ability of self-organizing, self-configuring, self-discovering and self-healing. In addition, the wireless Ad hoc network routers are multi-hop-distributed and relayed; therefore, the multi-robots can carry adequate wireless routers on board to explore the appointed SAR environment. In this wireless Ad hoc network, when the SLAM program is conducted, each robot equipped with the wireless module exchanges data with the autonomous master robot, the other robots, and the remote PC control center via the mobile network. These multi-hop and relayed wireless routers are carried on board the mobile robots.
As shown in Figure 8, virtual multiple robots are loaded on the metric map of the SAR environment to perform the SLAM task, exploring the appointed area in the Ad hoc wireless network. Figure 8(a) displays the sonar signal (black lines emitted from the virtual robot) transmitted for localization and obstacle detection, and the LRF scanning (the blue band) for mapping. In Figure 8(b), multiple robots self-organize a mobile wireless network according to the red trajectories in the metric map of the SAR environment. Figure 8(c) represents the exploration area coverage (orange lines that surround the area) and the marked OOI (blue rectangular blocks). The SLAM simulations are run in simulated SAR environments through virtual mobile robots loaded on the metric map of the SAR scenario for exploring and performing the SLAM task, where the red images indicate the established virtual robots, the black lines emitted from each virtual robot represent the sonar signal for localization and obstacle avoidance, and the blue band indicates the LRF scanning for mapping.
In the probabilistic integrated DP filter algorithm, the mobile robots run the program on the master robot IPC while data are exchanged between the multi-robots and the PC control center via the designed local wireless Ad hoc network. As a result, a new map is formed to update the current 2D metric map in real time. As REENTOV et al [33] described, the simulations are run iteratively and misalignment does not occur every time. Therefore, the DP-SLAM algorithm does not obviously degrade the grid maps or produce pose and location errors in the wireless Ad hoc network, and thus can produce a reasonable map. If the new map’s quality metrics are degraded, the algorithm stops updating the current map entirely. Figure 9 shows that the probabilistic DPF-SLAM information is transmitted to the autonomous master robot and the remote PC console. The autonomous master robot (the red image) processes the particle trajectory (blue line), as shown in Figure 9, and the remote PC console receives on-site information of the robot SLAM. The remote PC console can supply direct control to guarantee safe operation of the mobile robots in harsh SAR scenarios.
Figure 8 SAR robots loaded on metric map for performing SLAM task and exploring appointed area in Ad hoc wireless network:
6 Experiments of mobile wireless network SLAM adapted to SAR scenarios
A WSN is applied to the SLAM process in a SAR cluttered environment, where the sensor nodes capture the key characteristics of the environment. The percentage of the area covered by the robots and the number of detected landmarks indicate the performances of localization and mapping. Exploration, localization, and mapping of the surroundings are the three layers for a mobile robot performing a SLAM task. Distributed PFs efficiently calculate the factorization of the joint probability, and a distribution grid map is built for each particle trajectory. According to Ref. [34], by integrating the measurements along the tracked trajectory, each grid map can be calculated. The experiments of the SLAM process are performed under the following two scenarios:
1) A simulated indoor cluttered SAR environment in the absence of GPS-like signals (Figures 11(a)-(d)). The experiment platform consists of a Pioneer LX robot equipped with an IPC, Amigo robots equipped with the wireless link RS232/485 module, and the designed local wireless Ad hoc network, which are shown in Figures 1-3. The software support includes the SLAM program shown in Algorithm 1 and Table 1, and the distributed particle filter algorithm.
2) A collapsed outdoor SAR site containing various clutter obstacles and gravel (Figures 13(a)-(c)); in addition to a loop around the exploration area, a grid map of the environment is formed. A crawler mobile robot is employed as the experimental robot in this collapsed unstructured SAR area.
Figure 9 Information exchanged between mobile robots and remote PC console via self-organized wireless network (Red image represents master robot processing map data autonomously)
In the distributed PF of the SLAM process, a series of particles is produced, each containing an estimated robot pose and map information according to its weight factor. The robot collects a set of new values and the SLAM algorithm assigns each particle a weight factor. The higher a particle’s weight, the more likely it is to be selected for localization and mapping. As a result, the robot produces 2D map files for updating the current metric map, and this 2D map is loaded on the robot IPC and the remote PC control center via the designed local Ad hoc wireless network.
Algorithm 1 shows the SLAM task performed by the mobile robots using the self-organized Ad hoc wireless network; the robots explore the appointed SAR area as part of the commands for the SLAM process. Using this algorithm, the mobile robots perform autonomous exploration, localization, and mapping of the area within the local wireless Ad hoc network environment. The algorithmic process of Algorithm 1 runs as follows:
Algorithm 1: Mobile robots perform SAR SLAM with a self-organized Ad hoc wireless network.
Step 1: Command set. Add command; Eqs. (1)-(7).
Step 2: Networking. Mobile robots' self-organized Ad hoc wireless network; Eqs. (8)-(13).
Step 3: Localization. Mobile robots transmit information of LRF readings, gyro, and sonars.
Step 4: Obtain gyro positioning information.
Step 5: Mapping and localization; the low line in Figure 4.
Step 6: SAR OOI identification.
Step 7: Robot monitor; remote PC control center receives information.
Step 8: Area coverage and loop closure exploration; Eqs. (30) and (31).
Step 9: Call robot task; run procedures of Table 1, Figure 4 and Figure 8.
Step 10: Add SLAM task; probabilistic DPF; Eqs. (14)-(29).
Step 11: Initialize SLAM global information.
Step 12: Add image sequence.
Step 13: Process image stream; the top line of Figure 4; Eq. (32).
Step 14: Repeat. Return to step i (i=1, …, 13).
Step 15: End
According to the probabilistic DP filter algorithm, the SLAM process generates grid maps, and each particle’s occupancy record contains the robot’s pose and location information. Through the wireless Ad hoc network, the mobile robot can be supervised from the remote PC control center; the robot explores the appointed SAR area, performs localization and mapping, saves the completed map, and simultaneously updates the current metric map. As shown in Figures 10(a)-(c), the Pioneer LX master robot runs the program while mapping the exploration environment. The robot IPC processes the 2D map data with the received LRF scanning values, which include LRF estimates over the 180° field of view in front of the robot. Therefore, as described in Eqs. (28)-(29), the LRF measurements belong to the spatial dimension. The 2D map data are produced for building a new map, and the sensor information of SLAM is conveyed to the remote PC control center via the Ad hoc wireless network. The new map is completed and saved to update the current metric map, as shown in Figure 10(c). This new map is marked with goal points (blue blocks), dock points (yellow blocks), home points (green blocks), and dangerous bands (orange areas). The robots’ visual system collects image information of the surrounding objects within the SAR area and the least squares (LS) ellipse curve-fitting algorithm is used to extract the image features of the objects; the proposed LS ellipse curve-fitting is expressed as
(32)
where p represents the coefficient constant that affects the image feature extraction radius.
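Eq. (32) itself is not reproduced here; as a hedged stand-in, the sketch below fits a general conic to the extracted boundary pixels of a candidate object by ordinary least squares, which conveys the idea of LS ellipse curve fitting. The function name and the unit right-hand-side normalization are illustrative choices, not the authors' formulation.

```python
import numpy as np

def fit_conic_lstsq(x, y):
    """Generic least-squares conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    to boundary pixels (x, y) of a candidate object; stands in for Eq. (32)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coeffs  # (a, b, c, d, e)

if __name__ == "__main__":
    # Synthetic ellipse boundary points with a little noise.
    t = np.linspace(0, 2 * np.pi, 60)
    xs = 3.0 * np.cos(t) + 0.02 * np.random.randn(60) + 1.0
    ys = 1.5 * np.sin(t) + 0.02 * np.random.randn(60) - 0.5
    print(fit_conic_lstsq(xs, ys))
```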
Figure 11 shows that multiple robots autonomously self-organize the mobile Ad hoc wireless network and that the integrated performances of both area coverage and loop closure are improved. The red trajectories in Figures 11(a)-(c) show that the loop closure of the mobile wireless network is complete, and simultaneously, the blue-band LRF information shows the area coverage of the whole exploration area. As shown in Figure 11(d), the information data are received and the SLAM program (shown in Algorithm 1) is autonomously conducted. Compared to conventional SLAM methods, the proposed SLAM approach has the following significant advantages:
1) Adaptation to SAR post-disaster indoor and outdoor environments.
2) No reliance on infrastructure.
3) Wireless network is mobile and self-organized by multiple robots.
4) Robots autonomously conduct the SLAM process.
Figure 10 Pioneer LX robot running SLAM process in wireless Ad hoc network: (red image represents a mobile robot, green spots represent sonar signal, blue spots represent LRF scanning, and black spots indicate obstacles)
Figure 11 Integrated improved performances of area coverage and loop closure with probabilistic DPF algorithm:
5) Remote PC control center monitors SAR SLAM site to guarantee mobile robots’ safety.
Figure 12(a) shows the crawler LUKER-II robot employed to perform the SLAM task at a collapsed site, with the updated map formed online. The SLAM data exchange is implemented in the designed wireless Ad hoc network. Moreover, the designed local wireless Ad hoc network supports the mapping process and ensures the robot’s safety through the GUI on the remote PC control center. Figure 12(b) displays the newly formed map for SAR area exploration.
7 Results and discussion
Mobile robots carrying wireless relay routers self-organize the routing among the autonomous master robot, the multiple followed-robots, and the remote PC console, maintaining coverage of the SAR exploration area. These relay routers move around the SAR post-disaster area together with the mobile robots, and strengthen the signals exchanged among the mobile robots and the remote PC control center in the wireless Ad hoc network. The master robot autonomously conducts the SLAM process and exchanges information with the multiple followed-robots and the remote PC control center. Compared to conventional methods, the designed mobile Ad-WSN SLAM has the following novel contributions: 1) the wireless network does not depend on infrastructure; 2) the SLAM mobile robots self-organize to establish the wireless network; 3) multiple robots autonomously perform the SLAM task; 4) the remote PC console monitors the SLAM process on site via the wireless network.
The local displacements between the robot and obstacles can be computed by LRF scanning. Figures 13(a) and (b) show that the robot SLAM displacement deviates from the metric map and that the robot trajectories drift out of the SAR exploration area. Figure 13(a) is compared with Figure 10(c), and Figure 13(b) is compared with Figure 12(b). The black line frame indicates the metric map, the red image in Figure 13(a) is a robot performing SLAM, the blue line represents the LRF scanning, and the green line indicates the sonar signal for obstacle avoidance and localization. The comparison of the experiments demonstrates that a stable local Ad hoc WSN in the SAR environment provides the indispensable information exchange to eliminate deviation from the SAR exploration area. The mobile self-organized wireless network maintains communication among the autonomous master robot, the multiple followed-robots, and the remote PC console.
Figure 12 Crawler LUKER-II robot performing SLAM task at a collapsed SAR site: (Blue areas marked 1, 2 and 3 represent SAR targets)
Figure 13 Deviation from metric map and trajectory drift without wireless network:
While exploring the SAR environment, the robot produces a map based on the odometer, LRF, and sonar localization signals, and estimates its location against this map. According to KIM et al [35], perception-driven navigation automatically adapts its revisit motion to the environment by comparing two pieces of propagated information from revisiting and exploring, and then choosing the next movement. However, exploring an unstructured disaster area requires more than a single piece of information as input. The robot’s SLAM process usually needs to make exploration decisions based on a series of sensor readings and coverage wireless network information over time. The mobile robots sense with their on-board sensors, and the position information is integrated by the autonomous master robot to maintain a consistent global map.
The designed self-organized mobile local wireless Ad hoc network is applied to the robot SLAM process, which improves SLAM performance in terms of area coverage, object identification and loop closure. The local wireless Ad hoc network supplies reliable data exchange for the mobile robots’ SLAM in harsh SAR environments.
8 Conclusions and future work
The self-organized mobile Ad hoc WSN used in multi-robot SLAM is a novel research topic. In this work, we elaborate the network’s architecture, deployment, and information communication methods for the SAR SLAM of multiple robots. The proposed mobile local wireless network addresses the information communication needed for the robots’ SLAM in harsh SAR post-disaster environments. The mobile robots can easily exchange sensor data among the autonomous master robot, the multiple followed-robots, and the remote PC control center. The wireless routers are relayed, and hence the sensor data signals and SLAM information can be strengthened and the signal transmission is efficient. In the future, we will improve the SLAM algorithms for mobile robots adapting to SAR scenarios within the designed local Ad hoc wireless network.
References
[1] BAILEY T, DURRANT-WHYTE H. Simultaneous localization and mapping (SLAM): Part II [J]. IEEE Robot & Automation Mag, 2006, 13(3): 108–117.
[2] DURRANT-WHYTE H, BAILEY T. Simultaneous localization and mapping (SLAM): Part I [J]. IEEE Robot & Automation Mag, 2006, 13(2): 99–108.
[3] FERWORN A, TRAN J, UFKES A. Establishing network connectivity under rubble using a hybrid and wireless approach [C]// Safety, Security, and Rescue Robot (SSRR). IEEE International Symposium, 2012: 1–4.
[4] ALZAQ H, KABADAYI S. Mobile robot comes to the rescue in a WSN [C]// 2013 IEEE 24th International Symposium on Personal, Indoor and Mobile Radio Communications: Mobile and Wireless Networks. 2013: 1977–1982.
[5] REID R, CANN A, MEIKLEJOHN C. Cooperative multi-robot navigation, exploration, mapping and object detection with ROS [C]// 2013 IEEE Intelligent Vehicles Symposium (IV). Gold Coast, Australia, 2013: 1083–1088.
[6] NAIM I. A co-operative network of rescue robots for disaster management [C]// Communication Science Committee (CSC) 400: Info Tech Research Group (ITFG) Miniproposal, 2011: 1–4.
[7] HUR H, AHN H S. Unknown input H observer-based localization of a mobile robot with sensor failure [J]. IEEE/ASME Transactions on Mechatronics, 2014, 19(6): 1830–1838.
[8] TUNA G, GUNGOR V C, GULEZ K. An autonomous wireless sensor network deployment system using mobile robots for human existence detection in case of disasters [J]. Ad Hoc Networks, 2014, 13(1): 54–68.
[9] WICHMANN A, OKKALIOGLU B D, KORKMAZ T. The integration of mobile (tele) robotics and wireless sensor networks: A survey [J]. Computer Communications, 2014, 51(5): 21–35.
[10] CARLI M, PANZIERI S, PASCUCCI F. A joint routing and localization algorithm for emergency scenario [J]. Ad Hoc Networks, 2014, 13(1): 19–33.
[11] KUNTZE H B, FREY C, EMTER T. Situation responsive networking of mobile robots for disaster management [C]// Conference ISR ROBOTIK. Berlin, Germany, 2014: 313–320.
[12] TUNA G, GUNGOR V C, POTIRAKIS S M. Wireless sensor network-based communication for cooperative simultaneous localization and mapping [J]. Computers & Electrical Engineering, 2014, 41(4): 4074–4085.
[13] LA H M. Cooperative and active sensing in mobile sensor networks for scalar field mapping [J]. IEEE Transactions on Systems Man & Cybernetics Systems, 2015, 45(1): 831–836.
[14] FERNANDES A S, COUCEIRO M S, PORTUGAL D. Ad hoc communication in teams of mobile robots using ZigBee technology [J]. Computer Applications in Engineering Education, 2015, 23(5): 733–745.
[15] HIROMOTO R E, SACHENKO A, KOCHAN V. Mobile Ad Hoc wireless network for pre- and post-emergency situations in nuclear power plant [C]// International Symposium on Wireless Systems Within the Conferences on Intelligent Data Acquisition and Advanced Computing Systems. Idaacs-Sws, Offenburg, Germany, 2014: 92–96.
[16] AGHILI F, SALERNO A. Driftless 3-D attitude determination and positioning of mobile robots by integration of IMU with two RTK GPSs [J]. IEEE/ASME Transactions on Mechatronics, 2013, 18(1): 21–31.
[17] HAN S B, KIM J H, MYUNG H. Landmark-based particle localization algorithm for mobile robots with a fish-eye vision system [J]. IEEE/ASME Transactions on Mechatronics, 2013, 18(6): 1745–1756.
[18] LUO R C, CHEN O. Wireless and pyroelectric sensory fusion system for indoor human/robot localization and monitoring [J]. IEEE/ASME Transactions on Mechatronics, 2013, 18(3): 5649–5654.
[19] KLEINER A, DOMHEGE C. Real-time localization and elevation mapping within urban search and rescue scenarios [J]. J Field Robot, 2007, 24(8): 723–745.
[20] GOUVEIA B D, PORTUGAL D, SILVA D C. Computation sharing in distributed robotic systems: A case study on SLAM [J]. IEEE Transactions on Automation Science & Engineering, 2014, 12(2): 410–422.
[21] ELIAZAR A I, PARR R. Hierarchical linear/constant time SLAM using particle filters for dense maps [C]// Advances in Neural Information Processing Systems. Vancouver, British Columbia, Canada, 2005: 1–8.
[22] RUSSO S, HARADA K, RANZANI T. Design of a robotic module for autonomous exploration and multimode locomotion [J]. IEEE/ASME Transactions on Mechatronics, 2013, 18(6): 1757–1766.
[23] BEZZO N, GRIFFIN B, CRUZ P. A cooperative heterogeneous mobile wireless mechatronic system [J]. Mechatronics IEEE/ASME Transactions on Mechatronics, 2014, 19(1): 20–31.
[24] SHIH C Y, CAPITAN J, MARRON P J, VIGURIA A. On the cooperation between mobile robots and wireless sensor networks [J]. Cooperative Robots and Sensor Networks, Studies in Computational Intelligence, 2014, 55(4): 67–86.
[25] LI Jie, SUN Zhi-qiang, SHI Bo-hao, GONG Er-ling, XIE Hong-wei. Relay movement control for maintaining connectivity in aeronautical ad hoc networks [J]. Journal of Central South University, 2016, 23(4): 850–858.
[26] HUA Cheng-hao, DOU Li-hua, FANG Hao, FU Hao. A novel algorithm for SLAM in dynamic environments using landscape theory of aggregation [J]. Journal of Central South University, 2016, 23(10): 2587–2594.
[27] YU Jin-song, FENG Wei, TANG Di-yin, LIU Hao. Comparison of dynamic Bayesian network approaches for online diagnosis of aircraft system [J]. Journal of Central South University, 2016, 23(11): 2926–2934.
[28] BIRK A, SCHWERTFEGER S, PATHAK K. A networking framework for teleoperation in safety, security, and rescue robotics [J]. Wireless Communications in Networked Robot, 2009, 16(1): 6–13.
[29] WANG Y, CHEN W, WANG J, WANG H. Active global localization based on localizability for mobile robots [J]. Robotica, 2015, 33(1): 1609–1627.
[30] WANG N, MA S, LI B, WANG M. Subjective exploration for simultaneous localization and mapping used in ruins [C]// The 5th Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems. Shenyang, China, 2015.
[31] SCHLEICHER D, BERGASA L M, OCAÑA M. Real-time hierarchical outdoor SLAM based on stereovision and GPS fusion [J]. IEEE Trans Intell Trans Syst, 2009, 10(3): 440–452.
[32] CHEN S Y. Kalman filter for robot vision: A survey [J]. IEEE Transactions on Industrial Electronics, 2012, 59(11): 4409–4420.
[33] REENTOV E, MESSIER G G, MAGIEROWSKI S. Evaluating wireless network effects for SLAM robot map making [C]// IEEE Communications Society Subject Matter Experts for Publication in Proceedings of the IEEE Globecom. Canada, 2010: 1–5.
[34] BRAND C, SCHUSTER M J, HIRSCHMULLER H. Stereo-vision based obstacle mapping for indoor/outdoor SLAM [C]// IROS. Chicago, USA, 2014: 1846–1853.
[35] KIM A, EUSTICE R M. Perception-driven navigation: active visual SLAM for robotic area coverage [C]// 2013 IEEE Int Conf Robotics and Automation (ICRA). Karlsruhe, Germany, 2013: 3196–3203.
(Edited by FANG Jing-hua)
Foundation item: Projects(61573213, 61473174, 61473179) supported by the National Natural Science Foundation of China; Projects(ZR2015PF009, ZR2014FM007) supported by the Natural Science Foundation of Shandong Province, China; Project(2014GGX103038) supported by the Shandong Province Science and Technology Development Program, China; Project(2014ZZCX04302) supported by the Special Technological Program of Transformation of Initiatively Innovative Achievements in Shandong Province, China
Received date: 2017-10-21; Accepted date: 2018-01-26
Corresponding author: ZHANG Cheng-jin, PhD, Professor; Tel: +86-631-5682389; E-mail: cjzhang@sdu.edu.cn; ORCID: 0000-0003-2643-060X