
MASTER'S THESIS

Teleoperator Turn Maneuver Assistance for a Mars Rover

with Local Elevation Maps

Hossein Shahrabi Farahani

Luleå University of Technology Master Thesis, Continuation Courses

Space Science and Technology Department of Space Science, Kiruna


Institute of Robotics and Telematics

Luleå University of Technology

Department of Space Science Space Campus Kiruna

Teleoperator Turn Maneuver Assistance for a Mars Rover with

Local Elevation Maps

Master Thesis in the subject

Space Science and Technology

presented by

Hossein Shahrabi Farahani


Institut für Robotik und Telematik

Luleå Tekniska Universitet

Institutet för rymdfysik Space Campus Kiruna

Teleoperator Turn Maneuver Assistance for a Mars Rover with

Local Elevation Maps

Master Thesis in the subject

Space Science and Technology

presented by

Hossein Shahrabi Farahani

born on 26.03.1976 in Tehran, Iran

Completed at

University of Wuerzburg

Faculty of Computer Science Institute of Robotics and Telematics

Supervisor:

Prof. Dr. Klaus Schilling
Dr. Martin Bohm

Markus Sauer

Delivery date of the Thesis:

31 December 2007


I hereby declare that this submission is my own work and that, to the best of my knowledge and belief, it contains no material previously published or written by another person nor material which to a substantial extent has been accepted for the award of any other degree or diploma of the university or other institute of higher learning, except where due acknowledgment has been made in the text.

Würzburg, 31 December 2007

(Hossein Shahrabi Farahani)

Abstract

Teleoperation of a mobile robot with Ackerman steering is a difficult task. The solution proposed in this work is based on constructing an elevation map of the environment using a 2D laser range finder mounted on a tilting unit. The point cloud generated by the laser range finder is converted into an elevation map, which is visualized to the operator in 3D to support the teleoperation task.

Various filters are investigated to reduce the effects of noise on the elevation map and obtain a consistent map. The application of the filters also leads to a smoother visualization for the operator. A calibration function for the laser scanner, as well as an auto-calibration function to correct mechanical misalignments between the laser scanner and the tilting unit, has been developed and tested. For path planning, the elevation map is converted into an occupancy grid. After finding the waypoints for the equivalent holonomic system by applying the exact cell decomposition method to the occupancy grid, the path for the holonomic system is customized for the nonholonomic Mars rover with Ackerman steering. Two different methods are implemented for customizing the path for the nonholonomic system: a solution based on the 7-atomic maneuver, and a solution based on smooth path planning. Experimental results are presented at the end.

Contents

1 Introduction and Motivations 4

2 Related Work 6

2.1 Terrain Map Generation . . . 6

2.1.1 Methods Based on Stereo Vision . . . 7

2.1.2 Methods Based on Laser Range Finders (LRFs) . . . 9

2.2 Path Planning of Robots with Ackerman Steering . . . 15

2.2.1 Solutions based on configuration space . . . 15

2.2.2 Solutions based on customizing the path for the equivalent holonomic system . . . 16

3 Terrain Map Generation 18

3.1 Introduction and Challenges . . . 18

3.2 Geometric Transformations . . . 18

3.3 Calibration of the Laser Range Finder . . . 21

3.3.1 Errors related to the obstacle and sensor properties . . . 22

3.3.2 The Calibration Model . . . 22

3.3.3 Error related to the mounting of the laser range finder . . . 24

3.4 Filters . . . 26

3.4.1 Mean filter . . . 27

3.4.2 Median filter . . . 27

3.4.3 Convolution Filter . . . 28

3.4.4 Weighted Convolution Filter . . . 28

4 Path Planning and Smoothing for Mobile Robots with Ackerman Steering 30

4.1 Holonomic and Nonholonomic Constraints . . . 30

4.1.1 Holonomic and nonholonomic systems . . . 30

4.1.2 Modeling a nonholonomic constrained mobile robot . . . 31

4.2 Path Planning of Car-like Robots Using Basic Atomic Maneuvers . . . . 32

4.2.1 Basic turning maneuver . . . 32

4.2.2 Basic shifting maneuver . . . 33

4.2.3 Combining the atomic maneuvers . . . 34

4.3 Smooth Path Planning of Car-like Robots . . . 35

4.3.1 A brief description of the smooth path planning for car-like robots . . . 36

4.3.2 Planning the path of the equivalent holonomic system . . . 36

4.3.3 Local free path evaluation . . . 36


4.3.4 Transferring between adjacent configurations . . . 37

4.4 Calculating the Waypoints for the Equivalent Holonomic System . . . 39

5 Teleoperation 42

5.1 Different Teleoperation Interfaces . . . 42

5.2 Teleoperation Interface of Outdoor Merlin . . . 43

6 System Integration and Implementation 45

6.1 Overview of Outdoor Merlin Hardware . . . 45

6.1.1 Sensor Hardware . . . 45

6.1.2 PC104 . . . 46

6.1.3 C167 Infineon Microcontroller . . . 46

6.1.4 GPS HOLUX GR-213 . . . 47

6.1.5 FAS-A Inclinometer . . . 47

6.1.6 Scanning Laser Range Finder URG-04LX . . . 47

6.1.7 Video Server . . . 48

6.1.8 Camera . . . 48

6.1.9 Rate Gyroscope CRS-03-01 . . . 48

6.2 Tilting Unit Design . . . 49

6.3 Introduction to Player/Stage . . . 49

6.3.1 Transports . . . 51

6.3.2 Stage . . . 52

6.4 Overall software architecture . . . 53

6.4.1 Controller PC Software . . . 53

6.5 Software Implementation of the Elevation Map . . . 53

6.5.1 Generating the Elevation Map from the Point Cloud . . . 54

6.5.2 Java3D Program for Elevation Map Visualization . . . 54

6.5.3 Object responsibilities in the elevation map viewing program . . . 55

6.5.4 Controlling the View . . . 56

6.5.5 Controller PC software . . . 56

7 Experimental Results 58

7.1 Terrain Map Generation and Filtering . . . 58

7.1.1 Test case 1 : An outdoor scene (Mars surface) . . . 59

7.1.2 Test case 2: An indoor structured scene . . . 62

7.1.3 Test case 3 : An outdoor scene with very low-height features . . . 65

7.1.4 Test case 4: A completely defined scene . . . 68

7.2 Path Planning Using 7-atomic Maneuver Method . . . 70

7.3 Smooth Path Planning . . . 71

7.4 A Complete Experiment . . . 73

7.5 Testing with Outdoor Merlin . . . 76

8 Conclusion and Future Work 79

8.1 Future Work . . . 79


A Mathematical Proof of 3-atomic Turn Maneuver 89

B Structure of the CD 92

B.1 Contents of the CD . . . 92

B.2 The Java Source Code . . . 92

B.3 Installation Guide . . . 93

1 Introduction and Motivations

In teleoperation of mobile robots, the operator often must bring the robot to a certain target point via a path with certain specifications while simultaneously avoiding obstacles, depending on the problem at hand.

The robot operator can either operate the robot directly by sending speed and steering commands, or just give the target pose and let the robot do the path planning autonomously. The former method needs no autonomy onboard the robot. It is suitable for application scenarios with low time delay, and especially for differential-drive robots, because they can turn on the spot, which makes their operation very easy.

The latter method needs more autonomy onboard the robot and is suitable for application scenarios with very high time delay. Especially for robots with Ackerman steering, like Outdoor Merlin, a composition of different maneuvers is necessary to reach a certain target pose. For a robot in a partially or completely unknown environment, even without a time delay, it is very difficult for the operator to compose the proper set of maneuvers to reach the target pose.

Issues in turning maneuvers range from perception-related problems for the remote operator, such as poor situation awareness, which can lead to losing the robot due to wrong commands, to extra tension on the operator from decision making during the turn maneuver. As an example of the latter category, consider a situation in which the robot is trapped in a narrow area and it is very difficult to find a way to bring it out; in this situation an autonomous turning algorithm can help find a proper maneuver if one exists.

In the special case of a Mars rover, the time delay between the ground station on Earth and the robot on Mars makes this issue even more important.

Considering the above-mentioned issues and the problem at hand, which is teleoperation of a Mars rover with Ackerman steering, the best method is putting a level of autonomy onboard the robot. In this way, the operator gives the target pose coordinates and the robot does the path planning autonomously.

The very first prerequisite of an autonomous path planning system is knowledge about the different obstacles in the environment; without such knowledge no algorithm can work. Also, in order to assist the operator on Earth, a good 3-dimensional representation of the terrain at the remote site is essential. For the purpose of making the terrain map we used a 2D laser range finder mounted on a servo. This way the laser range finder can rotate and provide a 3-dimensional point cloud of the surrounding environment. This point cloud is later transferred into an elevation matrix, each entry of which represents the average elevation in the particular cell. The elevation matrix is used in a program, developed using the Java3D library, for generating the 3-dimensional elevation map display. The operator can use this map to gain a better understanding of the remote site. At the same time this map is used for the purpose of path planning.
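The conversion from point cloud to elevation matrix described above can be sketched as follows; the cell size and map extent here are illustrative assumptions, not the actual Outdoor Merlin parameters.

```python
import numpy as np

def elevation_map(points, cell_size=0.1, x_range=(0.0, 4.0), y_range=(-2.0, 2.0)):
    """Average the z-value of all scan points that fall into each grid cell.

    points: (N, 3) array of x, y, z laser points.
    Cells that receive no points are marked NaN (missing data).
    """
    nx = round((x_range[1] - x_range[0]) / cell_size)
    ny = round((y_range[1] - y_range[0]) / cell_size)
    height_sum = np.zeros((nx, ny))
    count = np.zeros((nx, ny))
    for x, y, z in points:
        # Index of the cell this point falls into.
        i = int((x - x_range[0]) / cell_size)
        j = int((y - y_range[0]) / cell_size)
        if 0 <= i < nx and 0 <= j < ny:
            height_sum[i, j] += z
            count[i, j] += 1
    # Average per cell; empty cells become NaN.
    return np.where(count > 0, height_sum / np.maximum(count, 1), np.nan)
```

Each matrix entry is then the mean elevation of its cell, ready for both visualization and conversion to an occupancy grid.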

2 Related Work

This work has two major parts: constructing the elevation map of the environment, and then using that map for path planning of a mobile robot with Ackerman steering. The elevation map has two major applications here. First, it gives a 3-dimensional representation of the environment to the operator; second, an occupancy grid is constructed from it to be utilized later in path planning.

The first part of this chapter contains a review of previous literature on terrain map generation. There are two major methods for terrain map generation: the first uses stereo vision, the second uses laser range finders. Both will be reviewed in this chapter. Because of the noise in elevation maps, filtering is also an important issue, which will be addressed here as well.

The second part is about path planning techniques for robots with Ackerman steering.

Two important categories of path planning techniques for car-like robots are techniques based on configuration space and techniques based on customizing the path for the equivalent holonomic system. Both will be reviewed in the last part of this chapter.

2.1 Terrain Map Generation

Autonomous navigation of mobile robots requires reasonable knowledge about the different obstacles in the environment in order to decide whether an obstacle should be traversed or circumnavigated. All methods for representing the environment of a mobile robot fall into one of two categories: topological maps or geometric models.

Most topological maps aim at representing the environment by graph-like structures, where nodes correspond to places and edges to paths between them.

Geometric models use either grid-based approximations or geometric primitives to represent the environment. Geometric models also work for robots deployed in unstructured environments.

Full three-dimensional models have very high computational demands for direct application. The elevation map method is a more compact way of representing the geometry of the robot's workspace. Elevation maps are a 2.5-dimensional representation of the environment: they represent the geometry of the environment by the elevation at grid points on a reference plane. In this work we need two things from the environment: a representation of the environment for the robot operator, and an occupancy grid for the path planning algorithm. The elevation map technique gives both. Hebert et al. [1] used elevation maps for introducing the environment to a legged robot, one of the earliest examples of utilizing elevation maps. The elevation is the vertical distance above or below a given reference surface discretized into a regularly spaced grid; an elevation map is then represented as z = f(x, y), as shown in Figure 2.1.

Figure 2.1: Structure of an elevation map [2]

Compared to full three-dimensional models, the elevation map method needs much less memory and computation. The reason is that an elevation map is just a 2D matrix in which each cell stores the average elevation of all the points that fall into that cell. A full three-dimensional model includes thousands of points, which means a large memory demand and also a lot of computational effort to reconstruct the surface.

2.1.1 Methods Based on Stereo Vision

There are many methods for gathering elevation information, such as stereo vision, ultrasound sensors and 2D or 3D laser range finders.

A majority of systems [3], [4], [5] use the stereo vision technique, which has relatively low resolution and is also very sensitive to environmental conditions such as illumination.

Stereo processing uses two images taken simultaneously from a pair of cameras whose optical axes are parallel. For each pixel in the left image, a corresponding pixel is searched for in the right image. The disparity in the location of the pixel in the two images is then used to triangulate the distance of the underlying point in space from the camera pair [6]. An array of disparities, each value corresponding to a pixel in the image pair, is known as a disparity map. For typical scenes encountered during ground-based navigation, a substantial portion of the disparity map has a regular structure, corresponding to the points which lie on the ground. Obstacles which are above or below the ground plane cause deviations in the disparity map from this regular structure.
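The triangulation step above follows the standard parallel-axes stereo relation Z = f·B/d; the focal length and baseline used in the sketch below are hypothetical values for illustration only.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth from stereo disparity (parallel optical axes).

    A point at depth Z projects into the two images with a pixel disparity
    d = f * B / Z, where f is the focal length in pixels and B the baseline
    in meters.  Solving for Z gives the distance from the camera pair.
    """
    if disparity_px <= 0:
        # Zero disparity corresponds to a point at infinity.
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a (hypothetical) 640-pixel focal length and 0.12 m baseline, a 64-pixel disparity corresponds to a point 1.2 m away.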

Hence, an effective approach to both obstacle detection and terrain estimation is to (a) find the dominant regular structure in the disparity map, and (b) search for regions which deviate from this model. In [4] this dominant regular structure is referred to as the horopter. They [4] used a horopter which corresponds to a flat ground plane whose normal is tilted with respect to the image plane of the stereo pair (see Figure 2.2).

Obstacle detection is commonly performed as postprocessing of a disparity map generated by stereo matching. In the stereo vision method, objects lying above the ground correspond to pixels with positive disparity, while depressions generate negative disparities [4].

Figure 2.2: Example disparity map using the tilted horopter approach. The top image shows one of the two stereo images. The bottom image shows the resultant disparity map. The ground plane appears as a uniform gray, regardless of the distance from the camera pair. Obstacles raised above the ground plane are shown with higher disparity (brighter), while negative obstacles have lower disparity than the ground plane and would appear darker. Note the ease with which the obstacle in the foreground can be discriminated. [6]

Among the most famous examples of mobile robots with stereo vision are the NASA Mars Exploration Rovers (MER). These rovers have the ability to navigate safely through unknown and potentially hazardous terrain, using autonomous passive stereo vision to detect potential terrain hazards before driving into them [5]. Their navigation system relies on a geometric analysis of the world near the rover, combining various range data snapshots generated by the stereo system into a local map. They developed a system for interpreting this data, called the Grid-based Estimation of Surface Traversability.

The algorithm used in MER has five steps:

1. To decrease the computational burden and the effect of the rigidity constraint, the raw sensor images are often reduced in size, e.g., from 1024 × 1024 source pixels down to 256 × 256 pixels, by averaging pixel values (see Figure 2.3a).

2. Each image pixel encodes the appearance of a location in the 3D world; in particular, the surface of the object nearest the camera along a certain ray. To find the pixel that represents the same object surface in the other image, it is sufficient to search only along the projection of that ray (see Figure 2.3b).

3. They compute the Laplacian of each image to remove any pixel intensity bias (see Figure 2.3c).

Figure 2.3: Illustration of the steps involved in Stereo Vision Processing. [5]

4. The filtered images are then fed into a 1-D correlator that uses a 7 × 7 pixel window. The correlator considers a number of potential matches for each pixel in the left image of each stereo pair, assigning a score to each potential match. The range of pixels to be searched is the disparity range explained above.

5. Finally, each disparity value can be mapped to a 3-D (X, Y, Z) location using the geometric camera model. This information can be displayed in many forms; an elevation map is shown in Figure 2.3d.
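Steps 1 and 3 of the pipeline above (block averaging to reduce the image, and a Laplacian to remove intensity bias before matching) can be sketched as follows; this is a simplified illustration, not the actual MER code.

```python
import numpy as np

def downsample(img, factor=4):
    """Step 1: reduce image size by averaging factor x factor pixel blocks."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor  # trim to a multiple of the factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def laplacian(img):
    """Step 3: discrete Laplacian; a constant intensity bias maps to zero."""
    out = np.zeros_like(img, dtype=float)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4.0 * img[1:-1, 1:-1])
    return out
```

On a uniformly lit patch the Laplacian is zero everywhere, which is exactly why it removes the intensity bias between the two cameras before correlation.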

2.1.2 Methods Based on Laser Range Finders (LRFs)

Another method for terrain mapping is utilizing 2D or 3D Laser Range Finders (LRFs).

3D LRFs have been used since the nineties [2], [7], [8]. Their disadvantage is that they are bulky and expensive. In particular, their heavy weight makes them very inconvenient for small and medium-size mobile robots.

Because of their lower weight and cost, utilizing 2D LRFs for terrain map generation is very appealing. The main idea behind using 2D LRFs for generating 3D maps is that by moving the sensor plane and logging the pose of the sensor each time, a 3-dimensional point cloud can easily be generated. There are two major methods used by different researchers:

• Changing the pitch angle of the sensor by mounting it over a servo

• Installing the sensor at a fixed pitch angle, downward or upward, on the robot and moving the robot.


Figure 2.4: 3D map generated by an upward-looking LRF [9]

One of the earliest works utilizing a 2D LRF for generating a 3D map is [9]. In that work, CMU researchers used two 2D LRFs: one looking upward to perform indoor 3D mapping, and another horizontal one for the map building and localization tasks.

Figure 2.4 shows a 3D map generated by an upward-looking LRF. This map is not suitable for outdoor navigation, since no ground map is produced. Singh et al. [10] utilized a LRF to construct elevation maps and used these maps for navigating an all-terrain vehicle.

Figure 2.5: LRF tilted forward on the robot[11]

Ye et al. [12], [11], [13], [14], [15], as shown in Figure 2.5, used a Sick LMS 200 mounted on a vehicle such that the LRF looks diagonally downward and forward (at a pitch angle of −11°). While the vehicle is in motion, the fanning laser beam sweeps the terrain ahead of the vehicle and produces continuous range measurements of the terrain.

Figure 2.6 shows one of the elevation maps generated by their system.

Cai et al. [16] developed a system for producing terrain maps for a mobile robot based on a 2D LRF mounted on a high-precision rotating table with horizontal and pitch rotation. They developed a terrain reconstruction method and proposed a wide range of different filters for smoothing the map. Figure 2.7 shows a sketch of the laser scanner and rotating table.

The limited resolution of the LRF occasionally leads to missing data in the elevation map, conspicuous e.g. as surface holes. Furthermore, the effect of mixed pixels, which frequently occurs when the laser beam hits the edge of an object so that the returned distance measure is a mixture of the distance to the object and the distance to the background, might lead to phantom peaks within the elevation map [13]. Therefore, the successively integrated elevation map has to be filtered.

Figure 2.6: Raw elevation map [13]

Figure 2.7: Sketch of laser scanner and rotating table. [16]

Ye et al. [13], [11], [12] categorized the sources of noise and error in elevation maps into three major categories: mixed pixels, missing data, and artifacts/noise.

According to [13] mixed pixels occur when a laser beam hits the very edge of an object so that part of the beam is reflected from that edge of the object and part of the beam is reflected from the background behind the object.

Missing data occur when the measured range is invalid. For instance, no return or too weak a returned signal may result in missing data; direct exposure to the sunlight may lead to dazzling and cause invalid readings.

Environmental interferences, such as ambient light and shock during motion, may potentially create noisy range measurements and hence result in noise in the elevation map.

Figure 2.8 from [13] and [11] shows how these errors degrade the terrain map.

There are conventional image filtering algorithms [17], [18] which can be applied to terrain maps. However, the application of conventional image filtering techniques is not without drawbacks. The foremost reason is that most conventional image filtering methods are applied unconditionally across the entire range image and may thus blur the image (i.e., alter the value of actually correct pixels).

Figure 2.8: Map misrepresentation due to range errors. Left: mixed pixels can create phantom objects behind the edges of objects; missing data creates empty pixels in the upper surfaces of objects. Right: environmental interferences can cause artifacts/noise; ambient light and/or shock from motion can create random noise. [13]

Frequency filters [18] (e.g., the averaging filter and the Gaussian filter) are implemented by convolving an input image with a kernel in the spatial domain, i.e., each pixel in the filtered image is replaced by the weighted sum of the pixels in the filter window. The effect is that noise is suppressed, but the edges of features in the image are blurred at the same time.
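A minimal sketch of such a spatial convolution on an elevation map, here with a 3 × 3 averaging kernel; for simplicity the border cells are left unfiltered.

```python
import numpy as np

def convolve_filter(elev, kernel):
    """Replace each interior cell by the weighted sum of its window.

    elev: 2D elevation map; kernel: small 2D array of weights summing to 1.
    This is the plain spatial-domain convolution described in the text.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    out = elev.astype(float).copy()
    for i in range(ph, elev.shape[0] - ph):
        for j in range(pw, elev.shape[1] - pw):
            window = elev[i - ph:i + ph + 1, j - pw:j + pw + 1]
            out[i, j] = np.sum(window * kernel)
    return out

mean3 = np.full((3, 3), 1.0 / 9.0)  # averaging (mean) kernel
```

Running this on a map with a single spike shows both effects at once: the spike's height drops by a factor of nine, but the spike also spreads into all eight neighboring cells, i.e., the edge is blurred.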

The Wiener filter [18] is more effective in noise reduction, but it requires that the power spectrum of the noise be known in advance and that the noise be independent of the signal. This requirement is not met in the case of our elevation map, where the noise is highly correlated with the signal for all three error sources. Specifically, (a) mixed pixels are always located behind object edges; (b) missing data occur mostly when the laser beam illuminates a surface with low diffuse reflectivity or at a large incidence angle. Furthermore, the Wiener filter also tends to blur the image, since it is based on a Fourier transform, which is implemented as a convolution in the spatial domain.

Median filters are well known for their capability of removing impulse noise while preserving image edges, and are reported to perform well in removing mixed pixels in laser range images. The standard median filter and its variations (the Weighted Median Filter [18] and the Center Weighted Median Filter [19]) often exhibit blurring when a large filter window is used, or insufficient noise reduction with a small filter window.

For instance, thin edges in an elevation map (arising because thin objects are present, or objects are so high that only their front edges are sensed) may be completely removed by a standard median filter. Adaptive median filters [20], [21] maintain a better balance between detail preservation and noise reduction, and hence achieve better performance.

However, all of these median-type filters affect all the pixels in an image, including uncorrupted ones. A number of median-type filters [22] in the literature can potentially distinguish corrupted pixels from uncorrupted ones and apply filtering only to the corrupted pixels. These filters utilize local input image characteristics (spatial information only) to identify corrupted pixels.

There are a few filters which are especially developed for terrain maps. Ye et al. [13], [23] propose a novel filter for elevation maps, called the Certainty Assisted Spatial (CAS) filter. This filter utilizes not only the spatial information contained in the unfiltered elevation map, but also the certainty information contained in the certainty map. Figure 2.9 shows the original elevation map and the maps after applying five different filters.

Figure 2.9: Original elevation map and the maps after applying five different filters [11]

Cai et al. [16] designed a special median filter to reduce the blurring effect of traditional filters. Their median filter adopts a 5 × 5 template window centered on the processed pixel. If the height value of a grid cell is greater than 50 percent of the total sum of all cells in the window, the cell is considered noisy and its height value is substituted by the average of the other 24 cells of the template window. Their median filter is useful for eliminating the disturbance from pulse noise. Figure 2.10 shows the elevation maps before and after applying the filter.

Figure 2.10: A wall corner before and after applying the customized median filter[16]
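The noisy-cell rule described above can be sketched as follows. This is our reading of the description in [16], not the authors' implementation; in particular, the handling of all-zero windows is our own assumption.

```python
import numpy as np

def cai_noise_filter(elev, win=5, ratio=0.5):
    """Pulse-noise rule (our reading of Cai et al. [16]).

    For each interior cell, take a win x win template window; if the cell's
    height exceeds `ratio` of the window's total height, treat it as pulse
    noise and replace it with the average of the other win*win - 1 cells.
    """
    p = win // 2
    out = elev.astype(float).copy()
    for i in range(p, elev.shape[0] - p):
        for j in range(p, elev.shape[1] - p):
            window = elev[i - p:i + p + 1, j - p:j + p + 1]
            total = window.sum()
            if total > 0 and elev[i, j] > ratio * total:
                # Average of the other 24 cells (assumption for total <= 0:
                # skip, to avoid dividing a meaningless window).
                out[i, j] = (total - elev[i, j]) / (win * win - 1)
    return out
```

Unlike an unconditional filter, only cells flagged as noisy are touched, so correct cells keep their original height.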

As explained before, an elevation map consists of a two-dimensional grid in which each cell stores the height of the terrain. This approach, however, can be problematic.

For example, consider the three-dimensional point cloud shown in Figure 2.11, which was acquired with a mobile robot located in front of a bridge. The resulting elevation map, computed by averaging over all scan points that fall into a cell of a horizontal grid (given a vertical projection), is depicted in Figure 2.12. As can be seen in the figure, the underpass has completely disappeared and the elevation map shows a non-traversable object. Additionally, when the environment contains vertical structures, we typically obtain varying average height values depending on how much of the vertical structure is contained in a scan. When two such elevation maps need to be aligned, this can lead to large registration errors.

Figure 2.11: Scan (point set) of a bridge recorded with a mobile robot carrying a SICK LMS laser range finder mounted on a pan/tilt unit[24]

Figure 2.12: Standard elevation map computed for the outdoor environment depicted in Figure 2.11. The passage under the bridge has been converted into a large non-traversable object[24].

Pfaff et al. [24] present a system for mapping outdoor environments with elevation maps. They categorized each point in the environment into four different classes:

• locations sensed from above

• vertical structures

• vertical gaps

• traversable cells

With their approach, overhanging structures like bridges will not be considered as obstacles, which helps with path planning later.

In [24] they need to identify which cells of the elevation map correspond to vertical structures and which ones contain gaps. In order to determine the class of a cell, they first consider the variance of the height of all measurements falling into the cell. If this value exceeds a certain threshold, they identify it as a point that has not been observed from above. They then check whether the point set corresponding to a cell contains gaps exceeding the height of the robot. This is achieved by maintaining a set of intervals per grid cell, which are computed and updated upon incoming sensor data. During this process, intervals with a distance of less than 10 cm are joined. Accordingly, it may happen that the classification of a particular cell needs to be changed from the label vertical cell, or cell that was sensed from above, to the label gap cell. Additionally, in the case of occlusions, a cell may change from the state gap cell to the state vertical cell.

When a gap has been identified, they determine the minimum traversable elevation in this point set. Figure 2.13 shows the same data points already depicted in Figure 2.11.

Figure 2.13: Extended elevation map for the scene depicted in Figure 2.11 [24]
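The per-cell interval bookkeeping described above can be sketched as follows, assuming intervals are stored as (low, high) height pairs in meters; the function names are our own, not from [24].

```python
def join_intervals(intervals, min_gap=0.10):
    """Merge height intervals whose vertical distance is below min_gap.

    intervals: list of (low, high) tuples from measurements in one grid cell.
    Follows the rule in [24] of joining intervals closer than 10 cm.
    """
    merged = []
    for low, high in sorted(intervals):
        if merged and low - merged[-1][1] < min_gap:
            # Close enough to the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], high))
        else:
            merged.append((low, high))
    return merged

def has_traversable_gap(intervals, robot_height):
    """A cell is a gap cell if two merged intervals leave room for the robot."""
    m = join_intervals(intervals)
    return any(b[0] - a[1] >= robot_height for a, b in zip(m, m[1:]))
```

For a cell under a bridge, the ground returns and the bridge-deck returns form two merged intervals; if the space between them exceeds the robot height, the cell is classified as a gap rather than a vertical obstacle.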

2.2 Path Planning of Robots with Ackerman Steering

In this section a selection of the most important approaches to path planning for mobile robots with Ackerman steering will be reviewed. Mobile robots with Ackerman steering are nonholonomic robots, and numerous works have addressed path planning for such vehicles. The majority of existing solutions fall into one of the two following categories: solutions based on configuration space, and solutions based on customizing the path for the equivalent holonomic system.

2.2.1 Solutions based on configuration space

The approaches [25] based on configuration space consist of decomposing the configuration space into an array of small rectangloids (box-shaped elements) and searching a directed graph whose nodes are these rectangloids. All the rectangloids lie in the free configuration space, and each pair of adjacent rectangloids contributes a node to the graph. The algorithm requires time O(m r^m log m) and space O(r^m), where r is the size of the decomposition along each axis of the configuration space and m is the dimension of the configuration space.

The approach in [26] also adopts decomposition methods: the first decomposes the environment into corridors for planning smooth paths, and the second decomposes the environment into lanes for a motion with minimum turns.


2.2.2 Solutions based on customizing the path for the equivalent holonomic system

As mentioned before, there is a second category of solutions for path planning of car-like robots, based on customizing the path for the equivalent holonomic system. The algorithm as generalized by Laumond et al. [27], [28] works in three steps:

1. plan a path π for the corresponding holonomic system

2. subdivide π until all endpoints can be linked by a minimal length collision-free feasible path

3. run an optimization routine to reduce the length of the path.

All the solutions in the second category are based on the pioneering work by Reeds and Shepp [29], which showed that the minimal-length paths for a car-like vehicle consist of a finite sequence of two elementary components: arcs of circle (with minimal turning radius) and straight line segments. Since then, almost all of the proposed motion planners compute collision-free paths constituted by such sequences. We call a path that is continuous in its second derivative a C2 path. As a result, these paths are piecewise C2, i.e., they are C2 along elementary components, but the curvature is discontinuous between two elementary components. To follow such paths, a real system has to stop at these discontinuity points in order to ensure the continuity of its linear and angular velocities.

To overcome this problem of stopping frequently while following a path, some authors [30], [31] have proposed to smooth the straight-line/arc-of-circle sequences with clothoids. The paths are then C2 between two cusp points. These kinds of paths are called SCC-paths (Simple Continuous Curvature paths). Figure 2.14 shows an SCC path; arcs of circle, straight lines and clothoids can be seen in the figure.

Figure 2.14: SCC path[30]

The path in Figure 2.14 moves the robot from the initial configuration qa to the final configuration qb.

Jiang et al. [32] developed another algorithm based on customizing the path for the equivalent holonomic system. Their algorithm has the novel feature that it uses the reduced visibility graph method for finding paths for the equivalent holonomic system. This way, if a path is not suitable for the nonholonomic system due to obstacles, the second shortest path from the visibility graph is simply utilized. The algorithm has three steps:

1. Construct and search the global reduced visibility graph for a point.

2. Select the shortest path for local free space evaluation. If a local free space along the selected path is not sufficient for the robot to pass, reject this path and find the next shortest path avoiding that local space. Repeat this until a path with sufficient local free spaces is found.

3. Lay configurations sequentially along the selected path such that the robot can maneuver from one configuration to the next while avoiding obstacles.

Figure 2.15: Simulation result for a car like robot[32]

Figure 2.15 shows a simulation result of the path planning algorithm in [32]. As can be seen, the shortest path for the holonomic system from the visibility graph is not feasible for the nonholonomic system, so the second possible path is chosen and customized for the robot.


3.1 Introduction and Challenges

Autonomous navigation requires the mobile robot to be able to reconstruct and analyze the 3D surrounding terrain. As explained in chapter 2, terrain maps are 2.5-dimensional representations of the robot environment: an elevation map stores in each cell of a discrete grid the average height of the corresponding location.

Different ways of producing elevation maps, e.g. using a laser range finder (LRF) looking diagonally downward, using stereo cameras, or mounting an LRF on a tilting unit, were already discussed in chapter 2. In this work a 2D LRF is mounted on a tilting unit.

The LRF scans 240 degrees of space in 655 steps, so the angular resolution is 0.367 degree. The tilting unit rotates 1 degree between consecutive scans.

In this chapter, first the geometric transformations necessary to produce the elevation map are discussed. Next, the sources of noise and error in elevation maps, an automatic calibration method, and the proposed filters for smoothing the produced maps are presented.

3.2 Geometric Transformations

In order to construct the terrain map we start from a point cloud, which is the accumulated LRF data over one turn of the tilting unit.

At each angle of the servo in the tilting unit, the LRF makes a complete 240-degree planar scan. The coordinates of the points in the sensor output are given with respect to a coordinate system on the sensor itself, which rotates with the sensor as the tilting unit rotates. These coordinates must be transformed into a single coordinate system in order to produce the point cloud.

In this section the mathematical transformations used to convert the coordinates of the points from the laser range finder coordinate system to the global coordinate system are presented in detail.

Figure 3.1 shows the coordinate frame attached to the laser range finder. The laser beam sweeps 240 degrees of space, rotating counter clockwise (CCW) in the xoy plane.

Figure 3.1: Coordinate frame attached to the laser range finder

The sensor output is the range and angle of each obstacle point, from which we need the Cartesian coordinates. Let r be the range measured by the sensor at angle \theta, where \theta is the angle between the laser beam and the positive y axis and -\pi/6 \leq \theta \leq 7\pi/6. The coordinates of an obstacle point P sensed at distance r and angle \theta are

P = \begin{pmatrix} r\sin\theta \\ r\cos\theta \\ 0 \end{pmatrix}   (3.1)

As explained before, the laser range finder is mounted on a servo, so the next step is to take the tilting angle of the laser range finder into account. Figure 3.2 shows the laser range finder and the servo. The coordinate system 01 − x1y1z1 is attached to the laser range finder, and the plane x1o1y1 is the plane in which the laser beam moves. This coordinate system rotates as the sensor rotates. The coordinate system 02 − x2y2z2 is attached to the servo but does not rotate with it; its y2 axis is the axis of rotation of the servo.

We can easily obtain the transformation matrix from the sensor coordinate system to the coordinate system attached to the servo. Let the coordinates of o1 with respect to the coordinate system 02 − x2y2z2 be [a, b, c]^T. With \alpha as the turning angle of the servo, the transformation matrix from the sensor coordinate system to the servo coordinate system, T21, is

T_2^1 = \begin{pmatrix} \cos\alpha & 0 & \sin\alpha & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\alpha & 0 & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & a \\ 0 & 1 & 0 & b \\ 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1 \end{pmatrix}   (3.2)


Figure 3.2: The tilting unit and the laser range finder

The coordinates of P in Equation (3.1) are given with respect to the sensor coordinate system. To obtain its coordinates with respect to the servo coordinate system we simply multiply it by T21 from Equation (3.2). Calling the coordinates of P with respect to the servo coordinate system P2, we have

P_2 = T_2^1 \cdot P   (3.3)

Written out, Equation (3.3) gives

P_2 = \begin{pmatrix} r\sin\theta\cos\alpha + a\cos\alpha + c\sin\alpha \\ r\cos\theta + b \\ -r\sin\theta\sin\alpha - a\sin\alpha + c\cos\alpha \\ 1 \end{pmatrix}   (3.4)
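The chain of transformations in Equations (3.1)–(3.4) can be sketched in a few lines of Python. This is an illustrative implementation, not the thesis code; the function name and the default mounting offsets a, b, c are assumptions:

```python
import math

def sensor_point_to_servo_frame(r, theta, alpha, a=0.0, b=0.0, c=0.0):
    """Transform one LRF reading (range r at beam angle theta) taken at
    servo tilt angle alpha into the non-rotating servo frame, following
    Equations (3.1)-(3.4). [a, b, c] is the origin of the sensor frame
    expressed in the servo frame."""
    # Equation (3.1): Cartesian coordinates in the sensor frame
    x1, y1, z1 = r * math.sin(theta), r * math.cos(theta), 0.0
    # Translate by [a, b, c], then rotate about the y2 axis by alpha
    x, y, z = x1 + a, y1 + b, z1 + c
    x2 = math.cos(alpha) * x + math.sin(alpha) * z
    y2 = y
    z2 = -math.sin(alpha) * x + math.cos(alpha) * z
    return (x2, y2, z2)
```

With a zero tilt angle and zero offsets the function reduces to the plain polar-to-Cartesian conversion of Equation (3.1).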

Equation (3.3) gives us the coordinates of all obstacle points with respect to a non-rotating coordinate frame attached to the servo. In order to construct the elevation matrix, two more steps are necessary.

1. We need the coordinates of all points with respect to a coordinate system located at ground level. The coordinate system 02 − x2y2z2 is h units above the ground, so in order to obtain the height of all points above the ground we simply add h to the z coordinate of every point.

2. To simplify the distribution of the points into the cells of a horizontal grid, all x and y coordinates should be positive numbers. Since the range of the laser range finder is 4000 mm, transferring all points to a coordinate system parallel to 02 − x2y2z2 with its origin 4000 mm in the negative direction of the y2 axis makes all x and y coordinates positive.

Considering the two above-mentioned requirements, we define the O − XY Z coordinate system parallel to 02 − x2y2z2, with the coordinates of its origin O with respect to 02 − x2y2z2 being [0, −4000, −h]^T. Let Pe be the coordinates of an obstacle point in the new coordinate system. Transferring P2 from Equation (3.4) to the O − XY Z coordinate system gives

P_e = P_2 - \begin{pmatrix} 0 \\ -4000 \\ -h \end{pmatrix}   (3.5)

Equation (3.5) gives the coordinates of the obstacle points in the O − XY Z coordinate system.

The next step is to assign each point to the corresponding grid cell. Here the size of each grid cell is 1 cm × 1 cm, so by dividing the x and y coordinates of each point (in mm) by 10, the integer parts of the results are the x and y indices of the cell into which the point falls.

As explained before, the elevation map is a matrix in which each cell contains the average height of all the points that fall into that cell. After distributing all the points into the cells, the elevation of a cell is calculated by summing the heights of its points and dividing by their number.
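The binning and averaging step described above can be sketched as follows. This is an illustrative Python fragment, not the thesis implementation; the function name and the dictionary-based sparse grid are assumptions:

```python
from collections import defaultdict

def build_elevation_map(points, cell_mm=10.0):
    """Average the heights of all points falling into each cell of a
    horizontal grid (cell size 10 mm = 1 cm, as in the text).
    `points` is an iterable of (x, y, z) tuples in mm with x, y >= 0."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for x, y, z in points:
        # integer part of x/10 and y/10 gives the cell indices
        cell = (int(x // cell_mm), int(y // cell_mm))
        sums[cell] += z
        counts[cell] += 1
    # elevation of a cell = mean height of the points in that cell
    return {cell: sums[cell] / counts[cell] for cell in sums}
```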

3.3 Calibration of the Laser Range Finder

The laser range finder is the main means of acquiring the elevation maps. A good understanding of the sources of error in this sensor, as well as a calibration model, is essential for better interpretation of the data and for building more accurate elevation maps. There are two sources of error in elevation maps:

• Errors related to the laser range finder itself, the obstacle surface properties, or the obstacle distance from the sensor.

• Errors related to the mounting of the laser range finder, such as misalignment of the sensor relative to the servo.

3.3.1 Errors related to the obstacle and sensor properties

In this section the effects of the obstacle's surface and of the obstacle's distance from the sensor are discussed. At the end, a calibration function based on the acquired experimental data is introduced. This function is applied to all experimental data in this work.

In this experiment two surfaces were used: a white surface and a metallic silver surface. The surfaces were placed vertically at 1354 mm, 1971 mm and 2791 mm. Figure 3.3 shows the distribution of measured distances when the two targets are at a distance of 1354 mm from the sensor. The silver surface clearly shows better accuracy.

Figure 3.3: Measured distance distribution for 1354 mm

Figure 3.4 shows the same measurements at 1971 mm from the sensor. Compared to Figure 3.3 the accuracy of the measurements has decreased, and the silver surface still shows better accuracy.

Figure 3.5 shows the same measurement at 2791 mm from the sensor. As shown in the figure, the accuracy is even lower at this distance, and the silver surface again gives the more accurate measurements.

3.3.2 The Calibration Model

To calibrate the measured range values, the white target was placed at positions from 200 mm to 3800 mm in increments of 20 mm. At each position 10,000 samples were taken and their mean, denoted \mu_i, was calculated. For these measurements, out of the 655 counts of the laser angle, the average of the central two counts was used.

Figure 3.4: Measured distance distribution for 1971 mm

Figure 3.5: Measured distance distribution for 2791 mm

The mean has an approximately linear relationship with the target distance.

So the true distance can be estimated by a linear function as follows:

\hat{y} = b\mu + a   (3.6)

where \hat{y} is the estimate of the true distance y, \mu is the mean of the measured range, and b and a are constants. The method finds the line that minimizes the sum of the squared regression residuals \sum_{i=1}^{n} e_i^2, where each residual is the difference between the observed and the predicted value: e_i = y_i - \hat{y}_i.

The minimization problem can be solved using calculus, producing the following formulas for the estimates of the regression parameters:

b = \frac{\sum_{i=1}^{n} (\mu_i - \bar{\mu})(y_i - \bar{y})}{\sum_{i=1}^{n} (\mu_i - \bar{\mu})^2}   (3.7)

a = \bar{y} - b\bar{\mu}   (3.8)

The measurement results were exported to MATLAB to calculate the parameters b and a. With the acquired data the results are b = 0.9867 and a = 2.432.
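The least-squares fit of Equations (3.6)–(3.8), done in MATLAB in the thesis, can equally be reproduced in a few lines of Python. This is an illustrative sketch; the function name is an assumption:

```python
def fit_calibration(mu, y):
    """Least-squares estimates of b and a in y_hat = b*mu + a
    (Equations (3.6)-(3.8)). mu: list of mean measured ranges,
    y: list of corresponding true target distances."""
    n = len(mu)
    mu_bar = sum(mu) / n
    y_bar = sum(y) / n
    num = sum((m - mu_bar) * (t - y_bar) for m, t in zip(mu, y))
    den = sum((m - mu_bar) ** 2 for m in mu)
    b = num / den           # Equation (3.7)
    a = y_bar - b * mu_bar  # Equation (3.8)
    return b, a
```

Feeding it perfectly collinear data recovers the line parameters exactly, which is a simple sanity check of the implementation.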

3.3.3 Error related to the mounting of the laser range finder

In this section, errors in the elevation map resulting from mechanical misalignment of the laser range finder are analyzed and a mathematical model for these errors is developed. This model will be used later in section 7.4.

Figure 3.2 shows the two coordinate frames 01 − x1y1z1 and 02 − x2y2z2, attached to the laser range finder and the servo axis respectively. The frame 01 − x1y1z1 rotates with the servo while 02 − x2y2z2 is stationary. In section 3.2 we assumed that the sensor is mounted such that the two axes y1 and y2 always remain parallel, which is not the case in reality. There are two major sources of error:

• Rotation of the y1 axis around x2

• Rotation of the y1 axis around z2

The former is more important than the latter, as shown in section 7.4 and Figure 7.24. To develop a mathematical model, consider the coordinate system 02 − x3y3z3 parallel to 01 − x1y1z1, as shown in Figure 3.6.

Figure 3.6: Coordinate system 02− x3y3z3

\alpha is the angle between the y3 and y2 axes in the vertical plane. The result of this misalignment is that the elevation map appears tilted in the XOY plane. To compensate for such a misalignment we simply rotate the resulting map by -\alpha. The rotation matrix T32 is

T_3^2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & \sin\alpha \\ 0 & -\sin\alpha & \cos\alpha \end{pmatrix}   (3.9)

As explained before, \alpha in Equation (3.9) is the amount by which the elevation map is tilted out of the XOY plane. Calculating \alpha from the elevation data is straightforward. Figure 3.7 shows the horizontal surface and the surface on which the elevation data of a flat floor actually lies; to generate such data we simply scan a flat horizontal surface with the laser range finder.

Figure 3.7: Calculating α

To calculate \alpha, consider two points A and B with x_A = x_B. Then

\alpha = \tan^{-1}\frac{z_B - z_A}{y_B - y_A}   (3.10)

As mentioned before, the other misalignment is a rotation of the y1 axis around z2. Again consider the coordinate system 02 − x4y4z4 parallel to 01 − x1y1z1, and let \beta be the angle between the y4 and y2 axes in the horizontal plane, as shown in Figure 3.8.

From Figure 3.9, \beta can easily be calculated: it is apparent in the figure that the non-zero elevations start along a straight line.

For calculating \beta we have

\beta = \tan^{-1}\frac{y_B - y_A}{x_B - x_A}   (3.11)

Having \beta, the calibration can be done easily by the inverse rotation of Equation (3.12):

T_4^2 = \begin{pmatrix} \cos\beta & \sin\beta & 0 \\ -\sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{pmatrix}   (3.12)


Figure 3.8: Coordinate system 02− x4y4z4

Figure 3.9: Calculating β

3.4 Filters

Smoothing is a process by which data points are averaged with their neighbors in a series, such as a time series or an image. This usually has the effect of blurring sharp edges in the smoothed data. Smoothing is sometimes referred to as filtering, because it suppresses the high frequency components of the signal while preserving the low frequency ones. There are many different methods of smoothing.

The sources of noise in data gathered from a laser range finder were mentioned in chapter 2. Noise in the laser sensor data results in a noisy elevation map, so a proper filter is necessary to produce a clean and smooth map. Depending on the source of noise, the environment, and the reflectance of the obstacle and ground materials, different kinds of filters must be applied. In [11] the sources of error are categorized into 3 groups:

• Mixed pixels, which occur when the laser beam hits the very edge of an object so that part of the beam is reflected from the edge of the object and part from the background behind it. The resulting range measurement then lies somewhere between the distance to the object and the distance to the background.

• Missing data, which occurs when the measured range is invalid; for instance, no return or too weak a returned signal may result in missing data.

• Environmental interference, such as ambient light and shocks during motion, which may create noisy range measurements.

In this section the different filters implemented in this work are discussed. The results of applying the filters to actual experimental data can be found in chapter 7.

3.4.1 Mean filter

The mean filter is one of the simplest filters in image processing. The main idea behind it is to reduce abrupt variations between the heights of neighboring cells by replacing each cell's elevation with the average of the elevations in its neighborhood. The number of cells used for averaging (the size of the mask) can be varied: we can have a 3 × 3, 5 × 5 or any other kernel. Table 3.1 shows a 3 × 3 kernel.

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Table 3.1: Averaging kernel used in mean filtering

In this way the mean filter can be considered a special kind of convolution filter with the above-mentioned kernel. The filter can also be written as

f_M(x, y) = \frac{1}{9} \sum_{i,j=-1}^{1} f(x + i, y + j)
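A minimal Python sketch of the 3 × 3 mean filter follows. It is illustrative only; the nested-list data layout is an assumption, and border cells are simply left unchanged, which is one of several possible border policies:

```python
def mean_filter(grid):
    """3x3 mean filter on an elevation map stored as a 2-D list:
    each interior cell is replaced by the average of the nine
    elevations in its 3x3 neighborhood."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]  # border cells are left unchanged
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            out[x][y] = sum(grid[x + i][y + j]
                            for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    return out
```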

3.4.2 Median filter

The median filter is one of the order statistics filters, whose response depends on the ranking of the values in the image or elevation map; the median filter is one of the most popular among them. To apply it, we first consider a window; after sorting the elevations of the cells in that window, we replace the elevation of the cell in the center of the window with the elevation of the middle cell of the sorted list. As shown in the experimental results, except in some outdoor scenes the median filter does not perform very well on elevation maps.
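A minimal Python sketch of the 3 × 3 median filter (illustrative only; border cells are left unchanged):

```python
def median_filter(grid):
    """3x3 median filter: each interior cell is replaced by the median
    of the nine elevations in its window, which suppresses isolated
    spikes without averaging them into the neighborhood."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            window = sorted(grid[x + i][y + j]
                            for i in (-1, 0, 1) for j in (-1, 0, 1))
            out[x][y] = window[4]  # middle of the 9 sorted values
    return out
```

Unlike the mean, the median completely removes a single-cell outlier instead of smearing it over the neighborhood.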


3.4.3 Convolution Filter

The mean filter is itself a convolution filter, but with a kernel that has equal weights at every point of the kernel window. If different points of the window have different weights, the result is a general convolution filter. The kernel used here is from [33].

The convolution filter has a more aggressive effect on the image compared to the mean filter; blurring of the image is the price we pay for reducing the noise.

The kernel used in this work is the following:

1/4 1/2 1/4
1/2  1  1/2
1/4 1/2 1/4

Table 3.2: Convolution filter kernel [33]

3.4.4 Weighted Convolution Filter

The convolution filter described in section 3.4.3 has the drawback of suppressing the edges of objects in the scene. To overcome this problem the weighted convolution filter is used. This filter takes advantage of the variance of the heights of the points that fall into each cell: at object edges, the dramatic change in height produces a higher variance than elsewhere.

The filter proposed in [33] exploits this principle. Consider h(x, y) as the height of the cell in the center of the filter window. With i, j \in \{-1, 0, 1\}, any cell in the filter window can be referred to as h(x + i, y + j). The weights of the filter window are proposed as [33]

w_{i,j} = \frac{1}{\sigma^2_{h(x+i,y+j)}}  if |i| + |j| = 0

w_{i,j} = \frac{1}{2\,\sigma^2_{h(x+i,y+j)}}  if |i| + |j| = 1

w_{i,j} = \frac{1}{4\,\sigma^2_{h(x+i,y+j)}}  if |i| + |j| = 2

After calculating the w_{i,j} we have the filter kernel for each point; the following formula then gives the value of the filtered elevation map h_f at each cell:

h_f(x, y) = \frac{1}{C} \sum_{i,j} h(x + i, y + j)\, w_{i,j}, \quad \text{where } C = \sum_{i,j} w_{i,j}


From the filter kernel it is clear that the effect of the variance is smaller for cells farther from the center of the kernel.
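The weighted filter can be sketched as follows. This is an illustrative Python fragment, not the thesis code; it assumes the per-cell variance of the point heights was stored alongside the elevation map, the 1/2 and 1/4 weight factors follow the kernel ratios given above, and the small eps guarding against zero-variance cells is an added assumption:

```python
def weighted_filter(h, var, eps=1e-6):
    """Variance-weighted 3x3 filter (Section 3.4.4): weights are
    1/sigma^2 at the center, 1/(2 sigma^2) for edge neighbours and
    1/(4 sigma^2) for corner neighbours, so high-variance cells
    (typically object edges) contribute less to the average."""
    rows, cols = len(h), len(h[0])
    out = [row[:] for row in h]
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            num = den = 0.0
            for i in (-1, 0, 1):
                for j in (-1, 0, 1):
                    scale = 2.0 ** (abs(i) + abs(j))  # 1, 2 or 4
                    w = 1.0 / (scale * (var[x + i][y + j] + eps))
                    num += w * h[x + i][y + j]
                    den += w
            out[x][y] = num / den  # normalization by C = sum of weights
    return out
```

With uniform variance the filter degenerates to the plain convolution kernel of Table 3.2, which is a convenient sanity check.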


Mobile Robots with Ackerman Steering

In this chapter, after a short discussion of holonomic and nonholonomic constraints in dynamic systems, two different path planning techniques for car-like robots used in this project are explained. The first technique utilizes basic atomic maneuvers to reach the target configuration; it is relatively simple but has the drawback of being very discontinuous, as the robot must stop and start many times before reaching the target.

The second technique, which jumps between different configurations, generates much more continuous paths but is more complicated than the atomic maneuver technique.

4.1 Holonomic and Nonholonomic Constraints

4.1.1 Holonomic and nonholonimic systems

Before discussing the different path planning algorithms for car-like robots, a short discussion of holonomic and nonholonomic systems is necessary. As will be shown, a car-like robot is a nonholonomic system.

A holonomic system is a system whose constraints can be written as

g(q, t) = g(q_1, q_2, \ldots, q_n, t) = 0   (4.1)

In other words, a system is holonomic when its constraints depend only on the coordinates and time, not on their derivatives such as velocity or momentum.

An easy way to determine whether a system is holonomic is to compare its controllable and total degrees of freedom: if the total degrees of freedom are less than or equal to the controllable degrees of freedom, the system is holonomic.

In contrast to holonomic systems, in a nonholonomic system the constraints also depend on the velocity or momentum of the system. Equivalently, if the total degrees of freedom are greater than the controllable degrees of freedom, the system is nonholonomic.


For example, a car-like robot is a nonholonomic system, because it cannot move laterally. Mathematically, a nonholonomic equality constraint is defined as a non-integrable scalar constraint of the form g(q, \dot{q}, t) = g(q_1, \ldots, q_n, \dot{q}_1, \ldots, \dot{q}_n, t) = 0, where g is a smooth function.

4.1.2 Modeling a nonholonomic constrained mobile robot

In this section a mathematical model of a mobile robot with Ackerman steering is developed, and it is shown that this model is nonholonomic [32].

As can be seen in Figure 4.1, the robot is a rigid body R moving around an instantaneous center Oi, with its wheels symmetrical about its long and short axes. Fm is a moving frame attached to R with its origin at the geometric center of R. \alpha and \beta are the steering angles of the front and rear wheels; when \beta = 0 we get a front-wheel-steered robot, which is the case for the Outdoor Merlin robot.

Figure 4.1: The model of a moving nonholonomic robot[32]

In planar motion, mobile robots are subject to nonholonomic kinematic constraints. For example, since F has no radial component of velocity,

-\frac{dx}{dt}\sin\theta + \frac{dy}{dt}\cos\theta = 0   (4.2)

This is a non-integrable differential equation and hence a nonholonomic constraint.

Since the robot is a rigid body, every point on R moves around its instantaneous center Oi. The reference point F therefore follows a curve whose curvature c is upper bounded by

c_{max} = \frac{1}{\rho_{min}}

where \rho_{min} is the minimum turning radius of the reference point F. The velocity v of F measured along the main axis of the robot then satisfies

|v| \geq \left|\frac{d\theta}{dt}\right| \rho_{min}

or

\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2 - \rho_{min}^2 \left(\frac{d\theta}{dt}\right)^2 \geq 0   (4.3)

The nonholonomic constraints (4.2) and (4.3) are imposed on the vehicle whenever it is moving; thus any proposed path for the nonholonomic robot must satisfy both of them.
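A quick numerical illustration of constraint (4.2): along a circular arc whose heading is everywhere tangent to the circle, the constraint residual vanishes. The finite-difference check below is illustrative only and not part of the thesis:

```python
import math

# Check constraint (4.2) along the circular arc
# x(t) = rho*cos(t), y(t) = rho*sin(t), with heading theta(t) = t + pi/2.
rho = 2.0

def constraint(t, dt=1e-6):
    """Residual -dx/dt*sin(theta) + dy/dt*cos(theta) of Equation (4.2),
    with the derivatives approximated by forward differences."""
    dx = (rho * math.cos(t + dt) - rho * math.cos(t)) / dt
    dy = (rho * math.sin(t + dt) - rho * math.sin(t)) / dt
    theta = t + math.pi / 2  # heading is tangent to the circle
    return -dx * math.sin(theta) + dy * math.cos(theta)

# the residual stays near zero for every t: the arc is a feasible motion
print(max(abs(constraint(0.1 * k)) for k in range(60)))
```

A path that slid the robot sideways (e.g. constant heading with purely lateral displacement) would instead give a residual equal to the lateral speed, violating the constraint.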

4.2 Path Planning of Car-like Robots Using Basic Atomic Maneuvers

This method is based on two simple maneuvers: rotating around a point and shifting. As shown later in this section, the robot can move from any configuration to any other by a combination of these maneuvers.

4.2.1 Basic turning maneuver

This maneuver allows the robot to turn around a single point R as shown in Figure 4.2.

Figure 4.2: Simple turning maneuver(reproduced from [34])

As shown in Figure 4.2, this maneuver consists of three sub-paths: two rotations with the minimum turning radius \rho_{min} and one straight motion along the common tangent of the two minimum-radius circular arcs.

The three maneuvers are as following:

1. A turn from the initial configuration (x0, y0, \theta_0) to the configuration (x1, y1, \theta_0 + \delta\theta), i.e., a rotation with the robot's minimum turning radius through the angle \delta\theta, where |\delta\theta| < \pi/2.


2. The second segment is a straight line motion from (x1, y1, θ0+δθ) to (x2, y2, θ0+δθ), which is just a change in position. During this motion the orientation of the robot remains the same.

3. The third and final motion is another rotation with minimum turning radius and with δθ angle from (x2, y2, θ0+ δθ) to (x0, y0, θ0+ 2δθ)

To formulate this maneuver so that it can easily be implemented on a mobile robot, we need to calculate the exact lengths of all three segments. With d1 and d3 the lengths of the two circular segments, d2 the length of the straight line, \rho_{min} the minimum turning radius of the mobile robot, l its wheelbase and \varphi the maximum steering angle, we have

\rho_{min} = \frac{l}{\tan\varphi}

d_1 = d_3 = \delta\theta \cdot \rho_{min}, \qquad d_2 = 2\rho_{min}\tan(\delta\theta)
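These segment lengths can be computed directly. This is illustrative Python, with l taken to be the wheelbase; the function name is an assumption:

```python
import math

def turn_segments(delta_theta, l, phi_max):
    """Segment lengths for the basic turning maneuver (Section 4.2.1):
    two minimum-radius arcs through delta_theta joined by their common
    tangent line. l is the wheelbase, phi_max the maximum steering angle."""
    rho_min = l / math.tan(phi_max)                   # minimum turning radius
    d1 = d3 = abs(delta_theta) * rho_min              # arc lengths
    d2 = 2.0 * rho_min * math.tan(abs(delta_theta))   # straight segment
    return d1, d2, d3
```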

4.2.2 Basic shifting maneuver

The basic shifting maneuver allows the robot to perform a shift in three sub-paths: the orientation of the robot remains the same, but its position is shifted by a desired distance.

As it is shown in Figure 4.3 this maneuver consists of the following parts:

1. First, a turn from the initial configuration (x0, y0, \theta_0) to an intermediate configuration (x1, y1, \theta_0 + \delta\theta): a simple turn with the minimum turning radius \rho_{min} through the angle \delta\theta, where |\delta\theta| < \pi/2.

2. The second motion is a straight line motion from the intermediate point (x1, y1, θ0+ δθ) to the second intermediate point (x2, y2, θ0+ δθ)

3. The third and final sub motion is moving from the second intermediate point (x2, y2, θ0+ δθ) to the final point (x3, y3, θ0) by a turn with radius ρmin and angle δθ

Figure 4.3: Basic shifting maneuver (reproduced from [34])

To formulate this maneuver so that it can easily be implemented on a mobile robot, we again need the exact lengths of all three segments. With d1 and d3 the lengths of the two circular segments, d2 the length of the straight line, \rho_{min} the minimum turning radius, l the wheelbase, \varphi the maximum steering angle and d the distance by which the robot is shifted, we have

\rho_{min} = \frac{l}{\tan\varphi}, \qquad \delta\theta = \cos^{-1}\frac{2\rho_{min}}{2\rho_{min} + d}

d_1 = d_3 = \delta\theta \cdot \rho_{min}, \qquad d_2 = 2\rho_{min}\tan(\delta\theta), \qquad d = 2\rho_{min}\left(\frac{1}{\cos\delta\theta} - 1\right)
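The shifting maneuver formulas can be sketched the same way (illustrative Python, not the thesis code; \delta\theta is recovered from the desired shift d):

```python
import math

def shift_segments(d, l, phi_max):
    """Segment lengths for the basic shifting maneuver (Section 4.2.2):
    the robot is displaced by d while keeping its orientation.
    delta_theta follows from d = 2*rho_min*(1/cos(delta_theta) - 1)."""
    rho_min = l / math.tan(phi_max)
    delta_theta = math.acos(2.0 * rho_min / (2.0 * rho_min + d))
    d1 = d3 = delta_theta * rho_min
    d2 = 2.0 * rho_min * math.tan(delta_theta)
    return delta_theta, d1, d2, d3
```

Plugging the returned \delta\theta back into d = 2\rho_{min}(1/\cos\delta\theta - 1) recovers the requested shift, confirming the two formulas are consistent.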

4.2.3 Combining the atomic maneuvers

There are two ways of combining the two atomic maneuvers above to move the robot from an arbitrary configuration to another. They are very basic and simple techniques, but they have the disadvantage of being discontinuous.

6-atomic maneuver

This maneuver consists of a shift to the target coordinates followed by a simple turn to reach the target orientation. The shift and the turn each have three atomic parts, which makes this a 6-atomic maneuver in total.

7-atomic maneuver

This method consists of a turn, followed by a straight motion, and finalized by another turn that brings the robot to the target pose.

Figure 4.4 shows the 7-atomic maneuver.

Figure 4.5 shows a 7-atomic maneuver simulated in Stage. The two 3-segment turns and one straight motion can be seen in the picture.


Figure 4.4: Rotate-move-rotate maneuver for a non holonomic car-like mobile robot[35]

Figure 4.5: 7-atomic maneuver simulated in Stage[35]

4.3 Smooth Path Planning of Car-like Robots

As explained before, path planning for car-like robots using basic atomic maneuvers has the drawback of discontinuity. For this reason another method, based on simple jumps between configurations, has also been used as an alternative. The method mainly follows the work of Jiang et al. [32].

In this section, the steps of this technique are first outlined briefly, and then each step is explained in detail in a separate part.


4.3.1 A brief description of the smooth path planning for car-like robots

The steps for path planning using this method briefly are as following:

1. Construct a path for the equivalent holonomic system. In this work this path is generated using the Planner interface of Player/Stage, which is explained in chapter 7. In some works, such as [32], methods like the global visibility graph are used for finding the path of the equivalent holonomic system.

2. Do the local free space evaluation for the robot. If the local free space along the generated path is sufficient, go to step 3; otherwise reject the path.

3. Lay configurations sequentially along the selected path in the way that the robot can maneuver from one configuration to the next avoiding obstacles.

In the following 3 sections each step will be explained in detail.

4.3.2 Planning the path of the equivalent holonomic system

The concept of Player/Stage is already explained in chapter 3. By feeding the map of the environment to Stage and presenting the robot to the Planner interface as a differential drive robot, the waypoints of a path for the equivalent differential drive robot are acquired. The path constructed from these waypoints is customized for the nonholonomic system later.

4.3.3 Local free path evaluation

As mentioned before, the purpose of the local free path evaluation is to check whether the robot can maneuver from one configuration to another without colliding with the obstacles.

The idea is to subtract the width of the space that the robot sweeps while moving from the minimum distance between the nearby obstacles: if the result is positive at every point along the path, the path is feasible; otherwise it is not.

Consider the robot R moving along its longitudinal direction following a straight line; the swept width is then w_s = 2b. If the robot turns with steering angle \alpha, the swept width w_s(\alpha) is

w_s(\alpha) = \rho_{max}(\alpha) - \rho_{min}(\alpha)   (4.4)

where

\rho_{max}(\alpha) = \sqrt{(a + l)^2 + \left(b + \frac{2l}{\tan(\alpha)}\right)^2}

and

\rho_{min}(\alpha) = \frac{2l}{\tan(\alpha)} - b

With D_{min} the minimum distance between the two nearby obstacles, the following two conditions must be satisfied:

D_{min} - w_s(\alpha) > 0

and, for the case of straight line motion,

D_{min} - 2b > 0

Substituting everything into the last two equations gives a very useful inequality, which can even be used for laying the robot configurations along the path:

D_{min} - \sqrt{(a + l)^2 + \left(b + \frac{2l}{\tan(\alpha)}\right)^2} + \frac{2l}{\tan(\alpha)} - b > 0

The importance of this inequality is that, given D_{min}, the steering angle \alpha can easily be calculated.

For calculating D_{min}, the laser interface of Player/Stage has been used: the simulated laser range finder performs a 360-degree scan at each point of the path, and the minimum distance D_{min} at that point is then easily obtained from the scanned data.
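The feasibility test of this section can be sketched as follows. This is illustrative Python, not the thesis code; the geometric parameters a, b, l follow the notation of Figure 4.1 (2a × 2b body, axles at ±l from the center), and the function names are assumptions:

```python
import math

def swept_width(alpha, a, b, l):
    """Swept width ws(alpha) of a 2a x 2b robot with axles at +/- l
    from its center, turning at steering angle alpha (Equation (4.4))."""
    rho_max = math.hypot(a + l, b + 2.0 * l / math.tan(alpha))
    rho_min = 2.0 * l / math.tan(alpha) - b
    return rho_max - rho_min

def path_is_feasible(d_min, alpha, a, b, l):
    """Local free-space test: the clearance d_min between the nearby
    obstacles must exceed the swept width (straight motion needs 2b)."""
    if alpha == 0.0:
        return d_min - 2.0 * b > 0.0
    return d_min - swept_width(alpha, a, b, l) > 0.0
```

Turning always sweeps a wider corridor than straight motion, so a clearance that admits a straight pass may still rule out a turn at a given steering angle.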

4.3.4 Transferring between adjacent configurations

As explained in the last section, the inequality lets us calculate the steering angle at each point of the path of a car-like robot. The most important question now is how to transfer from one configuration on the path to the next.

Consider two adjacent configurations qi and qi+1. To move the robot from the former to the latter, there are three motion transfer techniques: direct, indirect and reversal.

The second derivative of each path segment determines whether it is a direct or an indirect path. Let p(x) be a path segment; it is called a direct path if its second derivative does not change sign along the segment, i.e.,

\frac{d^2 p(x)}{dx^2} \geq 0 \quad \text{or} \quad \frac{d^2 p(x)}{dx^2} \leq 0

for a continuous range of x values.

A reversal is a maneuver of a robot involving movements in opposite directions. A reversal path is the locus of a robot after a reversal movement.
