
MASTER'S THESIS

Coordinating Vehicles using

Mirror Enhanced Target Recognition

Michael Årnevall

Luleå University of Technology MSc Programmes in Engineering

Electrical Engineering

Department of Computer Science and Electrical Engineering Division of Robotics and Automation

2006:034 CIV - ISSN: 1402-1617 - ISRN: LTU-EX--06/034--SE

COORDINATING VEHICLES USING MIRROR ENHANCED TARGET RECOGNITION

Michael Årnevall

Luleå University of Technology

Dept. of Computer Science and Electrical Engineering
EISLAB

February 6, 2006


ABSTRACT

The long-term goal behind this study is to achieve coordinated navigation of two or more robot vehicles, capable of autonomous operation in contaminated and hostile environments.

One primary goal is to design and implement two basic functionalities named "Follow Me" and "Run Ahead of Me", which imply coordinated vehicles capable of autonomous object recognition and tracking, using a time of flight laser range scanner as the main sensor.

Unfortunately the circumstances made it impossible to create an applicable target tracking using the two vehicles available, but a lot was learnt about the limitations when using lasers to recognize and track targets.

The project did, however, result in some promising motion models describing the relative position and orientation (pose) of two coordinated vehicles, and because of their generality the models have proven to work in various contexts.

Trials show that target tracking utilizing a laser range scanner is possible, but the resolution proves to be insufficient. Thus prospects are good to improve the measurements by using mirrors to reflect more laser pulses at the target, and thereby receive more stable readings of the object's orientation.

Keywords:

Kinematic models, Coordinated vehicles, Object recognition, Mirror enhanced target tracking, Kalman filter, Time of flight laser range scanner, Rate gyro, Odometric encoder


PREFACE

This thesis has been performed in cooperation with the Department of Computer Science and Electrical Engineering at Luleå University of Technology. This work is the last part of the path to my degree as a Master of Science in robotics and mechatronics.

I would like to thank my teacher Åke Wernersson for the ideas that gave rise to this project. Sven Rönnbäck also deserves many thanks for all his help with difficulties during the project.

Michael Årnevall


CONTENTS

Chapter 1: Introduction
1.1 Motivation
1.2 Content

Chapter 2: Equipment
2.1 Range scanning laser
2.2 Rate gyros
2.3 Odometric encoders
2.4 Controller area network (CAN)
2.5 Vehicle control
2.6 Computer hardware
2.7 Computer software
2.8 Schematic overview of used equipment

Chapter 3: Motion models for two coordinated vehicles
3.1 Follow Me
3.2 Run Ahead of Me
3.3 Parametrization of models
3.4 Motion model of one moving vehicle
3.5 Motion model of two coordinated vehicles
3.6 Motion model based directly on parametrization
3.7 Summary of motion models

Chapter 4: Object recognition using a time of flight laser range scanner
4.1 Algorithms
4.2 Limitations

Chapter 5: Using mirrors to enhance the target tracking
5.1 Recalculation of laser data
5.2 Some limitations

Chapter 6: Evaluation and future work
6.1 Evaluation
6.2 Future work

Appendix A: Kalman filter
A.1 Initialization
A.2 Prediction
A.3 Observation
A.4 Estimation
A.5 Extended Kalman Filter (EKF)

Appendix B: Linearization and discretization
B.1 Linearization
B.2 Discretization

Appendix C: Notations and abbreviations
C.1 Notations
C.2 Abbreviations


CHAPTER 1
Introduction

1.1 Motivation

Today, more and more autonomous systems are constructed. An increasing number of those are vehicles which are supposed to operate on their own in an unfamiliar environment. The degree of difficulty in the tasks performed by the robots steadily increases, and a solution might be to use individually specialized robots which are able to cooperate. These systems are often one-of-a-kind and have complex structures. They are tedious to develop and require knowledge in many fields such as electrical engineering, automatic control, mechanical engineering and programming. Because most of the equipment was already assembled and programs controlling the vehicles were available, there was an opportunity to go back and deal with the basics of an autonomous system.

An inspiration when the work started was that no one actually knew which partial systems had to be studied and developed.

This project has been performed to clarify whether the system described in Figure 1.1 may be constructed using available hardware. The goal for this project is that robot 1 shall keep track of the movements of robot 2, using the parameters α, β and r. Furthermore, the idea was to make use of common points of reference, P, to aid the tracking.

Figure 1.1: In the desired system, vehicle 1 is tracking the movement of vehicle 2 by means of the three parameters r, α and β. If both vehicles are able to identify the object, P, it is possible to use that information to improve the estimation of the vehicles' relative positions.

1.2 Content

Chapter 2 gives a description of the equipment available for this project, concerning sensors, interfaces and computer hardware. To deal with the intended type of autonomous system a mathematical description is necessary; thus in chapter 3 a few suggestions for possible motion models are presented. In chapter 4, the SICK laser range scanner is dealt with, in an attempt to recognize objects using range measurements only. A method to create virtual laser range scanners, using mirrors, is described in chapter 5. Appendices A and B give a presentation of the methods of calculation necessary to simulate or implement the covered subjects.


CHAPTER 2
Equipment

In this project, two apparently different vehicles were available. The larger of the two is built upon an electric wheelchair named Mobile Internet Connected Assistant, or MICA for short. This wheelchair is an ongoing project and has been developed by personnel at Luleå University of Technology. Vehicle number two, also called "Mini LuSAR", is based on a 1:5 scale model car. "Mini LuSAR" was built by students in a project during spring 2004, with the intention to create a more lightweight copy of the MICA wheelchair concerning sensors and user interface. This chapter gives a description of the equipment installed in the robots, together with a summary of the software used.

The MICA wheelchair is equipped with a number of different sensors, but this chapter will only cover those relevant for this thesis. Because "Mini LuSAR" is the principal test object, the focus will be on this vehicle.

The two robots are fitted with three types of sensors. Most important is the range scanning laser, used primarily for navigation. The other significant sensors are rate gyros and odometric encoders, which measure the robot's rate of rotation and speed respectively.


2.1 Range scanning laser

The time of flight laser range scanner, manufactured by SICK [1], is of type LMS200, see Table 1. The LMS200 measures the distance to its surroundings by measuring the time between transmission and detection of a laser pulse. The laser scanner has a maximum field of view of 180°, with an angular resolution of 0.5°, which results in 361 measured points from every sweep of the laser. In this project, the laser is the primary sensor used for navigation and object detection.
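As a concrete illustration of these numbers, the sketch below converts one such sweep into cartesian points, which is the form in which the scans are plotted later in this report. It is a minimal sketch; the class and the dummy data are invented for illustration and are not part of the MICA software.

```java
// Minimal sketch: convert one LMS200 sweep (180 deg field of view,
// 0.5 deg resolution, 361 range values) into cartesian coordinates.
// Class, method and test data are illustrative, not from the MICA code.
public class LaserSweep {
    public static double[][] toCartesian(double[] ranges) {
        double[][] points = new double[ranges.length][2];
        for (int i = 0; i < ranges.length; i++) {
            double angle = Math.toRadians(0.5 * i); // 0..180 degrees
            points[i][0] = ranges[i] * Math.cos(angle); // x [m]
            points[i][1] = ranges[i] * Math.sin(angle); // y [m]
        }
        return points;
    }

    public static void main(String[] args) {
        double[] ranges = new double[361];  // one sweep: 361 readings
        java.util.Arrays.fill(ranges, 2.0); // dummy scan: everything at 2 m
        double[][] pts = toCartesian(ranges);
        System.out.printf("first point: (%.2f, %.2f)%n", pts[0][0], pts[0][1]);
    }
}
```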

SICK laser parameters [1]

Type / Model                 SICK LMS200
Scanning Range               max 80 m
Field Of View                max 180°
Angular Resolution           0.25° / 0.5° / 1°
Response Time                53 ms / 26 ms / 13 ms
Resolution                   10 mm
Systematic Error             typ. ±15 mm
Statistical Error (1 sigma)  5 mm
Data Interface               RS422 / RS232
Laser Class                  1
Supply Voltage               24 V DC
Weight                       4.5 kg

Table 1: Data for the SICK range scanning laser

Figure 2.1 gives a typical illustration of how the LMS200 visualizes the surroundings. The scan was taken in a room cluttered with chairs, desks and bookshelves.

Figure 2.1: A typical visualization of data from the SICK laser range scanner. The laser has been positioned in a room filled with desks, chairs and shelves. It is obviously very difficult to determine what the measurements originate from.

2.2 Rate gyros

Both robots are equipped with rate gyros to measure the vehicle's angular velocity. The MICA wheelchair is fitted with a more accurate fibre optic gyroscope, while "Mini LuSAR" uses a much cheaper piezoelectric gyro of a type primarily used in passenger cars. More data about these gyroscopes is displayed in Tables 2 and 3. (The KVH E•Core 1000 gyroscope is an obsolete product and is no longer presented on the manufacturer's homepage; data for the TRW gyroscope is not available online because it is not accessible to the general public.)

2.3 Odometric encoders

"Mini LuSAR" and the MICA wheelchair are both fitted with odometric encoders. These encoders are mounted at the outgoing motor shafts, and are used to measure the vehicle's covered distance and speed. This is performed by letting a microcontroller count the number of pulses per time unit generated by the tachometer. Because the MICA wheelchair has differential steering, it has encoders at each wheel, which also makes it possible to determine the vehicle's rate of rotation.


E•Core 1000 gyro parameters

Type / Model               KVH E•Core 1000
Input Rate                 max 100°/s
Rate Resolution            0.05°/s
Bias Stability             0.001°/s
Angle Random Walk (noise)  20°/h/√Hz
Instantaneous Bandwidth    100 Hz
Update Rate                10 values/s
Data Interface             RS232
Supply Voltage             12 V DC
Weight                     0.25 kg

Table 2: Data for the KVH fiber optic gyro

TRW gyro parameters

Type / Model     TRW rate gyro
Input Rate       ±128°/s
Rate Resolution  0.0625°/s
Update Rate      100 values/s
Data Interface   CAN
Supply Voltage   12 V DC

Table 3: Data for the TRW gyro

2.4 Controller area network (CAN)

The CAN [2] bus is a simple but robust system for transferring data between many different units. The system was invented by Bosch GmbH to be used in cars, but today CAN has become popular and is utilized in various branches where communication must withstand tough conditions, especially electromagnetic disturbance.

This system consists of only two signal wires, CAN low and CAN high, which are terminated at each end by a resistor, R. See Figure 2.2 for a schematic picture.

In a CAN network no ordinary addressing is necessary: a message is sent to every node, and the identifier decides which nodes shall take care of the message. Every message may contain up to 8 bytes of information, transferred at a maximum speed of 1 Mbit/s. This is valid for network lengths up to 40 meters; if the speed is reduced it is possible to use longer networks.
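The identifier-based addressing can be illustrated with a small sketch: every node sees every frame and decides from the identifier whether to process it. This is a conceptual model only; real CAN controllers perform this filtering in hardware, and the identifier values below are invented.

```java
// Conceptual sketch of CAN addressing: a frame carries an identifier
// and up to 8 data bytes; every node receives every frame and keeps
// only those whose identifier it has been configured to accept.
// Identifier values are invented for illustration.
import java.util.Set;

public class CanNode {
    private final Set<Integer> acceptedIds;

    public CanNode(Set<Integer> acceptedIds) { this.acceptedIds = acceptedIds; }

    public void onFrame(int id, byte[] data) {
        if (data.length > 8) throw new IllegalArgumentException("max 8 bytes");
        if (acceptedIds.contains(id)) {
            System.out.printf("node handles frame id=0x%03X (%d bytes)%n", id, data.length);
        } // otherwise the frame is silently ignored
    }

    public static void main(String[] args) {
        CanNode steeringNode = new CanNode(Set.of(0x101)); // accepts steering commands only
        steeringNode.onFrame(0x101, new byte[] {42});      // handled
        steeringNode.onFrame(0x100, new byte[] {7});       // ignored by this node
    }
}
```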


Figure 2.2: A schematic sketch of the CAN bus, with the two signal wires "CAN LOW" and "CAN HIGH", and the termination resistors, R. Data may be sent between the connected nodes A, B and C.

2.5 Vehicle control

The movement of the vehicles is controlled by sending messages from the computer to microprocessor units which control the motor and the steering servo. "Mini LuSAR" has two CAN nodes, one for the propulsion motor and one for the steering servo motor. The messages contain the desired speed and steering wheel angle, and are sent as serial data from the computer via a CAN/RS232 dongle [3], which translates the serial data into CAN messages and relays them to the CAN bus. Messages are received by a CANDIP module [4] at every node. The CANDIP modules are equipped with CAN interface circuits and an AVR 8-bit RISC microprocessor [5], which may easily be programmed to perform any desired task. In this case, each AVR processor program has been written to produce control signals to the motors.

A 24 volt DC motor [6] provides the propulsion of the vehicle. The motor is controlled by an H-bridge motor drive [7], which is managed by the microcontroller. The DC motor is fitted with a digital tachometer [6], serving as odometric encoder, which produces pulses that can be counted by the microcontroller so that the speed of the car can be calculated. This information is then sent back to the computer and used in the algorithms for navigating in the environment. The measured speed is also used by the microcontroller in a simple control loop, to maintain the desired speed.

Steering the car is accomplished by a powerful servo motor [8], which is controlled by a standardized Pulse Width Modulation (PWM) signal. The PWM signal is generated directly by the microcontroller in the designated CANDIP module.


2.6 Computer hardware

The core of this system consists of a PC104 [9] computer. A PC104 computer is basically an ordinary PC, but the concept builds upon a module system where the cards, 10 × 10 cm, are stacked together. Communication between the cards is managed via a specific bus, which is a distinctive feature of the PC104. Furthermore, the modules are built to operate in tough environments.

The computer mounted on ”Mini LuSAR” is equipped with the following modules.

• 1 CPU Board with a Geode GX1 processor.

• 1 Power Supply Unit.

• 1 I/O Unit with 4 16550 COM ports.

• 1 I/O Unit with a dual slot PC-Card/PCMCIA interface.

One of the PCMCIA sockets contains a standard IEEE 802.11b WLAN adapter, which provides the communication to the Internet. This feature, together with special software, makes it possible to control the robot from anywhere in the world.

The entire computer is mounted in a robust aluminium container for protection against vibration, shock and dirt.

2.7 Computer software

The PC104 computer runs the Red Hat Linux [10] operating system. Drivers for the peripheral equipment are written in the programming language C, and most of the code is specially written for the MICA system. The majority of the algorithms are written in Java, foremost because of the opportunity to visualize data in different ways using Matlab [11], which is a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis, and numeric computation. See [12] for more information about this issue.

2.8 Schematic overview of used equipment

In the earlier sections, lots of hardware has been presented. A key map is presented in Figure 2.3, which also describes the interfaces used to connect the variety of units.

Figure 2.3: A key map presenting all units used to manage the vehicle "Mini LuSAR". The PC104 receives data from the sensors via serial communication, but because most sensors utilize the CAN bus, CANDIP modules have to be used as interface translators. The CANDIP module designated for the propulsion makes use of a simple control loop to maintain the desired speed.

CHAPTER 3
Motion models for two coordinated vehicles

To implement the system described in the introduction, a mathematical description of how the vehicles move in their surroundings is needed. Because the algorithm will be constructed using a Kalman filter, see appendix A, it is necessary that the final system is written as a state-transition model. The idea is to produce general models which may be used in various contexts, though the main goal is to use them to create two systems named "Follow Me" and "Run Ahead of Me".

3.1 Follow Me

In theory, "Follow Me" is a quite simple algorithm, but in practice it gives rise to a number of difficulties. The purpose is that a vehicle shall follow another object which moves arbitrarily. The major problem when using a SICK laser range scanner as the only sensor is to decide which measured points arise from the object to follow. If the tracking occurs in a large open space it is relatively easy to make it work, but when moving in more narrow spaces with scattered obstacles the difficulties pile up. Problems such as this gave birth to the idea of object recognition described in chapter 4.

"Follow Me" obviously demands visual contact between the follower and the target, and if this is lost the algorithm more or less falls apart. If the target is recognizable, it might be possible to resume the tracking when the target is found again.

3.2 Run Ahead of Me

The algorithm called "Run Ahead of Me" is based on "Follow Me", but they have one significant difference. In "Run Ahead of Me" the tracking vehicle has knowledge about the target's movements, such as speed and rate of rotation. This makes it possible for the tracker to estimate the target's position without having visual contact.

3.3 Parametrization of models

From the beginning the idea was to implement the tracking without any involvement of the global system of coordinates, but as the plans grew, the need for that system of coordinates became obvious. Lots of different versions of the parametrization have been produced, but the final choice fell on the one described in Figure 3.1.

Figure 3.1: Based on equations describing the vehicles' movement in the global system of coordinates, the models are expressed in the parameters ξ, η and β, which correspond to vehicle 2's movement in vehicle 1's cartesian system of coordinates.

The plan has been to use the parametrization as in Figure 3.1, but with the possibility to add the global system of coordinates when needed.

In Figure 1.1 a slightly different set of parameters was chosen, but these may be calculated from the parameters ξ and η according to equation (3.1):

$$r = \sqrt{\xi^2 + \eta^2}, \qquad \alpha = \arctan\left(\frac{\eta}{\xi}\right) \tag{3.1}$$

Note that the calculation of α is dependent on the signs of ξ and η respectively.
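In code, this sign handling is exactly what the two-argument arctangent provides. A minimal sketch of (3.1):

```java
// Minimal sketch of equation (3.1): convert (xi, eta) to (r, alpha).
// Math.atan2 handles the sign combinations of xi and eta that a plain
// arctan of eta/xi would get wrong.
public class Parametrization {
    public static double[] toPolar(double xi, double eta) {
        double r = Math.hypot(xi, eta);     // r = sqrt(xi^2 + eta^2)
        double alpha = Math.atan2(eta, xi); // quadrant-correct angle
        return new double[] { r, alpha };
    }

    public static void main(String[] args) {
        double[] ra = toPolar(-1.0, 1.0); // vehicle 2 behind-left of vehicle 1
        System.out.printf("r = %.3f m, alpha = %.1f deg%n", ra[0], Math.toDegrees(ra[1]));
    }
}
```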

3.4 Motion model of one moving vehicle

By starting from the global system of coordinates, X, Y, it is possible to use the model in a larger concept, such as navigating using maps. Figure 3.2 shows one vehicle moving in the global system of coordinates, X, Y.

Figure 3.2: One vehicle in the global system of coordinates, where the movement is described by the equations in (3.2).

The vehicle movements are described by the equations in (3.2):

$$\begin{aligned}
\dot{x}_1(t) &= V_1(t)\cos(\theta_1(t)) \\
\dot{y}_1(t) &= V_1(t)\sin(\theta_1(t)) \\
\dot{\theta}_1(t) &= \omega_1(t)
\end{aligned} \tag{3.2}$$

Assuming that the robot is able to measure its position in the global system of coordinates, it is possible to create a state-space model with states and inputs according to equation (3.3):

$$\text{State vector } \begin{bmatrix} x_1 \\ y_1 \\ \theta_1 \end{bmatrix}, \qquad \text{Input vector } \begin{bmatrix} V_1 \\ \omega_1 \end{bmatrix} \tag{3.3}$$

If the equations in (3.2) are made linear and time discrete according to appendix B, the model, in state-space form, will be given by equation (3.4):

$$\begin{bmatrix} x_1(k+1) \\ y_1(k+1) \\ \theta_1(k+1) \end{bmatrix} =
\begin{bmatrix} 1 & 0 & -V_{01}T\sin(\theta_{01}(k)) \\ 0 & 1 & V_{01}T\cos(\theta_{01}(k)) \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1(k) \\ y_1(k) \\ \theta_1(k) \end{bmatrix} +
\begin{bmatrix} T\cos(\theta_{01}(k)) & -0.5\,V_{01}(k)T^2\sin(\theta_{01}(k)) \\ T\sin(\theta_{01}(k)) & 0.5\,V_{01}(k)T^2\cos(\theta_{01}(k)) \\ 0 & T \end{bmatrix}
\begin{bmatrix} V_1(k) \\ \omega_1(k) \end{bmatrix} \tag{3.4}$$

Since this is only a piecewise linear model, $V_{01}$ and $\theta_{01}$ correspond to the present working point.

3.5 Motion model of two coordinated vehicles

This particular model is based on two robots moving in a global system of coordinates, with the possibility to track their individual positions in the same system of coordinates. In Figure 3.3 a schematic view of two vehicles in a global system of coordinates is shown.

The model describing the motion of robot 1 in Figure 3.3 is based on the same equations as in (3.2). Assuming that the speed $V_1(k)$ and angular velocity $\omega_1(k)$ are constant during the time interval $t_k \rightarrow t_{k+1}$, the robot displacement is given by equation (3.5). It should be observed that the displacement vector $D(V_1, \omega_1)$ is described in the robot's own system of coordinates at time $t_k$.

$$D(V_1, \omega_1) = \frac{2V_1(k)}{\omega_1(k)}\sin\left(\frac{\omega_1(k)T}{2}\right)
\begin{bmatrix} \cos\left(\frac{\omega_1(k)T}{2}\right) \\ \sin\left(\frac{\omega_1(k)T}{2}\right) \end{bmatrix} \tag{3.5}$$

The robot rotation is calculated according to equation (3.6), where $\theta_1(k)$ denotes the orientation of the robot's X-axis, in the global system of coordinates, X, Y, at time $t_k$.

$$\theta_1(k+1) = \theta_1(k) + \omega_1(k)T \tag{3.6}$$

Figure 3.3: A discrete time version of two coordinated vehicles moving in the global system of coordinates. The displacements are described both for the global system of coordinates and for the parameters ξ, η and β.

Equations (3.5) and (3.6) now yield the robot movement in the global system of coordinates, X, Y, according to equation (3.7),

$$\begin{bmatrix} X_1(k+1) \\ Y_1(k+1) \\ \theta_1(k+1) \end{bmatrix} =
\begin{bmatrix} X_1(k) \\ Y_1(k) \\ \theta_1(k) \end{bmatrix} +
\begin{bmatrix} R(\theta_1(k))\,D(V_1, \omega_1) \\ \omega_1(k)T \end{bmatrix} \tag{3.7}$$

where $R(\theta_1(k))$ is the rotation matrix in (3.8).

$$R(\theta_1(k)) = \begin{bmatrix} \cos(\theta_1(k)) & -\sin(\theta_1(k)) \\ \sin(\theta_1(k)) & \cos(\theta_1(k)) \end{bmatrix} \tag{3.8}$$

If the same set of equations is set up to describe the movement of robot 2, it is possible to accomplish the parametrization described in section 3.3. The model, described in equation (3.9), is based upon taking the differential vector of the two vehicles' positions in the global system of coordinates. $\Delta(k)$ denotes the differential vector at time $t_k$.

$$\begin{aligned}
\Delta(k+1) &= \begin{bmatrix} X_2(k+1) - X_1(k+1) \\ Y_2(k+1) - Y_1(k+1) \end{bmatrix} \\
&= \begin{bmatrix} X_2(k) \\ Y_2(k) \end{bmatrix} + R(\theta_2(k))D(V_2, \omega_2) - \begin{bmatrix} X_1(k) \\ Y_1(k) \end{bmatrix} - R(\theta_1(k))D(V_1, \omega_1)
\end{aligned} \tag{3.9}$$

(3.9) might be rewritten as (3.10).

$$\Delta(k+1) = \Delta(k) + R(\theta_2(k))D(V_2, \omega_2) - R(\theta_1(k))D(V_1, \omega_1) \tag{3.10}$$

To determine the parameters ξ and η, the differential vectors $\Delta(k+1)$ and $\Delta(k)$ have to be rotated to robot 1's system of coordinates. This is accomplished by (3.11) and (3.12) respectively.

$$\Delta(k) = R(\theta_1(k))\begin{bmatrix} \xi(k) \\ \eta(k) \end{bmatrix} \tag{3.11}$$

$$\Delta(k+1) = R(\theta_1(k+1))\begin{bmatrix} \xi(k+1) \\ \eta(k+1) \end{bmatrix} \tag{3.12}$$

Inserting (3.11) and (3.12) in (3.10) gives the result presented in (3.13).

$$\begin{bmatrix} \xi(k+1) \\ \eta(k+1) \end{bmatrix} =
R^{-1}(\theta_1(k+1))\left\{ R(\theta_1(k))\begin{bmatrix} \xi(k) \\ \eta(k) \end{bmatrix} + R(\theta_2(k))D(V_2, \omega_2) - R(\theta_1(k))D(V_1, \omega_1) \right\} \tag{3.13}$$

$$R^{-1}(\phi) = R(-\phi) = R(\phi)^T \tag{3.14}$$

$$R(\phi)R(\rho) = R(\phi + \rho) = R(\rho)R(\phi) \tag{3.15}$$

Using the properties of the two dimensional rotation matrix shown in (3.14) and (3.15), together with the fact that $\theta_2(k) = \theta_1(k) + \beta(k)$, makes it possible to further rewrite (3.13) according to (3.16). The angle β, in (3.17), is independent of any system of coordinates, but it is required to describe the complete system.

$$\begin{bmatrix} \xi(k+1) \\ \eta(k+1) \end{bmatrix} =
R(-\omega_1(k)T)\left\{ \begin{bmatrix} \xi(k) \\ \eta(k) \end{bmatrix} + R(\beta(k))D(V_2, \omega_2) - D(V_1, \omega_1) \right\} \tag{3.16}$$

$$\beta(k+1) = \beta(k) + T(\omega_2(k) - \omega_1(k)) \tag{3.17}$$

If the displacement vectors $D(V_1, \omega_1)$ and $D(V_2, \omega_2)$ are simplified according to (3.18), the model can be written in state-space form as in equation (3.19).

$$D(V_x, \omega_x) = V_x(k)T\begin{bmatrix} \cos\left(\frac{\omega_x(k)T}{2}\right) \\ \sin\left(\frac{\omega_x(k)T}{2}\right) \end{bmatrix} \tag{3.18}$$

$$\begin{bmatrix} \xi(k+1) \\ \eta(k+1) \\ \beta(k+1) \end{bmatrix} =
\begin{bmatrix} \cos(-\omega_{01}(k)T) & -\sin(-\omega_{01}(k)T) & 0 \\ \sin(-\omega_{01}(k)T) & \cos(-\omega_{01}(k)T) & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \xi(k) \\ \eta(k) \\ \beta(k) \end{bmatrix} +
\begin{bmatrix} -T\cos\left(\frac{\omega_{01}(k)T}{2}\right) & 0 & T\cos\left(\beta_0(k) + \frac{\omega_{02}(k)T}{2}\right) & 0 \\ -T\sin\left(\frac{\omega_{01}(k)T}{2}\right) & 0 & T\sin\left(\beta_0(k) + \frac{\omega_{02}(k)T}{2}\right) & 0 \\ 0 & -T & 0 & T \end{bmatrix}
\begin{bmatrix} V_1(k) \\ \omega_1(k) \\ V_2(k) \\ \omega_2(k) \end{bmatrix} \tag{3.19}$$

Equation (3.19) only describes a piecewise linear model, with the working point $\omega_{01}(k)$, $\omega_{02}(k)$ and $\beta_0(k)$. Further simplifications may be performed, but of course the model will then be less accurate.

3.6 Motion model based directly on parametrization

As a comparison to the models based on the global system of coordinates, another model was constructed. It is based on the same parametrization as in section 3.3, but the parameters ξ and η are calculated directly from the distance between the robots. A schematic description is seen in Figure 3.4.

The equations which describe the robots' movement are presented in (3.20):

$$\begin{aligned}
\dot{\xi}(t) &= V_2(t)\cos(\beta(t)) - V_1(t) + \omega_1(t)\eta(t) \\
\dot{\eta}(t) &= V_2(t)\sin(\beta(t)) - \omega_1(t)\xi(t) \\
\dot{\beta}(t) &= \omega_2(t) - \omega_1(t)
\end{aligned} \tag{3.20}$$

Figure 3.4: An alternative system where the global system of coordinates has been removed, and a continuous-time description of the parameters ξ, η and β can be derived.

State vector and input vector are presented in (3.21):

$$\text{State vector } \begin{bmatrix} \xi \\ \eta \\ \beta \end{bmatrix}, \qquad \text{Input vector } \begin{bmatrix} V_1 \\ \omega_1 \\ V_2 \\ \omega_2 \end{bmatrix} \tag{3.21}$$

Unfortunately, the elements in the matrices of the complete linear time-discrete state-space model are too large to be printed readably in this report. The model will, though, have the standard state-space form as in equation (3.22):

$$\begin{aligned}
x(k+1) &= Ax(k) + Bu(k) \\
y(k) &= Cx(k)
\end{aligned} \tag{3.22}$$
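Since the full linearized matrices are too unwieldy to print, a numerical check is useful instead. The sketch below integrates the continuous model (3.20) with a short forward-Euler step; it is a simulation aid under invented inputs, not the linearized filter model.

```java
// Sketch: integrate the continuous relative-pose model (3.20) with a
// forward-Euler step. Useful as a reference when checking discretized
// and linearized versions; inputs and step size are invented values.
public class DirectParametrization {
    // state s = {xi, eta, beta}
    static double[] derivatives(double[] s, double V1, double w1, double V2, double w2) {
        return new double[] {
            V2 * Math.cos(s[2]) - V1 + w1 * s[1], // xi-dot
            V2 * Math.sin(s[2]) - w1 * s[0],      // eta-dot
            w2 - w1                               // beta-dot
        };
    }

    public static void main(String[] args) {
        double[] s = {2.0, 0.0, 0.0};
        double dt = 0.01; // small step keeps the Euler error down
        for (int k = 0; k < 500; k++) {
            double[] ds = derivatives(s, 0.5, 0.0, 0.5, 0.1);
            for (int i = 0; i < 3; i++) s[i] += dt * ds[i];
        }
        System.out.printf("after 5 s: xi=%.2f eta=%.2f beta=%.2f%n", s[0], s[1], s[2]);
    }
}
```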

Depending on which measurements are available, the input matrix of the linear time-discrete model will be different. Unnecessary elements in the input matrix may simply be discarded from the complete model later on, depending on how the model shall be used. In spite of simplifications when the linear time-discrete system was calculated, simulations have shown that this parametrization works satisfactorily.

This model was designed with the algorithm "Run Ahead of Me" in mind, but simulations have also shown that it works just as well with the "Follow Me" algorithm. If appropriate noise terms have been chosen, the model seems quite insensitive as long as useful measurements of the target's position are available. Because there is, at this time, no acceptable way of measuring the target's orientation, this measurement has been discarded in the simulations. The tracking would be a lot better if this measurement were available, but for the time being the orientation has to be estimated from the movements of the vehicles.

There should not be any problem to incorporate this model into the global system of coordinates, if at least one of the vehicles involved were able to measure its position in this system of coordinates.

This model is well suited when the SICK laser range scanner works as sensor, because ξ and η can be measured directly and no transformations have to be performed.

3.7 Summary of motion models

All the models presented in this chapter are piecewise linear, which means that the elements in the state-transition matrix and input matrix have to be recalculated at every time step to be accurate. Every model has been simulated using Matlab and they all seem to work satisfactorily if the noise parameters are carefully chosen. To see if the models fulfill all the expectations, some different cases have to be tested.

Some of the cases are:

• Tracking of a fixed target

• Tracking of a moving target which moves arbitrarily

• Tracking of a moving target, while receiving information about the target's speed and angular velocity

To design good models for use in real-time systems, it is necessary to deal with the time delays which may occur. This is not performed in any of the models covered in this chapter, though some of the delays can be taken care of as described in appendix B.

Both models describing two moving vehicles may be used when tracking a fixed target, and it is only a matter of picking the necessary parts of the existing models. Proposals have been made that the robots should use fixed targets to improve the tracking of each other. The problematic part of this concept is to decide whether the robots track the same target. Tracking fixed targets yields yet another problem, because the number of targets will not be the same at all times. Implementing this probably calls for a conjurer also skilled in programming, because of the necessary use of dynamically sized state vectors in a Kalman filter. Another solution might be to have filters "standing by" for every new target to track, and then use the estimated position as a measurement in the main tracking algorithm.

CHAPTER 4
Object recognition using a time of flight laser range scanner

The first version of the algorithm "Follow Me", which I personally was involved in, took form in a student project during spring 2004. The "Follow Me" algorithm is described in section 3.1. This first attempt was soon proven to be inadequate as a tracking algorithm, because it had been written to track a cluster of points in the data from the laser. Problems quickly arose when this cluster came too close to other measured points, because there is no way to distinguish between the measured points. When operating in large open spaces the algorithm worked adequately, but in smaller spaces with lots of obstacles it was more or less useless. In an attempt to circumvent this problem, an object was created that could possibly be recognized from the laser measurements. This was also a necessity because of the specifications given before the start of this project, which implied that the tracking should be managed using laser data only. The intention is to mount this object on the vehicle that is to be followed, in this case vehicle number 2. To make the tracking possible, the object had to be recognizable from every direction. A large number of different creations were considered before the choice fell on the target seen in Figure 4.1.

The distinct angles between the wings of the object, and the possibility to identify two of these wings at the same time, make it possible to determine how the object is rotated relative to an observer.

Figure 4.1: To make an object discernible in laser data it has to be designed with care. The angles between the wings are 150°, 120° and 90°, and the object is detectable from about 70% of the total 360°, using a SICK laser at close range.

4.1 Algorithms

The algorithm is based on a method which extracts line segments from scans of the SICK laser range scanner. Two segments, each with a predetermined length or consisting of a certain number of measured points, have to lie next to each other if the segments shall be noticed in the program. It should be noted that the number of laser pulses that reflect at the target depends on the target's position and orientation with respect to the observer.
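The thesis does not spell out the segment-extraction method in detail, so the sketch below shows one common choice, recursive splitting (iterative end-point fit): fit a chord between the first and last point of a run, split at the point of largest deviation, and repeat until every piece is nearly straight. The tolerance and the test points are invented.

```java
// Sketch of line-segment extraction by recursive splitting (iterative
// end-point fit). This is one common technique, assumed here for
// illustration; the deviation threshold and test data are invented.
import java.util.ArrayList;
import java.util.List;

public class SegmentExtractor {
    static final double MAX_DEV = 0.03; // 3 cm tolerance (invented)

    static void split(double[][] p, int lo, int hi, List<int[]> segments) {
        double x0 = p[lo][0], y0 = p[lo][1], x1 = p[hi][0], y1 = p[hi][1];
        double len = Math.hypot(x1 - x0, y1 - y0);
        int worst = -1;
        double worstDev = 0;
        for (int i = lo + 1; i < hi; i++) { // perpendicular distance to the chord
            double dev = Math.abs((x1 - x0) * (y0 - p[i][1]) - (x0 - p[i][0]) * (y1 - y0)) / len;
            if (dev > worstDev) { worstDev = dev; worst = i; }
        }
        if (worstDev > MAX_DEV) { // too bent: split at the worst point and recurse
            split(p, lo, worst, segments);
            split(p, worst, hi, segments);
        } else {
            segments.add(new int[] { lo, hi }); // accept as one line segment
        }
    }

    public static void main(String[] args) {
        // an L-shaped run of points: two segments meeting at a corner
        double[][] pts = { {0, 0}, {0.5, 0}, {1, 0}, {1, 0.5}, {1, 1} };
        List<int[]> segs = new ArrayList<>();
        split(pts, 0, pts.length - 1, segs);
        System.out.println("segments found: " + segs.size()); // prints 2
    }
}
```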

Figure 4.2 shows an example of how the object is visualized in a laser scan.

Figure 4.2: The object in Figure 4.1 visualized through the laser from three different angles (90, 120 and 150 degrees). From the detectable angles it is possible to determine the orientation of the object with an accuracy of ±3° at a distance of 1.5 meters.

Because this object will not, in practice, be recognized from all directions, it is necessary to keep track of the target's rotation. To manage the tracking, a very simple process model was used together with a Kalman filter, to estimate the target's orientation and angular velocity. This solution is unfortunately insufficient, because the estimation quickly derails when no measurements are obtained. For the operation to work adequately, one has to incorporate this model into a larger process model where both the observer's and the target's movements are measured and estimated.

4.2 Limitations

One of the largest obstacles to making target tracking useful in practice is the LMS200's resolution, in both range and angle, which has proven to be way too low. Increasing the resolution of the LMS200 is possible, but then the field of view reduces to 90°.

In the practical tests, the object has been placed 1.5 meters directly in front of the SICK laser, and longer distances do not seem reasonable. If an accurate measurement of the object is obtained, the rotation may be determined with an accuracy of ±3° at the mentioned distance.

It should be pointed out that this system only works when the vehicles move on plane surfaces; otherwise it is necessary to design a system capable of vertical tracking of the target as well.

A possible improvement to some of the problems will be presented in chapter 5.

CHAPTER 5
Using mirrors to enhance the target tracking

In an attempt to improve the accuracy when tracking objects with the SICK laser range scanner, a mirror was mounted on each side of the laser. The idea is to be able to see an object from three different angles during one sweep of the laser. The field of view will be restricted, but that might be accepted if the accuracy of the measurements improves.

The structure holding the mirrors was only built to test the concept; its precision is not optimized in any way. It was only built to get an understanding of the result.

5.1 Recalculation of laser data

Since some of the laser pulses reflect in the mirrors, the data from the laser range scanner will be misleading. Therefore the measured coordinates must be recalculated. Figure 5.2 shows a scan from the laser with the mirrors mounted. The laser has been positioned in a corridor with a small box in front of it, and it is obvious that a lot of measurements are misplaced.

Figure 5.3 shows how the laser pulses reflect in the mirrors and make the detector believe that the measured coordinate $N(n_x, n_y)$ is located behind the mirror at $M(m_x, m_y)$.

Figure 5.1: The SICK laser range scanner with a structure holding two mirrors. The mirrors are rotatable horizontally, which enables an arbitrary area to be covered.

Figure 5.2: A scan taken with the mirrors mounted beside the laser range scanner. The laser detector misinterprets the measurements because the pulses reflect in the mirrors.

Figure 5.3: The detector in the laser can not determine whether the laser pulse has been reflected in a mirror or not. Therefore the laser detector is made to believe that the actual measured coordinate N is located at M.

To calculate the true coordinate $N(n_x, n_y)$, it must be determined which pulses reflect in the mirrors. This is decided by equation (5.1): the laser pulse is reflected when γ lies within the intervals

$$0 < \gamma < \arctan\left(\frac{|p_{y2}|}{|p_{x2}|}\right), \qquad \pi - \arctan\left(\frac{|p_{y4}|}{|p_{x4}|}\right) < \gamma < \pi \tag{5.1}$$

The mirror endpoint coordinates are given by (5.2):

$$\begin{aligned}
(p_{x1}, p_{y1}) &= (D_R,\ 0) \\
(p_{x2}, p_{y2}) &= (D_R + L_R\cos(\varepsilon_R),\ L_R\sin(\varepsilon_R)) \\
(p_{x3}, p_{y3}) &= (-D_L,\ 0) \\
(p_{x4}, p_{y4}) &= (-(D_L + L_L\cos(\varepsilon_L)),\ L_L\sin(\varepsilon_L))
\end{aligned} \tag{5.2}$$

When it is decided which pulses are reflected in the mirrors, the coordinate transformation proceeds as follows.

The coordinate where the laser pulse is reflected in the mirror is calculated by equation (5.3), where a and b are the components of the equation for the straight line described by the extension of the laser pulse. Parameters c and d apply to the mirrors. These equations are given in (5.4) and (5.5).

$$(x, y) = \left(\frac{d - b}{a - c},\ \frac{ad - bc}{a - c}\right) \tag{5.3}$$

$$y_{laser} = ax + b = \frac{y_2 - y_1}{x_2 - x_1}\,x + \frac{y_1 x_2 - y_2 x_1}{x_2 - x_1} \tag{5.4}$$

$$y_{mirror} = cx + d = \frac{y_4 - y_3}{x_4 - x_3}\,x + \frac{y_3 x_4 - y_4 x_3}{x_4 - x_3} \tag{5.5}$$

The values of $x_1, \ldots, x_4$ and $y_1, \ldots, y_4$ in equations (5.4) and (5.5) are given by (5.6) when reflection occurs in the right mirror:

$$\big((x_1, y_1), (x_2, y_2)\big) = \big((0, 0), (m_x, m_y)\big), \qquad \big((x_3, y_3), (x_4, y_4)\big) = \big((p_{x1}, p_{y1}), (p_{x2}, p_{y2})\big) \tag{5.6}$$

When the laser pulse reflects in the left mirror, $x_1, \ldots, x_4$ and $y_1, \ldots, y_4$ are decided according to (5.7):

$$\big((x_1, y_1), (x_2, y_2)\big) = \big((0, 0), (m_x, m_y)\big), \qquad \big((x_3, y_3), (x_4, y_4)\big) = \big((p_{x3}, p_{y3}), (p_{x4}, p_{y4})\big) \tag{5.7}$$

The remaining calculations determine the actual measured coordinate $N(n_x, n_y)$. This is done according to equation (5.8):

$$\begin{bmatrix} n_x \\ n_y \end{bmatrix} = \begin{bmatrix} \cos(\phi) & -\sin(\phi) \\ \sin(\phi) & \cos(\phi) \end{bmatrix} \begin{bmatrix} m_x - x \\ m_y - y \end{bmatrix} + \begin{bmatrix} x \\ y \end{bmatrix} \tag{5.8}$$

In (5.8), $\phi = 2\sigma_R$ with $\sigma_R = \varepsilon_R - \gamma$ when reflection occurs in the right mirror. When the pulse hits the left mirror, $\phi = -2\sigma_L$, with $\sigma_L = \varepsilon_L - \pi + \gamma$.
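A condensed sketch of the recalculation for the right mirror, following (5.3)-(5.5) and (5.8). The mirror geometry values and the test pulse are invented numbers, and the left mirror is handled analogously with the sign conventions just given.

```java
// Sketch of the right-mirror recalculation, equations (5.3)-(5.5) and
// (5.8). The mirror geometry (D_R, L_R, eps_R) and the test pulse are
// invented numbers; the left mirror is handled analogously with
// phi = -2*sigma_L and sigma_L = eps_L - pi + gamma.
public class MirrorRecalc {
    static final double D_R = 0.10;                  // mirror start, x-offset [m]
    static final double EPS_R = Math.toRadians(60);  // mirror tilt

    // True coordinate N from the apparent coordinate M = (mx, my),
    // measured along a pulse at angle gamma that hits the right mirror.
    static double[] trueCoordinate(double mx, double my, double gamma) {
        // line of the pulse through the origin: y = a x + b, see (5.4)/(5.6)
        double a = Math.tan(gamma), b = 0.0;
        // line of the mirror through (D_R, 0) with slope tan(eps_R): y = c x + d
        double c = Math.tan(EPS_R), d = -c * D_R;
        // intersection (5.3): the reflection point on the mirror
        double x = (d - b) / (a - c);
        double y = (a * d - b * c) / (a - c);
        // reflect M about the mirror line, (5.8), with phi = 2*sigma_R
        double phi = 2 * (EPS_R - gamma);
        double dx = mx - x, dy = my - y;
        double nx = Math.cos(phi) * dx - Math.sin(phi) * dy + x;
        double ny = Math.sin(phi) * dx + Math.cos(phi) * dy + y;
        return new double[] { nx, ny };
    }

    public static void main(String[] args) {
        double gamma = Math.toRadians(20); // pulse angle, inside interval (5.1)
        double range = 1.0;                // apparent range [m]
        double mx = range * Math.cos(gamma), my = range * Math.sin(gamma);
        double[] n = trueCoordinate(mx, my, gamma);
        System.out.printf("apparent (%.2f, %.2f) -> true (%.2f, %.2f)%n", mx, my, n[0], n[1]);
    }
}
```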

Figure 5.4 shows the transformed information from the laser range scanner.

Figure 5.4: The measurements reflected in the mirrors have been recalculated, and the corridor and small box are visualized correctly. Some "Errors" still remain, which originate from faulty measurements of the actual structure holding the mirrors.

Most of the coordinates in Figure 5.4 have been correctly transformed, but there are still some obviously misplaced measurements labeled ”Errors” in the figure. These arise due to faulty measurements on the actual structure holding the mirrors.

In the implementation of this system, consideration has been given to pulses which reflect at the mirror endpoints $(p_{x2}, p_{y2})$ and $(p_{x4}, p_{y4})$. These pulses might reflect arbitrarily and result in faulty transformations; these measurements have therefore been ignored.


5.2 Some limitations

When utilizing this system of mirrors to measure targets, the useful field of view is very restricted. Figure 5.5 visualizes the limited areas where the mirrors come into use. The shaded area around A is where it is possible to achieve three measurements of the target. In the shaded area around B, only two measurements are achievable. In Figure 5.5 there are two lines labeled "Zonal boundary", which mark the closest range where it is possible to get accurate measurements. The boundary lies somewhere in the range of 0.3 to 1.0 meters. This limitation at close ranges simply depends on the properties of the laser pulse used in the SICK laser range scanner.

Figure 5.5: The area where it is possible to achieve two or three measurements of the target is strictly limited. Furthermore, a "Zonal boundary" restricts the target tracking at ranges closer than approximately 0.7 meters.

The problem with the restricted field of view could be solved by mounting servo motors and making the mirrors rotatable. This makes it possible to adjust the effective search area to a desired location. Another solution might be to motorize the rotation of the whole SICK laser range scanner. To make the target tracking more efficient, the mirrors should be placed farther from the laser, which will put the laser's mirrored positions farther apart. This calls for larger mirrors though, which might result in a clumsy structure.

CHAPTER 6
Evaluation and future work

6.1 Evaluation

During the ongoing work, a pair of obvious problems were discovered which actually would have made it impossible to implement the desired system in practice. The first one has been mentioned earlier, and originates in the resolution of the SICK range scanning laser, which is too low to be useful. The second will at least destroy the prospects for the "Run Ahead of Me" algorithm, which requires access to a network covering a larger area. It might be possible to use a direct connection between the vehicles, but that is a subject dealt with in another project and is as yet inoperable.

All the mathematical models have proven to work in simulations covering a number of different cases, but more testing is necessary to determine if any model is better than the others.

Recognizing objects using a SICK laser range scanner is possible, but the object recognition will not be practical until a unit with much better resolution is used. A solution could be to use a larger object to track, but it would probably become bulky before it is useful as a target.

Using mirrors to create virtual lasers seems to have large potential, and the system worked as expected, though it has not yet been tested thoroughly. It is as yet impossible to decide whether the use of mirrors is enough to compensate for the laser's poor resolution. As mentioned earlier the resolution is adjustable, but then the mirrors will be worthless because of the limited field of view. Consideration has to be given to the poor properties of ordinary mirrors when used together with lasers, because they might create artifacts during reflections. The design engineer also has to be careful with how the LMS200 detector decides which reflection is valid, because one transmitted laser pulse may create multiple reflections.


Unfortunately, a complete working system became too complex to be handled by a single person within the available time. The programming proved to be too intricate to be handled by anyone other than the person who has written most of the software for the MICA wheelchair, which is specialized for that system.

6.2 Future work

To reach the original goal of the project, a lot of work remains.

If the system shall be dealt with thoroughly, the first step is to test the mathematical models extensively, both in simulations and then implemented in a real-time system. To establish a firm mathematical foundation, a close error analysis should be performed.

To give the object tracking using lasers another dimension, the mirrors described in chapter 5 could be angled vertically, which would make it possible to watch an object at three different levels. This opens up the opportunity to choose recognizable objects to track.

An inspiring thought is to motorize the mirrors, making it possible to angle them both horizontally and vertically. Together with a high resolution laser range scanner, this might give rise to new areas of application for this type of optical system.

APPENDIX A
Kalman filter

The Kalman filter is a recursive solution used to filter linear time discrete data. The method was published in 1960 by R.E. Kalman [13] and has been subject to development ever since. Today it is used extensively in the fields of automation and navigation. In practice the Kalman filter is a set of equations which effectively estimate the state of a system. Because the Kalman filter is very powerful it has become rather popular, and one reason for its popularity might be the possibility to estimate past, present and future states even though the exact process model is unknown. The Kalman filter consists of four steps of action, in the following order: initiation, prediction, observation and estimation.

To be able to use the Kalman filter, some requirements have to be met. First of all, a process model is needed which describes the system's dynamic behaviour. In some cases an input model is also necessary; this input model describes how sensor measurements are incorporated in the process. In addition, the design engineer needs to have a rough knowledge about the "strength" of disturbances and uncertainties in the sensors.

An example of a linear time discrete dynamic state-space model, required for the Kalman filter, is presented in (A.1):

$$X(k+1) = F(k)X(k) + G(k)U(k) + B(k)W(k) \tag{A.1}$$

where

X(k) : State vector at time step k.

X(k+1) : State vector at time step k+1.

F(k) : State transition matrix.

G(k) : Input matrix.

U(k) : Input vector.

B(k) : Process noise matrix.

W(k) : Process noise vector.


A.1 Initialization

The initiation of the filter consists of determining an original estimate. In short, this may be seen as the system's starting values.

A.2 Prediction

Given the estimated states, the new predicted states are calculated according to (A.2):

$$X(k+1|k) = F(k)X(k|k) + G(k)U(k) + B(k)W(k) \tag{A.2}$$

The predicted covariance is given by (A.3):

$$C(k+1|k) = F(k)C(k|k)F(k)^T + B(k)Q(k)B(k)^T \tag{A.3}$$

where Q(k) is the process noise covariance matrix and R(k), used below, is the observation noise matrix.

A.3 Observation

Data from the sensors are called observations, and are labelled Z(k), according to (A.4):

$$Z(k) = H(k)X(k) + V(k) \tag{A.4}$$

where Z(k) is the sensor observation, H(k) the sensor transition matrix and V(k) the observation noise vector.

The observations can be predicted by using the predicted states, X(k+1|k), according to (A.5):

$$Z(k+1|k) = H(k+1)X(k+1|k) \tag{A.5}$$

A prediction is almost always afflicted with errors. This error is called the innovation and is labelled v(k+1). The innovation is the difference between the predicted observation, Z(k+1|k), and the actual observation, $Z_0(k+1)$, according to (A.6). If the system is set up correctly, the innovation will be normally distributed with zero mean.

$$v(k+1) = Z_0(k+1) - Z(k+1|k) \tag{A.6}$$


A.4 Estimation

If the measurement is accepted, the Kalman filter gain K(k+1) is calculated, which is used as a correction factor when creating the new estimated state vector, X(k|k), and covariance, C(k|k). In the case where the innovation is too large, the measurement will be discarded and the gain, K(k+1), will be set to zero, which means that the prediction will be kept as the estimate.

There are numerous ways to calculate the Kalman filter gain, K(k+1). The version shown in equation (A.7) corresponds to the "optimal" gain in this case.

$$K(k+1) = C(k+1|k)H(k+1)^T\big(H(k+1)C(k+1|k)H(k+1)^T + R(k+1)\big)^{-1} \tag{A.7}$$

The estimated state vector and covariance are calculated in (A.8) and (A.9) respectively.

$$X(k|k) = \big(I - K(k+1)H(k+1)\big)X(k+1|k) + K(k+1)Z_0(k+1) \tag{A.8}$$

$$C(k|k) = \big(I - K(k+1)H(k+1)\big)C(k+1|k)\big(I - K(k+1)H(k+1)\big)^T + K(k+1)R(k+1)K(k+1)^T \tag{A.9}$$

The loop now starts over and a new prediction will be calculated.
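A minimal scalar sketch of the four steps, for a random-walk state measured directly (F = 1, H = 1, no input); the innovation gating described above is omitted, and the noise levels and measurements are invented test values.

```java
// Minimal scalar sketch of the four Kalman filter steps for a random
// walk state with direct measurement (F = 1, H = 1, no input). Noise
// levels and measurements are invented test values.
public class ScalarKalman {
    public static void main(String[] args) {
        double q = 0.01, r = 0.25;   // process / observation noise variances
        double x = 0.0, c = 1.0;     // initiation: starting estimate and covariance
        double[] z = { 0.9, 1.1, 1.0, 0.95 }; // observations

        for (double zk : z) {
            // prediction, (A.2) and (A.3): random walk, so F = 1
            double xPred = x;
            double cPred = c + q;
            // observation, (A.5) and (A.6): innovation from the measurement
            double innovation = zk - xPred;
            // estimation, (A.7)-(A.9): gain, corrected state and covariance
            double k = cPred / (cPred + r);
            x = xPred + k * innovation;
            c = (1 - k) * cPred * (1 - k) + k * r * k; // Joseph form, as in (A.9)
            System.out.printf("estimate %.3f, covariance %.4f%n", x, c);
        }
    }
}
```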

A.5 Extended Kalman Filter (EKF)

In many cases a linear model is not available. The solution to this is to linearize the model around a working point. The state vector then consists of the deviations from these working points, labeled δX(t). After discretization, equation (A.1) is rewritten as (A.10), which describes a linearized time discrete system:

$$\delta X(k+1) = F(k)\delta X(k) + G(k)\delta U(k) + B(k)W(k) \tag{A.10}$$

Equation (A.11) shows the linearized observation model:

$$\delta Z(k) = H(k)\delta X(k) + V(k) \tag{A.11}$$

The Kalman filter otherwise has the same structure as described earlier.

APPENDIX B
Linearization and discretization

B.1 Linearization

Linearization is most easily performed by calculating the Jacobian [14]. The Jacobian is a multidimensional form of derivative which, for example, differentiates all of the system's equations with respect to all its state variables.

Given a system according to (B.1):

$$Y = F(X) \tag{B.1}$$

By partial derivation, the system will be given by (B.2):

$$\delta Y = \frac{\partial F}{\partial X}\,\delta X \tag{B.2}$$

The quadratic matrix containing the partial derivatives, $\frac{\partial F}{\partial X}$, is the Jacobian. This might also be written as (B.3):

$$\delta Y = J(X)\,\delta X \tag{B.3}$$

The system has now been linearized around a working point, and the values in the resulting matrix will only be valid in a small interval around that point. If the states vary more, recalculation of the elements in the Jacobian is necessary.

The final, linear system may now be written according to (B.4):

$$\begin{aligned}
\dot{x}(t) &= Ax(t) + Bu(t) \\
y(t) &= Cx(t) + Du(t)
\end{aligned} \tag{B.4}$$

When this type of system shall be implemented, it is not unusual to realize that the measurements are afflicted with time delays. This implies that the actual system is described by equation (B.5), where τ denotes the time delay:

$$\begin{aligned}
\dot{x}(t) &= Ax(t) + Bu(t - \tau) \\
y(t) &= Cx(t) + Du(t)
\end{aligned} \tag{B.5}$$
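When working out the Jacobian by hand is error-prone, a central-difference approximation is a practical alternative. The sketch below differentiates the one-vehicle equations (3.2) with respect to the states; the perturbation size is an invented tuning value.

```java
// Sketch: central-difference approximation of the Jacobian of the
// one-vehicle model (3.2) with respect to its states (x1, y1, theta1).
// The perturbation size h is an invented tuning value.
public class NumericJacobian {
    // f in x-dot = f(x, u) for model (3.2); u = {V1, omega1}
    static double[] f(double[] s, double[] u) {
        return new double[] { u[0] * Math.cos(s[2]), u[0] * Math.sin(s[2]), u[1] };
    }

    static double[][] jacobian(double[] s, double[] u) {
        double h = 1e-6;
        double[][] J = new double[3][3];
        for (int j = 0; j < 3; j++) {
            double[] sp = s.clone(), sm = s.clone();
            sp[j] += h; sm[j] -= h;
            double[] fp = f(sp, u), fm = f(sm, u);
            for (int i = 0; i < 3; i++) J[i][j] = (fp[i] - fm[i]) / (2 * h);
        }
        return J;
    }

    public static void main(String[] args) {
        double[][] J = jacobian(new double[] {0, 0, Math.PI / 4}, new double[] {1.0, 0.1});
        // expected third column: (-V sin(theta), V cos(theta), 0)
        System.out.printf("dxdot/dtheta = %.4f (exact %.4f)%n", J[0][2], -Math.sin(Math.PI / 4));
    }
}
```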

B.2 Discretization

When a Kalman filter shall be implemented, and thereby calculated by a computer, the state-space model has to be expressed in discrete time. This is performed by using Zero-Order-Hold sampling [15]. With periodic sampling with period h, the time discrete state-space model in (B.6) is acquired:

$$\begin{aligned}
x(kh + h) &= \Phi x(kh) + \Gamma u(kh) \\
y(kh) &= Cx(kh)
\end{aligned} \tag{B.6}$$

If the system is afflicted with time delays, τ, the time discrete state-space model will instead be described by equation (B.7):

$$\begin{aligned}
x(kh + h) &= \Phi x(kh) + \Gamma_0 u(kh) + \Gamma_1 u(kh - h) \\
y(kh) &= Cx(kh)
\end{aligned} \tag{B.7}$$

$\Phi$, $\Gamma$, $\Gamma_0$ and $\Gamma_1$ are calculated according to equations (B.8)-(B.11):

$$\Phi = e^{Ah} \tag{B.8}$$

$$\Gamma = \int_0^h e^{As}\,ds\; B \tag{B.9}$$

$$\Gamma_0 = \int_0^{h-\tau} e^{As}\,ds\; B \tag{B.10}$$

$$\Gamma_1 = e^{A(h-\tau)}\int_0^{\tau} e^{As}\,ds\; B \tag{B.11}$$

In equation (B.8), $e^{Ah}$ is calculated according to (B.12),

$$e^{Ah} = I + A\Psi \tag{B.12}$$

where Ψ is decided by (B.13):

$$\Psi = Ih + \frac{Ah^2}{2!} + \frac{A^2h^3}{3!} + \ldots + \frac{A^ih^{i+1}}{(i+1)!} + \ldots \tag{B.13}$$

$e^{As}$ is calculated in the same way as $e^{Ah}$, with the sampling period h replaced by the integration variable s.

Systems with longer time delays than the sampling interval h are manageable, but require further modifications. The same applies when a system has internal time delays. The process of discretization is more extensively covered in [15].
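A sketch of the series computation in (B.12)-(B.13) for a small example system. Note that Ψ also equals the integral of $e^{As}$ from 0 to h, so Γ = ΨB follows directly from (B.9). The truncation at ten terms is an arbitrary choice.

```java
// Sketch: Zero-Order-Hold discretization of x-dot = A x + B u using the
// truncated series (B.12)-(B.13): Phi = I + A*Psi, and Gamma = Psi*B,
// since Psi equals the integral of e^{As} ds from 0 to h. Truncating
// at ten terms is an arbitrary choice for this illustration.
public class ZohDiscretize {
    static double[][] multiply(double[][] X, double[][] Y) {
        int n = X.length;
        double[][] Z = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++) Z[i][j] += X[i][k] * Y[k][j];
        return Z;
    }

    public static void main(String[] args) {
        double[][] A = { {0, 1}, {0, 0} }; // example system: double integrator
        double h = 0.1;
        int n = A.length;

        // Psi = I h + A h^2/2! + A^2 h^3/3! + ...   (B.13)
        double[][] psi = new double[n][n];
        double[][] term = new double[n][n];
        for (int i = 0; i < n; i++) term[i][i] = h; // first term: I h
        for (int i = 1; i <= 10; i++) {
            for (int r = 0; r < n; r++)
                for (int c = 0; c < n; c++) psi[r][c] += term[r][c];
            term = multiply(A, term);               // next term: A^i h^{i+1}/(i+1)!
            for (int r = 0; r < n; r++)
                for (int c = 0; c < n; c++) term[r][c] *= h / (i + 1);
        }

        // Phi = I + A Psi   (B.12)
        double[][] phi = multiply(A, psi);
        for (int i = 0; i < n; i++) phi[i][i] += 1.0;

        System.out.printf("Phi = [[%.3f, %.3f], [%.3f, %.3f]]%n",
                phi[0][0], phi[0][1], phi[1][0], phi[1][1]); // expect [[1, 0.1], [0, 1]]
    }
}
```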

APPENDIX C
Notations and abbreviations

C.1 Notations

α  Angle to robot 2 in robot 1's system of coordinates
β  Angle between robots
r  Distance between robots
λ  Angle to fixed navigation point
s  Distance to fixed navigation point
V  Robot velocity
θ  Robot orientation in global system of coordinates
ω  Robot rate of rotation
ξ  Distance to robot 2, along the X-axis of robot 1's system of coordinates
η  Distance to robot 2, along the Y-axis of robot 1's system of coordinates
T  Time step / sampling interval
τ  Time delay
γ  Angle of transmitted laser pulse
R(θ)  Rotation matrix

X(k) State vector at time step k.

X(k + 1) State vector at time step k+1.

F (k) State transition matrix.

G(k) Input matrix.

U (k) Input vector.

B(k) Process noise matrix.

W (k) Process noise vector.

R(k) Observation noise matrix.

Q(k) Process noise covariance matrix.

Z(k) Sensor observation.

H(k) Sensor transition matrix.

V (k) Observation noise vector.

C.2 Abbreviations

CAN  Controller Area Network
EISLAB  Embedded Internet System Laboratory at Luleå University of Technology
MICA  Mobile Internet Connected Assistant
Mini LuSAR  A 1:5 scale model car
LuSAR  Luleå Semi Autonomous Robot
PWM  Pulse Width Modulation
RS232  A common interface standard for data communications equipment
WLAN  Wireless Local Area Network

REFERENCES

[1] LMS200, Laser Measurement Systems. URL: http://ecatalog.sick.com/Products/ProductFinder/product.aspx?finder=Produktfinder&pid=9168&lang=en: SICK AG, 2006-01-21.

[2] R. Bosch, What Is CAN. URL: http://www.semiconductors.bosch.de/de/20/can/1-about.asp: Bosch GmbH, 2006-01-22.

[3] CAN/RS232 Dongle. URL: http://www.can232.com/index.htm: Lawicel AB, 2006-01-22.

[4] CANDIP. URL: http://www.candip.com/candip-m162.htm: Lawicel AB, 2006-01-22.

[5] AVR 8-bit RISC Microcontroller. URL: http://www.atmel.com/dyn/products/product_card.asp?part_id=2023: ATMEL Corporation, 2006-01-22.

[6] Maxon DC Motor. URL: http://www.maxonmotor.com/docsx/Download/catalog_2005/Pdf/05_083_e.pdf: Maxon Motor AG, 2006-01-22.

[7] Robot Electronics. URL: http://www.robot-electronics.co.uk/shop/Motor_Controllers2008.htm: Devantech Ltd., 2006-01-22.

[8] Hitec Analog Servos. URL: http://www.hitecrcd.com/homepage/product_fs.htm: Hitec RCD USA INC., 2006-01-22.

[9] Embedded PC Modules. URL: www.pc104.org: PC/104 Embedded Consortium, 2006-01-22.

[10] Red Hat Operating System. URL: www.redhat.com: Red Hat Inc., 2006-01-22.

[11] Matlab Technical Computing. URL: http://www.mathworks.com/products/matlab/: The Mathworks Inc., 2006-01-22.
