DEGREE PROJECT IN MECHANICAL ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2017

Comparative study on road and lane detection in mixed criticality embedded systems

Evaluation of performance on Alten's mixed criticality platform

SANEL FERHATOVIC


Comparative study on road and lane detection in mixed criticality embedded systems

Evaluation of performance on Alten's mixed criticality platform

Master Thesis

Royal Institute of Technology Stockholm, Sweden

Sanel Ferhatovic

sanelf@kth.se

June 22, 2017


Master of Science Thesis MMK2017:156 MDA614
Comparative study on road and lane detection in mixed criticality embedded systems

Sanel Ferhatovic

Approved: 2017-06-22
Examiner: Martin Törngren
Supervisor: De-Jiu Chen
Commissioner: Alten
Contact person: Detlef Scholle


Abstract


One of the main challenges for advanced driver assistance systems (ADAS) is the environment perception problem. One factor that makes ADAS hard to implement is the large number of different conditions that have to be handled. The main sources of condition diversity are lane and road appearance, image clarity issues and poor visibility conditions. A review of current lane detection algorithms has been carried out, and based on it a lane detection algorithm has been developed and implemented on a mixed criticality platform. The thesis is part of a larger group project in which five master thesis students create a demonstrator for autonomous platoon driving. The final lane detection algorithm consists of preprocessing steps where the image is converted to gray scale and everything except the region of interest (ROI) is cut away. OpenCV, a library for image processing, has been used for edge detection and the Hough transform. An algorithm for error calculation has been developed which compares the center and direction of the lane with the actual vehicle position and direction during real experiments. The lane detection system is implemented on a Raspberry Pi which communicates with a mixed criticality platform through UART. The demonstrator vehicle achieves a measured speed of 3.5 m/s with reliable lane keeping using the developed algorithm. The bottleneck appears to be the lateral control of the vehicle rather than the lane detection; further work should focus on control of the vehicle and possibly on extending the ROI to detect curves at an earlier stage.

Keywords: Lane detection, Image processing, Raspberry Pi 3, Platoon driving


Master of Science Thesis MMK2017:156 MDA614
Jämförande studie av olika väghållningsalgoritmer (Comparative study of different lane keeping algorithms)

Sanel Ferhatovic

Approved: 2017-06-22
Examiner: Martin Törngren
Supervisor: De-Jiu Chen
Commissioner: Alten
Contact person: Detlef Scholle

Sammanfattning (Swedish abstract, translated)

A major challenge for advanced driver assistance systems (ADAS) is the problem of perceiving the surrounding environment. One factor that makes ADAS hard to implement is the large number of different conditions that must be handled. The main sources of diversity are the appearance of the lane and the road, poor visibility conditions and unclear images. A review of current lane detection algorithms has been carried out, and based on it a lane detection algorithm has been developed and implemented on a mixed criticality system. The thesis is part of a larger group project consisting of five master's students creating a demonstrator for autonomous platoon driving. The final lane detection algorithm consists of preprocessing steps where the image is converted to grayscale and everything except the region of interest is cropped away. OpenCV, a library for image processing, has been used for edge detection and the Hough transform. An algorithm that compares the center and direction of the lane with the vehicle's actual position and direction has been developed and is used in experiments for control of the vehicle. The lane detection algorithm has been implemented on a Raspberry Pi that communicates with a mixed criticality platform through UART. The demonstrator vehicle can achieve a measured speed of 3.5 m/s with reliable lane keeping using the developed algorithm. The bottleneck appears to be the lateral control of the vehicle rather than the lane detection; further work should focus on control of the vehicle and possibly on extending the field of view to detect curves at an earlier stage.

Nyckelord (keywords): Lane detection, Image processing, Raspberry Pi 3, Platoon driving


ACKNOWLEDGEMENTS

I would first like to thank my academic supervisor De-Jiu Chen for the guidance and support throughout the project. The examiner Martin Törngren also deserves recognition for his involvement in the project. Further, I would like to express gratitude to my industrial supervisor Detlef Scholle for giving me the opportunity to write my thesis at Alten and for the support during the project.

I would also like to thank the team that I have had the privilege to be part of. Emil, Erik, Daniel and Hanna, it has truly been a pleasure to work with you and you have all been a great contribution to this thesis.

Finally, I must express my very profound gratitude to my family: first of all my parents, Zlatko and Edina, my sister Amela, and my girlfriend Jenny, for providing me with unfailing support and continuous encouragement throughout my years of study and through the process of researching and writing this thesis. This accomplishment would not have been possible without them.

Thank you.

Sanel Ferhatovic


Contents

Acknowledgements

Abbreviations

1 Introduction
1.1 Background
1.2 Problem statement
1.3 Purpose
1.4 Goals
1.4.1 Team goal
1.4.2 Individual goal
1.5 Use case
1.6 Delimitations
1.7 Method description
1.8 Ethical considerations and sustainability

2 Literature review
2.1 SAE level
2.2 Lane keeping
2.2.1 Modalities for environment perception
2.3 Flowchart of image processing
2.3.1 Image acquisition
2.3.2 Preprocess
2.3.3 Feature extraction
2.3.4 Road model
2.3.5 Model fitting
2.3.6 Time integration
2.3.7 Lateral control
2.4 Platooning
2.5 OpenCV
2.6 Mixed-criticality systems
2.6.1 Scheduling

3 Implementation
3.1 Alten's mixed criticality platform specification
3.2 Raspberry Pi
3.2.1 Pi camera
3.3 System architecture
3.4 System identification
3.5 Lane detection algorithm
3.6 Lateral control
3.7 Integration with Alten's mixed criticality platform
3.7.1 Tasks
3.7.2 Priority

4 Results
4.1 Evaluation of algorithm speed

5 Discussion
5.1 Demonstrator
5.2 Lane keeping system
5.3 Camera input
5.4 Research questions

6 Future work
6.1 Zynq-7000 integration
6.2 Image acquisition
6.3 Variable speed


List of Figures

1.1 Software architecture of Alten's mixed criticality platform
2.1 Flowchart of a general lane detection system
2.2 Road model in two different perspectives
3.1 Raspberry Pi 3
3.2 System architecture
3.3 Angle measurement
3.4 PWM and angle correlation
3.5 Developed system
3.6 Input image before and after grayscale filter
3.7 Thresholded image using Canny filter
3.8 Detected lines
3.9 False positive line
3.10 Angle calculation
3.11 Sequence diagram
4.1 Average and WCET timings of the different elements in the algorithm
4.2 The modified RC-car


List of Tables

2.1 SAE levels description
3.1 Raspberry Pi 3 Model B specifications
3.2 Steering angle and direction
4.1 System end-to-end time


Abbreviations

Abbreviation  Description
ADAS    Advanced Driver Assistance Systems
ASIL    Automotive Safety Integrity Level
ECU     Electronic Control Unit
GPS     Global Positioning System
HDV     Heavy Duty Vehicle
HT      Hough Transform
LIDAR   Light Detection and Ranging
MC      Mixed Criticality
ROI     Region Of Interest
RTOS    Real-Time Operating System
SAE     Society of Automotive Engineers
V2V     Vehicle-to-Vehicle
WCET    Worst-Case Execution Time


Chapter 1

Introduction

This chapter introduces the subject of road and lane detection, and of mixed criticality, to the reader: the problems that exist in the field and the purpose of this degree project.

1.1 Background

There is a global trend to make vehicles more autonomous in order to reduce human error and workload. Most modern vehicles include safety-critical systems where a failure can cause great damage to both humans and the environment. When the implementation is a safety-critical system, it is important to be aware of the risks that are present and how to cope with them. Another increasingly important trend in the design of real-time and embedded systems is the integration of components with different criticality onto the same hardware platform [14].

The EMC2 project [2] is an initiative to drive the development of "Embedded Multi-Core systems for Mixed Criticality applications in dynamic and changeable real-time environments". One focus of the project is on automotive applications, for example Advanced Driver Assistance Systems (ADAS). ADAS are systems designed to help the driver and to increase safety when driving. One example is the lane detection system that helps keep the car within its lane [11]. What distinguishes mixed criticality systems from regular systems is that two components with different criticality run on the same hardware platform. One example could be running the ADAS and the infotainment system of the vehicle on the same electronic control unit (ECU).

"Road vehicles - functional safety", ISO 26262, is an international standard for the automotive industry regarding the electronic systems of vehicles. ISO 26262 defines four automotive safety integrity levels: ASIL A, B, C and D. ASIL A has the lowest integrity requirements and ASIL D the highest.

The problem when implementing two applications of different criticality on the same platform is that both applications need to be certified to the level of the application with the highest safety requirements. In the case of integrating the ADAS and the infotainment system on the same hardware platform, one would need to certify the infotainment system to ASIL D, which is a very tedious and thus expensive task.

It is, however, possible to isolate the two applications using a technique called virtualization, where applications run on virtual hardware rather than on bare metal. This approach would not require any extra certification work compared to running the applications on separate ECUs.

The work performed in this thesis project aims at implementing a lane detection system for an autonomous vehicle on a mixed criticality platform.

1.2 Problem statement

Today there is a lot of research on ADAS, where everything from lane departure warning (LDW) to fully autonomous driving is investigated [11], [26], [22].

However, there is a need for research on the integration of safety-critical and non-safety-critical applications on a mixed criticality platform where the two applications are isolated from each other using virtualization. For example, AUTOSAR, a software development partnership founded by major players in the automotive industry, addresses mixed criticality systems in the sense that it recognizes that such standards must be supported on its platforms [14] [1].

This thesis investigates different techniques for road and lane detection and how they can be implemented on the real-time operating system (RTOS) of a mixed criticality system.


1.3 Purpose

The purpose of the literature study is to give insight into the subject and answer the research question:

1. How does a modern lane keeping system function, and how do different systems compare to each other?

After the literature study is done, the information will be analyzed, conclusions will be drawn, and hopefully the above research question can be answered.

From there an implementation phase will begin, where the lane detection algorithm is implemented on Alten's mixed criticality platform, which consists of two operating systems on a Xilinx Zynq-7000 board. Figure 1.1 shows the software architecture of Alten's mixed criticality platform. ARM TrustZone is a hardware isolation mechanism that does not allow non-secure software in the Linux OS to access the secure memory resources available to the RTOS. This guarantees that Linux cannot interfere with the RTOS, called FMP [27]. SafeG is a hypervisor which decides which operating system should run and when. SHAPE is a cloud for communication between different nodes; currently SHAPE only works for the Linux OS.

Figure 1.1: Software architecture of Alten's mixed criticality platform

The goal of the implementation phase is to evaluate how well a practical implementation of a lane detection system can perform on a mixed criticality platform. Questions to be answered after the implementation:


1. How can we guarantee the performance of the lane detection system?

2. What frame rate can a lane detection system achieve on Alten's mixed criticality platform?

1.4 Goals

In this project, five master thesis students work together on the same demonstrator. This means that there are both individual goals and a team goal, which do not necessarily align with each other.

1.4.1 Team goal

The team goal is to develop a demonstrator consisting of two small RC vehicles that group into a vehicle convoy, where the first vehicle follows a path marked on the ground and the second vehicle follows the first, to demonstrate platoon driving.

1.4.2 Individual goal

The individual goal and expected outcome of this thesis is a study of existing road and lane detection systems, followed by a comparison of different systems to determine which is suitable for implementation in the safety-critical system that the group is developing. The last part of the project is to implement the lane keeping algorithm on the RTOS of the mixed criticality system to demonstrate the functionality.

1.5 Use case

This thesis project is part of a larger project conducted by Alten which aims at developing a complete prototype of an intelligent transport system (ITS). In this ITS there will be two vehicles showing the concept of vehicle platooning. The vehicles will be fully autonomous and connected to the infrastructure. The project is part of a large EU project called SafeCOP, which stands for Safe Cooperating Cyber-Physical Systems using Wireless Communication. Wireless communication makes it possible to send commands to the vehicles in the platoon. For example, if the conditions are satisfied, e.g. a good connection, the ITS can send a command to the vehicles to engage platoon mode. When they are in platoon mode and one vehicle detects a slippery road surface, it can communicate this to the rest of the platoon, and the distance between the vehicles can be increased to some predefined safety distance.

The vehicles' main computing board will contain components of different criticality, which means that it is a mixed-criticality system. In a mixed criticality system it is important that the non-safety-critical components cannot in any way interfere with the safety-critical components.

This thesis project focuses on the perception problem and the lateral control of the vehicles. The goal is to develop a system that can keep the vehicles within the lane boundaries while maintaining a satisfactory forward speed. The investigation is of an experimental nature, and an evaluation of whether this platform is appropriate for future use in similar applications will be carried out.

1.6 Delimitations

The thesis is produced at Alten and is constrained to the Xilinx Zynq-7000.¹ The scope of this work extends to investigating lane detection and platoon driving for small vehicles operating in a constructed environment. Machine learning approaches to lane detection are not within the scope of the project. The results will to some extent depend on the platform that the use case is built upon. To handle objects on the track, a collision avoidance system will be developed, initially consisting only of an emergency brake of the vehicle.

1.7 Method description

This degree project complies with the applied research methodology, where information is gathered from accepted and well-known sources and applied to solve specific problems [17]. To gain knowledge in the field of lane detection systems, a literature study will be performed, which will guide the development direction of the project.

According to Håkansson [17], an experimental research method is often used and well suited when investigating system performance. In this degree project the data will be measured and the results evaluated on the developed demonstrator.

¹ https://www.xilinx.com/products/silicon-devices/soc/zynq-7000.html


1.8 Ethical considerations and sustainability

The work performed in this thesis project is carried out in as ethical and sustainable a way as possible. As always when dealing with automation, it is important to consider how the system will be used and how the people involved will be affected. One big concern with automated vehicles is how decisions are made in situations where accidents occur; in fact, these will not even be accidents, but rather the outcomes of decisions made by the computer in the car. The era of automated vehicles will also introduce completely new security threats, as the computers in the vehicles can be hacked and taken over, which can lead to injury and death. Only when the system has been confirmed to be safe and secure can it be deployed in real production vehicles.


Chapter 2

Literature review

This chapter presents the literature study that has been carried out and introduces the different topics to the reader.

2.1 SAE level

When talking about advanced driver assistance systems (ADAS) and autonomous vehicles, it is important to define what the terms actually mean. SAE International is a professional association and standards developing organization for the transport industries. It has developed a standard for autonomous driving called "J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems". This standard defines six levels of driving automation, from no automation to full automation, and is described in more detail in table 2.1 below [6] [7].


Table 2.1: SAE levels description

Level 0 (No Automation): The human does all the work.

Level 1 (Driver Assistance): The vehicle helps out by performing a single task. One example is cruise control, where the car holds a reference speed.

Level 2 (Partial Automation): The vehicle can take over both steering and acceleration/braking at the same time, but the human driver must monitor the environment and handle all remaining driving tasks.

Level 3 (Conditional Automation): The first level that is considered an automated driving system. The vehicle is able to make decisions such as overtaking other vehicles and navigating. Humans are only the fall-back option: if something fails, the vehicle will request the human to intervene.

Level 4 (High Automation): The vehicle is able to operate entirely by itself for the first time; there does not need to be any human behind the wheel as a fall-back. What distinguishes this level from full automation is that it is restricted to a geographically limited area such as a town center, a company area or a college campus.

Level 5 (Full Automation): Fully automated driving is reached. The vehicle can handle all operating modes. There is no steering wheel and no pedals; just let the vehicle know where you want to go.

When developing automated vehicles there are many functional safety requirements that must be fully verified and validated. One important area is the vehicle actuation systems, which are entirely controlled by electronics. As the actuators are controlled by electronic systems, they are strongly linked to other so-called by-wire systems; two examples are drive-by-wire and brake-by-wire. These systems have no mechanical coupling between the different elements, but instead use sensors that read the position of the brake pedal or steering wheel. In the development of these systems, aspects such as redundancy of the ECUs, sensors, actuators and power supply are required [24]. The standard ISO 26262 is the most recent standard concerning functional safety of electrical systems in the automotive industry. The standard requires the determination of safety goals as part of hazard analysis and risk assessment. Once all the safety goals are defined, functional safety requirements can be formulated.

According to Stolte [24], measures that go beyond the state of the art of modern production vehicles need to be adopted to ensure the functional safety of automated vehicles. The authors point out that despite the importance of series deployment of automated vehicles, there is not much discussion about safety requirements within the ITS community.

2.2 Lane keeping

In its basic setting the lane detection problem seems simple: the only thing needed is to detect the host lane, and only for a short distance ahead.

To a human, driving may seem like a simple process involving two basic tasks: keeping the vehicle on the road and avoiding collisions. In reality driving is not so trivial; a driver needs to continuously analyze the road scene and choose and execute the appropriate maneuvers for the current situation. To help drivers with these tasks, Driver Assistance Systems (DAS) have been developed. These systems can, for example, help the driver perceive blind areas on the road. An extension is the Advanced Driver Assistance System (ADAS), which can perform tasks such as lane following, lane keeping assistance, lane departure warning, lateral control, intelligent cruise control, collision warning and ultimately autonomous driving.

The main bottleneck in the development of ADAS is the perception problem, which has two elements: road and lane perception, and obstacle detection. This degree project focuses on the first element and investigates the current state-of-the-art research.

A simple Hough transform-based algorithm solves the problem in 90% of highway cases [11]. But the impression that the problem is easy is misleading, and building a useful system requires a huge R&D effort. One of the reasons is the high reliability demands: in order to be useful, the system needs to reach very low error rates. The exact number of acceptable false alarms for a lane departure warning is still a subject of research [11]. At 15 frames per second, one false alarm per hour means only one error in 54,000 frames.

One factor that makes ADAS hard to implement on a large scale is the large number of different conditions that have to be handled. The main sources of condition diversity are:

• lane and road appearance

• image clarity issues

• poor visibility conditions

When driving on freeways or large highways, the diversity of road scene appearance is minimized, which makes it easier to implement lane detection functions and ultimately automated driving. This is one of the reasons why long-haul trucks are the focus of a large portion of the research concerning autonomous driving.

2.2.1 Modalities for environment perception

In this section the modalities used for road and lane detection are described in more detail. Today several different sensing modalities are used for lane detection; some examples are monocular vision, stereo vision, LIDAR, IMU data and GPS.

Monocular vision

Vision is the most prominent research area, because road signs and markings are made for human vision. Vision sensors provide good position estimation on the road without the need for any other modalities. However, there are situations where vision sensors simply cannot perform well, for example in extreme weather conditions or when driving off-road. In such situations it is possible to fuse data from other sensor modalities to provide a better position estimate, which is why LIDAR and GPS are important complements to vision for reaching fully autonomous driving.


The monocular vision system is frequently used for road and lane detection. It is, simply put, one camera mounted on the vehicle. The required resolution can be derived from

N_p = Cd/w

where N_p is the number of horizontal pixels, C is the camera field-of-view width in radians, d is the detection distance in meters, and w is the lane mark width in meters [11].
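As a quick sanity check of this relation, here is a worked example with assumed, illustrative numbers (none of these values come from the thesis):

```python
import math

# Worked example of N_p = C * d / w (illustrative values).
C = math.radians(40)  # camera field-of-view width: 40 degrees in radians
d = 30.0              # detection distance in meters (assumption)
w = 0.10              # lane mark width in meters (assumption)

N_p = C * d / w       # horizontal pixels needed to resolve the lane mark
print(f"Required horizontal resolution: about {N_p:.0f} pixels")  # ~209
```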

When humans drive, we continuously look at the road boundaries, the lane markings and the road texture, among other things. These road boundaries are designed to be visible to human drivers in all driving conditions. Self-driving vehicles that are supposed to share the road with human drivers will therefore most likely have to rely on the same perceptual cues as humans.

LIDAR

Light detection and ranging (LIDAR) is a modality that has been used to a large extent in the development of autonomous vehicles for research purposes. A LIDAR measures the environment around the vehicle in 3D: it sends out light pulses and measures the time for them to come back. As it is an active light source, it does not depend on good natural lighting the way a regular camera does.

LIDAR sensors can perform well in certain situations, for example detecting road boundaries in rural areas [11], but they are not well suited for multilane roads without vision data. As LIDAR only measures 3D structure, it cannot detect road markings, although some research has been done on intensity measurements with LIDAR [18] [20], which would make it possible to detect lane markings to some extent. One huge drawback of this modality is that the sensors are still very expensive and thus not yet an alternative for regular passenger vehicles.

Stereo imaging

Stereo imaging is the use of two cameras instead of one in order to obtain 3D information about the surroundings. It is a step in between monocular vision and LIDAR: it is much cheaper to implement than LIDAR but generally performs worse in terms of accuracy and reliability. A stereo imaging system also generally requires more processing power and is more prone to errors than LIDAR.


GPS, IMU

The Global Positioning System (GPS) is currently widely used in navigation systems. According to Wing [25], current commercial consumer-grade GPS receivers can achieve an accuracy of 1.5-5 m. This is sufficient for map navigation when a human drives the vehicle, but it is simply not accurate enough to fully control a vehicle based on GPS alone. GPS also gives no information about the environment, e.g. other vehicles or pedestrians. This means that GPS will always need to be supplemented by a camera or a LIDAR.

One problem with GPS is reliability. GPS requires a connection to enough satellites to function properly, and that connection can be lost for many reasons. Some loss of connection can be tolerated by using an inertial measurement unit (IMU). With the IMU it is possible to estimate the current position and integrate it with the GPS to get a more reliable estimate when the satellite connection is weak.

2.3 Flowchart of image processing

Figure 2.1: Flowchart of a general lane detection system


2.3.1 Image acquisition

The image is typically acquired from a camera mounted at the center of the vehicle.

2.3.2 Preprocess

Preprocessing prepares the image for the next steps, for example in terms of image resolution: a lower resolution is often preferred due to the high computational load that high-resolution images bring [26]. Everything that is not part of the region of interest (ROI) is often removed, which typically means removing the region above the horizon. Gray-scale images are often preferred over color images due to the reduced data load [26]. Removal of unwanted disturbances such as shadows is also often done in the preprocessing. As mentioned in the section about road models, inverse perspective mapping is commonly done in the preprocessing to get rid of the perspective effect [12].

2.3.3 Feature extraction

There are several features that can be used for road and lane detection; the most typical are color, texture and edges. For structured roads with clear lane markings, edges are the most common feature used for lane detection. An edge is defined as the gradient of the intensity function [26]. The output of an edge-based method is a set of candidates for lane boundaries, since edge-based methods find where the image brightness changes sharply. There are some well-known edge detection methods (Prewitt, Roberts, Sobel), but one method, the Canny edge detector, stands out and is still, 30 years after it was first developed, considered a state-of-the-art edge detector; some even consider it an optimal edge detection algorithm [13]. The Canny edge detector starts by applying a Gaussian filter to smooth the image and remove noise. The next step is to scan the image for intensity gradients with a gradient operator and to apply a filter that suppresses noise but keeps edges. The image is then analyzed with non-maximum suppression to remove pixels that are not actual edges. The next step is to threshold the image, and lastly the detection is finalized by a hysteresis threshold which suppresses all weak edges that are not connected to other edges.
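A minimal OpenCV sketch of this chain; the input file name and the two hysteresis thresholds are illustrative assumptions, since suitable values depend on the scene:

```python
import cv2

# Load an input frame and convert it to gray scale.
img = cv2.imread("road.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Gaussian smoothing to suppress noise before gradient computation.
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# cv2.Canny performs the gradient computation, non-maximum suppression
# and hysteresis thresholding internally; 50/150 are illustrative thresholds.
edges = cv2.Canny(blurred, 50, 150)
cv2.imwrite("edges.png", edges)
```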

The Hough transform (HT) is then applied to the image in order to determine whether there is a line through a given edge pixel. The Hough transform was originally invented in 1962 and has since been refined into the form that is universally used today. The HT works by converting the white pixels of the thresholded input image into points in a parameter space, meaning they are represented by a direction θ and a distance r instead of by x and y.

Each point in the parameter space has a count, and each edge pixel in the image space has a vote. Edge pixels with the same θ and r values are assumed to define a line in the image. To compute the frequency of each line, θ and r are quantized into a number of so-called bins. When all the edge pixels have been converted to parameter space, these bins can be analyzed, and the ones with the most votes correspond to the most prominent lines in the image. Usually a threshold is set: counts that do not exceed the threshold are ignored, and only the most prominent lines are accepted [16].
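A sketch of applying OpenCV's standard Hough transform to the thresholded edge image; the bin sizes (1 pixel in r, 1 degree in θ) and the vote threshold are assumptions:

```python
import cv2
import numpy as np

edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)  # thresholded edge image

# Bins are 1 pixel wide in r and 1 degree wide in theta; only lines whose
# bin collects more than 100 votes are accepted (illustrative threshold).
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=100)

if lines is not None:
    for line in lines:
        r, theta = line[0]
        print(f"line: r = {r:.1f} px, theta = {np.degrees(theta):.1f} deg")
```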

2.3.4 Road model

The majority of lane detection systems initially propose a model of the road. This model can be something simple such as straight lines, or more complex splines. Some researchers assume that the road is two parallel lines in the image; this can be done after an operation called inverse perspective mapping, which produces a bird's-eye view [12]. Another common method is to assume that the lanes have a common vanishing point where both lanes meet, and to use that as a reference for the lines in the image [26] [19]. Both perspectives can be seen in figure 2.2 below.

Figure 2.2: Road model in two different perspectives: (a) inverse perspective mapping, (b) vanishing point
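A sketch of the inverse perspective mapping operation with OpenCV; the four source points outlining the road trapezoid are assumptions that depend entirely on the camera mounting and would have to be calibrated:

```python
import cv2
import numpy as np

img = cv2.imread("road.jpg")  # hypothetical input image
h, w = img.shape[:2]

# Trapezoid in the camera image assumed to cover the road surface ...
src = np.float32([[w * 0.40, h * 0.60], [w * 0.60, h * 0.60],
                  [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])
# ... mapped to a rectangle, which removes the perspective effect.
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

M = cv2.getPerspectiveTransform(src, dst)
birdseye = cv2.warpPerspective(img, M, (w, h))
cv2.imwrite("birdseye.png", birdseye)
```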


2.3.5 Model fitting

As mentioned in the road model section, a road model is very often fitted to the information observed in the feature extraction step. The extracted data typically contain both inliers, i.e. data that can be fitted to a line, and outliers, i.e. data that cannot be fitted onto the same line [23]. Assuming that the extracted data can be fitted to one of the models chosen initially, several different approaches have been proposed for model fitting [11]. Some researchers use the least squares method, a mathematical procedure for fitting a set of observed values to a function. The idea behind the method is to construct the function so that the sum of the squared differences between the observed values and the fitted values is minimized [3]. Other research proposes the use of "RANdom SAmple Consensus", known as the RANSAC algorithm [18] [10] [21]. This method is stated to be superior to the least squares method due to its ability to fit a line to the inliers only, without any influence from the outliers. The disadvantage of this method is that the computational time is usually longer than for least squares and depends strongly on the number of outliers in the image [5].
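A minimal sketch (not the thesis's code) contrasting the two approaches on synthetic points: least squares is pulled away by outliers, while RANSAC repeatedly fits a line through a random pair of points and keeps the one with the most inliers:

```python
import numpy as np

rng = np.random.default_rng(0)

def least_squares_line(pts):
    # Fit y = k*x + m minimizing the sum of squared residuals over ALL points.
    k, m = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return k, m

def ransac_line(pts, iters=200, tol=1.0):
    best, best_inliers = None, 0
    for _ in range(iters):
        p1, p2 = pts[rng.choice(len(pts), 2, replace=False)]
        if p1[0] == p2[0]:
            continue  # vertical sample, skip
        k = (p2[1] - p1[1]) / (p2[0] - p1[0])
        m = p1[1] - k * p1[0]
        # Count points whose vertical distance to the line is within tol.
        inliers = np.sum(np.abs(pts[:, 1] - (k * pts[:, 0] + m)) < tol)
        if inliers > best_inliers:
            best, best_inliers = (k, m), inliers
    return best

# Synthetic lane boundary: points on y = 2x + 1 plus gross outliers.
x = np.linspace(0, 10, 50)
inliers = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.2, 50)])
outliers = rng.uniform([0, -30], [10, 60], (15, 2))
pts = np.vstack([inliers, outliers])

print("least squares:", least_squares_line(pts))  # skewed by the outliers
print("RANSAC:       ", ransac_line(pts))         # close to (2, 1)
```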

2.3.6 Time integration

The last step that is important for a reliable lane detection system is the ability to incorporate knowledge from previous frames. This is done to increase the reliability of the system and to decrease the computational load.

2.3.7 Lateral control

The lateral control task makes use of all the knowledge gathered in the previous steps to actually steer the vehicle and keep it within the lane boundaries.

2.4 Platooning

As traffic intensity increases around the world, the problem of traffic congestion comes with it. In and around large cities there are already huge problems due to heavy traffic. The situation leads to increasing emissions of greenhouse gases such as carbon dioxide. One way to ease the problem of traffic congestion and reduce the fuel consumption of vehicles is vehicle platooning. The concept of vehicle platooning is to reduce the distance between the vehicles on the road and thus reduce the wind resistance acting on each vehicle. Today most of the research covers heavy duty vehicles (HDV), where trucks form a platoon on highways. Modern commercially available driver assistance systems, such as adaptive cruise control, use radar measurements of the relative distance and velocity to the preceding vehicle and adjust the vehicle's own velocity automatically. This strategy works sufficiently well if the distance between the vehicles is long enough, due to the delay from measurement of the preceding vehicle to actuation of accelerating or braking torque at the wheels.

One effort to reduce the distance between the vehicles in the platoon while maintaining the safety requirements is to send a brake signal through wireless communication to the other vehicles in the platoon. This allows faster actuation of the brakes compared to using radar alone. Alam et al. [9] state that if two identical vehicles are in a platoon on a highway driving 90 km/h, they can hold a minimum relative distance of 1.2 m without endangering safety. In a scenario with a worst-case delay of 500 ms in the system, a minimum distance of 2 m should be kept. This is significantly shorter than the distance a modern adaptive cruise control keeps to the preceding vehicle.

2.5 OpenCV

OpenCV is an open source computer vision and machine learning software library with a large number of optimized algorithms for computer vision. A few areas where OpenCV is used are face recognition, object detection, tracking of moving objects and lane detection. Because OpenCV is a BSD-licensed product, companies all over the world are free to use and modify the code. Companies like Google, Microsoft, Intel, Honda and Toyota employ the library in various applications. OpenCV has C++, C, Python, Java and MATLAB interfaces and supports all the major operating systems [4].

In this project several OpenCV functions have been used, mainly in the image processing part. More information about the implementation is given in the next chapter.


2.6 Mixed-criticality systems

A trend in modern embedded systems is to take advantage of all the processing power available in a multicore processor chip. This can be done by combining different subsystems on one chip, which makes it possible to achieve higher CPU utilization and thus reduce hardware cost and power consumption. Sometimes these embedded systems contain components of different criticality. For the automotive industry that this thesis focuses on, one example could be running the ADAS and the infotainment system of the vehicle on the same electronic control unit (ECU). If these two components are integrated onto a single hardware platform, the response time of the ADAS must not be affected by the infotainment system. By scheduling these two components onto the same computing platform, one creates a mixed-criticality system.

Each industry (automotive, aerospace, railway, etc.) has certain safety and security regulations that mixed criticality systems need to comply with. There are several different criticality levels in each industry, depending on factors such as the environment of operation and the danger to human life [27]. According to Thane [8], safety can be defined as the absence of unacceptable risk: a system is safe if the risk associated with the system is acceptable.

2.6.1 Scheduling

Every task that is implemented has a worst-case execution time (WCET). This is the maximum amount of time that the task can take to execute on the hardware platform. The WCET is used to guarantee that the temporal constraints will not be violated.

A scheduler can be either preemptive or non-preemptive. A preemptive scheduler can interrupt a task during execution if a task with higher priority is ready for execution; a non-preemptive scheduler will wait for the task to complete [15]. Some of the most common scheduling algorithms are:

Fixed priority

Every task has a fixed priority assigned by the developer, and the processor executes the highest-priority task among those that are ready to be executed.


Earliest deadline first

Earliest deadline first is a dynamic scheduling algorithm that always checks which task in the queue has the shortest time to its deadline and executes that task next.

Rate-monotonic

Rate-monotonic scheduling is a static-priority scheduler where the priorities of the tasks are assigned according to the task cycle time. The highest priority is given to the task with the shortest cycle time and the lowest priority to the task with the longest cycle time.

Deadline-monotonic

Deadline-monotonic priority assignment is, like rate-monotonic, a static-priority scheduler, with the difference that priority is assigned according to deadline instead of cycle time. The task with the shortest deadline gets the highest priority.
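An illustrative sketch (the task set and its timings are invented, not the demonstrator's) showing that rate-monotonic and deadline-monotonic assignment differ only in the sorting key, together with the classic Liu and Layland utilization test for rate-monotonic scheduling:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    wcet: float      # worst-case execution time
    period: float    # cycle time (rate-monotonic priority key)
    deadline: float  # relative deadline (deadline-monotonic priority key)

tasks = [  # illustrative task set, not the demonstrator's real timings
    Task("lateral_control",  wcet=2.0, period=20.0, deadline=10.0),
    Task("communication",    wcet=1.0, period=10.0, deadline=10.0),
    Task("data_aggregation", wcet=0.5, period=50.0, deadline=5.0),
]

# Rate-monotonic: shorter period -> higher priority.
rm = sorted(tasks, key=lambda t: t.period)
# Deadline-monotonic: shorter relative deadline -> higher priority.
dm = sorted(tasks, key=lambda t: t.deadline)

print("RM priority order:", [t.name for t in rm])
print("DM priority order:", [t.name for t in dm])

# Liu & Layland sufficient schedulability test for rate-monotonic:
# total utilization must not exceed n * (2^(1/n) - 1).
n = len(tasks)
u = sum(t.wcet / t.period for t in tasks)
bound = n * (2 ** (1 / n) - 1)
print(f"U = {u:.3f}, RM bound = {bound:.3f}, schedulable: {u <= bound}")
```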


Chapter 3

Implementation

This chapter presents the implementation phase of the thesis project and explains the main concepts of the implementation.

The literature review provided good insight into the subject of lane detection systems and paved the way for the implementation phase of the project. However, one important issue was identified regarding image acquisition on the Zynq-7000 board, due to the lack of camera drivers. A decision was made to use a separate node for the image processing part, as described in more detail in this chapter.

3.1 Alten's mixed criticality platform specification

The mixed criticality platform that Alten uses can be either a ZedBoard or an EMC2 board. The hardware of both boards is built around a Xilinx Zynq-7000, which contains a dual-core ARM Cortex-A9 processor as well as a programmable logic part (FPGA).

3.2 Raspberry Pi

This section describes the single-board computer used for lane detection in this degree project. The chosen board is a Raspberry Pi 3, a credit-card-sized computer. The third generation of the Raspberry Pi has seen some major hardware updates compared to earlier versions. The one used in this project has the following specifications:


Table 3.1: Raspberry Pi 3 Model B specifications

Model: Raspberry Pi 3 Model B
Operating system: Raspbian Jessie
Processor: ARM Cortex-A53, 1.2 GHz, 64-bit quad-core
Hardware ports: 40 GPIO pins, 4 USB ports, HDMI port, Ethernet port, 3.5 mm audio jack, camera interface, display interface, micro SD card slot

The main computer in this project is the Zynq-7000 board, and thus it would have been preferable to use it for the lane detection as well. But with no camera drivers available for the Zynq-7000 board, it would be difficult to manage the image acquisition. A search for hardware more suitable for the task was carried out, and the Raspberry Pi was chosen because it is widely used in computer vision projects and because of its affordable price.

Figure 3.1: Raspberry Pi 3


3.2.1 Pi camera

The Raspberry Pi camera module was chosen as the image acquisition device for this project. The camera module has a five-megapixel image sensor and a maximum resolution of 2592 x 1944 pixels. This camera was chosen because it is made specifically for the Raspberry Pi and is very easy to use. The unit used has a very wide-angle lens, which turned out to entail both advantages and disadvantages. The positive side of a wide-angle lens is the wide image the camera captures, letting it see the road at almost all angles. The negative side is that the image is quite distorted at the edges, which makes the angle calculations less accurate.


3.3 System architecture

Figure 3.2 below shows all the components implemented on the demonstrator vehicle.

Figure 3.2: System architecture

The focus of this thesis is the lateral control task and, more specifically, lane detection. The process of implementing the algorithms is described later in this chapter.

3.4 System identification

To learn how the system behaves when fed different PWM values, an experimental setup was developed, and 136 different PWM inputs with their corresponding steering angle outputs were measured.


Figure 3.3: Angle measurement

The mean value of the angle outputs was calculated, and a first-order polynomial was fitted to the data points using the least squares method. The fitted line is shown in figure 3.4 and has the equation

y = -14.26x + 195.96
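The same kind of first-order least squares fit can be reproduced with NumPy; the arrays below are illustrative placeholders, not the 136 measured pairs:

```python
import numpy as np

# Placeholder measurements: mean steering angle per PWM value.
# The real data set consisted of 136 measured input/output pairs.
pwm = np.array([7.0, 7.5, 8.0, 8.5, 9.0])          # assumed PWM inputs
angle = np.array([96.1, 88.9, 81.9, 74.8, 67.6])   # assumed mean angles (deg)

# First-order polynomial fitted with least squares, as in the thesis.
k, m = np.polyfit(pwm, angle, 1)
print(f"angle = {k:.2f} * pwm + {m:.2f}")  # thesis obtained -14.26x + 195.96
```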


Figure 3.4: PWM and angle correlation

The angles in figure 3.4 are defined as in table 3.2 below.

Table 3.2: Steering angle and direction

Angle   Direction
< 90    Right turn
90      Straight forward
> 90    Left turn

The calculated line equation is used in the steering control system to evaluate the steering angle against the identified line angles; this process is described later in this chapter. It is used in combination with the position control to eliminate the deviation from the centerline and thus improve the performance of the system.


3.5 Lane detection algorithm

This section describes the lane detection algorithms that have been implemented on the demonstrator of this project. The algorithm implemented so far consists of the steps shown in figure 3.5.

Figure 3.5: Developed system

1. The lane detection process starts with grabbing a frame from the Raspberry Pi camera and applying a few preprocessing steps to the image.

2. The first step is to crop the image so that it only contains the region of interest (ROI). This is a camera setting that can be predefined so that the camera only grabs the ROI, removing the need to crop after the frame is grabbed.

3. The following step is to convert the image to gray scale to prepare it for the coming operations. Figure 3.6 below shows how the acquired image looks in the first stages of the lane detection process.


Figure 3.6: Input image before and after grayscale filter: (a) input image, (b) converted to grayscale

The gray scale image is the input to the Canny edge detection function. As described in the literature review, the output of the Canny function is a thresholded image where all pixels that are part of edges are set to white and all other pixels are set to black. Using the OpenCV library function Canny, figure 3.7 is obtained.
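A condensed sketch of this capture-and-preprocess chain, assuming the Pi camera is exposed through the V4L2 driver and that the ROI is the lower half of the frame (the thesis instead configures the ROI in the camera itself):

```python
import cv2

cap = cv2.VideoCapture(0)  # Pi camera via the V4L2 driver (assumption)
ret, frame = cap.read()
if ret:
    h = frame.shape[0]
    # Step 2: keep only the region of interest; the lower half is assumed here.
    roi = frame[h // 2:, :]
    # Step 3: convert to gray scale for the edge detection stage.
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Illustrative Canny thresholds; the thesis does not list its values.
    edges = cv2.Canny(gray, 50, 150)
    cv2.imwrite("roi_edges.png", edges)
cap.release()
```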


Figure 3.7: Thresholded image using Canny filter

This thresholded image is used as input to the Hough transform function used for line detection.

The two figures below show the input image with lines drawn in different colors; the colors indicate what kind of line each is. The red lines are all the lines that the lane detection algorithm finds. From red lines that are close to each other, blue lines indicating the center of each road marking are computed. The green line shows the center of the road lane. The concept behind the lane detection algorithm is described below figure 3.8.


Figure 3.8: Detected lines

There were a lot of problems when developing the algorithm, in terms of false positives when evaluating the lines in the image. A solution was developed to eliminate the false lines and keep only the lines that are part of a road lane.

The concept behind this solution is to group lines in the image that are very close to each other. For instance, if several lines are found on both the left and the right lane marking of the road, they form two groups, because the lines within each marking are close to each other. If there are other lines in the image that are not very close to these two lane markings, they are put in separate groups. There can be multiple groups, depending on how many false lines are detected. In the end the groups are evaluated, and the two groups containing the most lines are regarded as the lanes. The short red horizontal line visible in figure 3.8 is the threshold distance for lines to be grouped together.

This gives a very robust lane detection algorithm that disregards false positives from the Hough transform.
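A simplified sketch of the grouping idea; the thesis does not publish its exact rule, so the distance measure (horizontal midpoints of the segments) and the threshold below are assumptions:

```python
def mid_x(line):
    x1, _, x2, _ = line
    return (x1 + x2) / 2.0

def group_lines(lines, dist_thresh=40):
    """lines: iterable of (x1, y1, x2, y2) segments from the Hough transform.
    Groups segments whose horizontal midpoints lie within dist_thresh pixels
    of a group's mean; the two largest groups are taken as lane markings."""
    groups = []
    for line in sorted(lines, key=mid_x):
        for g in groups:
            mean = sum(mid_x(l) for l in g) / len(g)
            if abs(mid_x(line) - mean) < dist_thresh:
                g.append(line)       # close enough: same marking
                break
        else:
            groups.append([line])    # otherwise start a new group
    # False positives end up in small groups and are discarded here.
    return sorted(groups, key=len, reverse=True)[:2]

# Example: two dense clusters (left/right markings) plus one stray line.
segments = [(100, 0, 110, 50), (104, 0, 112, 50), (98, 0, 108, 50),
            (300, 0, 290, 50), (305, 0, 296, 50),
            (200, 0, 205, 50)]                     # false positive
left, right = group_lines(segments)
print(len(left), len(right))  # 3 2
```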

What happens if a new line is introduced in the image that is not part of the road lanes?

Figure 3.9: False positive line

In the image above one extra line has been introduced and is detected by the system. But thanks to the concept of grouping lines and evaluating which ones are most prominent, the system disregards the new line and indicates the center and direction of the road correctly.

3.6 Lateral control

Now that the lines are detected, the vehicle needs to be controlled in some way using the information from the lane detection. So far, all of these steps are done on the Raspberry Pi, thanks to its easy camera integration and its OpenCV support.


Two errors are calculated, and the vehicle is controlled based on them. The positional error is calculated by splitting the image into two halves and assuming that there is one lane marking in each half. A centerline is calculated from the two lane markings, and by measuring the distance from the centerline to the middle of the image, the error is calculated as

error_pos = center of camera - position of vehicle

The other error used for control is the actual angle at which the vehicle is travelling forward, compared to the expected angle when looking at the road.

Figure 3.10: Angle calculation

In the captured images from the camera, the angle of the centerline can be calculated using the known x and y values. The x in the picture is calculated as

x = |x1 - x2|

and y as

y = |y1 - y2|

and the angle is obtained using

tan(α) = y/x


and thus

α = arctan(y/x)

This angle α is compared to the expected angle from the system identification, and an angle error is calculated as

error_angle = α - (-14.26u + 195.96)

The two errors are sent to the Zynq-7000 via serial communication, where the lateral control is scheduled and executed on the mixed criticality platform. A PID controller for the steering servo was developed using the z-transform. The output signal u to the steering servo is calculated as:

A PID controller for the steering servo is developed using z transform. The output signal u to the steering servo is calculated as:

u= P W Mmin+ P W Mmax

2 − aangle∗ errorangle+ apos∗ errorpos
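A sketch of how the error terms and steering output could be computed from the formulas above, and how the errors might be sent over UART with pyserial; the gains, PWM range, serial device name and message format are all assumptions:

```python
import math
import serial  # pyserial; Pi-side sender (device name is an assumption)

PWM_MIN, PWM_MAX = 5.0, 11.0  # assumed servo PWM range
A_ANGLE, A_POS = 0.5, 0.02    # assumed controller gains

def angle_error(x1, y1, x2, y2, u):
    # alpha = arctan(y/x) from the centerline endpoints (thesis formula).
    alpha = math.degrees(math.atan2(abs(y1 - y2), abs(x1 - x2)))
    # Expected angle from the identified line y = -14.26u + 195.96.
    return alpha - (-14.26 * u + 195.96)

def steering_output(err_angle, err_pos):
    # u = (PWM_min + PWM_max)/2 - a_angle*error_angle + a_pos*error_pos;
    # in the thesis this computation runs on the Zynq-7000, not on the Pi.
    return (PWM_MIN + PWM_MAX) / 2 - A_ANGLE * err_angle + A_POS * err_pos

err_pos = 160 - 172  # camera center minus detected vehicle position (pixels)
err_ang = angle_error(140, 0, 180, 96, u=8.0)

# Send both errors to the Zynq-7000 over UART as a comma-separated line.
with serial.Serial("/dev/ttyAMA0", baudrate=115200, timeout=1) as port:
    port.write(f"{err_ang:.2f},{err_pos}\n".encode())
```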

3.7 Integration with Alten's mixed criticality platform

3.7.1 Tasks

As shown in figure 3.11, there are currently four scheduled tasks running on the real-time operating system.

Figure 3.11: Sequence diagram

The tasks implemented on the board are described below.


Communication

In this task all vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication is done. The information that is sent is:

• Current state of vehicle

• Current speed of vehicle

• Distance to vehicle in front

• Longitudinal control signal

• Lateral control signal

Lateral control

In this task the lateral control described earlier is executed.

Data aggregation

The data aggregation task combines sensor data in order to detect anomalies, which could mean slippage of one wheel, or other information that can be useful to share with other vehicles within the ITS.

Longitudinal control

This task controls the distance to the vehicle in front in the platoon. The distance is measured using a LIDAR.

3.7.2 Priority

All tasks are scheduled with fixed priorities. The priority of the tasks is listed below from high to low:

1. Data aggregation
2. Communication
3. Lateral control
4. Longitudinal control


Chapter 4

Results

This chapter will present the results to the reader.

4.1 Evaluation of algorithm speed

Table 4.1 shows the end-to-end time of the lane detection system currently implemented on the Raspberry Pi. Since the lane detection system needs to run in real time, the speed of the algorithm is of great importance. The measurement includes the time for the image processing steps as well as the control structure and the steering signal that goes to the servo motor controlling the steering. The mean fps is calculated as 1/(mean time).

Table 4.1: System end-to-end time

Image resolution   384x288   640x480
Mean time (s)      0.057     0.0786
Mean fps           17.54     12.724
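A sketch of the kind of harness that produces such mean and worst-case numbers (illustrative, not the thesis's measurement code); note that the observed maximum over a finite number of runs only estimates the true WCET:

```python
import time

def measure(stage, arg, runs=200):
    """Return (mean, worst observed) execution time of stage(arg) in seconds.
    The maximum over a finite sample only estimates the true WCET."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        stage(arg)
        times.append(time.perf_counter() - t0)
    return sum(times) / len(times), max(times)

def dummy_stage(n):  # stand-in for a real stage such as Canny or Hough
    return sum(i * i for i in range(n))

mean_t, worst_t = measure(dummy_stage, 100_000)
print(f"mean {mean_t * 1e3:.2f} ms, observed WCET {worst_t * 1e3:.2f} ms")
print(f"fps if this were the whole pipeline: {1 / mean_t:.1f}")
```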


Figure 4.1: Average and WCET timings of the different elements in the algorithm: (a) average timings, (b) WCET timings

Figure 4.1 shows how much time the different elements of the lane detection algorithm take. In the test, the smaller resolution of 384x288 was used. Figure 4.1a makes it clear that the image processing parts consume the largest amount of time. Figure 4.1b shows the worst-case execution times for the same elements.

Figure 4.2: The modified RC-car

Figure 4.2 above shows the demonstrator vehicle with the mounted LIDAR and the Raspberry Pi with its camera. The main board behind the Raspberry Pi is a ZedBoard with the Xilinx Zynq-7000 chip.


Chapter 5

Discussion

In this chapter the results produced during this thesis project are discussed.

5.1 Demonstrator

First and foremost, the performance of the lane keeping system on the demonstrator vehicle can be discussed. It works very well at low speeds and keeps the vehicle within the lanes without much oscillation around the centerline. The problems arise when the speed of the vehicle rises; naturally, the curves are the part of the road where the vehicle starts having problems and cannot fully keep within the lanes. The demo vehicle drives at a constant speed with no regard to whether there is a sharp curve ahead or not. A human driver would naturally slow down before the curve and speed up again after it. It would be interesting to integrate the speed of the vehicle as a parameter in the lateral control; the correlation should be that when the angle of the centerline increases, the speed of the vehicle decreases.

5.2 Lane keeping system

The lane keeping method that is implemented can be improved and developed to include even more of the functionalities of the state-of-the-art lane detection systems described in the literature review. It has been shown on the mixed criticality platform that non-critical components do not disturb or interfere with critical ones [27]. It would be very interesting to implement the edge detection filters on the FPGA of Alten's hardware platform, which could reduce the computation time of the task.


5.3 Camera input

One thing that has been tricky is the wide-angle lens of the Raspberry Pi camera. The lens is convex, which results in a distorted image at the edges. The effect is called barrel distortion, more commonly known as a fisheye effect. This makes the angle calculations from the acquired images less accurate.

The decision to use the wide-angle lens camera also has advantages. The most important one, and the deciding factor for using it, is that the camera on the demo vehicle is mounted close to the ground and requires a wide field of view to be able to see the lanes of the road.

5.4 Research questions

One of the research questions stated in the beginning was how the performance of the lane detection system can be guaranteed. One thing that can be said is that the deadline of the lateral control task is always met.

During tests it has been obvious that speed has a great impact on lane keeping performance. At lower speeds the vehicle has no problem keeping a good position on the road and manages the curves well, but at higher speeds it sometimes loses track in the curves. I believe that this is more of a control problem than a lane detection problem, as the system seems to capture the lane properly but cannot manage to control the vehicle fast enough.


Chapter 6

Future work

This chapter will contain thoughts and ideas for future work building on this thesis.

6.1 Zynq-7000 integration

The main and obvious direction in which this project should proceed is to integrate the whole lane detection system on the Zynq-7000 platform to get one uniform system. This would not guarantee a better system, though, as the image processing part is the most computation-heavy part of the lane detection system and is currently done in parallel with the tasks running on the Zynq-7000 board.

6.2 Image acquisition

One thing that could potentially improve the vehicle's ability to keep within the lane at higher speeds is to change the camera to one that distorts the image less.

6.3 Variable speed

As mentioned in the discussion of the lane keeping system, the curves were the problem; the car had no trouble keeping within the lane on straight parts of the track. One reason is that the speed of the car was constant regardless of whether a curve or a straight part of the track was ahead. This parameter would be very interesting to integrate into the demo, e.g. making the speed of the vehicle dependent on the error from the centerline.


Bibliography

[1] AUTOSAR: AUTomotive Open System ARchitecture. http://www.autosar.org. Accessed: 2017-02-28.
[2] EMC2: Embedded Multi-Core systems for Mixed Criticality applications in dynamic and changeable real-time environments. https://www.artemis-emc2.eu/. Accessed: 2017-02-28.
[3] Least Squares Regression. http://www.itl.nist.gov/div898/handbook/pmd/section1/pmd141.htm. Accessed: 2017-05-30.
[4] OpenCV. http://opencv.org/about.html. Accessed: 2017-04-18.
[5] RANSAC. http://soe.rutgers.edu/~meer/UGRAD/cv9lsransac.pdf. Accessed: 2017-05-28.
[6] SAE International. https://www.sae.org/misc/pdfs/automated_driving.pdf. Accessed: 2017-03-14.
[7] SAE International. https://www.sae.org/news/3550/. Accessed: 2017-03-14.
[8] Safety Integrity. http://swell.weebly.com/uploads/1/4/3/4/1434953/swell_safety_and_verification_20111007d.pdf. Accessed: 2017-05-22.
[9] Assad Alam, Ather Gattami, Karl H. Johansson, and Claire J. Tomlin. Guaranteeing safety for heavy duty vehicle platooning: Safe set computations and experimental evaluations. Control Engineering Practice, 24:33-41, 2014.
[10] Mohamed Aly. Real time detection of lane markers in urban streets. In Intelligent Vehicles Symposium, 2008 IEEE, pages 7-12. IEEE, 2008.
[11] Aharon Bar Hillel, Ronen Lerner, Dan Levi, and Guy Raz. Recent progress in road and lane detection: a survey. Machine Vision and Applications, 25(3):727-745, 2014.
[12] Massimo Bertozzi and Alberto Broggi. GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection. IEEE Transactions on Image Processing, 7(1):62-81, 1998.
[13] H. S. Bhadauria, Annapurna Singh, and Anuj Kumar. Comparison between various edge detection methods on satellite image.
[14] Alan Burns and Robert Davis. Mixed criticality systems: a review. Department of Computer Science, University of York, Tech. Rep., 2013.
[15] Maryline Chetto. Real-time Systems Scheduling 1. John Wiley & Sons, 2014.
[16] E. Roy Davies. Computer and Machine Vision: Theory, Algorithms, Practicalities. Academic Press, 2012.
[17] Anne Håkansson. Portal of research methods and methodologies for research projects and degree projects. In Proceedings of the International Conference on Frontiers in Education: Computer Science and Computer Engineering (FECS), page 1. The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp), 2013.
[18] Albert S. Huang, David Moore, Matthew Antone, Edwin Olson, and Seth Teller. Finding multiple lanes in urban road networks with vision and lidar. Autonomous Robots, 26(2):103-122, 2009.
[19] Wang Jingyu and Duan Jianmin. Lane detection algorithm using vanishing point. In Machine Learning and Cybernetics (ICMLC), 2013 International Conference on, volume 2, pages 735-740. IEEE, 2013.
[20] Soren Kammel and Benjamin Pitzer. Lidar-based lane marker detection and mapping. In Intelligent Vehicles Symposium, 2008 IEEE, pages 1137-1142. IEEE, 2008.
[21] Hao Li and Fawzi Nashashibi. Lane detection (part I): Mono-vision based method. PhD thesis, INRIA, 2013.
[22] Joel C. McCall and Mohan M. Trivedi. Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation. IEEE Transactions on Intelligent Transportation Systems, 7(1):20-37, 2006.
[23] Rahul Raguram, Jan-Michael Frahm, and Marc Pollefeys. A comparative analysis of RANSAC techniques leading to adaptive real-time random sample consensus. In European Conference on Computer Vision, pages 500-513. Springer, 2008.
[24] Torben Stolte, Gerrit Bagschik, and Markus Maurer. Safety goals and functional safety requirements for actuation systems of automated vehicles. In Intelligent Transportation Systems (ITSC), 2016 IEEE 19th International Conference on, pages 2191-2198. IEEE, 2016.
[25] Michael G. Wing. Consumer-grade GPS receiver measurement accuracy in varying forest conditions. Research Journal of Forestry, 5(2):78-88, 2011.
[26] Sibel Yenikaya, Gökhan Yenikaya, and Ekrem Düven. Keeping the vehicle on the road: A survey on on-road lane detection systems. ACM Computing Surveys, 46(1):2:1-2:43, July 2013.
[27] Youssef Zaki. An embedded multi-core platform for mixed-criticality systems: Study and analysis of virtualization techniques. Master's thesis, KTH, School of Information and Communication Technology (ICT), 2016.


TRITA MMK 2017: 156 MDA 614
