

DEGREE PROJECT IN ELECTRICAL ENGINEERING, SECOND CYCLE, 30 CREDITS

STOCKHOLM, SWEDEN 2020

Risk analysis of software execution in an autonomous driving system

Joanna Ekehult

KTH ROYAL INSTITUTE OF TECHNOLOGY


Abstract

Autonomous vehicles have the potential to offer efficient ways of moving and to improve the safety of driving. For this to occur, it must be ensured that autonomous vehicles behave safely and reliably in nearly all situations and under nearly all circumstances. The system that enables autonomy relies on a stack of complex software functionalities, whose response and execution times are hard to predict. It is therefore essential to create effective tools and frameworks for evaluating the performance of the autonomous driving system in a risky scenario. The aim of this thesis is to create and evaluate a framework for analysing the risks of an autonomous driving system. The approach is based on an abstract model of the main components and interactions of the autonomous system. It provides a way to systematically analyse the system's behaviour through simulations, without requiring time-consuming and costly testing or a very detailed and complex model. Specifically, the use of the method for analysing the autonomous vehicle's timing behaviour in a risky scenario is investigated.

The developed framework is used to evaluate the ability of a vehicle to stop before colliding with a static obstacle. In such a scenario, the model-based approach for analysing the risks of an autonomous system is feasible and effective, and can provide useful information during the development process.


Sammanfattning (Swedish summary)

Autonomous vehicles can potentially offer both safer and more efficient means of transport. For this to become possible, it must be verifiable that the autonomous vehicle behaves reliably and safely in nearly all situations and under nearly all conditions. The system that enables autonomous driving is built on a complex software stack, where the outcome and the execution time are hard to predict. It is therefore essential to develop tools and frameworks that can predict the safety of an autonomous system. The aim of this work is to develop and evaluate a tool for analysing the risks of a system for autonomous driving. The approach is based on an abstract model of the main components and interactions of the autonomous system. Through simple simulations, instead of time-consuming and costly tests or very detailed and complex models, the method provides the user with a way to systematically analyse the system's behaviour. Specifically, it is investigated whether the method can be used to analyse the timing properties of the autonomous system in a risky scenario.

The developed framework is used to evaluate whether the vehicle manages to stop before colliding with a static obstacle. In such a scenario, it can be concluded that the model-based approach for analysing the risks of an autonomous system is feasible and effective, and can provide valuable information during the development work.


Acknowledgements

I would like to thank my supervisor at Scania, Patricio Valenzuela, and my supervisor at KTH, Valerio Turri, for their insight and advice throughout this thesis work. This work was supported by the Vinnova research project IQPilot (project number 2016-02547).


Contents

1 Introduction
  1.1 Motivation
  1.2 Objective and scope
  1.3 Outline
2 Background
  2.1 Autonomous vehicles
  2.2 Timing analysis
  2.3 Related work
3 Method
  3.1 Case study
  3.2 The autonomous system
  3.3 Sensor timing model
  3.4 Software timing model
  3.5 Actuator timing model
  3.6 Network timing model
  3.7 Activation patterns
  3.8 Delimitations
4 Simulation study
  4.1 Simulation setup
  4.2 Results
  4.3 Comparison with human driver reaction time
5 Conclusion
  5.1 Evaluation of the method
  5.2 Future work
References


1 Introduction

This chapter provides a general introduction to the thesis project. The project is performed at Scania CV AB, Department of Autonomous Transport Systems as a part of a master’s degree project course at KTH Royal Institute of Technology.

Figure 1.1 shows an image of two autonomous Scania trucks. The objective of this project is to develop a method for analysing the timing considerations and safety of an autonomous driving system. The main focus is on investigating the end-to-end reaction time of the system in a risky scenario, by abstracting the system's components and interactions into a model of the autonomous driving system.

Figure 1.1: Scania autonomous transport systems (Scania CV AB 2016). ©Scania CV AB (publ), SE-151 87 Södertälje, Sweden [20].

1.1 Motivation

An autonomous vehicle is a computer-controlled intelligent vehicle that interacts with the physical world by responding to sensor inputs in real time. Autonomous vehicles have the potential to substantially affect safety, mobility and the environment by offering more effective and convenient ways of moving and by reducing the possibility of human error [8]. An autonomous vehicle is, however, also a safety-critical system, i.e. a system in which a failure could lead to a catastrophe. The developers therefore have to ensure that autonomous vehicles behave safely and reliably: they have to guarantee that the vehicles are able to fulfil their assigned goals with high probability and interact with the environment sufficiently fast, despite potential faults and a priori unknown situations.

Predicting the autonomous vehicle's performance is a demanding task, since the system controlling the vehicle is large and complex, containing millions of lines of code, logic, loops over large data structures and different hardware, and since the risk of facing complex and unseen driving environments is high. It is therefore essential to create effective tools and frameworks for predicting the performance and safety of the autonomous driving system. See Figure 1.2 for an overview of a general system architecture for an autonomous vehicle.


Figure 1.2: System architecture for an autonomous vehicle (sensors: lidar, radar, camera, GPS and other sensors; sensor fusion for environment perception and localization; planning: path planning, decision making and trajectory generation; driving control (lateral and longitudinal) and road/path tracking, actuating brake, throttle and steering).

To guarantee the safe operation of an autonomous vehicle, critical tasks (e.g. sensing, planning and control) have to meet real-time requirements and make correct decisions, so that the autonomous driving system can detect real-time traffic conditions and react to them fast enough to avoid accidents. When assessing the vehicle's timing behaviour, many factors come into play: the time for perceiving the environment and the times for the recognition, decision, control and actuator processes.

1.2 Objective and scope

The objective of this thesis is to facilitate intuitive modelling and risk analysis concerning the timing behaviour of an autonomous driving system, through abstractions and model-based approaches. The main objective is to develop a systematic method to analyse the timing considerations for software execution in autonomous driving. A further objective is to analyse whether the reaction time from sensor inputs to actuator outputs can be bounded, to provide guarantees of both safety and effectiveness. The overall approach is based on introducing timing information and fault probabilities at a high representation level of the system, including the sensors, network and actuators. This approach strives for generality and flexibility, and the purpose of the model is to serve as a framework for studying risks and time-related properties in an autonomous driving system.


1.3 Outline

Chapter 2 provides a description of an autonomous system as well as an introduction to timing analysis and related work. Chapter 3 gives a detailed explanation of the model and the modelling approaches, followed by the delimitations and a model validation approach. In Chapter 4, the experimental results are presented and discussed. Finally, Chapter 5 presents a more general discussion of the thesis work, the conclusions and suggestions for future work.


2 Background

This chapter provides the background of the thesis work. We first present an overall view of the system to be analysed and an introduction to timing analysis. We then present the related work on the subject.

2.1 Autonomous vehicles

An autonomous vehicle must automatically and constantly sense its environment and make safe and correct driving decisions by itself, even in noisy and dynamic environments. The common system for autonomous driving is a complex combination of sensors, perception, decision making, planning, control and actuators. The perception, decision-making, planning and control components form a data-driven computing and analysis framework which enables autonomous driving. The autonomous vehicle uses multiple sensor technologies (e.g. cameras, radar and lidar) to create an accurate map of its surroundings under a range of conditions. Other devices and tools used by the autonomous vehicle to source information are inertial measurement units (IMUs), which measure the vehicle's three linear acceleration components and three rotational rate components, GPS, ultrasonic sensors (for short-distance obstacle detection) and possibly v2v (vehicle-to-vehicle) or v2x (vehicle-to-everything) communication. See Figure 2.1 for an illustration of some of these tools.

The perception system recognizes the surrounding driving environment by sensor interpretation (sensor fusion), using data from the sensors to understand traffic scenarios, detect objects and determine the exact location of the vehicle (positioning). The decision-making system makes operating decisions to control the vehicle and avoid collisions, based on knowledge about the environment, e.g. by predicting the trajectories of moving objects to decide whether a lane change is necessary.

The planning system then decides how to execute these decisions, by finding drivable paths and generating detailed operating motions, while following the routing path from source to destination. The controller generates control signals for physically operating the vehicle to follow the planned paths and trajectories, and sends these instructions to the actuators, which control acceleration, steering and braking.

2.2 Timing analysis

The objective of timing analysis is to help plan, understand, optimize or validate systems with respect to their timing. It often involves attempts to determine or estimate bounds on the execution time of a system to ensure that end-to-end deadlines are met. Timing analysis can be performed by measuring the time of real executions, measuring the time of executions on a simulator, or through analytical methods [7].


Figure 2.1: Illustration of a truck with ultrasonic sensors (blue), GPS and v2v (vehicle-to-vehicle) connection to a car.

Timing analysis often includes finding the worst-case execution time, the maximum data age (which is related to the stability and performance of the system) or the response time (the time from when an event occurs until the control system has produced a response). These properties are affected by different time delays, data flows and hardware configurations in the system. If these delays are not considered, they could cause vehicle instability or uncertainties in the system's behaviour in risky situations. The timing analysis can be performed at different levels, such as the system, code or network level, and at different stages of the development process (early, integration or final phase), according to the purpose of the analysis [7][3].

In this work, an analytical and probabilistic response-time analysis is performed, which adds the range and accuracy of sensors and the network latency to a system-level timing analysis. In order to perform the response-time analysis, a system timing model must be derived. The timing model consists of a chain of tasks and involves properties concerning timing, requirements, dependencies and data flows. A task represents a piece of the software (it can be a whole component or a piece of code), and the tasks communicate with each other through communication blocks. A task chain is illustrated in Figure 2.2. The most common timing properties associated with a task include:

• Periodicity: The time interval between the activation times of two consecutive task instances is the period of the task (its activation rate).

• Execution time: The time it takes for a task to execute.

• Precedence restriction: Whether a task has to run after another task in order to receive its result value.

• Jitter: Any time variation in the period, execution time or activation of a task.

• Triggering: A task can be time-triggered (triggered according to a predetermined schedule) or event-triggered (triggered by some other task's output or a message).

A common notation for an independent task is τ_i = {P_i, T_i}, where P_i is its time period and T_i is its triggering time (the temporal characteristics of the task), and a chain is denoted by a set of tasks Γ = {τ_1, τ_2, τ_3, ...}. A communication block is denoted c_j. See Figure 2.2 for an illustration of a simple task chain consisting of three tasks and two communication blocks connecting the tasks.

Since the tasks can have different time periods, over- and under-sampling may occur: some data might be overwritten by new data before it is read by the succeeding task, while other data might be consumed more than once before new data arrives [9]. In this thesis, only under-sampling is modelled, since a message can only be read once. It is also assumed that the execution time of a task never exceeds its time period, so it is enough to include only the time period in the task model.
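The read-once communication-block semantics and the resulting under-sampling can be sketched in code. The following Python fragment is an illustrative toy model, not the thesis implementation; the periods, the millisecond time grid and all names are assumptions chosen for the example:

```python
from dataclasses import dataclass

@dataclass
class CommBlock:
    """Single-slot, read-once communication block: new data overwrites
    unread data (under-sampling); a message can be read at most once."""
    slot: object = None
    overwritten: int = 0

    def write(self, msg):
        if self.slot is not None:
            self.overwritten += 1  # unread data lost to under-sampling
        self.slot = msg

    def read(self):
        msg, self.slot = self.slot, None  # read-once semantics
        return msg

# Two time-triggered tasks: tau_1 produces every 10 ms, tau_2 consumes every 25 ms.
P1, P2 = 10, 25
block = CommBlock()
received = []
for t in range(0, 200):          # simulate 200 ms in 1 ms steps
    if t % P1 == 0:
        block.write(t)           # tau_1 publishes a time-stamped message
    if t % P2 == 0:
        msg = block.read()
        if msg is not None:
            received.append(msg)

print(f"messages received: {len(received)}, overwritten: {block.overwritten}")
```

With the producer running 2.5 times faster than the consumer, most messages are overwritten before they can be read, which is exactly the under-sampling effect the model captures.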

Figure 2.2: Illustration of a task chain with communication (three tasks τ_1, τ_2, τ_3 with temporal characteristics {P_i, T_i}, connected by communication blocks c_1 and c_2).

The timing analysis can also include parameters that are described by random variables. The modelled property in the real system does not have to exhibit random behaviour itself, but the underlying system which determines its behaviour may be unknown and complex, and is thus best modelled by a random variable. A random variable takes values that depend on the outcome of a random process, such as the throw of a die.

2.3 Related work

The focus of most works concerning reliability or risk analysis in autonomous vehicles is on detecting, diagnosing and recovering from faults (so-called fault tolerance) or other similar ways of ensuring dependability. Kalastachi and Kalech give a good overview of which fault detection and diagnosis approaches are best suited for different robotic systems [15]. In [13], the authors review the main issues, research works and challenges in the field of safety-critical robots, linking up dependability and robotics concepts.

Other related works include a work by Powell, Arlat, Chu, Ingrand and Killijan [19], who have developed a method and platform for testing the robustness of the safety mechanism of the functional layer (controller) of an autonomous system. In [2] it is suggested to analyse the behaviour of autonomous vehicles through computer simulations and model checking, where the behaviour addressed is which actions are taken by the autonomous vehicle in different traffic scenarios. In [1], a model of communicating autonomous vehicles for validation of an autonomous system is proposed. Work by Lattner, Timm, Lorenz and Herzog [16] proposes a method for knowledge-based generic risk assessment for autonomous vehicles: a way of identifying which environmental situations might endanger the autonomous vehicle and its passengers or other traffic participants. Crestani et al. present an approach that aims to integrate fault-tolerance principles into the design of a robot real-time control architecture in [6]. These studies raise important aspects of risk analysis for autonomous vehicles, but they focus on interactions with and detection of risks in the environment, and do not analyse what the potential risks are for an autonomous system when performing tasks.

Timing analysis in itself is a thoroughly researched area which mostly deals with scheduling for real-time systems (systems whose correctness depends on the time instant at which a result is produced). Scheduling is the problem of assigning tasks to limited resources in order to meet deadlines. For automotive systems, the focus is on schedulability analysis or timing analysis of embedded systems, and the methods used are static, measurement-based or hybrid analysis techniques [7]. Most of the available analysis methods are adapted to small and simple systems at the implementation level, including contributions from memory, caches, pipelines, branches and code structure.

Fewer studies focus on larger complex systems, and in those works the focus is on performing timing analysis at early stages of development, as in model-based design [4, 7]. Model-based design allows for modelling more complicated system behaviour, and the technological advances made in the area of model-based simulation and analysis tools enable good results, ensuring correct timing of the software and providing an understanding of the behaviour of the system before the whole system exists. Several works focus on these development methodologies for autonomous software systems and include studying the timing behaviour when applying real-time scheduling theory to an autonomous system [3, 9, 11, 18]. The limitations of these approaches are that they are typically only applied in cases where there is minimal variation in the timing performance of each task, that the focus is on pre-run-time analysis and scheduling, and that they do not include the behaviour of chains of programs with network-induced delays, sensors and actuators.

The existing works on risk analysis of autonomous systems thus focus either on modelling the fault tolerance and fault probabilities of the system, or on the timing and scheduling of the software. To the best of the author's knowledge, there are no works in which an abstract model of the software stack is used as a framework for risk assessment of an autonomous system.


3 Method

This chapter presents the framework for the model of the system and the modelling approaches for the functional components, sensors and network. The investigated scenario, as well as the delimitations of the thesis and a section on model validation, are also included.

3.1 Case study

The analysed scenario consists of an autonomous vehicle driving at a constant speed v on a straight and flat road, when an obstacle appears directly in front of the vehicle at a distance d. When the autonomous vehicle has sensed this obstacle and decided to act, its collision avoidance maneuver is assumed to be hard braking.

See Figure 3.1 for a sketch of the initial scenario.

Figure 3.1: Sketch of the investigated initial scenario (the vehicle driving at speed v towards an obstacle at distance d).

3.2 The autonomous system

The main properties and characteristics of the system are modelled as a task chain of interacting components. A task represents a sequential execution of a piece of code and abstracts a functionality of some software or hardware. Based on this model of the system, and by performing an end-to-end response-time and delay analysis, the true behaviour of the system can be predicted. The system consists of four tasks that represent the software stack of the autonomous driving system: perception, situational awareness, motion planning and control, as well as tasks representing the neural network (for image classification) and saturation (for saturating the controller output).

The input stimulus for the software stack is a sequence of messages from the sensors, and the output from the saturation task goes to a model of an actuator for braking. All these tasks communicate with each other, in successive order, through a model of the network. Figure 3.2 shows the execution order of the system in the model. There is a precedence restriction: each task depends on the output from the preceding task as its input, and the last task in the chain sends the results to the output ports, so there is no feedback. The calculations necessary for a real autonomous vehicle are not included in the model; the model only includes properties with a significant impact on the timing behaviour of the system.


Figure 3.2: The succession order of the autonomous system in the model (radar, lidar and camera with neural network (NN) feed the onboard platform: perception, situational awareness, motion planning and controller, which drives the actuators).

3.3 Sensor timing model

A sensor typically consists of a physical sensor, a computational component and a network interface. The sensors considered in this model are those used by the vehicle for perceiving the environment: camera, radar and lidar. The camera takes photographs of the environment surrounding the vehicle by letting light fall on a light-sensitive surface. The radar transmits radio waves and can determine properties of surrounding objects by measuring the returning signal. The lidar works like the radar, but transmits light instead of radio waves. For autonomous vehicles, the radar is mostly used to measure the velocity of and distance to surrounding objects, while the lidar is used for creating a map of the surrounding environment.

The frame rates of all the sensors are fixed. For a radar sensor, a frame corresponds to the data acquired for one full scan, delivered as one package at completion. The lidar sensor instead processes a small portion of the scan at a time and sends that data as one frame; it does not wait for the full scan to be completed. However, in this work the model of the lidar assumes that the lidar works like the radar, sending one frame of the whole scan at the end of a revolution. This is a reasonable simplification, since in this scenario the only object of interest is assumed to be static, so it can be assumed that the relevant data is contained in only one of the data packets per revolution. The frame rate of the lidar is set to 10 [Hz], of the camera to 20 [Hz] and of the radar to 25 [Hz].

3.3.1 Frame acquisition time

The time for the sensors to acquire a frame (i.e. to retrieve data and process it) can vary, since the signals interact with a dynamic environment [22]. For the models of the camera and radar, the variation in acquisition time is modelled as a uniformly distributed stochastic variable. The distribution is uniform since it is assumed that all values are equally likely to be observed and that the values are bounded, since after a maximum time a new scan starts. However, the lidar is assumed to have a constant time for acquiring the data. This assumption is motivated by considering that the lidar sends out signals which travel at the speed of light, i.e. at an order of magnitude of approximately 10^8 [m/s], while the distance that the signals travel is of an order of magnitude of approximately 10^2 [m]. Since the signals travel a distance d, during the time t_trip, to an object and then back the same distance, one can set up the following relationship, also known from the time-of-flight principle:

2d = v · 2t_trip.

Inserting the orders of magnitude of the distance and velocity in this equation gives

2 · 10^2 [m] = 10^8 [m/s] · 2t_trip,

where it can be noted that the time the signals travel, t_trip, is of the order of microseconds. A variation in the acquisition time would then be small, and so the acquisition time of the lidar is assumed to be equal to its time period, with no variation.

Figure 3.3: Illustration of how the detection probability depends on the distance to the object.

3.3.2 Detection ranges

The lidar and radar sensors have limited detection ranges. Within these ranges, it is assumed that the probability of the object being detected increases as the distance to the object decreases; but when the object is very close to the vehicle, within a minimum range, the probability of detection decreases again, due to the field of view of the sensors. The detection probability is modelled as a piecewise-linear function of the distance. See Figure 3.3 for an illustration of how the probability of an object being detected depends on the distance to the object for the lidar and radar.
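A piecewise-linear detection probability of this shape can be sketched as follows; the break points (minimum range, maximum range) and the peak probability are illustrative assumptions, not values from the thesis:

```python
def detection_probability(d, d_min=2.0, d_max=150.0, p_max=0.99):
    """Piecewise-linear detection probability as a function of distance d [m],
    shaped as in Figure 3.3: it grows as the object gets closer, but drops
    again inside the sensor's minimum range (field-of-view effect).
    d_min, d_max and p_max are illustrative assumptions."""
    if d <= 0.0 or d >= d_max:
        return 0.0
    if d < d_min:
        return p_max * d / d_min                    # inside the minimum range
    return p_max * (d_max - d) / (d_max - d_min)    # grows as d decreases

print(detection_probability(80.0))  # mid-range probability
```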

The camera, however, is not limited by a detection range in the same way. Instead, the detection of an object depends on the size of the obstacle in pixels. It is assumed that the obstacle, when it appears, is not so far from the vehicle, and not so small, that it occupies too few pixels in the image to be detected by the neural network, at all the distances considered in this thesis. The camera module consists of two parts, representing the total time from when an image is captured until its pixels are classified: one module represents the image capturing, run at the frame rate of the camera, and one module represents the neural network used to detect objects in the image.


3.4 Software timing model

The model of the software stack consists of four tasks, each responsible for completing a time-dependent behaviour, representing the perception, situational awareness, motion planning and control components respectively. The tasks in the system can be either time-triggered, activated according to a predetermined schedule, or event-triggered, activated when a specific event occurs. The execution time of a task, a measure of how long the task executes, is assumed to be smaller than or equal to the period of the task, so it does not contribute any additional delay.

3.4.1 Execution times

The perception, situational awareness and neural network tasks have variable execution times, since they have non-deterministic properties that stem from the uncertainties in sensor data. The perception task deals with these uncertainties by requiring coherent data from at least two sensors within a time span before it confirms that there is an obstacle in the environment, to avoid the vehicle acting on false positives from the sensors. This adds a delay to the reaction time of the system. There is also a small risk that the system does not associate detections with the obstacle, due to e.g. noise in the data, which is included in the model as a probability of not detecting the obstacle.

When the object is confirmed, the prior delay and the prior reaction time are chosen as the largest ones, respectively, among the data messages used to confirm the detection. For finding the buffering time, the delay caused by an input message not arriving at the start of a new execution cycle of the task, the oldest time stamp of the messages is used, ensuring that the reaction time is not underestimated. In this thesis, the perception task has a time period of 10 milliseconds and the situational awareness task has a time period of 100 milliseconds. The execution time of the neural network is the time it takes to classify an image; this time varies strongly with the data, and therefore the model of the neural network has a varying time period equal to its execution time. The execution time is modelled as a stochastic variable from a uniform distribution between 40 and 80 milliseconds.

The planning and control tasks, however, are considered deterministic. The motion planning task can change period dynamically in response to system events: e.g. when the module is planning how to execute a path, it has a shorter cycle than when it is waiting for a new goal. These two time periods are set to 100 milliseconds and 1 second, respectively. Thus, when the perception confirms that an obstacle has been detected, the planning module starts running at a higher frequency. The controller and saturation tasks are deterministic and have one time period each, set to 20 milliseconds.
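The timing parameters of this section can be collected in a small configuration sketch; the dictionary layout and function names are illustrative, while the numeric values are those stated above:

```python
import random

# Task periods as stated in Section 3.4.1 (milliseconds); the dictionary
# layout itself is a sketch, not the thesis implementation.
PERIODS_MS = {
    "perception": 10,
    "situational_awareness": 100,
    "motion_planning_active": 100,   # while executing a plan
    "motion_planning_idle": 1000,    # while waiting for a new goal
    "controller": 20,
    "saturation": 20,
}

def nn_execution_time_ms(rng=random):
    """The neural network's execution time (= its period) is modelled as a
    uniformly distributed stochastic variable between 40 and 80 ms."""
    return rng.uniform(40.0, 80.0)

print(sorted(PERIODS_MS))
print(round(nn_execution_time_ms(), 1))
```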


Figure 3.4: Model of the braking process (t_r: braking system reaction time; t_rs: braking system rise time; a_full: full braking deceleration).

3.5 Actuator timing model

An actuator converts an electrical signal from the control unit into an action. The obstacle avoidance maneuver (action) in this project is hard braking with full effectiveness. There is a delay between the controller output and the vehicle braking. The braking system is assumed to be a normal air brake system, common on heavy-duty vehicles, and event-triggered. The delay from the braking system, a brake lag, which arises from the time it takes to fill the cylinders with air, is included in the reaction time of the system and modelled as two components: one delay representing the time before the brakes are applied to the wheels, and one delay representing the time from when the brakes touch the wheels until full braking [5]. The former delay is modelled as a constant, whereas the latter delay is modelled as a linear function; see Figure 3.4 for an illustration of the braking process.

For modelling the velocity of the vehicle v and the distance ∆x travelled during braking, standard kinematic formulas are used:

v = v_0 + a · T,

∆x = v_0 · T + (1/2) · a · T^2,

where v_0 is the initial velocity, a is the acceleration, and T is the time step.
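The braking model of Figure 3.4 combined with the kinematic update gives a simple stopping-distance computation. The following sketch assumes illustrative values for the full deceleration, reaction time and rise time (they are not taken from the thesis):

```python
def stopping_distance(v0, a_full=-6.0, t_r=0.3, t_rs=0.4, dt=1e-3):
    """Distance [m] travelled from the brake command until standstill, using
    the braking model of Figure 3.4 (constant reaction delay t_r, then a
    linearly rising deceleration over t_rs) and the kinematic update
    x += v*T + 0.5*a*T^2. The values of a_full, t_r and t_rs are
    illustrative assumptions."""
    t, v, x = 0.0, v0, 0.0
    while v > 0.0:
        if t < t_r:
            a = 0.0                        # brake lag: cylinders filling with air
        elif t < t_r + t_rs:
            a = a_full * (t - t_r) / t_rs  # deceleration ramps up linearly
        else:
            a = a_full                     # full braking
        x += v * dt + 0.5 * a * dt * dt    # Delta_x = v_0*T + (1/2)*a*T^2
        v = max(0.0, v + a * dt)           # v = v_0 + a*T
        t += dt
    return x

print(f"stopping distance from 20 m/s: {stopping_distance(20.0):.1f} m")
```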

3.6 Network timing model

The message passing through the system is modelled as asynchronous and event-triggered blocks, placed between communicating tasks. The input to a communication block is the output from a task, and the block imitates the timing behaviour of the network when passing the message on to the receiving task. Each network block only handles the communication from one task to the succeeding task, and the message passing is one-directional; see Figure 2.2. Each message has a priority according to the task it was generated from. The network transmission time for each message, called the network latency, is estimated based on the total load on the network and the priority of the message. The load is calculated as the total number of messages sent within a time period, where the time period in the simulation is chosen to maximize the variation in the load and is found by testing different time periods. The probability of a transmission error, causing packet loss, also depends on the current load on the network and on the priority of the message, but not on previous errors; the process is thus independent. If a transmission error occurs, the network will try to resend the message at the next time instant. A message with high priority has a shorter latency and a lower risk of packet loss than a message with lower priority.

The communication from the controller to the actuator for braking happens over CAN (Controller Area Network), which is not part of the modelled network. The signal from the controller to the brakes is assumed to have a negligible delay, since the signals for braking or steering have a high priority and the activation of the actuator is event-triggered. The maximum latency from the network is small relative to other delays in the system, and it is the way the system reacts to any network latency which is of interest. Thus, a more accurate network timing model has not been considered in this thesis.
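The qualitative behaviour of the network block, with latency and loss probability growing with load and shrinking with priority, and independent retries after a transmission error, can be sketched as follows; all constants, linear forms and the `transmit` function itself are illustrative assumptions:

```python
import random

def transmit(priority, load, rng=random):
    """One message transmission through a network block. Latency grows with
    the network load and with lower priority; a transmission error (packet
    loss) forces a resend at the next time instant, independently of previous
    errors. priority: 1 (highest) .. 5 (lowest); load: messages per window.
    All constants are illustrative assumptions."""
    base_latency_ms = 0.2 * priority + 0.05 * load
    p_loss = min(0.5, 0.01 * priority * (1 + load / 10.0))
    attempts = 1
    while rng.random() < p_loss:   # error: resend at the next instant
        attempts += 1
    return base_latency_ms, attempts

random.seed(1)
lat, tries = transmit(priority=2, load=8)
print(f"latency {lat:.2f} ms after {tries} attempt(s)")
```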

3.7 Activation patterns

A task can be triggered either by time or by an event, and the task triggering pattern is significant for the performance of the system.

3.7.1 Time-triggered activation

When tasks are triggered periodically, the first triggering of a task in the simulation, its activation (from which the future triggerings follow periodically), does not have to be synchronized with the other tasks' first triggerings. With no other delays, the best-case triggering (resulting in a minimal delay) should be when all the tasks are synchronised so that each is triggered at the same time instant as the preceding task finishes its execution. For adjacent tasks with time periods that are not multiples of each other, the delay is not necessarily zero. Figure 3.5 and Figure 3.6 illustrate the difference between synchronous and asynchronous triggering.

The worst-case scenario (resulting in a maximal delay due to asynchronous triggering) is when, for the whole task chain, each succeeding task is triggered just before the preceding one. The combination of first triggerings (activations) is called the system's activation pattern. Since the system model has non-negligible latency in the case of medium or high load, since the perception task receives input from three sensors with different time periods and stochastic behaviour, and since the neural network task always has a stochastic time period, the worst- and best-case activation patterns are not obvious. By modelling the activation time of each task as a stochastic variable, e.g. uniformly distributed within the interval [0, P_i], the effect of a certain activation pattern can be deduced.

Figure 3.5: Synchronized triggering (timeline of Tasks 1–3 showing triggering, task execution, data propagation and overwritten data).

Figure 3.6: Asynchronous triggering (same legend as Figure 3.5).
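Drawing a random activation pattern as described above can be sketched as follows; the task names and periods are illustrative placeholders rather than the thesis's actual values, and each offset is drawn uniformly from [0, P_i].

```python
import random

# Illustrative task periods in seconds (placeholders, not the thesis's values).
periods = {
    "camera": 0.05,
    "perception": 0.10,
    "situational_awareness": 0.20,
    "motion_planning": 0.20,
    "controller": 0.02,
}

def random_activation(periods: dict, rng: random.Random) -> dict:
    """Draw one activation offset per task, uniformly in [0, P_i]."""
    return {task: rng.uniform(0.0, p) for task, p in periods.items()}

# One sampled activation pattern; resampling this across many simulation
# runs is what produces the distribution of delays analysed later.
pattern = random_activation(periods, random.Random(42))
```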

Considering the execution order of the tasks (see Figure 3.2), a negative difference in the starting times between two tasks means that the succeeding task starts to execute before the preceding task. Hence, a low negative difference means that the message will be buffered for a long time, and accordingly a high negative difference means that the message will not be buffered for long. A low positive difference means that the succeeding task executes shortly after the preceding task, giving a short buffering time, and a high positive difference means that the succeeding task executes a longer time after the preceding task, giving a longer buffering time. This effect is illustrated in Figure 3.7. The time a message is buffered adds to the delay of the system. How large the contribution to the total delay is depends on the time periods of the tasks: the largest possible negative difference in starting times equals the preceding task's time period, and the largest possible positive difference equals the succeeding task's time period. A higher network latency is expected in the case of synchronized activation, due to the higher load in the system when the message passing happens simultaneously.
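The buffering effect described above can be made concrete: a message produced at time t_msg waits until the consuming task's next periodic triggering. A minimal sketch, with illustrative offsets and periods:

```python
import math

def buffering_delay(t_msg: float, consumer_offset: float,
                    consumer_period: float) -> float:
    """Time a message produced at t_msg waits until the consuming task's
    next periodic triggering (triggerings occur at offset + k * period)."""
    if t_msg <= consumer_offset:
        return consumer_offset - t_msg
    k = math.ceil((t_msg - consumer_offset) / consumer_period)
    return consumer_offset + k * consumer_period - t_msg

# A small positive start-time difference gives a short wait, while a small
# negative one forces the message to wait almost a whole period:
short_wait = buffering_delay(t_msg=0.10, consumer_offset=0.11, consumer_period=0.20)
long_wait = buffering_delay(t_msg=0.10, consumer_offset=0.09, consumer_period=0.20)
```

With these numbers `short_wait` is about 0.01 s and `long_wait` about 0.19 s, matching the intuition that a small negative difference is the worst case for buffering.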

Figure 3.7: Illustration of the effect on the delay of four types of starting time differences (low negative, high negative, low positive and high positive), each showing the resulting buffering delay between Task 1 and Task 2.

3.7.2 Event-triggered activation

With event-triggered activation the delay is minimized, since a task is triggered when it receives an input, so buffered messages do not contribute to the system's delay. However, this is not always true: if a task is already executing when an input message arrives, the input is buffered until the execution has finished, and a delay is introduced. A solution to this problem is multi-threading, which allows for concurrent executions, but modelling this is not within the scope of this thesis. Due to the non-deterministic nature of the sensors and the perception task, the possibility of network-induced delays and the different time periods of the tasks, the delay in the system will not always be zero even with event-triggered activation.

In this thesis, any delay in an event-triggered model stems from the perception task (which needs multiple sensor inputs that might not be synchronized) and, if applicable, the network latency.

3.8 Delimitations

For analysing a complex system, modelling can be used as a tool for structuring the knowledge of the system. A model is a representation of a system which captures the properties important for the intended application. A model is used for observing and analysing the behaviour of the target, and may rest on simplifying assumptions that are strictly false in order to reduce the system's complexity [10].

The objective in this work is to obtain a model of the autonomous vehicle's temporal behaviour. Due to the complexity of the system, the model should use a simplified description allowing for a sufficiently accurate timing analysis. Some assumptions are employed in this work to limit the scope of the study and to achieve rewarding results within the project time boundaries. The following assumptions are considered:

• All tasks have the same perception of the current time.

• The hardware can process the software without violating any timing constraints.

• There is isolation between tasks, so that each task can be analysed independently of the other tasks in the system.

• The execution times of the software tasks are constant.

• The sensors are positioned so that their maximum range can be used, and there is no sensing degradation.

• The tasks run without interruption or pre-emption and without any interference from other tasks that could be running on the same or different processor cores (i.e. no multi-threading and no cross-core interference).

• Failures of the hardware and sensors, such as damage, short circuits or over-voltage, are not considered.


4 Simulation study

In this chapter, the results from simulations with three different kinds of activation pattern and two different initial velocities are presented and compared. First, the outputs of the simulation are explained and illustrated, and then the results from the activation pattern analysis are presented and analysed. This is followed by a comparison between the autonomous vehicle's and a human driver's reaction time, as a way of assessing the autonomous vehicle. Finally, a summary of the results is presented.

The time periods used in the simulation are the same as the suggested ones in sections 3.3 and 3.4.1.

4.1 Simulation setup

4.1.1 Simulation outputs

Since the model contains non-deterministic components, the simulation must be executed several times for each initial distance, speed and way of triggering. Then the output data distributions could indicate what the true behaviour of the system would be. From the simulations, different properties of the system can be observed, of which the relevant ones for this thesis are illustrated in Figure 4.1 and described below:

• Total Time: the time between the obstacle appearing and the vehicle stopping.

• Detection Time: the time between the simulation starting (indirectly, when the obstacle appears) and the perception task confirming the detection of the obstacle.

• Action Time: the time between the obstacle detection and the controller outputting a signal for an action to be taken.

• Reaction Time: the time between the obstacle appearing and the controller outputting a signal for an action to be taken.

• System Loop Time: the time between a sensor capturing a frame, which is then used by perception to conclude that there is an object, and the controller outputting a signal for an action to be taken.

• Braking Time: the time between the controller outputting a signal to the actuator and the vehicle stopping.

• Delay: the sum of all delays generated by buffered messages and network latencies.

• Detection Distance: the distance the vehicle travels during the Detection Time.

• Reaction Distance: the distance the vehicle travels until it starts to brake.

• Distance at stop: the distance to the object when the vehicle stops.
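Some of these quantities follow directly from the definitions (Reaction Time = Detection Time + Action Time, and Total Time = Reaction Time + Braking Time). A minimal container for the per-run outputs could look like the sketch below; the example field values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RunMetrics:
    detection_time: float  # obstacle appears -> perception confirms detection [s]
    action_time: float     # detection confirmed -> controller outputs action [s]
    braking_time: float    # controller output -> vehicle stopped [s]

    @property
    def reaction_time(self) -> float:
        # Obstacle appearing -> controller output, by the definitions above.
        return self.detection_time + self.action_time

    @property
    def total_time(self) -> float:
        # Obstacle appearing -> vehicle stopped.
        return self.reaction_time + self.braking_time

# Hypothetical example values for one simulated run:
run = RunMetrics(detection_time=0.36, action_time=0.36, braking_time=5.96)
```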

Figure 4.1: Illustration of the outputs from the simulations (the detection, action, reaction and braking distances, and the distance to the obstacle at stop).

4.1.2 Model validation

For validating the accuracy of the model, the simulation outputs are compared to data from specific tests of the static obstacle detection range performed by Scania. In these tests, the vehicle started at a position where the obstacle was outside the field of view, more than 150 meters away, and drove at a constant speed towards the obstacle on a flat and straight road. The distance to the obstacle at detection was measured for different speeds. From the detection tests of a large (truck-sized) obstacle, the detection distance was found to be approximately 75 meters for all vehicle speeds (ranging from 10 [km/h] to 60 [km/h]). This is due to the detection ranges of the sensors as well as the time for the perception task to confirm the detection of the object. In this thesis, the size of the obstacle, the position of the sensors and their vertical field of view are not included in the detection model. Instead, in the simulation the obstacle is assumed to be large enough to be detected by the sensors. The data from the simulation is therefore compared to the detection distance for the truck-sized obstacle, which was the largest used in the tests.

Simulating this scenario with the random activation pattern, an initial speed of 60 [km/h] and an initial distance of 110 meters (outside the field of view for the model of the sensors), the mean detection distance is 83 meters, i.e. 8 meters farther from the obstacle than the real tests showed. Figure 4.2 shows a histogram of the detection distances from the simulations; the detection distance ranges from 66 meters to 98 meters.

4.2 Results

Tables 4.1, 4.2, 4.3 and 4.4 show the mean values of the results from the simulations, with random, synchronized and event-driven activation and different initial speeds. The initial speed and distance of the vehicle affect the detection time (and thus the reaction time and total time), since a higher speed brings the vehicle closer to the obstacle in a shorter time, so it is detected sooner. The action time, system loop time and delay are not affected by these parameters, as can be seen in Tables 4.1 and 4.3. This is expected, since they stem from the software stack.

Figure 4.2: Histogram of the distance at which the simulation has detected the obstacle, from 1000 simulations with random activation pattern, initial speed 60 [km/h] and initial distance 110 [m].

               Total     Detection  Action    System Loop  Reaction  Delay
               Time [s]  Time [s]   Time [s]  Time [s]     Time [s]  [s]
Random         6.68      0.36       0.36      0.51         0.73      0.17
Synchronized   6.72      0.36       0.41      0.57         0.77      0.22
Event-driven   6.54      0.35       0.24      0.39         0.59      0.05

Table 4.1: Mean values of the timing results from three different types of activation, 10^4 simulations each. The initial speed is 100 [km/h] and the initial distance is 90 [m].

4.2.1 Random activation

By sampling the starting times of the tasks from uniform distributions, different activation patterns are created. It is the difference in the starting times of consecutive tasks that affects the delay in the system, not the exact values of the starting times. To conclude which combinations of, or relations between, the start times yield high or low delays, a histogram over the differences in starting times for consecutive tasks can be used. The runs with the minimal and maximal 20% of the delays are split into two segments, with their respective activation patterns. Figure 4.6 (a) shows a histogram of the delays from the simulations with random activation.

The mean of the low delays is 0.10 seconds and the mean of the high delays is 0.24 seconds. Figure 4.3 and Figure 4.4 show the histograms over the differences in starting time between modules which have generated low and high delays. The neural network module, which follows the camera sensor module, is not included in the differences, since its execution time is non-deterministic and thus its starting time does not affect the delay significantly. Recall that a negative difference in the starting times between two tasks means that the succeeding task starts to execute before the preceding task, and a positive difference means that the succeeding task executes after the preceding task.

               Distance at  Reaction      Detection     Crashes
               stop [m]     Distance [m]  Distance [m]  [%]
Random         -18.43       20.20         10.09         100
Synchronized   -19.70       21.47         10.00         100
Event-driven   -14.57       16.33         9.60          100

Table 4.2: Mean values of the distance results from three different types of activation, 10^4 simulations each. The initial speed is 100 [km/h] and the initial distance is 90 [m].

               Total     Detection  Action    System Loop  Reaction  Delay
               Time [s]  Time [s]   Time [s]  Time [s]     Time [s]  [s]
Random         5.05      0.40       0.36      0.51         0.76      0.17
Synchronized   5.08      0.38       0.41      0.57         0.79      0.22
Event-driven   4.91      0.38       0.24      0.39         0.62      0.05

Table 4.3: Mean values of the timing results from three different types of activation, 10^4 simulations each. The initial speed is 70 [km/h] and the initial distance is 90 [m].

From Figure 4.3 and Figure 4.4 it can be noted that the only evident contributor to a high or low delay is the difference between the situational awareness and motion planning tasks, which have equal, and the longest, time periods. Furthermore, it can be noted that a small negative difference is an important contributor to a high delay, and that a small positive difference contributes to a low delay, just as expected, see Figure 3.7.
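The 20% split used in this analysis can be sketched as a small helper that separates the runs with the lowest and highest delays, so that their activation patterns can be compared; the delay values below are made up for illustration.

```python
def split_extremes(delays, fraction=0.2):
    """Return the indices of the runs with the lowest and the highest
    `fraction` of the delays."""
    order = sorted(range(len(delays)), key=lambda i: delays[i])
    k = max(1, int(len(delays) * fraction))
    return order[:k], order[-k:]

# Made-up delay samples [s] from ten hypothetical runs:
delays = [0.05, 0.30, 0.12, 0.25, 0.08, 0.22, 0.10, 0.28, 0.15, 0.18]
low_runs, high_runs = split_extremes(delays)
```

The start-time differences of the runs in `low_runs` and `high_runs` are then what the histograms in Figure 4.3 and Figure 4.4 compare.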

To test the impact of the difference in starting time between the situational awareness (SA) and motion planning (MP) tasks, their starting times can be set to have a small positive difference (expecting a low delay in the system) and compared to starting times with a small negative difference (expecting a high delay in the system), with the remaining tasks' starting times set randomly. The results from these simulations are presented in Figure 4.5, and there is a significant difference in the mean delay for the two settings. Thus, there is a correlation between the starting times of the SA and MP tasks and the delay in the system.

Histograms of the delays, detection times and reaction times generated by 10^4 simulations, each with a randomly generated activation pattern, are shown in Figure 4.6.

               Distance at  Reaction      Detection     Crashes
               stop [m]     Distance [m]  Distance [m]  [%]
Random         29.55        14.90         7.80          0
Synchronized   29.09        15.38         7.34          0
Event-driven   32.45        11.99         7.29          0

Table 4.4: Mean values of the distance results from three different types of activation, 10^4 simulations each. The initial speed is 70 [km/h] and the initial distance is 90 [m].

Figure 4.3: Histograms of the differences in starting times between successive tasks that resulted in a high delay. The x-axes show the differences in time [s] and the y-axes show the counts.

4.2.2 Synchronized activation

For synchronized activation a low delay is expected (see section 3.7.1). However, the simulations of a system with synchronized activation show a higher delay (and thus also a higher reaction time) than the system with a random activation pattern, see Table 4.1 and Table 4.3. This is probably because the tasks have different time periods (which are not multiples of each other), so they cannot be completely synchronized, and because of a higher load on the network when the message passing is more synchronized. A high data load induces a delay from the network and results in the message arriving right after the task is triggered; hence the message will be buffered for almost a whole time period.

Histograms of the delays, detection times and reaction times generated by 10^4 simulations with synchronized activation pattern are shown in Figure 4.7.


Figure 4.4: Histograms of the differences in starting times between successive tasks that resulted in a low delay. The x-axes show the differences in time [s] and the y-axes show the counts.

4.2.3 Event-driven activation

With event-driven activation the system should have the lowest delay, and this is also what the simulations show. Histograms of the delays, detection times and reaction times generated by 10^4 simulations with event-driven activation are shown in Figure 4.8.

4.3 Comparison with human driver reaction time

Presumably, an autonomous vehicle should be safer than a vehicle driven by a human, and a comparison of their reaction times can indicate whether this is true in the investigated scenario. There have been many studies on the reaction time of a human driver, using many different methods and scenarios, often concluding that the reaction time is strongly affected by the age, experience, alertness and expectancy of the driver. The reaction time of a human driver refers to the time required to perceive, interpret, decide, and initiate a response to some stimulus.

A recent study by Marek Guzek found that the driver reaction time in a scenario where an obstacle appeared on the road in a sudden way was on average 0.993 seconds [14]. In [12], Mark Green reviews more than 40 studies on driver reaction time and the basic reaction time literature, and concludes that the mean reaction time for surprise events, such as an object suddenly moving into the driver's path, is roughly 1.5 seconds. This agrees with [17], where the mean brake reaction time (the time for the driver to realize that they must brake and initiate a response, i.e. touch the brake) for all age groups was 1.5 seconds, with a standard deviation of 0.4 seconds; this reaction time is associated with reacting to an obstacle in the vehicle's path.

Figure 4.5: Histogram of delays from manually setting the difference between the starting times of the situational awareness (SA) and motion planning (MP) tasks to be either (a) small and positive (+0.01) or (b) small and negative (-0.01). The initial speed is 100 [km/h] and the initial distance is 90 [m]. The mean delay in (a) is 0.13 [s] and in (b) 0.21 [s].

Figure 4.6: Histogram of the delay (a), reaction time (b) and detection time (c) from 10^4 simulations of the system with a random activation pattern. The mean value of (a) is 0.17 seconds, of (b) 0.73 seconds and of (c) 0.36 seconds. The initial speed is 100 [km/h] and the initial distance is 90 [m].

If a human driver has a reaction time of 1 second and drives at a speed of 70 [km/h], the vehicle will travel approximately 19.44 meters before starting to brake. Looking at Table 4.4, this is 5.81 meters, 5.16 meters and 8.49 meters more than for the simulations of the autonomous vehicle, for random, synchronized and event-driven activation respectively. If the reaction time of the human driver instead is 1.5 seconds, the vehicle will travel approximately 29.16 meters before starting to brake and will stop approximately 75 meters after the point where the obstacle appeared, compared to approximately 60 meters for the autonomous vehicle. The risk of crashing thus increases significantly for the human driver compared to the autonomous vehicle, since the human driver has a longer reaction time. A reaction time of 1.5 seconds is in the 99.74th percentile of the reaction time data from the simulations with random activation and an initial speed of 100 [km/h].
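The reaction-distance arithmetic above can be checked directly: constant speed over the reaction time, converting km/h to m/s.

```python
def reaction_distance(speed_kmh: float, reaction_time_s: float) -> float:
    """Distance travelled during the reaction time at constant speed [m]."""
    return speed_kmh / 3.6 * reaction_time_s

d1 = reaction_distance(70, 1.0)  # ~19.44 m, as in the text
d2 = reaction_distance(70, 1.5)  # ~29.17 m (the text truncates to 29.16)
```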

Figure 4.7: Histogram of the delay (a), reaction time (b) and detection time (c) from 10^4 simulations of the system with synchronized activation. The mean value of (a) is 0.22 seconds, of (b) 0.77 seconds and of (c) 0.36 seconds. The initial speed is 100 [km/h] and the initial distance is 90 [m].

Figure 4.8: Histogram of the delay (a), reaction time (b) and detection time (c) from 10^4 simulations of the system with event-driven activation. The mean value of (a) is 0.05 seconds, of (b) 0.62 seconds and of (c) 0.35 seconds. The initial speed is 100 [km/h] and the initial distance is 90 [m].

5 Conclusion

This thesis proposed an abstract and general model of a system for autonomous driving, for evaluating its safety in dangerous scenarios. It is crucial that frameworks such as this are developed if safe and dependable autonomous vehicles are to be available to the public. Due to the limited availability of data for validation, it was not possible to fully validate the model and the results from the simulations.

Even though the model does not fully reflect the true behaviour of the system, it is still useful for observing the timing behaviour of the system. As a proof of its usability, some tests were performed on the effect of the activation pattern on the resulting delay in the system.

The results show that the event-driven system is faster than the system with random or synchronized activation. They also show that the start times are a contributing factor to the reaction time of the system, and that a lower reaction time can be achieved by setting the start times of consecutive blocks with longer execution times to have a small positive difference; for a minimum delay, this difference should arguably be as large as the most common network delay in the system.

The detection time is only affected by the initial distance to the obstacle, the initial speed of the vehicle and, naturally, the way the detection is modelled. The simulated tests show that by decreasing the speed of the vehicle from 100 km/h to 70 km/h, the chance of the vehicle colliding with the obstacle appearing 90 meters in front of it decreases from 100% to 0%. The reaction times from the simulations are also compared to the reaction time of a human driver, and this comparison indicates that the autonomous vehicle reacts faster, and thus is safer, than the human driver.

5.1 Evaluation of the method

This thesis provides a valuable abstract framework for testing the safety without having to perform risky and costly tests or build detailed and complex models of the system. The model can indicate whether the vehicle will have time to stop when facing a static obstacle, by showing that the sensor inputs can be considered sufficiently fast for the vehicle to operate in a risky environment. The conducted tests also demonstrate that the model could be used in a pre-development phase for investigating the effect on the timing behaviour of the system from different parameter values or properties, or for setting time constraints on the tasks of the system. In these ways, and more, the model can be adapted to its exact purpose.

By developing the model further, the indication can be more accurate and thus more useful.

The model representing the autonomous system's timing performance is based on high-level and abstract information about the system, collected by interviewing the people who work with these systems. Because of this, not all information is available, nor can all of the system's behaviour be represented in the model. Since the faults and some timing characteristics are modelled as stochastic events, an absolute bound on the reaction time cannot be found. From the validation, it can be concluded that at this high abstraction level the model cannot predict the timing behaviour of the system with high precision. Instead it underestimates the reaction time (which, from a safety perspective, is worse than overestimating it). This is not only because the model is not based on the actual implementation of the system, but also because of the assumptions and simplifications made.

The model does capture the performance variability of the system, which is characteristic of an autonomous system: e.g. the reaction time from the simulations with random activation varies from 0.32 seconds to 1.95 seconds; for synchronized activation from 0.46 seconds to 1.76 seconds; and for event-driven activation from 0.27 seconds to 1.58 seconds. These distributions could be used when evaluating the safety of the system in different driving scenarios; e.g. for a safe boundary on the reaction time, the right tail of the distribution (e.g. the maximal value, or the 90th percentile or higher) could be used. For human drivers, the tail of the perception-reaction time distribution is often considered when designing e.g. highways [21].
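As a sketch of using the right tail as a safety boundary, a simple nearest-rank percentile over simulated reaction times could be computed as below; the sample values are made up for illustration.

```python
import math

def percentile(samples, q):
    """Nearest-rank percentile: the smallest sample with at least q% of
    the data at or below it."""
    xs = sorted(samples)
    rank = max(1, math.ceil(q / 100 * len(xs)))
    return xs[rank - 1]

# Made-up reaction times [s] from ten hypothetical runs:
reaction_times = [0.32, 0.45, 0.51, 0.60, 0.66, 0.73, 0.80, 0.95, 1.20, 1.95]
bound = percentile(reaction_times, 90)  # a candidate safe boundary
```

Using the 90th percentile (or the maximum) rather than the mean makes the boundary conservative, in the same spirit as the tail-based design values used for human drivers.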

5.2 Future work

There are many ways in which the model can be improved to provide more accurate results, and with this first model in place, it is easier to tell what the relevant improvements would be. From the model validation, it is apparent that improving the sensor/detection components of the model would be of importance.

The models of the sensors do not consider how the detection time of an object depends on variables other than the distance and the scan rate, such as the field of view of the sensor and the resolution, reflectivity and shape of the object. Furthermore, because of the simplicity of the investigated scenario, there was no need to consider how the complexity of the environment affects the detection time, e.g. how it takes more time for perception to detect objects in a complex inner-city environment than on a simple highway with light traffic. By allowing for dynamic objects, where feedback and replanning are necessary, the model could be used for further risk analysis.

By including the hardware that the software runs on in the model, its influence on the execution time of the tasks could be considered, as well as potential hardware malfunctions. By adding the risk that some task or component stops working or becomes faulty, the response time of the system could be analysed, that is, the time for the system to detect the fault and act on it. It is common to have a system for detecting faults in the software stack of the autonomous system, and to have a manoeuvre for getting to a safe state if a fault has occurred (by braking and moving towards the side of the road, warning surrounding drivers, etc.).

Another way of improving the accuracy of the model is to use data-driven estimation of the model parameters to fit the model to data, thus improving it iteratively. This requires more data than was used in the model validation in this thesis, but it has been proven to be an efficient way of model-fitting.


References

[1] Arcile, J., Devillers, R., Klaudel, H., Klaudel, W., and Woźna-Szcześniak, B. "Modeling and checking robustness of communicating autonomous vehicles". In: International Symposium on Distributed Computing and Artificial Intelligence. Springer, 2017, pp. 173–180.

[2] Arcile, J., Sobieraj, J., Klaudel, H., and Hutzler, G. "Combination of simulation and model-checking for the analysis of autonomous vehicles' behaviors: A case study". In: Multi-Agent Systems and Agreement Technologies. Springer, 2017, pp. 292–304.

[3] Ashjaei, M., Mubeen, S., Behnam, M., Almeida, L., and Nolte, T. "End-to-end resource reservations in distributed embedded systems". In: 2016 IEEE 22nd International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA). IEEE, 2016, pp. 1–11.

[4] Brugali, D. "Model-driven software engineering in robotics: Models are designed to use the relevant things, thereby reducing the complexity and cost in the field of robotics". In: IEEE Robotics & Automation Magazine 22.3 (2015), pp. 155–166.

[5] California Department of Motor Vehicles, State of California. California Commercial Driver Handbook Section 5: Air Brakes. 2017–2018. URL: https://www.dmv.ca.gov/portal/dmv/?1dmy&urile=wcm:path:/dmv_content_en/dmv/pubs/cdl_htm/sec5 (visited on 01/15/2020).

[6] Crestani, D., Godary-Dejean, K., and Lapierre, L. "Enhancing fault tolerance of autonomous mobile robots". In: Robotics and Autonomous Systems 68 (2015), pp. 140–155.

[7] Davis, R. I. and Cucu-Grosjean, L. "A survey of probabilistic timing analysis techniques for real-time systems". In: Leibniz Transactions on Embedded Systems 6.1 (2019), 04:1–04:53.

[8] Fagnant, D. and Kockelman, K. "Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations". In: Transportation Research Part A: Policy and Practice 77 (2015), pp. 167–181.

[9] Feiertag, N., Richter, K., Nordlander, J., and Jonsson, J. "A compositional framework for end-to-end path delay calculation of automotive systems under different path semantics". In: IEEE Real-Time Systems Symposium, 30/11/2009–03/12/2009. IEEE Communications Society, 2009.

[10] Giere, R. N. "Using Models to Represent Reality". In: Model-Based Reasoning in Scientific Discovery. Ed. by L. Magnani, N. J. Nersessian, and P. Thagard. Boston, MA: Springer US, 1999, pp. 41–57.

[11] Goddard, S., Huang, J., Qadi, A., and Farritor, S. "Modelling computational requirements of mobile robotic systems using zones and processing windows". In: Real-Time Systems 42.1-3 (2009), pp. 1–33.

[12] Green, M. "'How long does it take to stop?' Methodological analysis of driver perception-brake times". In: Transportation Human Factors 2.3 (2000), pp. 195–216.

[13] Guiochet, J., Machin, M., and Waeselynck, H. "Safety-critical advanced robots: A survey". In: Robotics and Autonomous Systems 94 (2017), pp. 43–52.

[14] Guzek, M. "Driver's Reaction Time in the Context of an Accident in Road Traffic". In: International Scientific Conference Transport of the 21st Century. Springer, 2019, pp. 184–193.

[15] Khalastchi, E. and Kalech, M. "On fault detection and diagnosis in robotic systems". In: ACM Computing Surveys (CSUR) 51.1 (2018), p. 9.

[16] Lattner, A. D., Timm, I. J., Lorenz, M., and Herzog, O. "Knowledge-based risk assessment for intelligent vehicles". In: International Conference on Integration of Knowledge Intensive Multi-Agent Systems, 2005. IEEE, 2005, pp. 191–196.

[17] Lerner, N. D. "Brake perception-reaction times of older and younger drivers". In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Vol. 37. 2. SAGE Publications: Los Angeles, CA, 1993, pp. 206–210.

[18] Mubeen, S., Nolte, T., Sjödin, M., Lundbäck, J., and Lundbäck, K.-L. "Supporting timing analysis of vehicular embedded systems through the refinement of timing constraints". In: Software & Systems Modeling 18.1 (2019), pp. 39–69.

[19] Powell, D., Arlat, J., Chu, H. N., Ingrand, F., and Killijian, M.-O. "Testing the input timing robustness of real-time control software for autonomous systems". In: 2012 Ninth European Dependable Computing Conference. IEEE, 2012, pp. 73–83.

[20] Scania CV AB. Autonomous Transport Systems. 2016. URL: https://www.scania.com/group/en/images-autonomous-transport-systems-2016/ (visited on 02/11/2020).

[21] Transportation Research Board, National Research Council. Highway Capacity Manual. Vol. 2. 2000.

[22] Zug, S., Dietrich, A., and Kaiser, J. "Fault-handling in networked sensor systems". In: Fault Diagnosis in Robotic and Industrial Systems (2012).

TRITA-EECS-EX-2020:76
