DEGREE PROJECT IN ELECTRICAL ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2020
Risk analysis of
software execution in an autonomous driving system
Joanna Ekehult
KTH ROYAL INSTITUTE OF TECHNOLOGY
Abstract
Autonomous vehicles have the potential to offer efficient ways of moving and to improve the safety of driving. For this to occur, it must be ensured that autonomous vehicles behave safely and reliably in nearly all situations and under nearly all circumstances. The system that enables autonomy relies on a stack of complex software functionalities, whose response and execution times are hard to predict. It is therefore essential to create effective tools and frameworks for evaluating the performance of the autonomous driving system in a risky scenario. The aim of this thesis is to create and evaluate a framework for analysing the risks of an autonomous driving system. The approach is based on an abstract model of the main components and interactions of the autonomous system. It provides a means of systematically analysing the system's behaviour through simulations, without requiring time-consuming and costly testing or a very detailed and complex model. Specifically, the use of the method for analysing the autonomous vehicle's timing behaviour in a risky scenario is investigated.
The developed framework is used to evaluate the ability of a vehicle to stop before
colliding with a static obstacle. In such a scenario, the model-based approach for
analysing the risks for an autonomous system is feasible and effective and can
provide useful information during the development process.
Sammanfattning (Summary in Swedish)
Autonomous vehicles can potentially offer both safer and more efficient means of transport. For this to be possible, it must be verifiable that the autonomous vehicle behaves reliably and safely in nearly all situations and under nearly all conditions. The system that enables autonomous driving is built on a complex software stack, where the outcome and the execution time are hard to predict. It is therefore essential to develop tools and frameworks that can predict the safety of an autonomous system. The aim of this work is to develop and evaluate a tool for analysing the risks of an autonomous driving system. The approach is based on an abstract model of the main components and interactions of the autonomous system. Through simple simulations, instead of time-consuming and costly tests or very detailed and complex models, the method can provide the user with a way of systematically analysing the system's behaviour. Specifically, it is investigated whether the method can be used to analyse the autonomous system's timing properties in a risky scenario.
The developed framework is used to evaluate whether the vehicle manages to stop before colliding with a static obstacle. In such a scenario, it can be concluded that the model-based approach for analysing the risks of an autonomous system is feasible and effective, and can provide valuable information during the development process.
Acknowledgements
I would like to thank my supervisor at Scania, Patricio Valenzuela, and my supervisor at KTH, Valerio Turri, for their insight and advice throughout this thesis
work. This work was supported by the Vinnova research project IQPilot (project
number 2016-02547).
Contents
1 Introduction
  1.1 Motivation
  1.2 Objective and scope
  1.3 Outline
2 Background
  2.1 Autonomous vehicles
  2.2 Timing analysis
  2.3 Related work
3 Method
  3.1 Case study
  3.2 The autonomous system
  3.3 Sensor timing model
  3.4 Software timing model
  3.5 Actuator timing model
  3.6 Network timing model
  3.7 Activation patterns
  3.8 Delimitations
4 Simulation study
  4.1 Simulation setup
  4.2 Results
  4.3 Comparison with human driver reaction time
5 Conclusion
  5.1 Evaluation of the method
  5.2 Future work
References
1 Introduction
This chapter provides a general introduction to the thesis project. The project is performed at Scania CV AB, Department of Autonomous Transport Systems as a part of a master’s degree project course at KTH Royal Institute of Technology.
Figure 1.1 shows an image of two autonomous Scania trucks. The objective of this project is to develop a method for analysing the timing considerations and safety of an autonomous driving system. The main focus is on investigating the end-to-end reaction time of the system in a risky scenario by abstracting the system's components and interactions into a model of the autonomous driving system.
Figure 1.1: Scania autonomous transport systems (Scania CV AB 2016). ©Scania CV AB (publ), SE-151 87 Södertälje, Sweden [20].
1.1 Motivation
An autonomous vehicle is a computer-controlled intelligent vehicle that interacts with the physical world by responding to sensor inputs in real time. Autonomous vehicles have the potential to substantially affect safety, mobility and the environment by offering more effective and convenient ways of moving and by reducing the possibility of human error [8]. However, an autonomous vehicle is also a safety-critical system, a system in which a failure could lead to a catastrophe. Developers therefore have to ensure that autonomous vehicles behave safely and reliably: they have to guarantee that the vehicles are able to fulfil their assigned goals with high probability and interact with the environment sufficiently fast, despite potential faults and a priori unknown situations.
Predicting the autonomous vehicle's performance is a demanding task, both because the system controlling the vehicle is large and complex, containing millions of lines of code, logic, loops over large data structures and different hardware, and because the risk of facing complex and unseen driving environments is high. It is therefore
essential to create effective tools and frameworks for predicting the performance
and safety of the autonomous driving system. See Figure 1.2 for an overview of a
general system architecture for an autonomous vehicle.
Figure 1.2: System architecture for an autonomous vehicle (sensors such as lidar, radar, camera and GPS feed sensor fusion; environment perception and localization feed planning, decision making and trajectory generation; lateral and longitudinal driving control performs road/path tracking and acts on brake, throttle and steering).
For guaranteeing the safe operation of an autonomous vehicle, critical tasks (e.g. sensing, planning and control) have to meet real-time requirements and make correct decisions, so that the autonomous driving system can detect real-time traffic conditions and react to them fast enough to avoid accidents. When assessing the vehicle's timing behaviour, many factors come into play: the time for perceiving the environment and the time for the recognition, decision, control and actuator processes.
1.2 Objective and scope
The objective of this thesis is to facilitate intuitive modelling and risk analysis concerning the timing behaviour of an autonomous driving system, through abstractions and model-based approaches. Specifically, the aim is to develop a systematic method to analyse the timing considerations for software execution in autonomous driving, and to analyse whether the reaction time from sensor inputs to actuator outputs can be bounded to provide guarantees of both safety and effectiveness. The overall approach is based on introducing timing information and fault probabilities at a high representation level of the system, including the sensors, network and actuators. This approach strives for generality and flexibility, and the purpose of the model is to serve as a framework for studying risks and time-related properties in an autonomous driving system.
1.3 Outline
Chapter 2 provides a description of an autonomous system as well as an introduction to timing analysis and related work. Chapter 3 gives a detailed explanation of the model and the modelling approaches, followed by the delimitations and a model validation approach. In Chapter 4, the experimental results are presented and discussed. Finally, Chapter 5 presents a more general discussion of the thesis work, the conclusions and suggestions for future work.
2 Background
This chapter provides a background to the thesis work. We first present an overall view of the system to be analysed and an introduction to timing analysis. We then present the related work on the subject.
2.1 Autonomous vehicles
An autonomous vehicle must automatically and constantly sense its environment and make safe and correct driving decisions by itself, even in noisy and dynamic environments. The common system for autonomous driving is a complex combination of sensors, perception, decision making, planning, control and actuators. The perception, decision making, planning and control components form a data-driven computing and analysing framework which enables autonomous driving. The autonomous vehicle uses multiple sensor technologies (e.g. cameras, radar and lidar) to create an accurate map of its surroundings under a range of conditions. Other devices and tools that are used by the autonomous vehicle to source information are inertial measurement units (IMUs) for measuring the vehicle's three linear acceleration components and three rotational rate components, GPS, ultrasonic sensors (for short-distance obstacle detection) and possibly v2v (vehicle-to-vehicle) or v2x (vehicle-to-everything) communication. See Figure 2.1 for an illustration of some of these tools.
The perception system recognizes the surrounding driving environment by sensor interpretation (sensor fusion), using data from the sensors to understand traffic scenarios, detect objects and determine the exact location of the vehicle (positioning). The decision-making system makes operating decisions to control the vehicle and avoid collisions, based on knowledge about the environment, e.g. by predicting the trajectories of moving objects to decide if a lane change is necessary. The planning system then decides how to execute these decisions, by finding drivable paths and generating detailed operating motions, while following the routing path from source to destination. The controller generates control signals for physically operating the vehicle to follow the planned paths and trajectories, and sends these instructions to the actuators, which control acceleration, steering and braking.
2.2 Timing analysis
The objective of timing analysis is to help plan, understand, optimize or validate systems with respect to their timing. It often involves attempts to determine or estimate bounds on the execution time of a system to ensure that end-to-end deadlines are met. Timing analysis can be performed by measuring the time of real executions, measuring the time of executions on a simulator, or through analytical methods [7].
Figure 2.1: Illustration of a truck with ultrasonic sensors (blue), GPS and v2v (vehicle-to-vehicle) connection to a car.
Timing analysis often includes finding the worst-case execution time, the maximum data age (which is related to the stability and performance of the system) or the response time (the time from when an event occurs until the control system has produced a response). These properties are affected by different time delays, data flows or hardware configurations in the system. If these delays are not considered, they could cause vehicle instability or uncertainties in the system's behaviour in risky situations. The timing analysis can be performed at different levels, such as the system, code or network level, and in different stages of the development process (early, integration or final phase), according to the purpose of the analysis [7][3].
In this work, an analytical and probabilistic response time analysis is performed, which adds sensor range and accuracy, as well as network latency, to a system-level timing analysis. In order to perform the response time analysis, a system timing model is derived. The timing model consists of a chain of tasks and involves properties concerning timing, requirements, dependencies and data flows.
A task represents a piece of the software; it can be a whole component or a piece of code, and the tasks communicate with each other through communication blocks. A task chain is illustrated in Figure 2.2. The most common timing properties associated with a task include:
• Periodicity: The time interval between the activations of two consecutive task instances is the period of the task; its inverse is the activation rate.
• Execution time: The time it takes for a task to execute.
• Precedence restriction: A task may have to run after another task in order to get the result value from it.
• Jitter: Any time variations in the period, execution time or activation of a task.
• Triggering: A task can be time triggered (when a task is triggered according to a predetermined schedule) or event triggered (when a task is triggered by some other task’s output or a message).
A common notation for an independent task is τ_i = {P_i, T_i}, where P_i is its time period and T_i is its triggering time (the temporal characteristics of the task), and a chain is denoted by a set of tasks Γ = {τ_1, τ_2, τ_3, ...}. A communication block is denoted c_j. See Figure 2.2 for an illustration of a simple task chain consisting of three tasks and two communication blocks connecting the tasks.
Since the tasks can have different time periods, over- and under-sampling may exist, meaning that some data might be over-written by the new data before they are read by the succeeding tasks, while some data might be consumed more than once before new data arrives [9]. In this thesis, only under-sampling is modelled since a message can only be read once. It is also assumed that the execution time of a task never exceeds its time period, so it is enough to only include the time period in the task model.
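The task notation and the read-once message semantics described above can be illustrated with a small sketch. The class and function names are illustrative; the thesis does not specify an implementation:

```python
class Task:
    """Periodic task tau_i = {P_i, T_i}: time period and first triggering time."""
    def __init__(self, period, first_trigger=0.0):
        self.period = period
        self.first_trigger = first_trigger

    def trigger_times(self, horizon):
        """All triggering instants of the task up to `horizon` seconds."""
        times, t = [], self.first_trigger
        while t <= horizon:
            times.append(round(t, 9))
            t += self.period
        return times

def read_once(producer, consumer, horizon):
    """Pair each consumer activation with the newest unread producer output,
    or None if no fresh message exists -- the read-once semantics assumed
    in the thesis (a message can only be read once)."""
    outputs = producer.trigger_times(horizon)
    reads, last = [], None
    for t in consumer.trigger_times(horizon):
        fresh = [o for o in outputs if o <= t]
        if fresh and fresh[-1] != last:
            last = fresh[-1]
            reads.append((t, last))
        else:
            reads.append((t, None))
    return reads
```

With a producer of period 0.1 s and a faster consumer of period 0.05 s, every other consumer activation finds no fresh message, which is exactly why the over-sampling case needs no separate model here.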
Figure 2.2: Illustration of a task chain with communication (three tasks τ_1, τ_2, τ_3 with parameters {P_1, T_1}, {P_2, T_2}, {P_3, T_3}, connected by communication blocks c_1 and c_2).
The timing analysis can also include parameters that are described by random variables. The modelled property in the real system does not have to exhibit random behaviour itself, but the underlying system which determines its behaviour may be unknown and complex, so that it is best modelled by a random variable. A random variable takes one of a set of possible values depending on the outcome of a random process, such as the throw of a die.
2.3 Related work
The focus of most works concerning reliability or risk analysis in autonomous vehicles is on detecting, diagnosing and recovering from faults, so-called fault tolerance, or other similar ways of ensuring dependability. Kalastachi and Kalech give a good overview of which fault detection and diagnosis approaches are best suited for different robotic systems [15]. In [13], the authors review the main issues, research works and challenges in the field of safety-critical robots, linking dependability and robotics concepts.
Other related works include a work by Powell, Arlat, Chu, Ingrand and Killijan [19], who have developed a method and platform for testing the robustness of the safety mechanism of the functional layer (controller) of an autonomous system. In [2] it is suggested to analyse the behaviour of autonomous vehicles through computer simulations and model checking, where the behaviour addressed is which actions are taken by the autonomous vehicle in different traffic scenarios. In [1], a model of communicating autonomous vehicles for validation of an autonomous system is proposed. Work by Lattner, Timm, Lorenz and Herzog [16] proposes a method for knowledge-based generic risk assessment for autonomous vehicles, a way of identifying which environmental situations might endanger the autonomous vehicle and its passengers or other traffic participants. Crestani et al. present an approach that aims to integrate fault tolerance principles into the design of a robot real-time control architecture [6]. These studies raise important aspects of risk analysis for autonomous vehicles, but they focus on interactions with, and detection of, risks in the environment, and do not analyse what the potential risks are for an autonomous system when performing its tasks.
Timing analysis in itself is a thoroughly researched area which mostly deals with scheduling for real-time systems (systems whose correctness depends on the time instant at which a result is produced). Scheduling is the problem of assigning tasks to limited resources in order to meet deadlines. For automotive systems, the focus is on schedulability analysis or timing analysis of embedded systems, and the methods used are static, measurement-based or hybrid analysis techniques [7]. Most of the available analysis methods are adapted for small and simple systems at the implementation level, including contributions from memory, caches, pipelines, branches and code structure.
Fewer studies focus on larger complex systems, and in those works the focus is on performing timing analysis at early stages of the development, as in model-based design [4, 7]. Model-based design allows for modelling more complicated system behaviour, and the technological advances made in the area of model-based simulation and analysis tools allow for good results, ensuring correct timing of the software and providing an understanding of the behaviour of the system before the whole system exists. Several works focus on these development methodologies for autonomous software systems and include studying the timing behaviour for applying real-time scheduling theory to an autonomous system [3, 9, 11, 18]. The limitations of these approaches are that they are typically only applied in cases where there is minimal variation in the timing performance of each task, that the focus is on pre-run-time analysis and scheduling, and that they do not include the behaviour of chains of programs with network-induced delays, sensors and actuators.
The existing works on risk analysis of autonomous systems thus focus either on modelling the fault tolerance and fault probabilities of the system, or on the timing and scheduling of the software. To the best of the author's knowledge, there are no works where an abstract model of the software stack is used as a framework for risk assessment of an autonomous system.
3 Method
This chapter presents the framework for the model of the system and the modelling approaches for the functional components, sensors and network. The investigated scenario, as well as the delimitations of the thesis and a section on model validation, are also included.
3.1 Case study
The analysed scenario consists of an autonomous vehicle driving at a constant speed v on a straight and flat road, when an obstacle appears directly in front of the vehicle at a distance d. When the autonomous vehicle has sensed this obstacle and decided to act, its collision avoidance maneuver is assumed to be hard braking.
See Figure 3.1 for a sketch of the initial scenario.
Figure 3.1: Sketch of the investigated initial scenario (the vehicle travelling at speed v towards an obstacle at distance d).
3.2 The autonomous system
The main properties and characteristics of the system are modelled as a task chain of interacting components. A task represents a sequential execution of a piece of code and abstracts a functionality of some software or hardware. Based on this model of the system, and by performing an end-to-end response-time and delay analysis, the true behaviour of the system can be predicted. The system consists of four tasks that represent the software stack of the autonomous driving system (perception, situational awareness, motion planning and control), as well as tasks representing the neural network (for image classification) and the saturation (for saturating the controller output).
The input stimulus for the software stack is a sequence of messages from the sensors, and the output from the saturation task goes to a model of an actuator for braking. All these tasks communicate with each other through a model of the network in successive order. Figure 3.2 shows the execution order of the system in the model. There is a precedence restriction: tasks depend on the output from the preceding task as their input, and the last task in the chain sends the results to the output ports, so there is no feedback. The calculations necessary for a real autonomous vehicle are not included in the model; the model only includes properties with a significant impact on the timing behaviour of the system.
Figure 3.2: The succession order of the autonomous system in the model (radar, lidar, and camera with neural network (NN), followed by perception, situational awareness, motion planning, controller and actuators on the onboard platform).
3.3 Sensor timing model
A sensor typically consists of a physical sensor, a computational component and a network interface. The sensors considered in this model are those used by the vehicle for perceiving the environment: camera, radar and lidar. The camera takes photographs of the environment surrounding the vehicle by letting light fall on a light-sensitive surface, while the radar transmits radio waves and determines properties of surrounding objects by measuring the returning signal. The lidar works like the radar but transmits light instead of radio waves. For autonomous vehicles, the radar is mostly used to measure the velocity of and distance to surrounding objects, while the lidar is used for creating a map of the surrounding environment.
The frame rates of all the sensors are fixed. For a radar sensor, a frame corresponds to the data acquired for one full scan, delivered as one package at completion. The lidar sensor instead processes a small portion of the scan at a time and sends that data as one frame; it does not wait for the full scan to be completed. However, in this work the model of the lidar assumes that the lidar works like the radar, sending one frame of the whole scan at the end of a revolution. This is a reasonable simplification, since in this scenario the only object of interest is assumed static, so the relevant data can be assumed to be contained in only one of the data packets per revolution. The frame rate of the lidar is set to 10 Hz, of the camera to 20 Hz and of the radar to 25 Hz.
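These fixed frame rates can be turned into frame completion times with a few lines. This is a sketch for illustration; the thesis' simulator is not reproduced here:

```python
def frame_times(rate_hz, horizon_s):
    """Completion times of the frames of a fixed-rate sensor up to horizon_s.
    Frame rates from the thesis: lidar 10 Hz, camera 20 Hz, radar 25 Hz."""
    period = 1.0 / rate_hz
    n = round(horizon_s * rate_hz)  # number of completed frames in the horizon
    return [round((k + 1) * period, 9) for k in range(n)]

# e.g. one lidar revolution completes every 0.1 s:
lidar_frames = frame_times(10, 0.5)
```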
3.3.1 Frame acquisition time
The time for the sensors to acquire a frame (i.e. to retrieve data and process it) can
vary since the signals interact with a dynamic environment [22]. For the models of
the camera and radar, the variation in acquisition time is modelled as a uniformly
distributed stochastic variable. The distribution is uniform since it is assumed that
all values are equally likely to be observed and that the values are bounded, since
after a maximum time a new scan starts. However, the lidar is assumed to have a
constant time for acquiring the data. This assumption is motivated by considering
that the lidar sends out signals which travel at the speed of light, i.e. at an order of magnitude of approximately 10^8 m/s, while the distance that the signals travel is of an order of magnitude of approximately 10^2 m. Since the signals travel a distance
Figure 3.3: Illustration of how the detection probability depends on the distance to the object.
d, during the time t_trip, to an object and then back the same distance, one can set up the following relationship, also known from the time-of-flight principle:

2d = v · 2t_trip.

Inserting the orders of magnitude of the distance and velocity into this equation gives

2 · 10^2 m = 10^8 m/s · 2t_trip,

from which it can be noted that the time the signals travel, t_trip, is of the order of microseconds. A variation in the acquisition time would then be small, so the acquisition time of the lidar is assumed to be equal to its time period, with no variation.
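The order-of-magnitude argument can be checked numerically. This is a back-of-the-envelope sketch; the speed value is the usual 3·10^8 m/s approximation of the speed of light:

```python
def lidar_trip_time(distance_m, speed_mps=3.0e8):
    """One-way trip time from 2d = v * 2*t_trip, i.e. t_trip = d / v."""
    return distance_m / speed_mps

# A target at ~100 m gives a trip time of roughly a third of a microsecond,
# which justifies modelling the lidar acquisition time as constant.
t_trip = lidar_trip_time(100.0)
```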
3.3.2 Detection ranges
The lidar and radar sensors have limited detection ranges. Within these ranges, it is assumed that the probability of the object being detected increases as the distance to the object decreases, but when the object is very close to the vehicle, within a minimum range, the probability of it being detected decreases again, due to the field of view of the sensors. The detection probability is modelled as a piecewise linear function of the distance. See Figure 3.3 for an illustration of how the probability of an object being detected depends on the distance to the object for the lidar and radar.
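One possible piecewise-linear shape matching this description is sketched below; the breakpoints and the peak value are illustrative assumptions, not parameters taken from the thesis:

```python
def detection_probability(d, d_min, d_max, p_peak=1.0):
    """Piecewise-linear detection probability versus distance d for a lidar or
    radar: zero beyond the maximum range d_max, rising as the object gets
    closer, peaking at the minimum range d_min, and dropping towards zero
    inside d_min because the object leaves the sensor's field of view."""
    if d <= 0.0 or d >= d_max:
        return 0.0
    if d >= d_min:
        # closer object -> higher probability of detection
        return p_peak * (d_max - d) / (d_max - d_min)
    # inside the minimum range the probability drops again
    return p_peak * d / d_min
```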
The camera, however, is not limited by a detection range in the same way; instead, the detection of an object depends on the size of the obstacle in pixels. It is assumed that, when the obstacle appears, it is close enough to the vehicle, and large enough, to occupy sufficient pixels in the image to be detected by the neural network at all distances considered in this thesis. The camera module consists of two parts, representing the total time from an image being captured until its pixels are classified: one module represents the image capturing, run at the frame rate of the camera, and one module represents the neural network used to detect objects in the image.
3.4 Software timing model
The model of the software stack consists of four tasks, each responsible for completing a time-dependent behaviour, representing the perception, situational awareness, motion planning and control components respectively. The tasks in the system can either be time-triggered, activated according to what time it is, or event-triggered, activated when a specific event occurs. The execution time, a measure of how long a task executes, is assumed to be smaller than or equal to the period of the task, so it does not contribute any delay.
3.4.1 Execution times
The perception, situational awareness and neural network tasks have variable execution times, since they have non-deterministic properties that stem from the uncertainties in sensor data. The perception task deals with these uncertainties by requiring coherent data from at least two sensors within a time span before it confirms that there is an obstacle in the environment, to avoid the vehicle acting on false positives from the sensors. This adds a delay to the reaction time of the system. There is also a small risk that the system does not associate detections with the obstacle, due to e.g. noise in the data, which is included in the model as a probability of not detecting the obstacle.
When the object is confirmed, the prior delay and the prior reaction time are chosen as the largest ones, respectively, among the data messages that are used to confirm the detection. For finding the delay due to an input message not arriving at the start of a new execution cycle of the task (the buffering time), the oldest time stamp of the messages is used, ensuring that the reaction time is not underestimated. In this thesis, the perception task has a time period of 10 milliseconds, and the situational awareness task has a time period of 100 milliseconds. The execution time of the neural network is the time it takes to classify an image; this time varies strongly with the data, and therefore the model of the neural network has a varying time period which is equal to its execution time. The execution time is modelled as a stochastic variable drawn from a uniform distribution between 40 and 80 milliseconds.
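The two-sensor confirmation rule and the use of the oldest time stamp can be sketched as follows; the data layout is an assumption for illustration, not the thesis implementation:

```python
def confirm_obstacle(detections, now, span):
    """Confirm an obstacle when at least two distinct sensors have reported it
    within `span` seconds before `now`; the buffering delay is taken from the
    oldest confirming time stamp so the reaction time is not underestimated.
    `detections` is a list of (sensor_name, timestamp) pairs."""
    recent = [(s, t) for s, t in detections if now - span <= t <= now]
    if len({s for s, _ in recent}) < 2:
        return False, None  # not yet confirmed: fewer than two sensors agree
    oldest = min(t for _, t in recent)
    return True, now - oldest  # (confirmed, buffering delay contribution)
```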
The planning and control tasks, however, are considered deterministic. The motion planning task can change period dynamically in response to system events; e.g. when the module is planning how to execute a path, it has a shorter cycle than when it is waiting for a new goal. These two time periods are set to 100 milliseconds and 1 second respectively. Thus, when the perception task confirms that an obstacle has been detected, the planning module starts running at a higher frequency. The controller and saturation tasks are deterministic and have one time period each; the time period of both is set to 20 milliseconds.
Figure 3.4: Model of the braking process (t_r: braking system reaction time; t_rs: braking system rise time; a_full: full braking deceleration).
3.5 Actuator timing model
An actuator converts an electrical signal from the control unit into an action. The obstacle avoidance maneuver (action) in this project is hard braking with full effectiveness. There is a delay between the controller output and the vehicle braking. The braking system is assumed to be a normal air brake system, common on heavy-duty vehicles, and event-triggered. The delay from the braking system, a brake lag, which arises from the time it takes to fill the cylinders with air, is included in the reaction time of the system and is modelled as two components: one delay representing the time before the brakes are applied to the wheels, and one delay representing the time from when the brakes touch the wheels until full braking [5]. The former delay is modelled as a constant, whereas the latter is modelled as a linear function; see Figure 3.4 for an illustration of the braking process.
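The two-component brake lag described above can be written as a deceleration profile. This is a sketch of the model in Figure 3.4; the parameter values used in the thesis are not reproduced here:

```python
def brake_deceleration(t, t_r, t_rs, a_full):
    """Deceleration at time t after the brake signal: zero during the braking
    system reaction time t_r, rising linearly during the rise time t_rs, then
    constant at the full braking deceleration a_full (a_full < 0)."""
    if t < t_r:
        return 0.0
    if t < t_r + t_rs:
        return a_full * (t - t_r) / t_rs
    return a_full
```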
For modelling the velocity of the vehicle v and the distance ∆x travelled during braking, standard kinematic formulas are used:
v = v_0 + a · T,

Δx = v_0 · T + (1/2) · a · T²,

where v_0 is the initial velocity, a is the acceleration, and T is the time step.
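Combining the kinematic formulas with the brake-lag profile gives a simple step-wise stopping-distance calculation. This is a sketch under the thesis' straight, flat-road assumption; the time step and parameter values are illustrative:

```python
def stopping_distance(v0, dt, t_r, t_rs, a_full):
    """Distance travelled from the brake signal until standstill, integrating
    v = v0 + a*T and dx = v0*T + a*T^2/2 over steps of length dt with the
    brake-lag deceleration profile (zero, then linear rise, then a_full < 0)."""
    v, x, t = v0, 0.0, 0.0
    while v > 0.0:
        if t < t_r:
            a = 0.0
        elif t < t_r + t_rs:
            a = a_full * (t - t_r) / t_rs
        else:
            a = a_full
        # shorten the last step so the vehicle stops exactly at v = 0
        T = min(dt, -v / a) if a < 0.0 else dt
        x += v * T + 0.5 * a * T * T
        v = max(0.0, v + a * T)
        t += T
    return x
```

With no lag, the result reduces to the analytic v_0²/(2|a|); the reaction and rise times add roughly v_0 · (t_r + t_rs/2) metres to the stopping distance.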
3.6 Network timing model
The message passing through the system is modelled as asynchronous and event-triggered blocks, placed between communicating tasks. The input to a communication block is the output from a task, and the block imitates the timing behaviour of the network when passing the message on to the receiving task. Each network block only handles the communication from one task to the succeeding task, and the message passing is one-directional, see Figure 2.2. Each message has a priority according to the task it was generated from. The network transmission time, called the network latency, for each message is estimated based on the total load on the network in the system and the priority of the message. The load is calculated as the total number of messages being sent within a time period, where the time period in the simulation is chosen to maximize the variation in the load and is found by testing different time periods. The probability of a transmission error, causing packet loss, also depends on the current load on the network and on the priority of the message, but not on previous errors; the process is thus independent. If a transmission error occurs, the network will try to resend the message at the next time instant. A message with high priority has a shorter latency and a lower risk of packet loss than a message with lower priority.
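A sketch of such a network block is given below. The latency and loss formulas and all constants are illustrative assumptions, chosen only to reproduce the qualitative behaviour described above (latency and loss grow with load and shrink with priority):

```python
import random

def transmit(load, priority, base_latency=0.001, rng=random):
    """One message passing through a network block: latency grows with the
    load (messages per window) and shrinks with the priority; packet loss is
    more likely under high load and low priority, and a lost message is
    resent at the next time instant. Returns (latency, number of attempts)."""
    latency = base_latency * (1.0 + load) / priority
    p_loss = min(0.5, 0.01 * load / priority)
    attempts = 1
    while rng.random() < p_loss:
        attempts += 1  # independent retry: loss does not depend on history
    return latency, attempts
```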
The communication from the controller to the actuator for braking happens over CAN (Controller Area Network), which is not a part of the modelled network. The signal from the controller to the brakes is assumed to have a negligible delay, since the signals for braking or steering have a high priority and the activation of the actuator is event-triggered. The maximum latency from the network is small relative to other delays in the system, and it is the way the system reacts to any network latency which is of interest. Thus, a more accurate network timing model has not been considered in this thesis.
3.7 Activation patterns
A task can be triggered either according to time or by an event, and the task triggering pattern is significant to the performance of the system.
3.7.1 Time-triggered activation
When tasks are triggered periodically, the first triggering of a task in the simulation, its activation (which the future triggerings follow periodically), does not have to be synchronized with the other tasks' first triggerings. With no other delays, the best-case triggering (resulting in a minimal delay) is when all the tasks are synchronized so that each is triggered at the same time instant as when the preceding task finishes its execution. For adjacent tasks with time periods that are not multiples of each other, the delay is not necessarily zero. Figure 3.5 and Figure 3.6 illustrate the difference between synchronous and asynchronous triggering.
The worst-case scenario (resulting in a maximal delay due to asynchronous triggering) is when, for the whole task chain, each succeeding task is triggered just before the preceding one. The combination of first triggerings (activations) is called the system's activation pattern. Since the system model has non-negligible latency in the case of medium or high load, since the perception task receives input from three sensors with different time periods and stochastic behaviour,
Figure 3.5: Synchronized triggering.

Figure 3.6: Asynchronous triggering.
and since the neural network task always has a stochastic time period, it is not obvious what the worst- and best-case activation patterns are. By modelling the activation time of each task as a stochastic variable from e.g. a uniform distribution over the time interval [0, P_i], the aim is to deduce the effect of a given activation pattern.
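Drawing such a random activation pattern is a one-liner. This is a sketch; the task periods in the example are placeholders:

```python
import random

def random_activation_pattern(periods, rng=random):
    """First triggering time for each task, drawn uniformly in [0, P_i]."""
    return [rng.uniform(0.0, p) for p in periods]
```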
Considering the execution order of the tasks (see Figure 3.2), a negative difference in the starting times between two tasks means that the succeeding task starts to execute before the preceding task. Hence, a small negative difference means that the message will be buffered for a long time, while a large negative difference means that the message will not be buffered for long. A small positive difference means that the succeeding task executes shortly after the preceding task, giving a short buffering time, and a large positive difference means that the succeeding task executes a longer time after the preceding task, giving a longer buffering time. This effect is illustrated in Figure 3.7. The time the message is buffered adds to the delay of the system. How large the contribution to the total delay is depends on the time periods of the tasks: the largest possible negative difference in starting times is equal to the preceding task's time period, and the largest possible positive difference in starting times is equal to the succeeding task's time period. A higher network latency is expected in the case of synchronized activation, due to the higher load in the system when the message passing happens simultaneously.
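The mapping from starting-time differences to buffering delay can be made concrete. This sketch assumes the message is produced at the preceding task's activation and is read at the succeeding task's next activation:

```python
import math

def buffering_delay(t_prev, t_next, period_next):
    """Time a message produced at t_prev waits until the first activation of
    the succeeding task at or after t_prev, where the succeeding task is
    triggered at t_next + k * period_next for k = 0, 1, 2, ..."""
    k = max(0, math.ceil((t_prev - t_next) / period_next))
    return t_next + k * period_next - t_prev
```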
Figure 3.7: Illustration of the buffering delay between two tasks for low and high, negative and positive, differences in starting times.