
DEGREE PROJECT IN MECHANICAL ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2019

Driver-Monitoring-Camera Based Threat Awareness for Collision Avoidance

SIQI GANG


Master of Science Thesis TRITA-ITM-EX 2019:605

Driver-Monitoring-Camera baserad Hotmedvetenhet för att Undvika Kollision

Siqi Gang

Approved: 2019-09-27
Examiner: Martin Edin Grimheden
Supervisor: Dejiu Chen
Commissioner: Zenuity AB
Contact person: Enrico Lovisari

Sammanfattning (Swedish abstract)

Forward collision is one of the most common and most dangerous types of traffic accidents. Many studies and investigations have been carried out to develop collision avoidance systems. To facilitate the trade-off between comfort and safety in forward collision avoidance, the driver's state must be monitored and estimated. Such support is necessary for the Forward Collision Warning (FCW) system, which involves interaction with humans. The demand for camera-based driver state estimation has increased due to advances in Driver Monitoring Systems (DMS).

This thesis project was carried out at Zenuity AB and investigated a method for estimating the driver's awareness based on a Driver Monitoring System. The awareness estimate is expected to help adapt the FCW system, basing it on visual attention when the vehicle ahead brakes unpredictably.

The thesis consists of three tasks: gaze estimation, Gaze-to-Object Mapping (GTOM), and awareness estimation. A combined Kalman filter was developed in gaze estimation to compensate for missing data and outliers and to reduce the difference to the "ground truth" data. The uncertainty matrix from the gaze estimation was used to extract a gaze-to-object probability signal in GTOM; the corresponding fixation duration was also obtained in GTOM. The two newly extracted features were used in the awareness analysis with two methods: logistic regression and a two-Hidden Markov Model. The comparison between the two methods reveals whether a complex method is preferable or not.

The results of this thesis show that logistic regression performs better in driver state estimation, with 92% accuracy and a 76.3% True Negative rate. Further research on and improvement of the two-Hidden Markov Model are needed to draw a more comprehensive conclusion. The main contribution of the thesis is the exploration of an end-to-end method for estimating the driver's awareness, thereby identifying challenges for future studies.

Keywords: Driver Monitoring Camera, Gaze Estimation, Awareness Estimation


Master of Science Thesis TRITA-ITM-EX 2019:605

Driver-Monitoring-Camera Based Threat Awareness for Collision Avoidance

Siqi Gang

Approved: 2019-09-27
Examiner: Martin Edin Grimheden
Supervisor: Dejiu Chen
Commissioner: Zenuity AB
Contact person: Enrico Lovisari

Abstract

Since forward collision is one of the most common and dangerous types of traffic accidents, much research has been conducted to develop forward collision avoidance systems.

To facilitate the tradeoff between comfort and safety in forward collision avoidance, the driver's state needs to be monitored and estimated. Such support is necessary for the Forward Collision Warning (FCW) system given human-involved control. Due to advances in Driver Monitoring Systems (DMS), the demand for camera-based driver state estimation has increased.

This master thesis project, conducted at Zenuity AB, investigates a method to estimate the driver's awareness based on DMS. The estimation of the driver's awareness is expected to help adapt the FCW system based on visual attention when facing unpredictable braking of the leading vehicle.

The project consists of three tasks: gaze estimation, Gaze-to-Object Mapping (GTOM), and awareness estimation. A combined Kalman Filter was developed in gaze estimation to compensate for missing data and outliers and to reduce the difference to the "ground truth" data.

The uncertainty matrix from gaze estimation was used to extract a gaze-to-object probability signal in GTOM, and the corresponding fixation duration was also obtained in GTOM. The two newly extracted features were used in awareness estimation with two methods: Logistic Regression and a two-Hidden Markov Model. The comparison between the two methods reveals whether a complex method is preferable or not.

Based on the results of this project, Logistic Regression performs better in driver state estimation, with 92.0% accuracy and a 76.3% True Negative rate. However, further research and improvements on the two-Hidden Markov Model are needed to reach a more comprehensive conclusion.

The main contribution of this project is an investigation of an end-to-end method for driver's awareness estimation and thereby an identification of challenges for further studies.

Keywords: Driver Monitoring Camera, Gaze Estimation, Awareness Estimation


Acknowledgements

Foremost, I would like to express my gratitude to my supervisors: Enrico Lovisari, for offering me this project and for his patience, motivation, enthusiasm, and immense knowledge, and Dejiu Chen from KTH, for guidance in methodology and for supervising me throughout the whole project.

Also, I would like to thank Nils Andrén, who cooperated with me on this project. The research questions of Nils Andrén focus on the second task as well as the part of the third task studying the performance of different features with Logistic Regression.

My sincere thanks also go to my manager, Lars Boström, for the individual meetings and his professional advice on project management. I am also grateful to the members of the Driver State Estimation group, especially Claes Olsson, Ebba Wilhelmsson, Henrik Eriksson, and Sebastian Gonzalez Pintor, who fully supported me with data gathering, configuration, testing, and technological guidance throughout the project. I also thank Gabriel Rodrigues de Campos for attending the weekly supervision meetings and for guidance in problem solving.

Last but not least, I would like to thank my examiner, Martin Edin Grimheden.


Contents

1 Introduction
  1.1 Background
  1.2 Purpose
  1.3 Problem Description
  1.4 Contribution
  1.5 Research Questions
  1.6 Limitation
  1.7 Methodology
  1.8 Ethical Consideration

2 Frame-of-reference
  2.1 Gaze Estimation
    2.1.1 Gaze-Head Movement
    2.1.2 Gaze Behavior Extraction
    2.1.3 Kalman Filter
  2.2 Gaze-to-Object Mapping
  2.3 Awareness Model
    2.3.1 Logistic Regression
    2.3.2 Hidden Markov Model
  2.4 Alternative

3 Implementation
  3.1 Data Resource
  3.2 Validation of Reference Data
  3.3 Gaze Estimation
    3.3.1 Structure
    3.3.2 Data
    3.3.3 Initialization
    3.3.4 Outlier Classification
    3.3.5 Fixation Identification
    3.3.6 Kalman Filter
  3.4 Gaze-to-Object Mapping
    3.4.1 Data
    3.4.2 Spatial Mapping
  3.5 Awareness Model
    3.5.1 Case Selection
    3.5.2 Feature Extraction
    3.5.3 Logistic Regression
    3.5.4 Hidden Markov Model

4 Results
  4.1 Gaze Estimation
    4.1.1 Estimation for Recorded Data
    4.1.2 Estimation for FOT Data
  4.2 Awareness Estimation
    4.2.1 Result of Logistic Regression
    4.2.2 Result of Hidden Markov Model

5 Conclusion
  5.1 Gaze Estimation
  5.2 Awareness Estimation
    5.2.1 Conclusion for Logistic Regression
    5.2.2 Conclusion for Hidden Markov Model

6 Recommendation

Bibliography

Appendices
  A A and Q matrices in gaze estimation

List of Figures

2.1 Affine correlation between eye position and head position (in camera coordinate system).
2.2 An example of a Hidden Markov Model with 2 hidden states $s_1, s_2$, observations $o_t \in \{v_1, v_2, \ldots, v_M\}$, and corresponding hidden states $q_t \in \{s_1, s_2\}$ [30].
2.3 An example of the Forward Algorithm for HMM.
3.1 Architecture of gaze estimation implementation.
3.2 Eye-head model with dynamical vector in camera coordinate system.
3.3 A hysteresis loop for fixation and saccade.
3.4 Gaze estimations with identity matrix, L1-norm, and LS methods.
3.5 Seven features for awareness estimation.
3.6 Training accuracy of Logistic Regression with different thresholds.
3.7 Structure of the implemented 2-HMM method.
3.8 Training accuracy of 2-HMM with different thresholds.
4.1 Histograms of testing gaze direction errors in yaw and pitch.
4.2 Histograms of testing eye position errors in x, y, z.
4.3 Elapsed time for the implemented Kalman Filter.
4.4 An example of probability and output of Logistic Regression for each frame compared with the target awareness signal.
4.5 a, b, c and d based on various frame thresholds for Logistic Regression.
4.6 An example of output of 2-HMM for each frame compared with the target awareness signal.
4.7 a, b, c and d based on various frame thresholds for 2-HMM.
4.8 Comparison of a, b, c and d between Logistic Regression and 2-HMM.

List of Tables

3.1 Variables.
3.2 Positions of ten dots projected on the plane in vehicle coordinate system.
3.3 Statistics of reference positions and production positions on the projected plane.
3.4 Training Data.
3.5 Outlier Classification.
3.6 Weight vector β for Logistic Regression.
4.1 Confusion matrix for output of Logistic Regression.
4.2 Terminology and derivations from the Logistic Regression confusion matrix.
4.3 An example of the numbers of classified cases based on frames of negative output and the reference FCW signal for Logistic Regression.
4.4 Confusion matrix for output of 2-HMM.
4.5 Terminology and derivations from the 2-HMM confusion matrix.
4.6 An example of the numbers of classified cases based on frames of negative output and the reference FCW signal for 2-HMM.
A.1 A and Q matrices.


Chapter 1

Introduction

This chapter provides the background of this project, the purpose of this report, the specific problem statement, the limitation of the project, and the chosen methodology.

1.1 Background

In recent years, autonomous driving has been proposed to improve traffic safety. The National Highway Traffic Safety Administration (NHTSA) adopted the six Society of Automotive Engineers (SAE) levels of driving automation[1]. Level 5 (fully automated) vehicles are supposed to handle all events in any environment[1]. However, due to current technological limitations, the complexity of scenario classification, and ethical considerations, Level 5 is still in development and testing. At Level 2 and Level 3, many effective strategies, such as Lane-Keep-Assist (LKA), are already implemented on existing vehicles.

The Forward Collision Warning (FCW) system with external sensors, such as lidar and radar, has recently become one of the most important vehicle systems. According to a report of the National Transportation Safety Board (NTSB), almost 50% of two-vehicle crashes from 2012 to 2014 were rear-end collisions[2]. The need for an efficient forward collision avoidance system has grown rapidly in recent years. In addition to brake-capacity forward collision warning (B-FCW) systems (mostly in trucks), full forward collision warning (FCW) systems for passenger cars are also available on the market[3]. The goal of B-FCW is to give a warning when the braking capacity does not meet the operating conditions in the Adaptive Cruise Control (ACC) system, while a full FCW aims to react to the severe conditions, namely crashes[3].

The inclusion of human state estimation in vehicle control is beneficial to safety control. Human-involved control requires estimation of the driver's mental activities, such as attention to an object and awareness of an event. According to studies on naturalistic driving, 75% of drivers are non-attentive to the front object before a forward collision threat occurs[4][5][6][3]. In addition, attention is also necessary for unpredictable events that could cause a rear-end collision[3]. The most straightforward way to detect attention is based on facial information. Moreover, different drivers have different braking preferences. Some drivers prefer to brake smoothly and early, which has a low probability of triggering a warning system, while others prefer to brake hard and late, which could result in warnings in circumstances where the driver is aware of the situation and reacts with a late brake. Compared with vehicle dynamics and drivers' actions, physiological data can provide a more direct indication of mental activities. Therefore, drivers' facial information needs to be included in driving systems.

Among facial information, according to [7], gaze data can provide a straightforward indication of the driver's mental activities. [8] and [9] proposed that facial data, especially gaze data, reveal drivers' drowsiness, which contributes to 40.5% of all truck road departure accidents[10]. Therefore, studies on driver state recognition based on gaze data are expected to be of fundamental importance for analyzing human-vehicle interaction.

The commonly used system for collecting facial information is the Driver Monitoring System (DMS). Given the advancements of DMS, several significant features for analyzing the driver's behavior, such as gaze and head pose, can be extracted in real time. Therefore, a camera-based threat awareness estimation system could be developed as an addition to FCW systems, in order to balance the tradeoff between driving safety and comfort discussed above.

1.2 Purpose

Based on the available support of DMS, this project aims to develop a camera-based driver awareness estimation method for the adaptation of a forward collision avoidance algorithm.

The scenario considered in this project is defined as driving behind a vehicle that is reducing its longitudinal speed by braking or steering. The unpredictable braking of the front vehicle can be treated as a threat measured by means of time-to-collision. A threshold was set on the longitudinal time-to-collision: when the threshold condition is triggered, the front vehicle is regarded as a threat.

The awareness estimation method shall provide an estimation of the driver’s state for each frame when facing the threat. The available inputs to this method are DMS data and dynamics of the ego vehicle and the front vehicle.


1.3 Problem Description

Based on the purpose above, the available data sets and variables are described as follows.

Two data sets have been provided by the company for this project: 1. a recorded data set, and 2. a Field Operational Test (FOT) data set. Extensive DMS data, such as gaze and head pose, and the dynamics of objects and ego vehicles are available in both data sets. The recorded data was collected by means of both a reference camera system, which is regarded as the ground truth system, and a production camera system, while the FOT data was collected only by means of the production camera system.

Because of the different locations and calibrations of the two camera systems, the collected gaze and head data differ. The methods in this project are expected to be implemented on FOT data. The goal of the first task is to investigate the possibility of developing an estimator that minimizes the difference between production and 'ground truth' reference data.

Based on the estimated gaze data, specific features need to be constructed as input to the final awareness estimation model. Consequently, the second task is to associate gaze with environmental objects to extract features revealing the driver's attention. This process will be called Gaze-to-Object Mapping (GTOM) in this report.

With the extracted features, awareness estimation models were developed based on probabilistic methods to provide an instantaneous estimate for each frame.

1.4 Contribution

The main contributions of the author are the development of a discrete Kalman Filter, feature extraction, and the awareness models. The study in this thesis focuses on gaze estimation and awareness estimation.

1.5 Research Questions

The project developed a gaze estimator and two awareness models and provided the necessary performance analysis. For the first task, the goal is to minimize the difference between production data and reference data. In addition, gaze movement is limited by constraints, such as the maximum velocity, the correlation between gaze and head movements, and the boundaries of gaze movements. Based on these gaze constraints, the performance of gaze estimation can be measured by the reduction of the differences in gaze direction and eye position. Therefore, the first research question is as follows:


• Based on bio-mechanical constraints (or physical constraints on head-gaze movements), such as the limited binocular field of view[11] with a predefined head direction and the maximum eye movement speed[12], how much can the performance of a production camera system (affected by noise and uncertainties) be improved using a model, compared to a reference camera system providing ground truth?

Since the goal of the project is to estimate the driver's awareness, the performance of the third task, awareness estimation, is studied. As described in Section 1.3, the expected output from the awareness model is an instantaneous estimate. For the evaluation, the computed awareness signal can be compared with a reference signal, for example, the reference FCW; the comparison reveals the sensitivity of the awareness estimation. In addition, spatial features and temporal features can be extracted, and two estimation methods can be developed to study the dynamics of the spatial features and the performance of the temporal features regarding the driver's state. Therefore, the second research question is defined as follows:

• Considering the threat of collision when driving behind a vehicle whose velocity decreases, which awareness metrics and awareness model (simple model or complex model) can provide a better estimation of driver awareness of the forward vehicle, based on the number of provided "unawareness" signals before "emergency braking"?

In this project, the two awareness models are Logistic Regression and a two-Hidden Markov Model. "Emergency braking" in this project means the triggering of a reference FCW system.

1.6 Limitation

Data: The available resources for the project consist of videos and data with different sampling frequencies. Due to the extraction and recording methods, the videos and data are not 100% synchronized, and, based on the timestamps, sequences of frames are missing in some cases. Because of time limitations and because it was not in the scope of the project, improvements to synchronization were not included. The models were developed and tested on 27 cases extracted from more than one thousand FOT cases; considering the available FOT data set, the number of extracted cases was limited. Besides, due to the constraints on the available variables, a real "ground truth" of awareness cannot be obtained. The annotation of awareness was conducted by three persons based on manual judgment for each frame of the DMS video. The annotation could be improved by means of physiological features, such as heart rate and perinasal perspiration[13]. Moreover, the classification of the extracted cases could in the future be carried out by experts tagging each case as "aware" or "unaware" given the DMS video, the driver's reaction, and the reference FCW system.


Prototype: The developed model is a prototype of a driver awareness estimation method; it was not implemented in a real development vehicle for Software-in-the-Loop (SIL) and Hardware-in-the-Loop (HIL) testing due to time limitations and safety concerns.

Intellectual Property Boundaries: Because of intellectual property boundaries, quantitative references to directly collected DMS and FOT data are not shown in this report.

1.7 Methodology

The project consists of three tasks, as mentioned in Section 1.3: gaze estimation, Gaze-to-Object Mapping, and awareness estimation. This section explains the methodology applied in this project.

First, a literature study was conducted to construct the structure of the entire model and define the goals of the three tasks, followed by analyses of the available data. The analyses indicated the challenges of each task, which helped separate the tasks into sub-tasks. Detailed information about the sub-tasks is given in the following chapters.

In this project, DMS data, Host-log data, and a reference FCW are available; the data resources are explained in detail in Section 3.1. In gaze estimation, only DMS data was used to generate the estimated gaze direction and eye position. The results and the Host-log data were used in GTOM to extract two spatial features. To compare the results of the two awareness estimations, the confusion matrix and corresponding terms, such as accuracy, were considered for analyzing the instantaneous performance of awareness estimation. The reference FCW was used for analyzing the sensitivity of the awareness estimations.

1.8 Ethical Consideration

The basic ethical considerations for this project concern the videos. Since the Driver Monitoring Cameras were placed facing the driver, the recorded videos reveal information about the test drivers' behaviors. Presenting these videos in this project was not allowed due to privacy protection. In addition, the videos recorded by the roof-mounted camera for checking driving scenarios also include information about other vehicles and pedestrians. Therefore, privacy protection needs to be considered in this project.


Chapter 2

Frame-of-reference

This chapter introduces the key concepts and related technologies in this project. Alternatives are discussed, as well as the reasons for the selected methods.

2.1 Gaze Estimation

Gaze estimation in this project includes a set of algorithms used to reduce noise, handle outliers, and manage uncertainty for Gaze-to-Object Mapping. Based on the analysis of the raw gaze measurements and the goal of gaze estimation, there are several challenges:

• Missing gaze data (eye position and gaze direction) due to low data quality.

• Difference between production gaze direction and reference gaze direction.

Given the missing gaze data, head measurements can be taken into consideration. Because gaze tracking is sensitive to light conditions and camera placement, head tracking can provide more robust measurements. [11] demonstrated the correlation between head and gaze movement by establishing that 70% of gaze pose is statistically determined by head pose. Therefore, head data can be used to provide predictions for missing data. In addition, the relation between head and eye defines constraints on gaze movement that help classify outliers. Gaze velocity[12] and the quality of head measurements were also used as conditions for outlier classification. Outliers and missing gaze directions were treated as abnormal states, for which the estimator is supposed to differ from the one for normal states; estimations for abnormal states are expected to depend only on the linear model.

As for the second challenge, the Kalman Filter[14] is a widely used estimator for noise reduction in signal processing; alternatives are described in the following sections. In addition, the study of gaze movement has revealed several gaze behaviors, such as fixation and saccade[15], which in this project can be utilized for estimation.

The main methods in gaze estimation are listed as follows:

• Gaze-Head Movement

• Gaze Behavior Extraction

• Kalman Filters

2.1.1 Gaze-Head Movement

According to [11] and [16], the binocular field of view is limited to an overall 130° in pitch and 120° in yaw. In [17] and [18], Listing's law was introduced to reduce the dimensionality of eye movement by estimating the eye direction with a relative rotation axis on the head-fixed plane, which indicates a mathematical way to correlate gaze movement and head movement. Inspired by Listing's law, the eye position in this project was estimated with a relative vector between the average eye position and the head center, as shown in Figure 2.1; the detailed implementation of the eye position estimation is in Section 3.3.3. The relation between head and eye positions can be described as follows:

Figure 2.1: Affine correlation between eye position and head position (in camera coordinate system).

$$P_{eye} = P_{head} + V_{dynamical} \qquad (2.1)$$


where $P_{eye}$ is the average eye position, $P_{head}$ is the head position, and $V_{dynamical}$ is the dynamical relative vector.

2.1.2 Gaze Behavior Extraction

According to [15], fixation, saccade, and smooth pursuit are the three main gaze behaviors in driving scenarios, and they are widely used in driver behavior estimation. In this project, fixation and saccade are the important gaze behaviors; fixation on the leading vehicle probably already reveals a straightforward correlation to the driver's state. Moreover, considering the uncertainty of the gaze data, the fixed threshold of 20°/s on gaze velocity in [19] is not suitable for this project.

Instead, [15] proposed a Bayesian Mixture Model for fixation and saccade classification, in which two Gaussian distributions were learned from gaze dynamical data. Inspired by [15], instead of using two-dimensional fixation points, the gaze velocity was fitted to a Gaussian mixture model as follows:

$$P(x) = \sum_{j=1}^{2} w_j \, \mathcal{N}(x \mid \mu_j, \sigma_j^2) \qquad (2.2)$$

where $\sum w_j = 1$; $j = 1$ is the index for fixation and $j = 2$ the one for saccade. Thresholds for fixation identification can be selected based on the joint point of the probabilities of fixation and saccade:

$$p_{fix} = w_1 \mathcal{N}(x \mid \mu_1, \sigma_1^2) \qquad (2.3)$$

$$p_{sac} = w_2 \mathcal{N}(x \mid \mu_2, \sigma_2^2) \qquad (2.4)$$
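As a minimal sketch of this thresholding scheme, the snippet below fits a two-component mixture to a one-dimensional velocity signal and locates the joint point of Equations 2.3 and 2.4. The synthetic velocity distributions and the use of scikit-learn/scipy are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Hypothetical 1-D gaze-velocity samples (deg/s); real data would come from the DMS.
rng = np.random.default_rng(0)
v_fix = rng.normal(10.0, 5.0, 5000)    # slow, fixation-like velocities
v_sac = rng.normal(200.0, 60.0, 1000)  # fast, saccade-like velocities
v = np.abs(np.concatenate([v_fix, v_sac])).reshape(-1, 1)

# Fit the two-component mixture of Equation 2.2.
gmm = GaussianMixture(n_components=2, random_state=0).fit(v)
w = gmm.weights_
mu = gmm.means_.ravel()
sig = np.sqrt(gmm.covariances_.ravel())
fix, sac = np.argsort(mu)  # fixation = slower component

# Threshold = joint point where p_fix(x) = p_sac(x) (Equations 2.3 and 2.4).
xs = np.linspace(mu[fix], mu[sac], 10000)
p_fix = w[fix] * norm.pdf(xs, mu[fix], sig[fix])
p_sac = w[sac] * norm.pdf(xs, mu[sac], sig[sac])
threshold = xs[np.argmin(np.abs(p_fix - p_sac))]
print(f"fixation/saccade velocity threshold ~ {threshold:.1f} deg/s")
```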

2.1.3 Kalman Filter

The Kalman Filter is a method for addressing tracking and prediction problems[20] and is widely applied in gaze signal processing, for example by Komogortsev and Khan[21]. [22] proposed using a Kalman Filter with a linear dynamical model for smoothing the gaze signal.

R.E. Kalman proposed a solution to the discrete-data linear filtering problem[23][24]. Since then, extensive and in-depth research has been conducted on applications of the Kalman Filter[23]. The discrete Kalman Filter provides a minimum-error-variance estimate of the true state $x \in \mathbb{R}^n$ in discrete sequences as follows:

$$x_k = A_k x_{k-1} + w_k \qquad (2.5)$$

$$z_k = C_k x_k + v_k \qquad (2.6)$$

where $w$ and $v$ are the process noise and the measurement noise, the measurement is denoted $z \in \mathbb{R}^m$, and $A_k$ is the linear state-space model.

The meanings of the matrices are as follows:

• $A_k$: the $n \times n$ state transition matrix
• $C_k$: the $m \times n$ measurement matrix
• $Q_k$: the covariance matrix of the process noise
• $R_k$: the covariance matrix of the measurement noise

$w_k$ and $v_k$ are assumed to be independent white noises whose samples follow normal probability distributions with zero means and covariances $Q_k$ and $R_k$ (see Equations 2.7 and 2.8):

$$w_k \sim \mathcal{N}(0, Q_k) \qquad (2.7)$$

$$v_k \sim \mathcal{N}(0, R_k) \qquad (2.8)$$

The process of a discrete Kalman Filter estimation follows the derivation of the discrete Kalman Filter algorithm in [23]:

• Prediction: The prediction step provides a current prior state $\hat{x}_{k|k-1}$ and a current prior error covariance $P_{k|k-1}$, based on the previous posterior state $\hat{x}_{k-1|k-1}$, the dynamical model $A_k$, and the noise covariance $Q_k$, as shown in Equations 2.9 and 2.10 [25][23] (if there is no control input).

$$\hat{x}_{k|k-1} = A_k \hat{x}_{k-1|k-1} \qquad (2.9)$$

$$P_{k|k-1} = A_k P_{k-1|k-1} A_k^T + Q_k \qquad (2.10)$$

• Update: Given the prior state $\hat{x}_{k|k-1}$ and the error covariance $P_{k|k-1}$, the pre-fit residual $\hat{y}_k$ and its corresponding covariance $S_k$ can be obtained from the current measurement $z_k$, the estimate $C_k \hat{x}_{k|k-1}$, and the measurement error covariance $R_k$, as shown in Equations 2.11 and 2.12. Equation 2.13 is the formula for the Kalman gain $K_k$, which describes how much a measurement changes the estimate. The new posterior state $\hat{x}_{k|k}$ and error covariance $P_{k|k}$ are calculated from Equations 2.14 and 2.15, and the new residual $\hat{y}_{k|k}$ can be obtained with $z_k$ and $\hat{x}_{k|k}$ (Equation 2.16).

$$\hat{y}_k = z_k - C_k \hat{x}_{k|k-1} \qquad (2.11)$$

$$S_k = C_k P_{k|k-1} C_k^T + R_k \qquad (2.12)$$

$$K_k = P_{k|k-1} C_k^T S_k^{-1} \qquad (2.13)$$

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \hat{y}_k \qquad (2.14)$$

$$P_{k|k} = (I - K_k C_k) P_{k|k-1} \qquad (2.15)$$

$$\hat{y}_{k|k} = z_k - C_k \hat{x}_{k|k} \qquad (2.16)$$
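To make the predict/update cycle of Equations 2.9 to 2.15 concrete, the following Python sketch runs one step per measurement. The constant-velocity gaze model, sampling rate, and noise covariances are assumptions for illustration, not the calibrated matrices used in the thesis.

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of the discrete Kalman Filter (Eqs. 2.9-2.15)."""
    # Prediction (Eqs. 2.9, 2.10)
    x_prior = A @ x
    P_prior = A @ P @ A.T + Q
    # Update (Eqs. 2.11-2.15)
    y = z - C @ x_prior                      # pre-fit residual
    S = C @ P_prior @ C.T + R                # residual covariance
    K = P_prior @ C.T @ np.linalg.inv(S)     # Kalman gain
    x_post = x_prior + K @ y
    P_post = (np.eye(len(x)) - K @ C) @ P_prior
    return x_post, P_post

# Illustrative constant-velocity model for one gaze angle (assumed, not the thesis's A, Q):
dt = 1 / 60.0                                # assumed 60 Hz camera
A = np.array([[1.0, dt], [0.0, 1.0]])        # state: [angle, angular velocity]
C = np.array([[1.0, 0.0]])                   # only the angle is measured
Q = np.diag([1e-4, 1e-2])
R = np.array([[0.5]])

x, P = np.zeros(2), np.eye(2)
for z in [0.1, 0.12, 0.15, 0.5, 0.52]:       # fabricated angle measurements
    x, P = kalman_step(x, P, np.array([z]), A, C, Q, R)
print("final state estimate:", x)
```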


2.2 Gaze-to-Object Mapping

The process of associating gaze to the target object is called Gaze-to-Object Mapping (GTOM)[26]. GTOM is developed based on gaze measurements and 3D environment information. The object in the view is denoted O, and the gaze position of the driver is represented by g. The defined output from GTOM is a predictive probability P(O|g) for each visual object O given gaze g, using Bayesian inference[26]. According to [26], the two categories of GTOM methods are passive attention GTOM and active attention GTOM. In passive attention GTOM, the probability is calculated based on the mapping from gaze to the object; on the contrary, active attention GTOM is based on the mapping from a specific object to gaze. The difference in functional implication between these approaches is whether a large object has a stochastic advantage when gaze positions are randomly distributed in the view. In this project, the size of the target object (the leading vehicle) is expected to influence the awareness result; therefore, the passive approach is more suitable.

Given the gaze position, passive GTOM finds the probability of attention to the object. It is developed as a probabilistic model to estimate P(O|g). The posterior P(O|g) is derived by normalizing

$$P(O|g) \propto P(O, g) \qquad (2.17)$$

where O is the object in the surroundings and g is the gaze position of the driver. [26] proposed three methods for P(O, g): Ray Casting (RC), Closest Feature Mapping (CFM), and Fovea Splatting (FS). Given the tolerance of gaze error and the prediction accuracy, Fovea Splatting provides better performance with a smooth output. The probability is derived by accumulating all pixels o ∈ O in the object over a Gaussian distribution with the size of the fovea[27]. P(O, g) is proportional to the Bayesian inference expressed in Equation 2.18.

$$P(O, g) \propto \sum_{o \in O} \exp\left(-\frac{\| o - g \|^2}{s}\right) \qquad (2.18)$$

Consequently, the implementation of GTOM in this project followed the steps above to obtain the probability of paying attention to the object in each frame. Detailed information is provided in Section 3.4.
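A minimal sketch of the Fovea Splatting accumulation of Equation 2.18 is given below; the coordinate grid, the scale parameter s, and the toy objects are assumptions for illustration, not the thesis's calibrated implementation.

```python
import numpy as np

def gtom_probabilities(gaze, objects, s=4.0):
    """Passive GTOM via Fovea Splatting (Eq. 2.18).

    gaze    : (2,) gaze position on the projection plane
    objects : dict name -> (N, 2) array of pixel positions belonging to the object
    s       : fovea-size scale parameter (assumed value; tuned in practice)
    """
    scores = {}
    for name, pixels in objects.items():
        d2 = np.sum((pixels - gaze) ** 2, axis=1)     # squared distances ||o - g||^2
        scores[name] = np.sum(np.exp(-d2 / s))        # accumulate Gaussian weights
    total = sum(scores.values())
    # Normalize to get the posterior P(O | g) over the visible objects (Eq. 2.17).
    return {k: v / total for k, v in scores.items()} if total > 0 else scores

# Toy example: a large lead vehicle and a small sign, gaze near the vehicle.
lead_vehicle = np.array([[x, y] for x in range(10, 20) for y in range(5, 10)], float)
road_sign = np.array([[30.0, 8.0], [31.0, 8.0]])
print(gtom_probabilities(np.array([14.0, 7.0]),
                         {"lead_vehicle": lead_vehicle, "road_sign": road_sign}))
```

Because the scores are accumulated over all object pixels, a larger object collects more Gaussian mass, which matches the passive-GTOM property discussed above.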

2.3 Awareness Model

Due to the complexity of estimating human behavior, linear methods are often insufficient, and they are gradually being replaced by nonlinear or even probabilistic methods[28]. In this project, the two strategies selected for awareness estimation are as follows:

(24)

CHAPTER 2. FRAME-OF-REFERENCE

• Logistic Regression

• Hidden Markov Model (HMM)

Compared with advanced machine learning methods, such as Neural Networks, Logistic Regression is more interpretable for analyzing the relationship between the targets and the different features, while the Hidden Markov Model (HMM) has advantages in sequential state modeling.

2.3.1 Logistic Regression

Logistic Regression is a machine learning strategy that aims to find the best fit to a binary target sequence by optimizing the parameters of a suitable output function[29]. The method can be interpreted as a description of the influence of the input features on the outcome targets[29]. The derivation of Logistic Regression follows the book by Hosmer et al. (2013)[29].

The input to Logistic Regression is an n-dimensional vector x, where each $x_i$ is a feature of the input. Given $x \in \mathbb{R}^n$ of $n$ features and a target $Y \in \{0, 1\}$, the conditional mean of $Y$ given the input $x$ is $E(Y|x)$, which lies in $[0, 1]$. To simplify the notation, $E(Y|x)$ is replaced with $\pi(x)$. The form of logistic regression is:

$$\pi(x) = \frac{e^{\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_n x_n}}{1 + e^{\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_n x_n}} = 1 - \frac{1}{1 + e^{\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_n x_n}} \qquad (2.19)$$
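The sketch below fits Equation 2.19 to synthetic per-frame data; the use of scikit-learn and the fabricated features (for example, a gaze-to-object probability and a fixation duration) are assumptions, not the thesis's training setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-frame features and binary awareness labels; the real
# features come from GTOM (Section 3.5.2).
rng = np.random.default_rng(1)
X = rng.random((500, 2))
y = (0.8 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.standard_normal(500) > 0.5).astype(int)

clf = LogisticRegression().fit(X, y)
pi = clf.predict_proba(X)[:, 1]        # pi(x) of Equation 2.19
pred = (pi > 0.5).astype(int)          # thresholded awareness output per frame
print("beta_0:", clf.intercept_, "beta_1..n:", clf.coef_)
print("training accuracy:", (pred == y).mean())
```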

2.3.2 Hidden Markov Model

A Hidden Markov Model is an algorithm used to estimate the trajectory of hidden states $Q = \{q_1, q_2, \ldots, q_T\}$, $q_t \in \{s_1, s_2, \ldots, s_N\}$, where $s_j$ is a hidden state, based on a transition matrix $A = \{a_{11}, a_{12}, \ldots, a_{NN}\}$ and observation likelihoods $B = \{B_1, B_2, \ldots, B_N\}$, learned from observations $O = \{o_1, o_2, \ldots, o_T\}$ [30]. Figure 2.2 shows an example of a Hidden Markov Model with two hidden states and M possible observation values $v_m$. $B_j$ is the observation likelihood vector containing the likelihood of each observation value given hidden state $s_j$: $B_j = \{P(v_1|s_j), P(v_2|s_j), \ldots, P(v_M|s_j)\}$. $a_{ij}$ is the transition probability of moving from hidden state $s_i$ to $s_j$. The initial state distribution is $\pi = \{\pi_1, \pi_2\}$, where $\pi_1$ and $\pi_2$ are the initial probabilities of states $s_1$ and $s_2$. The following introduction follows the book by Jurafsky (2000)[30] and the tutorial by Rabiner (1989)[31].

The book[30] introduces three problems: Likelihood, Decoding, and Learning. The goal of Likelihood is to determine the likelihood $P(O|\pi, A, B)$ given the observations and the model. Decoding finds the hidden states corresponding to the observations. Learning learns $a_{ij}$ and $B$ from observations and states. In this project, only Likelihood and Learning were considered.


Figure 2.2: An example of a Hidden Markov Model with 2 hidden states $s_1, s_2$, observations $o_t \in \{v_1, v_2, \ldots, v_M\}$, and corresponding hidden states $q_t \in \{s_1, s_2\}$ [30].

• Likelihood: Given the model $\lambda = \{\pi, A, B\}$ and the observations $O$, the task of Likelihood is to compute the probability $P(O|\lambda)$. The Forward Algorithm[32] is usually applied as an efficient method to assess how well the model fits the sequence. Figure 2.3 shows the process of the Forward Algorithm for a two-state Hidden Markov Model. With initial probability $\pi_j$, transition probability $a_{ij}$, and observation likelihood $b_j(o_t) \in B_j$ of an observation $o_t = v_m$ with hidden state $s_j$, the forward path probability $\alpha_t(j) = P(O, q_t = s_j)$ is calculated as an accumulation of $a_{ij} b_j(o_t)$ as follows:

Initialization (or first transition):

$$\alpha_1(j) = \pi_j b_j(o_1), \quad 1 \le j \le N \qquad (2.20)$$

Recursion:

$$\alpha_t(j) = \sum_{i=1}^{N} \alpha_{t-1}(i) \, a_{ij} \, b_j(o_t), \quad 1 \le j \le N, \; 2 \le t \le T \qquad (2.21)$$

Termination:

$$P(O|\lambda) = \sum_{i=1}^{N} \alpha_T(i) \qquad (2.22)$$

where $i$ and $j$ are the indices of hidden states $s_i$ and $s_j$.

Figure 2.3: An example of the Forward Algorithm for an HMM.

• Learning: Given a sequence of observations O and a sequence of hidden states Q, the parameters in A and B are usually optimized by local maximization of $P(O|\lambda)$ with the Baum-Welch Algorithm[33][34][35]. The backward probability $\gamma_t$ is defined as the probability of seeing the observation sequence $o_{t+1} \ldots o_T$ given the current state and the model $\lambda$, expressed as $\gamma_t(i) = P(o_{t+1} \ldots o_T \mid q_t = s_i, \lambda)$.

Initialization (or first iteration):

$$\gamma_T(i) = 1, \quad 1 \le i \le N \qquad (2.23)$$

Recursion:

$$\gamma_t(i) = \sum_{j=1}^{N} a_{ij} \, b_j(o_{t+1}) \, \gamma_{t+1}(j), \quad 1 \le i \le N, \; 1 \le t < T \qquad (2.24)$$

Termination:

$$P(O|\lambda) = \sum_{j=1}^{N} \pi_j \, b_j(o_1) \, \gamma_1(j) \qquad (2.25)$$

Based on the forward probability $\alpha_t$ and the backward probability $\gamma_t$, the Expectation-Maximization Algorithm[36] can be utilized to optimize $a_{ij}$ and $B_j$. The E-step and M-step are described as follows:

E-step:

$$\delta_t(i, j) = P(q_t = s_i, q_{t+1} = s_j \mid O, \lambda) = \frac{\alpha_t(i) \, a_{ij} \, b_j(o_{t+1}) \, \gamma_{t+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_t(i) \, a_{ij} \, b_j(o_{t+1}) \, \gamma_{t+1}(j)} \qquad (2.26)$$

$$\theta_t(i) = \sum_{j=1}^{N} \delta_t(i, j) \qquad (2.27)$$

M-step:

$$a_{ij} = \frac{\sum_{t=1}^{T-1} \delta_t(i, j)}{\sum_{t=1}^{T-1} \theta_t(i)} \qquad (2.28)$$

$$b_j(v_k) = \frac{\sum_{t=1, \, o_t = v_k}^{T} \theta_t(j)}{\sum_{t=1}^{T} \theta_t(j)} \qquad (2.29)$$

$\delta_t(i, j)$ is the probability of being in state $s_i$ at time $t$ and in state $s_j$ at time $t + 1$ given $O$ and $\lambda$; $\theta_t(i)$ is the probability of being in state $s_i$ given $O$ and $\lambda$. The estimated A and B matrices are calculated as in Equations 2.28 and 2.29.
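A compact numpy sketch of one Baum-Welch iteration under the notation above follows; for the update of B it uses the standard state-occupancy probability proportional to $\alpha_t(j)\gamma_t(j)$, which is the usual form of Equation 2.29, and the model sizes and numbers in the example are illustrative assumptions.

```python
import numpy as np

def baum_welch_step(pi, A, B, obs):
    """One EM iteration of Baum-Welch (Eqs. 2.23-2.29); a minimal sketch."""
    N, T = len(pi), len(obs)
    # Forward pass, alpha_t(j) (Eqs. 2.20-2.21).
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    # Backward pass, gamma_t(i) in the thesis's notation (Eqs. 2.23-2.24).
    gamma = np.zeros((T, N))
    gamma[-1] = 1.0
    for t in range(T - 2, -1, -1):
        gamma[t] = A @ (B[:, obs[t + 1]] * gamma[t + 1])
    # E-step: delta_t(i, j) and theta_t(i) (Eqs. 2.26-2.27).
    delta = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        d = alpha[t][:, None] * A * (B[:, obs[t + 1]] * gamma[t + 1])[None, :]
        delta[t] = d / d.sum()
    theta = delta.sum(axis=2)
    # M-step: re-estimate A (Eq. 2.28) and B (state-occupancy form of Eq. 2.29).
    A_new = delta.sum(axis=0) / theta.sum(axis=0)[:, None]
    occ = alpha * gamma
    occ /= occ.sum(axis=1, keepdims=True)
    B_new = np.zeros_like(B)
    for k in range(B.shape[1]):
        B_new[:, k] = occ[obs == k].sum(axis=0)
    B_new /= B_new.sum(axis=1, keepdims=True)
    return A_new, B_new

# Illustrative two-state, three-symbol model.
pi = np.array([0.5, 0.5])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
obs = np.array([0, 1, 2, 2, 1, 0])
A_new, B_new = baum_welch_step(pi, A, B, obs)
print(A_new, B_new, sep="\n")
```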

[37] proposed a two-Hidden Markov Model (2-HMM) method to estimate the driver's attention to a pedestrian from vehicle dynamical signals, in which two Hidden Markov Models were trained on an attentive training set and an inattentive training set, respectively. The result of this estimator depends on the log-likelihoods of the two models. The detailed steps are as follows:

• Classify each feature (A, B, and C) into three clusters {0, 1, 2} by means of thresholds.

• Create observations based on three features or variables (A, B, C) as follows:

$$o = A \times 3^2 + B \times 3^1 + C \times 3^0 + 1 \qquad (2.30)$$

where A, B, and C ∈ {0, 1, 2}. Therefore, each observation o has $3^3 = 27$ possible values, which in other terms means $M = 27$, $v_m = m$.

• Based on two types of the sequence of observation (attentive and in-attentive), train two Hidden Markov Models respectively.

• Compare the log-likelihoods of the two models and find a threshold for driver state classification.

The results in [37] inspired the strategy detailed in Section 3.5.4, where five features for the "aware" and "unaware" training sets were used for the 2-HMM model. The training and dimension-reduction methods implemented in this project followed the steps above.
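The dimension reduction of Equation 2.30 is a base-3 encoding; a small sketch is shown below, where the discretization thresholds are hypothetical placeholders (the thesis's thresholds are not disclosed).

```python
def encode_observation(a, b, c):
    """Map three ternary features (each in {0, 1, 2}) to one symbol (Eq. 2.30)."""
    return a * 3**2 + b * 3**1 + c * 3**0 + 1   # values 1..27, i.e. M = 27

def discretize(x, lo, hi):
    """Cluster a raw feature into {0, 1, 2} with two hypothetical thresholds."""
    return 0 if x < lo else (1 if x < hi else 2)

# Example: encode one frame of three raw features (thresholds are assumptions).
features = [0.12, 0.55, 0.91]
symbols = [discretize(f, 1/3, 2/3) for f in features]
print("clusters:", symbols, "-> observation:", encode_observation(*symbols))
```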


2.4 Alternative

Gaze Estimation

The Kalman Filter provides strong estimation for linear systems with random noise, for example a state model of dynamic motion with velocity and position. One well-known extension of the Kalman Filter is the Extended Kalman Filter (EKF)[38], which is applied to differentiable nonlinear systems by linearizing the system at each step. In this project, modeling the true gaze movement is complex and nonlinear, given the transformation and the difference between saccades and fixations. However, regardless of the transformation, if the gaze behavior signal presented in Section 2.1 is available, the movements within saccades or fixations can probably be simplified to a linear model with random noise, which meets the requirements of the Kalman Filter.

Gaze-to-Object Mapping

In addition to Fovea Splatting in Section 2.2, Ray Casting[26] is the simplest and most straightforward method to estimate P(O, g), with binary results: P(O, g) depends on whether the cast gaze ray intersects the object. According to [26], Ray Casting costs less computational time and is suitable for real-time use; however, it requires high accuracy of gaze directions and eye positions.

There are other mapping algorithms based on dwell-time accumulation that avoid the high sensitivity of mapping[39][40]. Temporal features, such as fixation duration, were developed in this project inspired by dwell-time mapping, as described in Section 3.5.2.

Probabilistic Method

Probabilistic methods have emerged as a choice for addressing recognition problems and extracting semantic features. [41] presented a Hybrid Bayesian Network method for cognitive distraction detection, implemented on a simulator with gaze and driving performance data. A supervised clustering model consisting of two layers was developed to extract features, while a Dynamic Bayesian Network (DBN) at a higher layer was modeled for state recognition; the recognition accuracy was 88%. Compared to an HMM, a DBN is a generalized extension with multiple variables.

In this project, dimension reduction was implemented because of the multiple features. A DBN could be a possible improvement for handling multiple features, but based on complexity and time cost, the HMM has more advantages.


Chapter 3

Implementation

This chapter explains the data resources and the implementation of the three components of the project.

3.1 Data Resource

As mentioned in Section 1.3, two data sets were available for the project: the recorded data set and the Field Operational Test (FOT) data set.

In the recorded data set, both reference and production DMS data are available. The production camera was placed on the center stack, while four reference cameras were evenly distributed on the dash cover. Since the reference cameras cover a wider field of gaze direction, the assumption in gaze estimation is that the reference data is regarded as ground truth. The validation of the reference data is described in Section 3.2, together with the experiment design and the analysis of the results. In addition, Host-log and reference FCW data were also obtained in the recorded data set.

In the FOT data set, only the production camera was integrated into the test vehicles, with two placements: center stack and dashboard. Moreover, an environment-monitoring camera was placed on the roof of the test vehicles to record videos and provide visualization of the threat objects in FOT. The DMS videos were also recorded in the FOT cases and were used in this project to annotate drivers' attention to the threat object, alongside the Host-log and reference FCW data. The data was recorded with more than 20 drivers.

The available variables of the DMS, Host-log, and reference FCW are listed in Table 3.1. DMS data was collected in the camera coordinate system Cam_CS, while Host-log data and the reference FCW were obtained in the vehicle coordinate system Veh_CS. X in the vehicle coordinate system and z in the camera coordinate system are the longitudinal directions; Y and x are the lateral directions, while Z and y are the vertical directions. The two coordinate systems can be transformed into each other by means of a translation matrix Tr and a rotation matrix R, which were provided by the company. The disclosure of units for some variables is not allowed due to intellectual property barriers set by the company.

Table 3.1: Variables.

(a) DMS
  Gaze Quality Q_g: Float
  Eye Position P_g: Float [x, y, z]
  Gaze Direction θ_g: Float [yaw, pitch]
  Head Quality Q_h: Float
  Head Position P_h: Float [x, y, z]
  Head Direction θ_h: Float [yaw, pitch]

(b) Host-log and reference FCW
  Host-log:
    Distance D: Float
    Ego Velocity V_ego: Float
    Object Velocity V_obj: Float
    Ego Acceleration A_ego: Float
    Object Acceleration A_obj: Float
    Braking Pedal B: Integer {0, 1}
    Object Position P_o: Float [X, Y, Z]
    Object Width Width: Float
    Object Height Height: Float
  Reference FCW:
    Warning Warn: Integer {0, 1}

3.2 Validation of Reference Data

The experiments were conducted on the company's development vehicle, equipped with the reference camera system and the production camera system. Calibrations of the camera systems for the drivers were carried out. The experiments were performed in a closed room with 12 drivers under the same light conditions. A map with ten dots placed in two lines was posted on the wall facing the vehicle. The 3-D coordinates of the ten dots in the vehicle coordinate system, listed in Table 3.2, were obtained by means of laser equipment.

Table 3.2: Positions of ten dots projected on the plane in vehicle coordinate system.

Number    1     2     3     4     5     6     7     8     9    10
X        9.7   9.6   9.6   9.5   9.4   9.7   9.6   9.6   9.5   9.5
Y        2.5   0.7  -0.5  -2.0  -3.3   2.5   0.8  -0.5  -2.0  -3.2
Z        1.1   1.0   1.1   1.0   0.9   0.5   0.5   0.5   0.5   0.5


Each driver was required to look at each dot for three seconds, following the sequence from 1 to 10, with manual alarms. Each driver took the test twice (except drivers No. 1 and No. 2, who tested once), resulting in 22 validation tests. The positions of the dots were projected onto a plane with yaw and pitch in the vehicle coordinate system.

From the gaze direction, the eye position, and the coordinate transformation matrices, the gaze position on the plane (or wall) was derived. Therefore, the dots' positions, the reference gaze positions, and the production gaze positions were all projected onto the same plane, where positions are described by 3-D vectors [X, Y, Z]. X is the longitudinal direction, in which the values are almost the same for the ten dots; because the dots were mapped manually, there are errors in X.

To compare the projected gaze position with the dot position in each frame, a dot position signal needed to be created. A nearest-neighbour classification algorithm was developed based on the distances between the gaze position and the dots' positions. Manual correction was performed after the nearest-neighbour classification to address wrong classification results. Based on the dot class in each frame, a dot position signal was generated as the ground truth signal for the gaze positions. From the comparison between the ground truths and the gaze positions, statistics of the two types of positions were computed, as shown in Table 3.3.

Table 3.3: Statistics of reference positions and production positions on the projected plane.

Root Mean Squared Error   Production        Reference
X                         0.1 < X < 0.3     X < 0.1
Y                         2.2 < Y < 2.5     Y < 0.5
Z                         2.4 < Z < 2.7     Z < 0.3

In Table 3.3, the Root Mean Squared Error (RMSE) in each dimension was calculated according to Equation 3.1, excluding missing data. The RMSEs for the reference gaze data are much smaller than those for the production data and are considered to represent reality with sufficient accuracy. Therefore, the reference gaze data was regarded as the ground truth gaze data in gaze estimation.

$$RMSE = \sqrt{\frac{\sum_{i=1}^{N} (Y_i - \hat{Y}_i)^2}{N}} \qquad (3.1)$$

3.3 Gaze Estimation

This section introduces the structure, variables, data splitting, and components of gaze estimation.


3.3.1 Structure

The variables used in gaze estimation are the same as the DMS variables in Table 3.1. Figure 3.1 describes the major tasks in gaze estimation:

• Initialization

• Fixation identification and outlier identification

• Kalman filtering

Figure 3.1: Architecture of gaze estimation implementation.

Statistics of the production data indicated that there were offsets between the production gaze direction and the reference gaze direction, possibly caused by inappropriate camera calibration. Therefore, the first step is to adjust the offsets in yaw and pitch of the gaze direction. Besides, eye position data was occasionally missing in the production data set, so an estimate of the eye position is needed in this situation. In short, the first task, named initialization, is to adjust the offsets in gaze direction based on statistics of the production data and to provide estimated eye positions when data is missing, by means of a relative vector between head and eye position.

Based on the adjusted gaze direction, the second task is to identify the gaze behavior, which has three classes: fixation, saccade, and abnormal. Fixation and saccade were regarded as the two normal gaze behaviors in this project, while missing data and outliers were treated as abnormal data. Fixation identification was developed from the distribution of gaze velocity, by means of thresholds extracted with the method described in Section 2.1.2, and outliers were classified based on constraints on gaze movement and the quality of the data.


The third task in gaze estimation is Kalman filtering. According to the gaze behavior, three models were developed. For the normal data, two dynamical models with matrices A and Q were built, based on identity matrices, for fixation and saccade: the estimated gaze movement is supposed to be smoother during fixation, while the estimation should follow the change of the measurements during saccade. For abnormal data (missing data or outliers), A and Q are identical to the fixation matrices, and only the prediction step is performed. Thus, the estimate of the gaze direction is provided by means of Kalman filtering.

Detailed explanations of these three tasks are given in the following sections.

3.3.2 Data

Five recorded data sets with different drivers were used in the development of the gaze estimation method. In each data set, 75% was used for training and parameter optimization and 25% for testing.

In addition, considering the influence of noise on the dynamical model, the raw reference gaze direction signal was smoothed with penalized regression. The smoothed signal is expected to reveal the actual movements of the driver's eyes with less noise. The implementation of penalized regression follows Equation 3.2, where x is the smoothed data, z is the raw measurement, and λ is the control parameter for smoothing. As a result, a smoothed reference gaze direction was obtained.

$$x = \operatorname*{argmin}_{x} \sum_{i=1}^{N} |z_i - x_i| + \lambda \sum_{i=2}^{N} |x_i - x_{i-1}| \qquad (3.2)$$

Given the need to calibrate $A_{fix}$, $Q_{fix}$ and $A_{sac}$, $Q_{sac}$, the smoothed training set was separated into a smoothed fixation set and a smoothed saccade set based on the signal Fix. The components of the training set are shown in Table 3.4.

Table 3.4: Training Data

Fixation:  Smoothed Reference (Fix_smooth_ref), Raw Reference (Fix_raw_ref), Production (Fix_prod)
Saccade:   Smoothed Reference (Sac_smooth_ref), Raw Reference (Sac_raw_ref), Production (Sac_prod)
All:       Smoothed Reference (Smooth_ref), Raw Reference (Raw_ref), Production (Prod)


3.3.3 Initialization

Initialization adjusts the offsets in gaze direction and estimates eye positions when quality drops, based on all training data.

Offset Adjustment: The first task in initialization is to adjust the offsets for different drivers. The assumption is that the driver looks straight at the road most of the time, which means that the distributions of yaw and pitch should concentrate around zero. The distributions of yaw and pitch for the gaze direction in Raw_ref were fitted by Gaussian functions with means of −1 (yaw) and −1.8 (pitch). The offsets for each driver were extracted from the differences between the means of the Gaussian fits of the production yaw and pitch and the reference direction [−1, −1.8]. The production gaze data from the first 10 minutes for each driver was used for the offset calculation.

Eye Position Estimation: Due to eye closing and uncertainties when looking to the left side, the quality of the gaze data drops. The requirement for the second task in initialization is to provide an estimate of the eye position when quality drops. Given the eye-head constraints, a dynamical relative vector was proposed, as described in Section 2.1. The vectors are denoted v1 and v2 in Figure 3.2: dashed circles are consensus eyes; black dots are head centers; dashed green arrows are dynamical vectors v; blue arrows are eye positions P_g; red arrows are head positions P_h; purple arrows are head directions θ_h. v1, in the Z−Y plane, is the vector when the driver is looking straight ahead.

The estimation of the eye position follows Algorithm 1, where v1 in the Z−Y plane was optimized and used as a reference vector to estimate the dynamical vector $v_{N+1} = R_{h,N+1} v_1$ for a new measurement.

Algorithm 1: Estimate Consensus Eye Position
Input: P_g, P_h, θ_h, Q_g
Output: P̂_g,N+1
 1: for i ← 1 to N do
 2:   if Q_g,i is not zero then
 3:     Calculate the dynamical vector v_i = P_g,i − P_h,i
 4:     Calculate the rotation matrix R_h,i based on the head direction θ_h,i
 5:     Compute the reference vector in the Z−Y plane: v_1(i) = R_h,i^T v_i
 6:   end if
 7: end for
 8: Find the most likely v_1 based on the distribution of v_1(i)
 9: Estimate the eye position: P̂_g,N+1 = R_h,N+1 v_1 + P_h,N+1
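A Python sketch of Algorithm 1 is shown below. The Euler-angle convention for building the head rotation and the use of the median as the "most likely" estimate of v_1 are assumptions; the thesis does not disclose these details.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def estimate_eye_position(P_g, P_h, theta_h, Q_g, theta_h_new, P_h_new):
    """Sketch of Algorithm 1 (consensus eye position).

    P_g, P_h    : (N, 3) eye / head positions (camera coordinates)
    theta_h     : (N, 2) head [yaw, pitch]
    Q_g         : (N,)   gaze quality; 0 means the eye was not tracked
    """
    refs = []
    for i in range(len(Q_g)):
        if Q_g[i] > 0:
            v = P_g[i] - P_h[i]                       # dynamical vector v_i
            # Assumed yaw-pitch Euler convention for the head rotation R_h,i.
            R = Rotation.from_euler("zy", theta_h[i], degrees=True).as_matrix()
            refs.append(R.T @ v)                      # back to the Z-Y plane
    v1 = np.median(refs, axis=0)                      # robust 'most likely' v_1
    R_new = Rotation.from_euler("zy", theta_h_new, degrees=True).as_matrix()
    return R_new @ v1 + P_h_new                       # estimated eye position

# Tiny synthetic usage: static head, fixed eye offset.
P_h = np.zeros((50, 3))
P_g = P_h + np.array([0.0, 0.03, 0.06])
print(estimate_eye_position(P_g, P_h, np.zeros((50, 2)), np.ones(50),
                            np.array([10.0, 5.0]), np.zeros(3)))
```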


Figure 3.2: Eye-head model with dynamical vector in camera coordinate system.

3.3.4 Outlier Classification

Given four constraints on eye and head movement, data were classified into the four classes in Table 3.5. According to [42], the maximum gaze velocity for a saccade is approximately 500°/s with small head movement. Considering the uncertainty in the gaze data, possible large head movements, and the distribution of the gaze velocity, the gaze velocity threshold in this project was tuned and set to 1200°/s. Based on the constraints proposed in [11], 60° and 65° were used to reject data beyond the binocular view. In addition to the constraints on gaze velocity and the binocular view, the quality of the head data was also taken into consideration because of its influence on gaze quality. A specific threshold T_qh on Q_h was manually selected to ensure an acceptable tradeoff between data quality and data availability. The conditions in Table 3.5 allow creating a signal Out identifying the outlier samples.

Table 3.5: Outlier Classification

Condition                                                                      Class Number
Gaze velocity V_g > 1200°/s                                                    1 (outlier)
Head quality Q_h < T_qh                                                        2 (outlier)
Binocular field of view: |yaw_g − yaw_h| > 60° or |pitch_g − pitch_h| > 65°    3 (outlier)
Else                                                                           0 (normal data)
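Table 3.5 translates directly into a per-frame classifier, sketched below; the value of the head-quality threshold T_qh is an assumption, since the thesis keeps it undisclosed.

```python
def classify_outlier(v_g, q_h, yaw_g, yaw_h, pitch_g, pitch_h, t_qh=0.5):
    """Outlier classes of Table 3.5 (t_qh is an assumed quality threshold)."""
    if v_g > 1200.0:                                  # gaze velocity (deg/s)
        return 1
    if q_h < t_qh:                                    # head-tracking quality
        return 2
    if abs(yaw_g - yaw_h) > 60.0 or abs(pitch_g - pitch_h) > 65.0:
        return 3                                      # outside binocular view
    return 0                                          # normal data

print(classify_outlier(v_g=300.0, q_h=0.9, yaw_g=10.0, yaw_h=5.0,
                       pitch_g=-2.0, pitch_h=0.0))   # -> 0
```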

(36)

CHAPTER 3. IMPLEMENTATION

3.3.5 Fixation Identification

Fixation identification was developed based on the "All" training data in Table 3.4, with two Gaussians fitted to the distribution of the gaze velocity. The joint point of the two Gaussians was regarded as a reference threshold thre_fix for the thresholds on gaze velocity used to classify fixations and saccades. Considering the transition between fixation and saccade, two thresholds were chosen instead of one, to describe the difference between the two transitions with a hysteresis loop: thre_fixtosac requires a larger value than thre_sactofix. Based on the classification performance with different thresholds close to thre_fix, 70°/s and 65°/s were selected. Figure 3.3 describes the transitions between gaze behaviors with conditions based on V_g, thre_fixtosac = 70°/s, and thre_sactofix = 65°/s. The state of the gaze behavior is denoted Fix ∈ {0, 1}.

Figure 3.3: A hysteresis loop for fixation and saccade.
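The hysteresis of Figure 3.3 can be written as a small state update, sketched below with the thresholds stated above; the example velocity sequence is fabricated.

```python
def update_fix_state(fix, v_g, thre_fixtosac=70.0, thre_sactofix=65.0):
    """Hysteresis of Figure 3.3: Fix = 1 (fixation) or 0 (saccade)."""
    if fix == 1 and v_g > thre_fixtosac:   # fixation -> saccade at the higher threshold
        return 0
    if fix == 0 and v_g < thre_sactofix:   # saccade -> fixation at the lower threshold
        return 1
    return fix                             # otherwise keep the current state

state = 1
for v in [50, 68, 72, 66, 60]:             # gaze velocities (deg/s)
    state = update_fix_state(state, v)
    print(v, "->", "fixation" if state else "saccade")
```

The gap between the two thresholds prevents the state from chattering when the gaze velocity hovers near a single cut-off.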

3.3.6 Kalman Filter

The combined Kalman Filter follows the procedure of Algorithm 2. For the first two steps (or frames, i = 1 and 2), the gaze velocity $V_g$, the estimated eye position $\hat{P}_g$, the estimated gaze direction $\hat{\theta}_g$, and the matrices P and K were initialized as follows:

$$V_{g,1} = \begin{bmatrix} 0 & 0 \end{bmatrix} \qquad (3.3)$$

$$V_{g,2} = \frac{\Delta\theta_{g,2}}{\Delta T}, \quad \Delta\theta_{g,2} = \theta_{g,2} - \theta_{g,1} \qquad (3.4)$$

$$\hat{P}_{g,i} = P_{g,i} \qquad (3.5)$$

$$\hat{\theta}_{g,i} = \theta_{g,i} \qquad (3.6)$$
