
Real-Time Pupillary Analysis By An Intelligent Embedded System


Academic year: 2021




Mälardalen University, Västerås, Sweden

Thesis for the Degree of Master of Science in Engineering - Robotics and Master of Science in Intelligent Embedded Systems, 30.0 credits

REAL-TIME PUPILLARY ANALYSIS BY AN INTELLIGENT EMBEDDED SYSTEM

Mujtaba Hasanzadeh

mhh14002@student.mdh.se

Alexandra Hengl

ahl13002@student.mdh.se

Examiner: Ning Xiong


Abstract

With no online pupillary analysis methods available today, both the medical and research fields are left to carry out a lengthy, manual and often faulty examination. A real-time, intelligent, embedded systems solution for pupillary analysis would help reduce faulty diagnoses, speed up the analysis procedure by eliminating the human expert operator and, in general, provide a versatile and highly adaptable research tool. This thesis has therefore sought to investigate, develop and test possible system designs for pupillary analysis, with the aim of caffeine detection. A pair of LED manipulator glasses was designed to standardize the illumination method across tests. A data analysis method for the raw pupillary data was established offline and then adapted to a real-time platform. An ANN was chosen as the classification algorithm. The accuracy of the ANN in the offline analysis was 94%, while the accuracy obtained in the online classification was 17%. A real-time data communication and synchronization method was developed. The resulting system showed reliable and fast execution times: data analysis and classification took no longer than 2 ms, faulty data detection showed consistent results, and data communication suffered no message loss. In conclusion, it is reported that a real-time, intelligent, embedded solution is feasible for pupillary analysis.


Acknowledgments

We would like to thank our supervisors, Martin Ekström and Adnan Causevic, for their expert


Abbreviations

ANN Artificial Neural Network
ANOVA Analysis of Variance
ECG Electrocardiography
EEG Electroencephalography
EMG Electromyography
KNN K-Nearest Neighbors
ML Machine Learning
PLR Pupil Light Reflex
PERRLA Pupils are Equal, Round, and Reactive to Light and Accommodation
SVM Support Vector Machine


Table of Contents

1 Introduction
2 Background
   2.1 Related Work
      2.1.1 Caffeine related research
      2.1.2 Machine learning for pupillometry
      2.1.3 Effect of different wavelengths on the pupil
   2.2 Summary of related work
3 Problem Formulation
   3.1 Research Questions
   3.2 Hypothesis
4 Method
5 Ethical and Societal Considerations
6 Eye manipulator
   6.1 Circuit
   6.2 LED pattern
   6.3 Frame
   6.4 LED irradiation
7 Experimental setup
   7.1 Baseline measurement
   7.2 Caffeine measurement
8 Offline analysis
9 The selected classification algorithm
   9.1 ANN for the real-time analysis
10 Statistical analysis
   10.1 Baseline measurement
   10.2 Caffeine measurements
   10.3 Baseline vs caffeine
11 Online analysis
   11.1 System architecture
      11.1.1 Low level system communication
   11.2 Task level system architecture
      11.2.1 Real-time message handling
      11.2.2 Synchronization for real-time data collection
   11.3 Real-time caffeine detection
12 Results
   12.1 Offline analysis
      12.1.1 Data processing
      12.1.2 Comparison of classification algorithms
   12.4 Online analysis
      12.4.1 Data validation
      12.4.2 Filtering
      12.4.3 Feature extraction
      12.4.4 ANN
      12.4.5 Real-time caffeine detection
13 Conclusions
14 Discussion and future work
References


1 Introduction

The pupil is the black spot centered in the iris (the colored area of the eye) and is the opening to the eye. Its primary function is to control the amount of light that can get into the eye (i.e. onto the retina) by constriction and dilation: the pupil diameter decreases in response to light stimulus. However, pupil size is not exclusively determined by the amount of light. Other factors, like accommodation (bringing a near object into focus), emotional stimuli and increased cognitive load, also result in constriction and dilation. Furthermore, neurological defects, drug use and mental state can all affect the size and responsiveness of the pupil. Therefore, pupil measurement and examination can reveal much about an individual's health and mental condition.

There are two main circumstances in which pupil examination is important: in clinical and in research settings. In the clinical setting, the pupil's responsiveness to light stimulus is examined for various reasons, for example as part of a neurological examination or in critical care [1] to monitor patients with traumatic brain injury and stroke [2, 3].

There are various methods of eye examination that require different equipment. For instance, with a slit lamp (a microscope with a bright light), it is possible to see the back of the eye. The examination involves shining bright light through the pupils and requires that the pupils are dilated with dilating drops. During the examination, the patient's head is kept steady by a chin rest and a forehead band. The physician sits in front of the patient and observes the pupils through the microscope. The success of such an evaluation is highly dependent on the physician's level of knowledge and experience.

Another procedure, which requires minimal equipment, is the PERRLA test. PERRLA is an acronym (pupils are equal, round, and reactive to light and accommodation) that reminds the physician what to check for when examining the pupil. Normal pupils are equal in size (within one millimeter of each other) and perfectly round. The physician usually takes measurements with a millimeter ruler. A healthy pupil should contract when exposed to light and when shifting focus from a far object to a near object. The equipment needed for the PERRLA test is a moderately lit room, a millimeter ruler (pupillary gauge) and a transilluminator (light source). The physician tests the pupillary light reflex by applying the swinging light test [4]. This test involves exposing the pupils, one at a time, to a hand-held light source while taking measurements manually. Pupillary responsiveness to accommodation is tested by bringing the tip of a pen close to the eye, along the patient's visual axis.

Both types of examination mentioned above are conducted manually and require the expertise of a medical professional. Studies have shown that such manual examination is prone to measurement inaccuracies at levels as high as 39% [5]. A much more effective method involves automated pupillometry, for instance with a pupillometer. A pupillometer is a handheld infrared device that measures pupillary reflexes induced by light stimulus. It provides a more objective and more accurate measurement than the manual methods, and the measurement data can be saved for offline analysis. However, pupillometers are proprietary devices with fixed functionality; they are neither adaptable nor versatile. Another disadvantage is that the measurement data has to be analysed offline and manually by a medical professional, which can be both time consuming and can potentially result in faulty diagnosis.

Pupillometry is also utilized in the research setting. The pupil's reaction to light stimulus under different circumstances, e.g. under the effect of substances (drugs, alcohol, caffeine, etc.) [6, 7, 8], health problems (Alzheimer's and Parkinson's disease) [9] and different states of mind (working memory load, sleepiness, etc.) [10, 11, 12], is widely studied in the field of neuroscience. Usually


light reaching the pupils. In such cases, the measurement data must be discarded and the test may have to be repeated. Thus, reaching conclusions from test results can be a lengthy, repetitive process because of the offline nature of the evaluation procedures, often requiring a human expert in the loop. If faulty data could be discovered at the time it occurred, it would be possible to discard it and repeat the measurement immediately. This way, the inclusion of only correct data could be ensured by the time the test finished. Such a closed feedback loop could perhaps be realized by a real-time solution with precise synchronization between subsystems.

Another common problem with today's pupil testing methods is that they are cumbersome and even unreliable, because the various types of equipment utilized can be sensitive to calibration, ambient light and other factors. For instance, if the test subject wears glasses, they can reflect light, leading to bad measurements [13]; or if the test subject's head moves farther from the light source of the manipulator while the measurement is taken, the data will become inaccurate. Such data has to be identified and excluded from any further evaluation. However, faulty data identification is not a straightforward task, as multiple attributes have to be recorded and considered in parallel. For instance, to be able to conclude that a pupil measurement is faulty because the test subject's head moved too far from the manipulator's light source, the head movement information needs to be recorded in parallel with the pupil dilation data. Obviously, in such a scenario, pupil dilation is the main source of interest, while head movement is irrelevant to the objective of the study. Unfortunately, the more data is collected, the more constraints are put on the system in terms of bandwidth, throughput and processing power. A wearable manipulator that sits at a constant distance from the eye could perhaps simplify the problems arising from distance- and ambience-related light conditions. In effect, it would make it possible to decrease the amount of data collected.
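The parallel-attribute fault check described above can be sketched in a few lines. This is an illustrative example only: the function name, the plausible-diameter range and the distance threshold are invented here, not taken from the system described in this thesis.

```python
def is_faulty(pupil_mm, head_distance_cm,
              pupil_range=(1.5, 9.0), max_distance_cm=70.0):
    """Flag a sample as faulty if the pupil diameter is physiologically
    implausible (e.g. a blink read as 0 mm) or the head has moved too far
    from the manipulator's light source. Both attributes must therefore
    be recorded in parallel, as discussed above."""
    lo, hi = pupil_range
    if not (lo <= pupil_mm <= hi):
        return True
    return head_distance_cm > max_distance_cm

# Each sample pairs a pupil reading with the simultaneous head distance.
samples = [(4.2, 62.0), (0.0, 63.0), (4.0, 85.0), (3.8, 60.0)]
flags = [is_faulty(p, d) for p, d in samples]  # [False, True, True, False]
```

In a real-time loop, a True flag would immediately trigger a repeated measurement, rather than the fault being discovered only after the test has finished.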

Furthermore, the variety of manipulators utilized in research, and the lack of specification of them, makes it difficult to reproduce the reported results. The light source manipulators used include flashlights, monitor screens, cellphones and pen lights. These tools have different wavelengths and brightness levels and therefore produce different test results, often incomparable across studies. A versatile, wearable solution would possibly help standardize pupil manipulator equipment in research, and thus enable easier reproduction of results.

This thesis report is structured in the following way. Section 2, Background, explains some of the important definitions related to pupillometry, briefly presents the history of pupillometry methods and identifies the state of the art today. In section 3, Problem formulation, the challenges are identified, and the research goals and research questions of this thesis work are defined. In section 4, Method, the steps toward answering the research questions are described in detail. In section 5, Ethical and societal considerations, the method of handling sensitive or private information is declared. In section 6, Eye manipulator, the design of the eye manipulator is described in detail. Section 7, Experimental setup, explains the measurement procedures. Section 8, Offline analysis, describes the data processing procedure as well as the classification algorithm of the offline analysis method. In section 9, The selected classification algorithm, the chosen machine learning method and its implementation are described. Section 10, Statistical analysis, explains the statistical tests that have been performed on the obtained data. In section 11, Online analysis, the implementation on the real-time platform is explained. Section 12, Results, shows all the results obtained from the offline and real-time analysis. Section 13, Conclusions, summarizes the thesis report and discusses its significance. Section 14, Discussion and future work, discusses the results and suggests future work.


2 Background

PLR (Pupillary Light Reflex) is the name of the phenomenon that can be observed when the pupils are stimulated by light. Brighter light conditions trigger pupillary constriction, while darker light conditions result in the dilation of the pupil. The PLR can be triggered in two different ways, thanks to the afferent and efferent nerve pathways. The afferent pathway is responsible for pupil constriction to direct stimulus, i.e. if light is shone into the eye, its pupil will constrict. The efferent pathway, on the other hand, causes constriction in the other eye, i.e. shining light into one eye causes equal constriction in the other eye as well. When the stimulus turns off, the pupil's diameter expands and returns to its baseline (original size). If these diameter values, following a dose of light stimulus, are drawn on a chart with respect to time, a curve called the PLR curve can be visualized. Figure 1 shows such a typical PLR curve.


Figure 1: Shows a PLR curve following light stimulus. Dbase is the initial diameter before illumination. Response latency (tL) is the time delay before the pupil starts to constrict after the light onset. Constriction time (tC) is the time interval between the start of constriction and the minimum diameter Dmin. The velocity of this phenomenon is MCV, the maximum constriction velocity. CA is the constriction amplitude, i.e. the total amount of constriction. After illumination, the pupil starts to redilate to its initial diameter. MRV (maximum recovery velocity) is the speed at which this happens.
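For concreteness, the quantities defined in the caption can be computed from a uniformly sampled diameter trace roughly as follows. This is a hedged sketch, not the thesis's implementation: the function, the toy trace and the sampling rate are invented, and real data would first need blink removal and filtering.

```python
def plr_features(diam, fs, onset_idx):
    """diam: pupil diameter samples (mm); fs: sampling rate (Hz);
    onset_idx: sample index of the light onset."""
    d_base = diam[onset_idx]                      # Dbase: diameter at onset
    i_min = min(range(onset_idx, len(diam)), key=lambda i: diam[i])
    d_min = diam[i_min]                           # Dmin: minimum diameter
    ca = d_base - d_min                           # CA: constriction amplitude
    # tL: time until the diameter first starts to drop after onset
    i_start = next(i for i in range(onset_idx, i_min + 1) if diam[i] < d_base)
    t_l = (i_start - onset_idx) / fs
    t_c = (i_min - i_start) / fs                  # tC: constriction time
    # MCV: largest per-sample constriction rate (mm/s)
    mcv = max((diam[i] - diam[i + 1]) * fs for i in range(i_start, i_min))
    # MRV: largest redilation rate after Dmin (mm/s)
    mrv = max((diam[i + 1] - diam[i]) * fs for i in range(i_min, len(diam) - 1))
    return {"Dbase": d_base, "Dmin": d_min, "CA": ca,
            "tL": t_l, "tC": t_c, "MCV": mcv, "MRV": mrv}

trace = [6.0, 6.0, 6.0, 6.0, 5.0, 4.0, 3.5, 3.5, 4.0, 4.5, 5.0]  # toy trace
feats = plr_features(trace, fs=10, onset_idx=2)
# e.g. feats["CA"] == 2.5  (Dbase 6.0 mm minus Dmin 3.5 mm)
```

These are exactly the kinds of scalar features that a classifier can later be trained on, one vector per light stimulus.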


Bellarminow, in 1885 [15], was the first to describe a study utilizing a photographic technique: the horizontal movement of a light-sensitive paper could capture the pupil's dilation as a black band of changing width. Later, the invention of cinematography contributed to the new method of pupillography, which produced a long series of photographs capturing the pupillary reflex in motion. The developments in infrared technology have also been a substantial advancement that enhanced data accuracy; namely, they made data collection possible in complete darkness, which is an important condition for establishing a baseline pupil size. Lowenstein and Loewenfeld used infrared technology in their "electronic pupillograph" [16].

Until this time, pupillary measurements could only be taken manually, with a ruler or a compass. With the advent of computers, however, automated pupillometry has been utilized. A pupillometer is such an automated device, specialized for use in clinical settings. These devices are equipped with visible and infrared light and are able to manipulate one eye at a time. They capture the pupil's reaction to light stimulus and enable offline data analysis. They measure pupil changes in a quantitative way and give more objective, and thus more accurate, results than the previous, manual methods [17].

Unfortunately, commercial systems like pupillometers are purposed for a specific task and use proprietary software, which limits their usage for other applications [18]. In research settings, however, more versatile and more adaptable monitoring and data collection tools are often required. Most recently, research has taken advantage of the evolving eye-tracker smart camera technologies [18]. Smart cameras are embedded systems that incorporate real-time image processing capabilities to extract high-level features from the viewed scene. In contrast with traditional cameras that capture and output pixel color related information, smart cameras typically output application-specific data for use in intelligent systems [19]. Some of their many applications are traffic and human surveillance [20], assisted living [21, 22], health care [19], medical research [23], human detection, movement analysis, etc. They are capable of performing real-time image analysis [24] at a high frame rate with high resolution, while their integrated processing units replace full-size desktop computers. These compact systems are favourably utilized in embedded systems research [25]. Eye-trackers are smart cameras specialized for eye-related data generation. They potentially enable the creation of highly versatile pupillometric systems.

Many different types of physiological sensors are utilized in research alongside pupillometric systems. Physiological sensors are devices that can record human physiological signals. Examples of such sensors are ECG (electrocardiography, also called EKG) for measuring the electrical activity of the heart, EMG (electromyography) for recording the electrical activity of skeletal muscles, and EEG (electroencephalography) for recording the electrical signals produced by brain activity.

EEG is a noninvasive method that only requires electrodes to be attached to the skin of the scalp, and thus causes no discomfort. It has a wide range of clinical applications, from diagnosing epilepsy and other seizure disorders [26] and sleep disorders, to confirming brain death in comatose patients and finding the right level of anesthesia [27]. EEG is also used in research on mental disabilities like ADD, ADHD and Alzheimer's disease, mental state classification [28], mental workload [29, 30], and other mental states. Since these conditions can also affect pupil behaviour, EEGs are likely to be used in state-of-the-art pupillometric research together with an eye-tracker in order to evaluate pupillary data. For example, pupil dilation has been observed in response to emotionally charged (whether positive or negative) stimuli [31] (presented as images, words, sounds, etc.). Moreover, pupillometric research has been interested in the correlation between the degree of pupil dilation and cognitive load. Studies have been conducted where participants solved cognitively demanding tasks (e.g. the Stroop task) while their pupillary response was tracked. The results suggest that more intensive cognitive processing results in larger pupil diameters [32].

The evaluation of sensory data is often done manually; however, automation of this process is a sought-after technique. ML (machine learning) algorithms are promising candidates for this because they are used to analyse, predict and find relationships in linear or non-linear data sets in different applications. ML can be divided into two main groups based on the learning method: supervised and unsupervised. In supervised learning, the output of the training data set is labeled with the known solution. In contrast, in unsupervised learning, the output is unlabeled and the algorithm must find the solution [33] by itself. One application of ML is to analyse data within pupillometry, where the data is gathered by appropriate devices in the form of numerical or image data. Furthermore, machine learning algorithms can be used for prediction and classification, for purposes such as emotion classification, prediction of different eye diseases, mental workload estimation, etc.
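The supervised/unsupervised distinction can be made concrete with a dependency-free toy on a single made-up feature (here a pupil diameter in millimeters). The data, labels and class names are invented for illustration; a nearest-centroid rule stands in for a supervised classifier, and 1-D 2-means for an unsupervised one.

```python
def fit_centroids(xs, labels):
    """Supervised: the outputs are labeled, so one centroid per known
    class is computed directly from the labeled training data."""
    classes = sorted(set(labels))
    return {c: sum(x for x, l in zip(xs, labels) if l == c) / labels.count(c)
            for c in classes}

def predict(centroids, x):
    # Assign x to the class with the nearest centroid.
    return min(centroids, key=lambda c: abs(centroids[c] - x))

def kmeans_1d(xs, iters=20):
    """Unsupervised: no labels are given; 2-means must discover the
    grouping by itself."""
    cent = [min(xs), max(xs)]                      # crude initialization
    for _ in range(iters):
        groups = [[], []]
        for x in xs:
            groups[0 if abs(cent[0] - x) <= abs(cent[1] - x) else 1].append(x)
        cent = [sum(g) / len(g) if g else cent[i] for i, g in enumerate(groups)]
    return sorted(cent)

xs = [3.1, 3.3, 3.0, 5.8, 6.1, 5.9]                # invented diameters (mm)
labels = ["baseline", "baseline", "baseline",
          "caffeine", "caffeine", "caffeine"]
cents = fit_centroids(xs, labels)   # labeled data -> supervised
cls = predict(cents, 6.0)           # -> "caffeine"
clusters = kmeans_1d(xs)            # unlabeled data -> two found centroids
```

The unsupervised run recovers roughly the same two group centers, but without ever being told which samples belong together.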

2.1 Related Work

In this section, the related work is investigated and categorized into three parts. First, caffeine related studies are examined. Then, the different machine learning algorithms utilized in pupillometry are summarized. The last part discusses the effects of manipulation with different wavelengths.

2.1.1 Caffeine related research

Multiple factors can affect the light response of the eyes, among them age, gender, iris color, drug use and neurological health [18]. Figure 1 in the previous section shows the PLR curve as a function of time in normal pupils. This curve will look different in subjects after drug use or even caffeine consumption, for example. The effects of caffeine on humans have been of interest to science for some time. Most studies measure pupil size in relation to conditions like sleepiness, alertness, cognitive performance, etc., with and without (before and after) the effect of caffeine, but without using light stimulus; i.e. only pupil size is measured, not the pupillary light reflex. Some papers conclude that caffeine does have an effect on the pupil, while others failed to detect any significant effect.

Minzhong et al. in 2004 [34] measured pupil diameter and pupil contraction latency in sleep deprived participants, some of whom ingested caffeine (400 mg on one day and 100 mg on the next), others placebo. Measurements were taken with a FIT 2003 oculometer (Pulse Medical Instruments, Rockville, Maryland), but no significant difference was found between the placebo and the caffeine group in terms of pupillary measurements.

A study in 2008 [35] was able to show the effect of a low dose (200 mg) of caffeine on alertness in non-sleep-deprived individuals. This was done with the Optalert system [36], which utilizes infrared reflectance oculography based on JDS [37] scores. This method measures total blink duration and eyelid velocity while opening and closing.

A study in 2017 [38] measured the amplitude of pupil size and accommodation after 250 mg of caffeine ingestion. Pupil size was measured with a slit microscope and a millimeter ruler, in a dimly lit room, 30, 60 and 90 minutes after consumption. Statistical data analysis was done with the software GraphPad Prism (GraphPad Software Inc., San Diego, CA, USA). The obtained results show that caffeine dilates the pupils and increases the accommodative amplitude.

Bardak et al. [8] examined certain ocular variables, including pupil size, after a single administration of coffee (57 mg caffeine). The measurements were taken with a wavefront aberrometer (Irx3, Imagine Eyes). The authors conclude that the changes in pupil diameter, although slightly increased, were not statistically significant.


workloads of nine subjects, in two different time segmentations (one-second average data and recall average data). Pupillary data (diameter, fixation, divergence and movement) was obtained using an eye-tracking camera while the subjects were performing memory tests. The algorithms were a neural network with back propagation, logistic regression, and a classification tree. The ANN and the classification tree showed more significant results than logistic regression. According to the authors, classification based on one-second average data is more suitable for real-time application than recall averaged data.

The study by Acharya et al. [40] evaluated the performance of an Artificial Neural Network (ANN), a Fuzzy classifier and a Neuro-Fuzzy classifier, based on data from 135 subjects with normal health as well as subjects with three different eye diseases. The images were processed and four different features were extracted. The data set was divided into a training set with 76 samples and a test set with 54 samples. All classifiers achieved 100% accuracy for the normal subjects, while the accuracy for the diseased subjects varied. The Fuzzy and Neuro-Fuzzy classifiers obtained the same accuracy and performed better than the ANN on the diseased eyes. As an average percentage, the ANN, the Fuzzy classifier and the Neuro-Fuzzy classifier obtained 89%, 92.94% and 92.94%, respectively. It is mentioned that the performance of the classifiers could be improved by using more training data and by finding more suitable input parameters.

By measuring the pupil diameter, emotions can be classified. One such study was done by Areej et al. [41], where the emotions of the subjects (positive and negative) were classified using K-Nearest Neighbors (KNN). In this study, two data sets were gathered from thirty subjects in a normal state, in two different experimental setups. Once the data sets were gathered, the data was processed to remove outliers and thereby increase the accuracy of the result. The classifier achieved an accuracy of 96.5%. Classification of emotions is also studied in the paper by Aracena et al. [42], where the experiment was performed on four subjects. The data was gathered by an eye-tracker camera while the subjects were looking at images on a monitor. The emotion states were divided into negative, positive and neutral classes. The data sets were preprocessed before analysis. Classification of the data (individually as well as for all subjects) was performed by an ANN with different numbers of neurons in the hidden layer, and by a combination of a Decision Tree (DT) and an ANN. The DT with ANN had better classification performance in both cases than the ANN alone. The results showed that the DT combined with an ANN with 60 neurons in the hidden layer obtained 53.6% precision, while the ANN with 60 neurons in the hidden layer achieved 50.1% precision.

In a study by Wioletta et al. [43], the authors investigated the PLR in reaction to flashes of different colors of light (white, red and blue). The subjects that took part in this experiment were a combination of healthy and Alzheimer's disease (AD) individuals. The study aimed at classifying and predicting persons with AD. After extracting features from the data set using the Discrete Fourier Transform (DFT), the random forest technique was used to classify the data. This resulted in the conclusion that it is possible to identify a healthy person, while the results for the subjects with AD varied (most error rates were around 50%). According to the authors, more details and information about the patients, such as eye movement, blinking and degree of AD, should be taken into consideration when analysing the PLR response, in order to improve the results.

In a work by Jungjin et al. [44], the learning state of 72 subjects was studied. Data was gathered by an eye tracker while the subjects were in the learning state. To classify the obtained data, a Support Vector Machine (SVM) was used. The data set was divided into two parts: 80% was used as the training set and the remaining 20% for validation. The result of the chosen algorithm was validated using K-fold cross validation. The result for the preprocessed data set was compared to that for the non-preprocessed data set: with a preprocessed data set, the SVM obtained 68.8% accuracy, while the accuracy gained without preprocessing was 65.114%.
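As a concrete illustration of the validation scheme mentioned for [44] (an 80/20 hold-out split checked with K-fold cross validation), the index bookkeeping looks roughly like this. The data size and fold count are arbitrary, and the splits are unshuffled for brevity; real use would shuffle first.

```python
def holdout_split(n, train_frac=0.8):
    """Return (train, test) index lists, e.g. an 80% / 20% split."""
    cut = int(n * train_frac)
    idx = list(range(n))
    return idx[:cut], idx[cut:]

def kfold(n, k=5):
    """Yield K (train, test) index pairs; every sample is tested exactly
    once, and trained on in the other K-1 folds."""
    idx, fold = list(range(n)), n // k
    for i in range(k):
        lo = i * fold
        hi = n if i == k - 1 else lo + fold
        yield idx[:lo] + idx[hi:], idx[lo:hi]

train, test = holdout_split(10)   # 8 training and 2 validation indices
folds = list(kfold(10, k=5))      # 5 disjoint test sets of 2 samples each
```

Averaging a classifier's accuracy over the K folds gives a less split-dependent estimate than a single hold-out score.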


2.1.3 Effect of different wavelengths on the pupil

Increasing the light's intensity on the pupil leads to a maximal constriction amplitude, lower latency response, and maximal redilation [45]. The pupillary response also varies with light of different wavelengths but the same intensity. In the paper by Baritz et al. [46], PLR measurements were performed on four subjects using different wavelengths (red, blue, white, green and yellow). Each eye of each subject was manipulated for 5 seconds with the mentioned wavelengths. The pupils' diameter changed by a different amount for each wavelength. The results showed that red light constricted the pupils less than the other wavelengths, while yellow constricted the pupils the most.

Another study that investigates the effect of red and blue light on the pupil was done by Herbst et al. [47]. One eye of each of 10 healthy subjects was stimulated for 20 seconds with blue (470 nm) and red (660 nm) wavelengths at a fixed intensity of 300 cd/m², and the reaction of the second eye was recorded with an infrared video camera. According to their experiment, stimulation by blue light resulted in a larger constriction amplitude, and the pupil maintained its constricted size during the whole exposure. In contrast, the pupil partially redilated during stimulation by red light. Some difference between the effects of red and blue light stimulus was also observed after light offset: in the case of red light, the pupil regained its baseline size within 20 seconds, while for blue light it took over 30 seconds for every subject.

Another factor that changes the pupil light reflex in humans is iris color. It has been shown that normal, healthy subjects with brown irises have higher pupil contraction velocity and redilation velocity, as well as higher amplitude (magnitude of the pupil contraction), than subjects with blue irises. However, there are no significant differences in pupil size and latency response between these groups [48].

2.2 Summary of related work

Studies show that the PLR has been investigated in different application areas. The eyes were illuminated with different wavelengths in different experimental setups. The most commonly used device for gathering data is the eye tracker. ML is used to classify the obtained data in the respective applications. The studies show that the most commonly used classification algorithm is the ANN. The ANN, as well as KNN and the fuzzy classifier, have all achieved higher accuracy than the SVM. It can also be mentioned that all these studies were performed offline.


3 Problem Formulation

Some in-house, ongoing research studies at Mälardalen University currently utilize a system for pupillary analysis. This system is equipped with a flash light manipulator, an eye tracker camera system and an EEG device (see figure 2). The EEG serves two purposes: it not only measures brain activity in certain research, but also provides synchronization between the different subsystems. Because of this synchronization requirement, there is currently no way to exclude the EEG from the system, even if it is not used for sensory measurements. In this system, as well as in the current, general methods of pupillometric research, four problems have been recognised:

1. Current methods of pupillometry lack a standardized solution for a manipulator. They produce uneven and inconsistent intensities of illumination. Such inconsistency in research makes it difficult to reproduce the reported results, and the results obtained this way cannot be reliable. Therefore, a more reliable method of illumination is needed.

2. Occasionally faulty or inaccurate data collection calls for repeated pupillary measurements. However, discovery of faulty data takes place too late in the testing process, since data evaluation is usually done after the measurement process. Real-time fault detection together with a feedback loop to the manipulation would shorten the testing procedure.

3. Manual analysis can be prone to misinterpretation and faulty diagnosis. Manual data analysis should be substituted with an intelligent, automated method to reduce the time of data analysis and to reduce faulty diagnosis.

4. The current system at Mälardalen University should have the option to substitute the EEG if needed. Therefore, another solution for synchronization is necessary.

Figure 2: Shows the current system at Mälardalen University. The eye tracker gathers pupillary data from the subject, while the flash light above the camera illuminates the pupils. The measurement data is sent to the host computer where the operator sits. The subsystems are synchronized by the EEG device.

To solve the problems above, a new system, integrated with the current one, with the following properties is proposed (see figure 3).

1. The eye manipulator should be redesigned. A wearable form factor that is not sensitive to the head movements of the subject would provide more consistent illumination.

2. With the use of a real-time embedded system and a fault detection method, faulty data could be discovered and handled in real-time, thus shortening the whole testing process.


3. An intelligent embedded system with some machine learning method could automate the analysis and evaluation of pupillary data, thus eliminating the need for manual data analysis.

4. A synchronization method by the embedded system would free the EEG from its synchronization duties, thus making the system cheaper and more modular.

Figure 3: Shows the proposed embedded system. In this system, no operator is needed for the data evaluation. Instead, the gathered data is sent to the embedded system for analysis. The results are presented on the host computer. The embedded system also provides a fault detection feedback loop through controlling the eye manipulator. The EEG device is now optional, because the embedded system provides synchronization.

3.1

Research Questions

The problems and the proposed solutions mentioned above lead to the following research questions:

RQ1. What are the characteristics of an embedded system that is suitable in terms of speed, accuracy and dependability to process pupil dilation data in real-time?

RQ2. What kind of communication and synchronization method would be sufficient between the EEG medical equipment, the eye tracking camera, and the embedded system that also minimizes latency?

RQ3. Based on the unique characteristics of the pupillary data, what kind of data processing method should be applied that is useful for classification?

RQ4. What would be an appropriate machine learning algorithm as classification method for a real-time embedded system?

RQ5. Is the system able to detect one administration of caffeine via pupillometric data?

3.2

Hypothesis

The hypothesis of this thesis states that the proposed system in Figure 3 would enable a faster and more adaptable data analysis mechanism, thus providing a versatile tool for research in neuroscience. With regards to caffeine detection, the hypothesis states that it is possible to predict one administration of caffeine in subjects by utilizing a machine learning algorithm.


4

Method

To answer the research questions above, an iterative study and development strategy has been employed. First, a literature study of the related works was conducted in order to understand the state-of-the-art methods of pupillometric research with regards to both hardware and algorithm aspects.

In the first development step, the eye manipulator hardware and software were designed and developed. This hardware was fundamental to gather some initial measurement data with the given system. On the other hand, the software was necessary to provide control of the hardware. A prerequisite for the software development was to get acquainted with a new development environment, i.e. LabVIEW.

The eye manipulator was used in the next step to obtain a deeper understanding of the given system, as well as the raw data that it can produce. For this, a series of empirical measurements were taken with the data collection equipment. These measurements produced the essential raw data that could be visualized and analysed in an offline practice. With the help of the raw data, it was possible to experiment with different data processing methods and choose the most successful and necessary ones. At this point, some tests could be carried out with some candidates of machine learning algorithms. After the evaluation of the tests, the best performing algorithm was chosen. The result of this process could answer RQ3 and also established an offline pupillary analysis method.

Understanding the given system also provided an insight into the possible design choices regarding the required embedded system. In this stage, appropriate communication and synchronization methods could be established and RQ2 could be answered.

To answer RQ1, the embedded system had to be developed. For the development it was essential to understand the nature of the raw data, as well as the employed data communication method, so that the speed and dependability of the system could be adjusted. The previously developed offline analysis method also provided a ground for the implementation in the current (online) stage, since the data analysis and the machine learning parts could be reused.

RQ4 and RQ5 could be answered during evaluation tests conducted with the final embedded system design. The results from the online tests were compared to the results with the offline method.


5

Ethical and Societal Considerations

The data that have been documented about each subject are first name, age, gender, weight, as well as known eye abnormalities. These data are handled according to the GDPR (the General Data Protection Regulation). All data documentation has been kept hidden and the names of the subjects have not been published in the report. No one other than the researchers has access to the data. Instead of their names, the subjects are referred to as S1, S2, S3, etc. in the report. Participating in the tests was voluntary and the participants were informed about the measurement procedure. All the measurement sessions started with the participants' consent.


6

Eye manipulator

This section describes the design of the eye manipulator and the different components that were used to create it. The circuit was designed and given.

6.1

Circuit

LED's (Light Emitting Diodes) are used as the light source in the circuit of the manipulator. LED's can be monochromatic (single-colored) or multichromatic (multi-colored). Monochromatic LED's enable a more precise control of the desired wavelength and light intensity. In this study, four different colours (wavelengths) of monochromatic LED's have been chosen: yellow, red, green and blue. These colours have also been used in other studies involving PLR. Table 1 shows the exact wavelength of the selected colours.

Table 1: Shows the selected wavelengths and their luminous intensities.

                          Yellow   Red   Green   Blue
Wavelength (nm)              592   632     520    465
Luminous intensity (mcd)      80    60     225    150

In order to switch the LED's on and off, each group of LED's is controlled by a transistor. The most common types of transistors are the current-driven BJT (Bipolar Junction Transistor) and the voltage-driven FET (Field Effect Transistor). The advantages of BJT over FET are lower impedance and operation at higher frequencies, both of which are important attributes to consider when the component is used with PWM (Pulse Width Modulation).

Resistors have been used so that the LED's and the transistors get the correct amount of current and voltage in order to operate properly. The different wavelengths of LED's require resistors with different resistance values. To calculate these resistance values, the following formulas were used:

Rc = (Vin − Vf) / If (1)

Where Rc is the resistance in Ω, Vin is the voltage from the source (5V in this case), Vf is the forward voltage of the LED and If is the forward current of the LED.

Rb = hFE (Vport − VBE) / Ic (2)

Where Rb is the resistance in Ω, hFE is the gain factor of the transistor, Vport is the digital signal voltage (3.3V in this case), VBE is the base-emitter voltage and Ic is the collector current.
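As an illustration of how formulas 1 and 2 are applied, the sketch below computes both resistor values in Python. The LED and transistor parameters are hypothetical examples, not the exact components of the manipulator.

```python
# Resistor sizing for one LED group (formulas 1 and 2).
# All component values below are illustrative placeholders.

def led_resistor(v_in, v_f, i_f):
    """Series resistor Rc = (Vin - Vf) / If."""
    return (v_in - v_f) / i_f

def base_resistor(h_fe, v_port, v_be, i_c):
    """Base resistor Rb = hFE * (Vport - VBE) / Ic."""
    return h_fe * (v_port - v_be) / i_c

# Example: an LED with Vf = 2.0 V and If = 20 mA on a 5 V supply -> about 150 ohm
r_c = led_resistor(5.0, 2.0, 0.020)
# Example: a BJT with hFE = 100, VBE = 0.7 V, a 3.3 V GPIO signal and Ic = 80 mA
r_b = base_resistor(100, 3.3, 0.7, 0.080)
```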

6.2

LED pattern

The LED's have been placed in a circular pattern on the PCB, see figure 4. This pattern potentially creates the most even distance to the pupil and thus the most uniform illumination. The diameter of the circles was chosen to be 40mm. A circle that is too large could place the LED's too far from the pupils. On the other hand, a circle that is too small could interfere with the ability of the camera to recognise the eyes. Figure 5 shows the PCB design.


Figure 4: For each eye, four of each wavelength of LED’s are placed on the circuit board. The angle between the LED’s of the same wavelength is 90◦ and the angle between each LED is 22.5◦. The circle’s diameter is 40mm. This diameter is the minimum diameter through which the eyes could be detected by the camera.

Figure 5: The PCB’s of the eye manipulator.

6.3

Frame

To make wearing the designed PCB’s possible, a pair of 3D cinema glasses and some self-designed 3D printed frames have been used. The cinema glasses have been modified so that the frames themselves were removed and only the top-bar, hinges and the temples remained. A pair of frames have been designed with Solidworks software and printed so that they encapsulate the PCB boards and can be attached to the top-bar. One challenge with the design is to make it fit everyone, since the distance between pupils can vary individually. Therefore, the frames have been designed so that they enable the PCB’s to slide left or right and adjust the distance between them. Figure 6 below shows the completed manipulator.


Figure 6: The 3D-printed frames are attached to the top-bar of the stripped-down 3D cinema glasses. The PCB’s can slide left or right to adjust their distance.

6.4

LED irradiation

For a safe pupillary illumination with different wavelengths, it is important to take the intensity of the light source, as well as the illumination duration into consideration. Large light intensity and long illumination can damage the eyes. Therefore, it is crucial to know the maximum allowed light intensity and duration. According to the IEC-62471 documentation from the International

Electrotechnical Commission (IEC) organization [49], a light source with a wavelength between 300-700nm should have a maximum irradiance of 1.0W/m², if the illumination duration is more than 100 seconds. If the illumination duration is less than 100 seconds, then the allowed irradiance should be calculated as follows:

EEL = 100 / t (3)

Where EEL is the irradiance exposure limit in W/m² and t is the illumination duration in seconds.

To measure the actual irradiance of the LED's, each wavelength was tested in a dark room with a radiometer. The distance between the light source and the radiometer was adjusted to the approximate distance of the eye of a person wearing the glasses. Table 2 shows the results from the radiometer. The illumination duration for all LED's is 200ms in this thesis. The reason for selecting this duration is discussed in the Discussion section. Using this duration in formula 3, the maximum allowed irradiance is 500W/m². All obtained irradiance values are well below this limit, ensuring that the eyes of the subjects will not be damaged during the measurements.

Table 2: Shows the irradiance of four LED’s of the same wavelengths. These results indicate how much illumination each eye will receive by each wavelength, at each flash of manipulation.

          Irradiance (W/m²)
Yellow                0.005
Red                   0.015
Green                 0.020
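The limit check from formula 3 can be sketched as follows, using the measured irradiances from Table 2 (the blue value is not reproduced here):

```python
def irradiance_limit(t):
    """Allowed irradiance EEL in W/m^2 for an exposure of t seconds (formula 3).
    For exposures of 100 s or longer, the fixed 1.0 W/m^2 limit applies."""
    return 1.0 if t >= 100 else 100.0 / t

limit = irradiance_limit(0.2)          # 200 ms flash -> 500 W/m^2 allowed
measured = {"yellow": 0.005, "red": 0.015, "green": 0.020}
assert all(e < limit for e in measured.values())  # all LEDs safely below the limit
```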


7

Experimental setup

The measurements took place in a dark room (approximate illuminance <0.079 lux). The subjects were seated in front of an unlit monitor marked with a sticker in the middle of the screen. They were instructed to focus on that designated point during the measurement. The eye tracker device was placed at the bottom of the monitor, facing the subject according to the official, recommended setup. The distance between the subjects and the eye tracker was approximately 50cm. The subjects were wearing the eye manipulator. The eyes of the subjects were illuminated by one 200ms long flash of light, by one wavelength at a time. The flash was followed by a 15-second pause to let the pupils redilate. After the 15 seconds, a new flash of a different wavelength followed. This is illustrated in figure 7. A buzzer was given to the subjects that would alert them one second before each illumination. Alerting before each flash was useful in order to obtain as many valid measurements as possible. When the exact moment of illumination was unknown to the subjects, more faulty (i.e. blink contaminated) data was obtained. The eye tracker was recording and logging during the whole measurement. Figure 8 illustrates the procedure.


Figure 7: Shows the order of the different wavelengths during manipulation. The colors of the circles indicate the wavelength of each illumination in order. After each flash, a 15-second pause was scheduled so that the pupils could redilate to their initial diameters.


Figure 8: Shows the procedure of the measurement. The Smart Eye software was logging the measurement data. One second before each illumination, the buzzer alerted the subject to look in the middle of the screen.

The measurement described above was applied during both the baseline and the caffeine measurements. These measurements are explained below.


7.2

Caffeine measurement

For each subject, 3mg of caffeine/kg of body weight was administered. (The weight of each subject was measured in place.) Immediately after caffeine intake, the manipulation started. During this procedure, the pupils were illuminated five times by each wavelength (adds up to five minutes) and then a five-minute resting time was scheduled. This was repeated seven times. This means that every ten minutes, a five-minute measurement was obtained. The total length of this procedure took 65 minutes.

The obtained data was saved for offline analysis. The steps of the offline analysis are explained in the next section.


8

Offline analysis

The pupillary data gathered from the measurements are left pupil diameter, right pupil diameter as well as both pupils’ diameter (consensus values of the right and the left pupil). There are several problems with the data sets. Each data set is a continuous measurement whereby the individual PLR curves are not separated. Furthermore, the data is noisy and it contains blinks and outliers. This section explains the steps of the offline analysis procedure. Figure 9 shows the different steps employed in the order of application. Each step is explained separately in the following subsections. All these steps were implemented in a MATLAB software environment.

Data set → Filtering → Curve extraction → Blink removal → Feature extraction → Outlier removal → Normalization → Classification

Figure 9: Shows the steps to take during offline analysis.

Filtering

Due to the sensitivity of the measurement equipment, the gathered data is quite noisy. This noise has to be reduced. A filtering method can be applied to the data set to smooth it out. For this purpose, a median filter is utilised. A median filter replaces each data point with the median value of the neighbours of that point. The number of neighbours needs to be selected carefully: with a large number of neighbours, the shape of the PLR curve can be distorted, which causes inaccurate feature extraction in a later step. The number of neighbours for each data point is set to 40. This value was selected after testing with different numbers of neighbours.
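The thesis implements this step in MATLAB; a minimal pure-Python sketch of the same windowed median smoothing (40 neighbours, i.e. roughly 20 on each side) could look like this:

```python
from statistics import median

def median_filter(signal, half_window=20):
    """Replace each sample with the median of its neighbourhood.
    half_window=20 corresponds roughly to the 40-neighbour window used here.
    Windows are truncated at the edges of the signal."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half_window)
        hi = min(len(signal), i + half_window + 1)
        out.append(median(signal[lo:hi]))
    return out

# A single measurement spike is suppressed while the baseline level survives
noisy = [4.0] * 50
noisy[25] = 9.0
smooth = median_filter(noisy)
```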

Curve extraction

Each obtained data set is a continuous stream of pupillary diameter measurement, containing all wavelengths. Curves that correspond to a specific wavelength need to be separated from the other curves. This is needed in order to analyze and categorize data based on the wavelengths. To separate the signals, the beginning of each PLR curve has to be known. A synchronization time stamp at the beginning of every flash enables the separation of the individual PLR curves. The synchronization timestamp implementation is explained in section 11.

Blink removal

If a blink occurs during a test, the pupil diameter values will be set to zero by the eye tracker in the interval of the blink. Any zero values would preclude the feature extraction step, when trying to find the minimum of the curve amplitude. Therefore, the zero values have to be eliminated.


One way to reconstruct the curve is to use the cubic spline-fit [50] algorithm. This method uses four equally distributed time points and interpolates between the start point and the end point of the blink. After finding the blink start and end times, the following formulas are used to calculate the remaining time points:

t1 = t2 − (t3 − t2) (4)

t4 = t3 + (t3 − t2) (5)

Where t1 is the time before the blink, t2 is the start point of the blink, t3 is the end point of the blink and t4 is the time after the blink.
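As a sketch of the reconstruction, the snippet below fits a cubic through the four support points of formulas 4 and 5 (written in Lagrange form rather than with MATLAB's spline routine) and fills in the zeroed blink samples. It assumes uniformly sampled data; the function and variable names are illustrative.

```python
def lagrange_cubic(xs, ys, x):
    """Evaluate the unique cubic through four points, in Lagrange form."""
    total = 0.0
    for i in range(4):
        term = ys[i]
        for j in range(4):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

def repair_blink(t, d, i2, i3):
    """Fill the zeroed samples between blink start index i2 and end index i3.
    Support times follow formulas 4 and 5: t1 = t2-(t3-t2), t4 = t3+(t3-t2)."""
    t2, t3 = t[i2], t[i3]
    t1, t4 = 2 * t2 - t3, 2 * t3 - t2
    def nearest(tp):  # diameter at the sample closest to a support time
        return d[min(range(len(t)), key=lambda k: abs(t[k] - tp))]
    xs = [t1, t2, t3, t4]
    ys = [nearest(t1), d[i2], d[i3], nearest(t4)]
    out = list(d)
    for k in range(i2 + 1, i3):
        out[k] = lagrange_cubic(xs, ys, t[k])
    return out

# Example: a flat 5.0 mm signal with a blink zeroed between samples 9 and 11
t = [i * 0.01 for i in range(20)]
d = [5.0] * 20
for k in range(9, 12):
    d[k] = 0.0
fixed = repair_blink(t, d, 8, 12)
```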

Feature extraction

After filtering and removing the blinks from the data set, five dynamic features are extracted from each pupillometric curve. The extracted features are the base diameter, the minimum constriction diameter, the constriction amplitude, the time the pupil has recovered to 50% of the constriction amplitude, as well as recovery velocity. These features are shown in figure 10 and the following

formulas [51] have been used to extract the five features:

dbase = d(0) (6)

dmin = min(d(t)) (7)

∆d = dbase − dmin (8)

tR = min({t | t > tmin, d(t) = 0.5 ∗ ∆d + dmin}) − tmin (9)

νR = (0.5 ∗ ∆d) / tR (10)

where dbase is the initial diameter, dmin is the minimum diameter after illumination, ∆d is the constriction amplitude, tR is the recovery time to 50% of the constriction amplitude, νR is the recovery velocity and t is the time in seconds.


Figure 10: Shows the extracted features from the pupil signal.
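The five features follow directly from formulas 6-10. The sketch below assumes a single, blink-free PLR curve (t, d) and takes the constriction amplitude as the positive difference between the base and minimum diameters:

```python
def extract_features(t, d):
    """Extract the five dynamic features of one PLR curve (formulas 6-10)."""
    d_base = d[0]                                   # base diameter
    i_min = min(range(len(d)), key=lambda i: d[i])
    d_min = d[i_min]                                # minimum constriction diameter
    delta = d_base - d_min                          # constriction amplitude
    half = d_min + 0.5 * delta                      # 50% recovery level
    t_r = None
    for i in range(i_min + 1, len(d)):              # first return to the 50% level
        if d[i] >= half:
            t_r = t[i] - t[i_min]
            break
    v_r = 0.5 * delta / t_r if t_r else None        # recovery velocity
    return d_base, d_min, delta, t_r, v_r

# Example: constriction from 6.0 mm down to 4.0 mm, followed by recovery
t = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
d = [6.0, 5.0, 4.0, 4.5, 5.0, 5.5, 6.0]
d_base, d_min, delta, t_r, v_r = extract_features(t, d)
```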

Outlier removal

If a subject's gaze wanders off the middle of the screen where they were instructed to focus, or when the eye tracker fails to find the correct position of the pupils, the eye tracker might imprecisely estimate the pupil diameter. Such events produce data that looks very dissimilar to the other curves in the same iteration (illuminated by the same color). Such data should be considered an outlier, since features extracted from such a curve will not be an accurate representative of the set. Therefore, these outliers are removed from the data set with the MATLAB function rmoutliers.
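By default, MATLAB's rmoutliers drops values more than three scaled MADs (median absolute deviations) from the median; an equivalent sketch in Python:

```python
from statistics import median

def remove_outliers(values, k=3.0):
    """Drop values more than k scaled MADs from the median
    (mirroring the default behaviour of MATLAB's rmoutliers)."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    scaled = 1.4826 * mad          # scale factor for normally distributed data
    return [v for v in values if abs(v - med) <= k * scaled]

# Example: one implausibly large diameter among otherwise consistent samples
cleaned = remove_outliers([5.0, 5.1, 4.9, 5.05, 9.0])
```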


Normalization

Before the features are sent to the machine learning algorithm, it is good practice to normalize the data set. Normalization ensures that every feature value is between 0 and 1. A normalized data set helps the classification algorithm to converge faster in the training phase. To normalize a data set, the following formula is used:

x′ = (x − min(X)) / (max(X) − min(X)) (11)

Where x is a feature value in the feature set X, x′ is the rescaled value of x, and min(X) and max(X) are the minimum and the maximum values in the feature set X.

In an offline analysis, where all the data is at hand at once, it is possible to select the minimum and the maximum values of each feature set. In the real-time system, however, a new value can change this feature range during runtime. Even though re-normalizing the data set would be feasible, this operation would falsify the previously obtained and classified data. The only way to normalize such dynamic, real-time data is to choose predefined (non-dynamic) minimum and maximum values. The problem then is how to choose proper values. Some heuristic knowledge about the real-time data is necessary. For instance, it is known that human pupil sizes fall between 1.5-8mm [18]. This means that it is quite safe to define the range for the initial diameter feature to be 1.5-8mm, since the chance of encountering a value outside this range is statistically low.

In this study, predefined values were used for both the offline and the real-time analysis, so that the two methods would be comparable with one another. The minimum and maximum values below have been sufficient for normalization of the data set. These values were selected after analyzing the offline data set.

dbase = [0.003, 0.008] (12)
dmin = [0.002, 0.007] (13)
∆d = [0.001, 0.006] (14)
tR = [0.01, 4.5] (15)
νR = [0.001, 0.5] (16)
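With the fixed ranges of formulas 12-16, the min-max rescaling of formula 11 can be applied sample by sample in real time. A sketch, with feature names chosen here for illustration:

```python
# Predefined (min, max) ranges per feature, taken from formulas 12-16
RANGES = {
    "d_base": (0.003, 0.008),
    "d_min":  (0.002, 0.007),
    "delta":  (0.001, 0.006),
    "t_r":    (0.01, 4.5),
    "v_r":    (0.001, 0.5),
}

def normalize(features):
    """Min-max rescaling (formula 11) with fixed ranges, usable in real time."""
    return {name: (x - RANGES[name][0]) / (RANGES[name][1] - RANGES[name][0])
            for name, x in features.items()}

# A feature vector exactly in the middle of every range rescales to about 0.5
sample = {"d_base": 0.0055, "d_min": 0.0045, "delta": 0.0035,
          "t_r": 2.255, "v_r": 0.2505}
scaled = normalize(sample)
```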

Classification

In order to find out which classification algorithm would be most suitable for this application, MATLAB toolboxes have been used. Three different classification algorithms, ANN, SVM and KNN, have been trained and validated with the offline data set. For supervised learning, the samples in the data set should be labeled as caffeine and non-caffeine affected. The data set includes an equal number of samples of both labels.

The Deep learning toolbox has been used to train and test an ANN of the pattern recognition and classification type. This ANN type is appropriate for this application since it aims to detect and classify whether a subject has drunk caffeine or not. Different ANN configurations, such as the number of hidden layers and the number of neurons, have been tested. The toolbox requires dividing the data set into three parts: a training set, a validation set and a test set. The results obtained on the test set are used to calculate the accuracy of the ANN. To evaluate the performance of the ANN, the mean squared error has been used.


9

The selected classification algorithm

The selected classification algorithm for this thesis is ANN. The reason is explained in the Results section. A brief explanation of the algorithm is provided below.

An ANN is inspired by the human brain and is built of multiple neurons (nodes). An ANN has at least one input layer, one hidden layer and one output layer. Each layer can have one or multiple neurons, depending on the complexity of the problem. Each neuron has an activation function and is connected to the other neurons by weighted links. The activation function maps the input of the neuron into a specific range (depending on the activation function) and the output of the neuron is then passed on to the next neuron. There are various activation functions, such as sigmoid, linear and tansig. It is possible to use the same activation function for the entire ANN, but also to combine different activation functions in different layers. Figure 11 shows an example of an ANN and figure 12 shows a single node.


Figure 11: An example of an ANN with one input layer, two hidden layers and one output layer. The neurons are connected by weighted links. Each node has an activation function that maps its input to the desired range.


Figure 12: Inputs are multiplied by their corresponding weights and the sum is passed through the activation function, which maps it to the function's range.

An ANN needs to be trained before it can be used for the proposed application. One of the most common learning algorithms is backpropagation. This algorithm is divided into two passes: a forward pass, in which the output of the ANN is obtained and the error between the obtained output and the target output is calculated, and a backward pass, in which this error is used to update the weights based on the derivatives of the activation functions used in the network. Once the training is finished, the ANN needs to be validated in order to obtain its accuracy. If the accuracy in this phase is not high enough, the ANN needs to be trained further.

9.1

ANN for the real-time analysis

The ANN for the real-time analysis consists of one input layer with five neurons, one hidden layer with ten neurons and one output layer with one neuron. This ANN has been trained and validated offline. The activation function used for all layers is the sigmoid function. The range of the sigmoid function is between zero and one. Formula 17 shows the expression of the sigmoid function.

x′ = e^x / (1 + e^x) (17)

Where x′ is the output and x is the input of the function.

The eye tracker produces data for the right pupil, the left pupil and both pupils. All three data sets are classified by the ANN separately. The final classification value is the consensus output of the three data sets.
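A forward pass of this 5-10-1 sigmoid network can be sketched as follows. The weights here are random placeholders, since the trained weights are produced offline by backpropagation and are not reproduced in this text:

```python
import math
import random

def sigmoid(x):
    """Formula 17: maps any input into the range (0, 1)."""
    return math.exp(x) / (1.0 + math.exp(x))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass of the 5-10-1 network with sigmoid in both layers."""
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)) + b_out)

random.seed(0)  # placeholder weights; a trained network would load real ones
w_hidden = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(10)]
b_hidden = [0.0] * 10
w_out = [random.uniform(-1, 1) for _ in range(10)]
y = forward([0.5] * 5, w_hidden, b_hidden, w_out, 0.0)
```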


10

Statistical analysis

One of the purposes of this thesis was to find out which wavelength is most suitable for caffeine detection. The developed eye manipulator is equipped with four different wavelengths (red, blue, green and yellow). These wavelengths affect the pupils differently. To find out whether significant differences between the results of these wavelengths can be observed, the ANOVA (Analysis Of Variance) test has been used. ANOVA is a method to test the null hypothesis (that there are no differences between the groups) and determines an error value (p) between the groups. The threshold commonly used to reject the null hypothesis is 0.05. If p < 0.05, the null hypothesis can be rejected with 95% confidence, meaning there are significant differences between the groups. In this thesis, the following groups have been considered: yellow, red, green and blue. Each group consists of the results obtained with its corresponding wavelength from the baseline measurement and the caffeine measurements, and contains the measured values for the left pupil, the right pupil and both pupils. The groups have been compared regarding constriction amplitude, recovery time and recovery velocity. The performed ANOVA tests are explained in more detail below.
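The one-way ANOVA F statistic underlying these tests can be computed as below; in practice, a statistics package such as MATLAB's anova1 is used, which also converts F into the p-value via the F distribution. The example groups are made up:

```python
def anova_f(groups):
    """One-way ANOVA F statistic: between-group variance over within-group variance."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total sample count
    grand = sum(sum(g) for g in groups) / n          # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two small, made-up groups of feature values
f = anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
```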

10.1

Baseline measurement

The results from the baseline measurement have been investigated in order to find out if there are significant differences between the groups. This test studies the effects of the wavelengths on the three pupillary features mentioned above.

10.2

Caffeine measurements

The ANOVA test has been performed to find out if there is a significant difference between the results from all repetitions. This test studies whether caffeine significantly affects the three pupillary features at any of the different measurement times.

10.3

Baseline vs caffeine

Lastly, the results from the baseline were compared to the caffeine measurements in order to find out which wavelength caused significant differences between its baseline and caffeine measurements. From this, it can be concluded which wavelength is suitable for this application.


11

Online analysis

The online analysis involves establishing a suitable system architecture and method of data communication on both a hardware and a task level, as well as choosing proper data processing methods that can produce accurate results. The offline analysis described in section 8 has established different data processing methods which are reused for the online analysis. The following subsections explain these methods.

11.1

System architecture

In this section, the different subsystems and their relation to one another are described. First, a low-level description focusing on the communication constraints between the subsystems and the design choices is given. Afterwards, a task-level system architecture is presented.

11.1.1 Low level system communication

Figure 13 shows the communication direction, communication method and the type of exchanged data between the different system components. The role of each component, as well as the communication between them, is explained below.


Figure 13: Shows the hardware components of the system: the Aurora smart camera, the host computer, the RoboRIO and the LED manipulator. The rectangles in the RoboRIO and the host computer represent the communicating tasks. The arrows show the direction of communication between the subsystems.


The Smart Eye Aurora camera and the host computer

The data output of the Aurora camera is 2.3 Mp generated at a maximum rate of 120Hz. This pixel data stream is forwarded via USB 3.0 to the Smart Eye software that resides on the host computer. The pixel data is processed in real-time by the Smart Eye software in order to obtain raw pupil data. The USB 3.0 connection between the camera and the host computer is not modifiable.

The host computer and the RoboRIO

A connection between the host computer and the RoboRIO serves several goals. Firstly, the raw pupil data generated by the Smart Eye software needs to be sent to the RoboRIO for feature extraction and classification. Secondly, the host computer provides a GUI schedule creator, developed as a LabVIEW application, for setting up an LED flashing scheme for the LED Manipulation task. Both the Smart Eye system and the RoboRIO support three communication methods, the TCP, UDP and CAN protocols, which are compared below.

The Smart Eye software and the data analysis process

The pixel data from the Aurora camera is processed by the Smart Eye software to produce raw pupillometric data. This data is sent at a rate of 120Hz, which the Data analysis process should be able to receive. Also, based on this data, the Data analysis process should validate the received information, extract the necessary features and perform classification in real-time. For these requirements, a suitable communication protocol should be chosen. The common protocols supported by the Smart Eye system and the RoboRIO are TCP/IP, UDP/IP and CAN. To determine which protocol is the most suitable, the communication requirements of the system have to be considered:

The maximum amount of data transferred from the host computer does not exceed a few hundred bytes, at a maximum rate of 120 Hz. This is well within the capacity of both CAN and the Ethernet-based technologies, i.e. TCP and UDP.

The CAN and TCP protocols support the retransmission of erroneous or lost messages, while UDP does not. TCP, however, does this while blocking, which impedes any real-time requirements. Losing some messages would not have critical consequences in this application, therefore UDP is considered a sufficient technology. In addition, the risk of losing messages in the point-to-point network applied in this project is quite low.

Moreover, the system in this work requires only two nodes to communicate: 1) the host computer that sends pupil related data and 2) the microcontroller that receives the data stream. Ethernet connection is adequate for point-to-point communication and an Ethernet port is readily available in most desktop computers. In contrast, a CAN interface requires additional hardware, such as a PCIe-to-CAN converter.
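The chosen point-to-point UDP link can be illustrated with a minimal loopback sketch. The port and payload here are placeholders; the real packet layout is defined by the Smart Eye network API:

```python
import socket

# Receiver side: bind to an OS-assigned port on the loopback interface
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
port = recv.getsockname()[1]

# Sender side: the host computer would stream one such message per sample (120 Hz)
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"left=0.0042;right=0.0041", ("127.0.0.1", port))

packet, _ = recv.recvfrom(1024)   # each message is at most a few hundred bytes
recv.close()
send.close()
```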

The GUI schedule creator and the LED manipulation process

The GUI schedule creator task provides a GUI for a human operator so that a manipulation schedule can be assembled, saved into file, loaded from file and sent for execution by the LED Manipulation task on the RoboRIO. When the operator chooses to execute the schedule, it is sent to the LED Manipulation process over the network for execution.

The GUI schedule creator task resides on the host computer, while the LED Manipulation task runs on the microcontroller. To forward the manipulation schedule, the two devices use a LabVIEW specific mechanism for data communication in distributed applications, called NPSV (Network Published Shared Variables). This technology is built on top of the TCP/IP protocol and has been optimised for maximizing data throughput in a LabVIEW environment. In contrast with the previously described communication requirements (between the Smart Eye software and the data analysis process), the blocking nature of the TCP protocol does not impose any impediment on this real-time system. This is because the schedule composed by the GUI application is designed to be sent once, before the execution of the rest of the system tasks, and is not changed manually during runtime.

The RoboRIO and the LED manipulator

The LED manipulator consists of a set of LED's and transistors on an eye-wear frame. The RoboRIO can control the ON and OFF states of the LED's by applying a 3.3V signal to the appropriate transistor base. Such control is possible through the GPIO pins of the microcontroller. The RoboRIO has both digital and PWM (pulse width modulated) GPIO pins. Utilizing the PWM pins, it is possible to define a frequency and a duty cycle that together define a pulse signal, thus providing control over the brightness of the LED's.

The RoboRIO and the Aurora

The execution of the manipulator and the data recording from the Aurora need to be synchronized. The Aurora provides a synchronization interface in the form of a 5 V read signal. The RoboRIO can provide such synchronisation through its digital GPIO pins. When the manipulation starts, the microcontroller drives its digital pin high, which commands the Aurora to time-stamp the following eye data.
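The latching semantics of this synchronization signal can be modelled in a few lines: a rising edge on the sync pin captures a new timestamp, and that value is then attached to every subsequent measurement until the next edge. The class below is a minimal model with illustrative names, not the Aurora's actual interface.

```python
# Minimal model of the sync behaviour: a rising edge latches a new
# synchronization timestamp, which tags all following measurements.

class SyncLatch:
    def __init__(self):
        self.latched = None

    def rising_edge(self, timestamp_ms: int) -> None:
        """Called when the sync pin transitions low -> high."""
        self.latched = timestamp_ms

    def tag_packet(self, diameter_mm: float) -> tuple:
        """Attach the latest latched timestamp to a measurement."""
        return (self.latched, diameter_mm)
```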

11.2 Task level system architecture

The application is built from several tasks, illustrated in figure 14. The Smart Eye system and the GUI schedule creator run on the host computer, while the other tasks execute on the RoboRIO. The LED Manipulation task runs independently from the rest of the tasks on the RoboRIO; the remaining tasks (Data collection, Data validation, Feature extraction and ANN classification) run sequentially.

Figure 14: Shows the relation between the system tasks. The rectangles represent the tasks themselves. The arrows show the data-flow and its direction between the tasks. The type of data forwarded is written in italics above the arrows.


The GUI schedule creator task allows manipulation schedules to be assembled and saved for later use, and saved schedules can be loaded into the program. The assembled manipulation schedule is forwarded to the LED Manipulation task for real-time execution.

The LED Manipulation task executes a predefined schedule of flash manipulations. Right before each flash, the task triggers a synchronization timestamp from the Smart Eye system. The value of this timestamp remains the same until the next trigger, and the latest value is attached to every consecutive data packet. After the flash manipulation, the task sleeps for the amount of time predefined in the schedule. When the task awakes, it checks for a command sent by the Data validation task to determine whether the previous manipulation procedure has to be repeated or whether to follow the rest of the schedule.
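The control loop just described can be sketched as follows. The hardware and inter-task interactions (`trigger_sync`, `flash`, `sleep`, `repeat_requested`) are passed in as hypothetical stand-ins; in the actual system these are LabVIEW operations on the RoboRIO.

```python
# Sketch of the LED Manipulation task's loop: trigger a sync timestamp,
# flash, sleep for the scheduled gap, then either repeat the current
# entry (when Data validation rejected the data) or advance.

def run_schedule(schedule, trigger_sync, flash, sleep, repeat_requested):
    """schedule: list of (flash_params, gap_seconds) entries."""
    executed = []
    i = 0
    while i < len(schedule):
        params, gap = schedule[i]
        trigger_sync()              # new timestamp right before the flash
        flash(params)
        executed.append(params)
        sleep(gap)                  # predefined pause from the schedule
        if not repeat_requested():  # command set by the Data validation task
            i += 1                  # advance only when the data was valid
    return executed
```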

The Smart Eye system takes measurements and generates UDP messages from the measured values. Each message contains the latest measured values of a predefined selection of variables. In this project, three pupil diameter variables are selected: Left pupil diameter, Right pupil diameter and Common diameter. Two additional variables have also been selected: the Synchronization timestamp, whose value changes once before every flash, and the Data timestamp, which is measured at the midpoint of every image exposure and is suitable for detecting data loss in the system.
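As a hedged sketch of receiving such a message, the snippet below unpacks one payload into the five named variables. The actual Smart Eye packet layout is not specified here, so the code ASSUMES five big-endian float64 fields in the order listed above; treat the format string and field names as placeholders, not the real protocol.

```python
# Hypothetical unpacking of one UDP measurement message (layout assumed).
import struct

FIELDS = ("left_diameter", "right_diameter", "common_diameter",
          "sync_timestamp", "data_timestamp")

def parse_packet(payload: bytes) -> dict:
    values = struct.unpack(">5d", payload)   # 5 x float64 = 40 bytes
    return dict(zip(FIELDS, values))
```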

The Data collection task waits for incoming UDP packets. When a packet arrives, the task extracts all the information from it and decides whether it is to be kept or discarded. A message should be discarded when the information it contains was gathered outside the time frame of influence of a manipulation event. Such packets are sent because the Smart Eye system takes measurements and sends them continuously, regardless of manipulation events. Messages to be kept are the ones that were gathered during the influence of a manipulation event. In this project, the average time length of pupillary reaction curves has been measured. Based on these measurements, a fixed, safe time frame of four seconds has been defined as the "time frame of influence", because in all of the cases, the four-second time frame was long enough for the pupils to react to the light flash and to fully recover to their original baseline diameters. The raw data from the messages is collected to form arrays of values that can be processed by the following tasks.

The Data validation task is based on the blink removal procedure described in the Offline analysis section. This task receives an array of pupillometric data, i.e. pupil diameters, and validates the information. The information is considered invalid if the array contains diameter values equal to zero. Such values are the result of blinking or other events that prevent the Smart Eye system from obtaining a valid measurement. Invalid information is discarded and the task commands the LED Manipulation task to repeat the same manipulation. If the information is valid, the data is forwarded to the Feature extractor task.

The Feature extractor's main task is to obtain from the pupil diameter values the features necessary for classification, using the same method as described in the Offline analysis section. The features to obtain are the pupil diameter before the flash, the smallest diameter following the flash, the amplitude of the pupil reaction curve, the length of time it took for the pupil to regain 50% of its pre-flash size and, finally, the velocity of this recovery. However, before these features can be obtained, the Feature extractor applies a filter to the original data to smooth out the pupil reaction curve. This task uses the median filtering method, just like the offline analysis.
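The extraction steps above can be sketched as follows, assuming evenly sampled diameters (`sample_rate` in Hz) and a known flash index. The median-filter window size is an illustrative choice, not a value taken from this work, and the velocity is computed as the diameter regained over the recovery time.

```python
# Sketch of the feature extraction: median smoothing, then the five
# features listed above (baseline, minimum, amplitude, 50% recovery
# time, recovery velocity). Names and window size are illustrative.

def median_filter(xs, k=5):
    """Simple sliding-window median smoother (odd window size k)."""
    h = k // 2
    out = []
    for i in range(len(xs)):
        win = sorted(xs[max(0, i - h):i + h + 1])
        out.append(win[len(win) // 2])
    return out

def extract_features(diam, flash_idx, sample_rate):
    smooth = median_filter(diam)
    baseline = smooth[flash_idx]               # diameter before the flash
    post = smooth[flash_idx:]
    min_idx = post.index(min(post))
    minimum = post[min_idx]
    amplitude = baseline - minimum             # constriction depth
    target = minimum + 0.5 * amplitude         # 50% recovery level
    rec_idx = next(i for i in range(min_idx, len(post)) if post[i] >= target)
    recovery_time = (rec_idx - min_idx) / sample_rate
    velocity = (post[rec_idx] - minimum) / recovery_time if recovery_time else 0.0
    return baseline, minimum, amplitude, recovery_time, velocity
```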

The ANN classifier task receives the necessary features extracted from three different sources of data: 1) values obtained from the left eye, 2) values from the right eye and 3) values of a curve that represents both pupils together (generated by algorithms in the Smart Eye system), and runs them through the neural network. Each of the three data sources is classified separately, and the final decision is based on the most common result: at least two of the three data sources need to be classified as caffeine-influenced. When the classification is completed, the application returns to waiting for data from the next scheduled manipulation.
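The decision rule is a plain two-out-of-three majority vote over the per-source classifier outputs, which can be written in one line:

```python
# Sketch of the final decision rule: the verdict is the majority of the
# three per-source classifications (left eye, right eye, combined curve).

def majority_vote(left: bool, right: bool, common: bool) -> bool:
    """True = classified as caffeine influenced."""
    return (left + right + common) >= 2
```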

