
Degree project in Computer Science Second cycle

Inertial and optical hybrid head tracking for helicopter simulators

HUGO BLANC


Master Thesis Report

Inertial and optical hybrid head tracking for helicopter simulators

Thesis Student Hugo Blanc hblanc@kth.se +33 (0)7 86 36 55 69

KTH Supervisor John Folkesson johnf@csc.kth.se +46 (0)8 790 6201

KTH Examiner Stefan Carlsson stefanc@csc.kth.se +46 (0)8 790 8432

Thales Supervisor Yannick James yannick.james@thalesgroup.com +33 (0)1 34 22 82 64


Abstract

Inertial and optical head tracking for helicopter simulators

Nowadays we are starting to notice all the potential and amazing applications of virtual and augmented reality. The mass-market availability of cheap and powerful CPUs and GPUs has mostly solved the computational complexity issue of these applications. The displays and cameras, basic building blocks of many augmented reality applications, are also getting cheaper. But the environment sensing systems (radars, lidars, 3D scanners, head-trackers...) that are accurate and reliable enough to be used for critical applications are still rare and expensive. They might be affordable for plane and helicopter manufacturers, but it is a different story when it comes to small training simulators.

This Master Thesis tackles the issue of producing a cheap, accurate and robust head tracker with 6 degrees of freedom for a simulation and training setup. Our solution consists of an inertial and optical hybrid tracker using Inertial Measurement Units and cameras. Both are now cheaply available due to massive use in smartphones. The developed tracker will be operated in helicopters. So, in this report, we focus on the operational constraints of this environment. These constraints are mainly connected to latency, accuracy and robustness issues but also include compactness and discretion.

We successfully designed a cheap, reliable, highly modular and accurate head tracker with latency below 4 ms. It is designed to work with helicopter pilots' head-mounted displays in order to align the pilot's eyes, a target and its corresponding "augmented" information. The tracker's modularity allows a lot of flexibility in cost and accuracy, so it will hopefully allow many other virtual reality applications to emerge.


Table of Contents


Introduction
Background
Existing trackers
Project description (and limitations)
Conventions
Inertial head tracker
Goals
Hardware and tools
Inertial Measurement Units
Virtual IMU
Calibration and test benches
Inertial tracking principle
Single IMU tracker
Naive approach
Correcting gyroscopes using gravity and local magnetic field in an Extended Kalman Filter
Simpler tracker with orientation drift correction using gravity
Performance evaluation
Differential tracking
Principle
Implementation alternatives
Performance evaluation protocol
Fusion of the inertial and the optical data
Presentation of the optical tracker
Single camera and single marker
Multi camera and single marker
Multi camera and multi marker
Basic trackers performance summary
Inertial tracker
Optical tracker
Inertial tracker drift correction using an optical tracker
Asynchronous data fusion
Position realignment
Orientation realignment
Velocity realignment
Trackers reference frame offset issues
Human vision model based smooth convergence process
Static and dynamic error perception and tolerances of the human eye
Vision model based signal tracking
Hybrid tracker performance evaluation
Conclusion
References
Glossary


Introduction

This report presents the work done by Hugo Blanc from the 28th of January 2013 to the 28th of June 2013 during his master thesis, concluding a two-year master's program in Systems, Control & Robotics at the Royal Institute of Technology (KTH) in Stockholm. The thesis work was conducted for the multinational company Thales, which designs and builds electrical systems and provides services for the aerospace, defense, transportation and security markets; more specifically for its subsidiary Thales Training & Simulation, at their facilities in Cergy-Pontoise (France), under the supervision of Yannick James (Thales thesis supervisor) and John Folkesson (KTH thesis supervisor), whom I would like to thank for their precious and uninterrupted support. I would also like to thank Sebastian Van De Hoef for his suggestions and his help proofreading this report.

Background

Training pilots has always been a very expensive task in terms of staffing, logistics, vehicle availability and safety. This is especially the case when the vehicles, such as planes, helicopters or even trucks, are extremely expensive themselves or dangerous for beginners to operate. It is therefore quite natural that simulation grew rapidly in this area, not only for beginners but also for experienced officers who need to train their command and strategy skills.

Figure 1: 3D rendering of a helicopter simulator by Thales Training & Simulation. It can simulate a wide variety of helicopters and scenarios (image courtesy of Thales T&S)

Of course, training in a simulator is a very different experience from flying a real helicopter, but many companies such as Thales Training & Simulation work hard to make it as realistic as possible. The goal isn't to reproduce 100% of the real flight physics but to help pilots familiarize themselves with the vehicle and take the right decisions in a wide variety of situations. On this last point the possibilities offered by simulation are endless: if one can think of an extreme situation, we can simulate it without waiting months or even years for the proper conditions to occur in the real world! Such simulations require building realistic physics models, 3D representations of the real world, high-fidelity on-board instruments, low latency sensing and display systems... Most of these issues are not specific to simulation but shared with most virtual and augmented reality applications.

Thales recently unveiled its latest-generation "Reality H" helicopter simulator, equipped with the all-new, all-electric "Hexaline" high-fidelity, six-axis linear motion system. The "Reality H" simulator was developed in partnership with helicopter operators and is designed to maximize training capacity and availability while ensuring easy set-up, implementation and maintenance.

This shows Thales' commitment to the simulation area, and the purpose of this master thesis work is to enhance another aspect of helicopter simulators: head tracking for semi-transparent and opaque head-mounted displays (HMDs) and head-up displays (HUDs).

Figure 2: Example of a helicopter pilot's helmet-integrated HMD: the Thales TopOwl (courtesy of Thales T&S)

It might seem strange to someone unfamiliar with virtual and augmented reality that the tracker's specifications are determined by the display properties (depending on whether the user's eye has external references or not). This will be explained more thoroughly in the second half of this report.

Figure 3: Example of contextual information that can be displayed by the HMD (© Brett B. Despain)

Existing trackers

Before detailing our solution further, we will give a quick, non-exhaustive overview of the head tracking technologies used in simulation, along with their main strengths and drawbacks. These trackers can be sorted into four main categories based on the sensing technology they rely on. The table in Figure 4 introduces them briefly.


Optical only (e.g. Qualisys Oqus, ARTTrack):
• Fast and quite low latency
• Precise
• Not very compact
• Very expensive
• Position tracking only (but easy to get orientation from several marker positions)

Inertial only (e.g. InterSense InertiaCube):
• Fast and quite low latency
• Quite precise
• Quite cheap
• Very compact
• Orientation tracking only (AHRS)

Magnetic (e.g. Polhemus Scout):
• Fast and quite low latency
• Quite precise
• Very compact
• Quite cheap sensor, but requires many expensive specific modifications of the cabin and extensive calibrations (used in real aircraft)

ToF / phase shift (e.g. LaserBird2):
• Fast and low latency
• Very precise
• Quite compact
• Very expensive
• Short lifetime (less than 2 years)

Figure 4: Existing trackers summary

Of course there are many other trackers available (mechanical, sonar...), but they cannot be used for head tracking in narrow spaces such as helicopter cabins, or do not fit the simulation-specific requirements.

Project description (and limitations)

The development of a new head-tracker by Thales Training and Simulation is not a purely theoretical research project at all. It is the result of an internal project for a Night Vision Goggles (NVG) compatible head tracker for NH-90 Reality H simulators, which already has an external client. The problem is that the existing trackers are either much too expensive (magnetic trackers), not accurate or fast enough, or use lasers or IR light sources which shine like bright suns in the NVGs.

Although this project was initiated in response to a commercial demand, Thales wants to develop several versions of the tracker even if only one of these will be sold in the short term. The goal is to design a modular system that can be used in a wide variety of applications, starting with a very low price tag. In this thesis we will focus on the tracker version that is of interest to the external customer. From now on we will call it the main version since it has top priority. But we will also quickly discuss and investigate some modularity aspects.

The chosen technical solution is a hybrid tracker composed of an inertial tracker for low latency and an optical tracker for good accuracy and precision. Many studies, publications and patents about inertial tracking have been released in the last couple of decades, such as [1-8]. Optical tracking is also very well documented [9]. There are even some publications investigating hybrid inertial and optical tracking [10], [11]. Our solution reuses some of these results and findings but is novel in its realignment and convergence process.

For the optical tracker we could use automatic detection of external reference points, but for robustness reasons we chose to use predefined markers as in [12]. The problem, even with predefined markers, is the wide range of light levels in the visible domain in the cabin, from flights with a clear sky and a bright sun to night-time flights with almost no visible light at all. We cannot afford to change the luminosity in the cockpit by adding a lamp: it would blind the pilots, especially if they are using NVGs. For the same reason we cannot use infrared light either. So we opted for active markers emitting in the ultraviolet range. The UV active markers embed LEDs whose light is partially occluded according to a given pattern. By making the markers active we can keep the light intensity low enough to stay below the level fixed by health regulations.

The new head-tracker is developed from scratch according to the following specifications:

Accuracy:
• Orientation accuracy: 0.45°
• Position accuracy: 2 mm

Static precision:
• Orientation: 0.03° at one standard deviation σ
• Position: 0.1 mm at one standard deviation σ

Latency:
• Global tracking latency (up to the tracker PC output interface): 7 ms

Domain of capture (centered on the middle-eye position and orientation at rest):
• Yaw: from -130° to +130°
• Pitch: from -70° to +45°
• Roll: from -45° to +45°
• X: from -150 mm to +150 mm
• Y: from -300 mm to +300 mm
• Z: from -100 mm to +100 mm

Figure 5: Tracker specifications set by Thales Training & Simulation

This was a six-month project (study, design, prototyping, test and integration) involving one software developer for the optical tracker and the author of this thesis report for the inertial tracking and the fusion, under the supervision of the person responsible for the optical tracker. That covers the head-tracking part, but many other people and entities were involved in the development of the helmet: the mechanical design, the head-up display, the connection to the rest of the cockpit instruments... The other people involved were: an optician, an engineer from the mechanical research department, an engineer from the electrical research department, a low-level software developer (firmware + VHDL), a system integrator, a project responsible, a project manager, a supply chain and quality expert, one main subcontractor, several component suppliers and some help from the purchasing division.

All the work presented in this report has been done by the author alone unless otherwise stated, and is articulated around four phases:

(1) Design a low latency inertial tracker that can track the pilot’s head with respect to the earth using one inertial measurement unit attached to the pilot’s head.

(2) Design a low latency inertial tracker that can track the pilot’s head with respect to a frame attached to a moving platform using two inertial measurement units: one attached to the pilot’s head, the other attached to the moving platform.

(3) Design a realignment algorithm to build a very low latency estimate of the pilot’s head position and orientation which is as accurate as possible at each instant.

(4) Design a human perception based convergence algorithm that generates the output of the tracker used by the HMD module. This signal should have very low latency, fit within the Thales accuracy and precision requirements and have very little noise in steady situations.


Figure 7: An insight into the lab where Azuma's famous investigations on inertial and optical data fusion were conducted: a "no swimming" sign reminds everyone of the desire to reduce the visual effects of excessive latency [13].

But one can actually divide this thesis work into two bigger parts. The first is sensor oriented, meaning that we try to get the best estimate of the position and orientation of the pilot's head using the inertial sensors. Only the fixed-platform inertial tracker (single IMU) will be thoroughly investigated in this report due to time constraints. We will describe the physics by giving the equations used for the differential inertial tracker, but we will not detail its implementation or its performance. The second part is vision oriented, meaning that we focus on building an estimate which is as good as possible and complies with the properties of human vision. We will also use the limitations of human vision to gain some margin and improve the perceived performance of the system.

Conventions

Reference frames

Figure 8: Definition of the different frames of reference


To avoid redundant explanations we will use the same conventions regarding the different frames of reference all along this report. For this application we can identify three different frames of references, which are described in the following table and illustrated in Figure 8.

The inertial frame ⓘ:
• x-axis aligned with the cabin heading at rest, pointing forward
• z-axis aligned with gravity, pointing downward

The navigation frame ⓝ:
• x-axis aligned with the cabin heading, pointing forward
➜ The navigation-IMU, also referred to below as the reference-IMU, is fixed in that frame.

The body frame ⓑ:
• x-axis aligned with the direction the pilot's head is facing
➜ The body-IMU is fixed in that frame.

Please note that the navigation frame coincides with the inertial frame in the fixed-platform single IMU system.

Units

If not specified otherwise, all variables and data are expressed using the International System of Units: distances in meters (m), velocities in meters per second (m/s), accelerations in meters per second squared (m/s²), angles in radians (rad)...

Euler angles

On one hand, Euler angles provide a quite simple and rather intuitive representation of rigid-body orientation in 3D space. On the other hand, they present singularities and there is no worldwide consensus concerning their definition. In this thesis report we will use one of the many common conventions, often referred to as the Yaw-Pitch-Roll sequence (or 3-2-1 or k-j-i).

This is a sequence of 3 rotations:
- First, a right-hand rotation around the z-axis, denoted as the yaw angle ψ
- Then, a right-hand rotation around the new y'-axis, denoted as the pitch angle θ
- Finally, a right-hand rotation around the new x''-axis, denoted as the roll angle φ

Figure 9: Euler angles Yaw-Pitch-Roll sequence
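As an illustration, the 3-2-1 sequence above can be composed into a single rotation matrix by chaining the three elementary rotations. The following is a generic sketch, not code from the project; function names are ours.

```python
import numpy as np

def rot_x(phi):
    # Right-hand rotation around the x-axis (roll)
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(theta):
    # Right-hand rotation around the y-axis (pitch)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(psi):
    # Right-hand rotation around the z-axis (yaw)
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def ypr_to_matrix(psi, theta, phi):
    # Intrinsic Yaw-Pitch-Roll (3-2-1) sequence: yaw first, then pitch
    # about the new y'-axis, then roll about the new x''-axis.
    return rot_z(psi) @ rot_y(theta) @ rot_x(phi)
```

For example, a pure yaw of 90° maps the x-axis onto the y-axis, which is a quick sanity check of the sign conventions.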

Quaternions

Because Euler angles present singularities at some points, they are quite tedious to work with. We therefore decided to use the unit quaternion representation (denoted q) to handle orientation in all our algorithms. Unit quaternions offer a convenient representation of rotation in 3D space, since any rotation can be modeled as a continuous function of the four quaternion components. For a complete explanation of the quaternion representation please refer to the publications of Pittelkau [32] and Madgwick [33].

q = [vector, scalar]^T = [e^T, q4]^T = [q1, q2, q3, q4]^T


But because the quaternion representation is not very intuitive, and because of the project's requirements, the orientation will be converted to Euler angles at the very end of the process, just before sending the result.
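A standard conversion from the scalar-last quaternion [q1, q2, q3, q4] defined above to the Yaw-Pitch-Roll angles of the 3-2-1 sequence can be sketched as follows; it is illustrative (one of several sign conventions), not the project's code.

```python
import math

def quat_to_ypr(q1, q2, q3, q4):
    """Convert a unit quaternion (vector part q1..q3, scalar part q4)
    to Yaw-Pitch-Roll angles of the 3-2-1 sequence, in radians."""
    yaw = math.atan2(2.0 * (q4 * q3 + q1 * q2),
                     1.0 - 2.0 * (q2 * q2 + q3 * q3))
    # Clamp the asin argument to guard against rounding just past +/-1
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (q4 * q2 - q3 * q1))))
    roll = math.atan2(2.0 * (q4 * q1 + q2 * q3),
                      1.0 - 2.0 * (q1 * q1 + q2 * q2))
    return yaw, pitch, roll
```

Note that the asin in the pitch term is where the gimbal-lock singularity of the Euler representation reappears (pitch near ±90°), which is precisely why the internal computations stay in quaternion form.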

Poincaré Diagram

To analyze the performance of the trackers in terms of accuracy we will measure the difference between the tracker output and the true pose of the tracked object. But getting information about the precision of the system from this error plot is not very straightforward. Of course we have some simple statistical analysis tools to help with that, but most of them do not give intuitive visual representations of what they measure. That is why we used the Poincaré diagram [37]. It offers an online, real-time visual representation of the noise in the tracker. At a glance we can see the low- and high-frequency noise characteristics of a tracker and tune its parameters in real time to make it as good as possible, instead of doing an offline analysis for a range of parameter sets.

Figure 11: Example of Poincaré diagram for 3 discrete time signals (Yaw, Pitch and Roll)

Below is a short explanation of how this diagram is constructed and how to read it. For any discrete signal s sampled at t = k · T_S, a low-pass filtered version s̄_LP is maintained:

s̄_LP,k+1 = s̄_LP,k + α (s_k+1 − s̄_LP,k)

and pairs of consecutive samples are plotted as points P(x_i, y_i) with:

x_i = s_2i − s̄_LP,2i+1
y_i = s_2i+1 − s̄_LP,2i+1

The low frequency component of the signal spreads the points along one diagonal of the (x, y) plane, and the high frequency component along the other.

Figure 10: Poincaré diagram construction principle for any discrete signal s


The diagonal axis (in grey, pointing upwards in Figure 10) represents low frequency noise, between half the sampling frequency of s and the low-pass filter cut-off frequency. This low frequency noise is often referred to as "drift" when we talk about the inertial tracker. The other diagonal represents the high frequency component of the input signal, around the sampling frequency. Throughout this report, this is what we will refer to when talking about low and high frequency noise.

We chose this validation tool since it allows us to check the effective noise level of a tracker at a glance. The Poincaré diagram is for example used to analyze the heart's RR interval [38]. It was adapted here to dynamically measure the first order (high frequency) term of the Allan variance [36], which is the state of the art for inertial sensor and IMU characterization. With this tool we can check whether the noise in the tracker is compatible with an HMD without setting up an entire verification chain (head tracker + image generator + HMD).
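The construction described above can be sketched in a few lines of Python; this is an illustrative helper of ours, with α as the low-pass coefficient, not the project's implementation.

```python
def poincare_points(s, alpha=0.05):
    """Build Poincare diagram points from a discrete signal s.
    A first-order low-pass of s tracks the slow component; pairs of
    consecutive samples, referenced to it, become the (x, y) points."""
    # Low-pass: s_lp[k+1] = s_lp[k] + alpha * (s[k+1] - s_lp[k])
    s_lp = [s[0]]
    for k in range(len(s) - 1):
        s_lp.append(s_lp[-1] + alpha * (s[k + 1] - s_lp[-1]))
    # Points: x_i = s[2i] - s_lp[2i+1], y_i = s[2i+1] - s_lp[2i+1]
    return [(s[2 * i] - s_lp[2 * i + 1], s[2 * i + 1] - s_lp[2 * i + 1])
            for i in range((len(s) - 1) // 2)]
```

A noise-free constant signal collapses onto the origin; white noise spreads the points along one diagonal and slow drift stretches them along the other, which is what makes the diagram readable at a glance.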

Notations

To make this report easier to read we will use the same notation convention throughout. So, if not otherwise stated, the following rules apply:

Scalars: regular font (e.g. α, λ, a, π)

Vectors: lowercase bold
• q: unit norm rotation quaternion
• v: linear velocity vector
• p: position vector
• ω: angular velocity vector
• a: linear acceleration vector

Matrices: uppercase bold

Frames of reference: index n refers to the navigation frame ⓝ, index b to the body frame ⓑ, index i to the inertial frame ⓘ

Rotation matrices: Cnb denotes the rotation matrix from the ⓝ frame to the ⓑ frame

⨂: quaternion product operator, corresponding to the DCM (Direction Cosine Matrix) product where the order of corresponding terms is the same [32] (e.g. q1 ⨂ q2)

Ternary operator ?: : if X is true then a = b, otherwise a = c, written a = ?(X)b:c

Figure 12: Notation conventions summary


Inertial head tracker

Goals

The purpose of the inertial head tracker is to estimate the pilot's head position and orientation in the navigation frame (which can be fixed or moving with respect to the inertial frame) using Inertial Measurement Units. Those Inertial Measurement Units, often called IMUs, are in fact a combination of several MEMS (microelectromechanical systems) sensors in a single package.

The most widespread IMUs have 3 accelerometers, 3 gyroscopes and 3 magnetometers to measure acceleration, angular velocity and magnetic field along the 3 axes of an orthogonal frame. They also often contain temperature sensors to correct the thermal drift of the sensors listed above. Some also embed barometric pressure sensors and GPS.

Hardware and tools

Inertial Measurement Units

The Inertial Measurement Unit is the core building block of our tracker. The one used for prototyping in this project only includes the basic sensors: a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer. It can be bought from the company Xsens for about 1 k€, which is the typical price for a mid-range IMU at the time of writing. It is available with several interfaces: USB, RS232 or RS422. Since one of our top priorities is to reduce the latency of the tracker as much as possible, we chose the RS422 version with a dedicated acquisition module on the PC.

Figure 13: Picture of an MTi 10-series IMU from Xsens (http://www.xsens.com/en/general/mti-10-series)


The characteristics of the sensors embedded in the MTi-10 are summarized below:

Gyroscopes (Typ. / Max.):
• Standard full range: 450 °/s / -
• Bias repeatability (1 yr): 0.2 °/s / 0.5 °/s
• In-run bias stability: 18 °/h
• Bandwidth (-3 dB): 415 Hz
• Noise density: 0.03 °/s/√Hz / 0.05 °/s/√Hz
• g-sensitivity (calibrated): 0.006 °/s/g / 0.02 °/s/g
• Non-orthogonality: 0.05 deg / -
• Non-linearity: 0.03 % FS / 0.1 % FS

Accelerometers (Typ. / Max.):
• Standard full range: 50 m/s² / -
• Bias repeatability (1 yr): 0.03 m/s² / 0.05 m/s²
• In-run bias stability: 40 µg
• Bandwidth (-3 dB): 375 Hz
• Noise density: 80 µg/√Hz / 150 µg/√Hz
• Non-orthogonality: 0.05 deg / -
• Non-linearity: 0.03 % FS / 0.5 % FS

Magnetometer (Typ. / Max.):
• Standard full range: - / ±80 µT
• Noise density: 200 µG/√Hz / -
• Non-linearity: 0.1 % FS / -

Figure 14: Sensor specifications according to the manufacturer (http://www.xsens.com/en/general/mti-10-series)

IMU calibration

Figure 15: Specific intrinsic sensor calibration data provided by xSens for one MTi-10 IMU


Virtual IMU

To be able to objectively evaluate the algorithms we developed and their variations, we created a model capable of generating accelerometer and gyroscope data streams from a given trajectory. To make this ideal data stream as close as possible to the real IMU, we injected noise with the same characteristics as the noise we observed on the MTi-10.

To allow fair comparisons between the different algorithms, it is possible to reuse the same pseudo-random series of numbers for several experiments in a row.
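The reproducible noise injection can be driven by a seeded random generator, so the exact same pseudo-random series is replayed across experiments. A minimal sketch follows; the default noise levels here are placeholders, not the MTi-10 figures.

```python
import numpy as np

def make_imu_noise(n_samples, sigma_gyro=0.01, sigma_acc=0.05, seed=42):
    """Generate white Gaussian noise streams for a virtual 3-axis
    gyroscope and accelerometer. Reusing the same seed replays the
    exact same pseudo-random series, for fair algorithm comparisons."""
    rng = np.random.default_rng(seed)
    gyro_noise = rng.normal(0.0, sigma_gyro, size=(n_samples, 3))
    acc_noise = rng.normal(0.0, sigma_acc, size=(n_samples, 3))
    return gyro_noise, acc_noise
```

Adding these streams to the ideal trajectory-derived measurements gives two runs that differ only in the algorithm under test, never in the noise realization.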

Calibration and test benches

All the static calibration and performance evaluation tests are done with a system called Robocop. It is a precise 3-degrees-of-freedom (3 Euler angle rotations) system that can be finely tuned using integrated mechanical systems.

Figure 16: Picture of the “Robocop” system used for static IMU calibration and static tests of the tracker

Inertial tracking principle

The inertial tracking method is based on a couple of extremely basic physics principles. In short, in a Galilean reference frame, one can integrate the acceleration a of a body with respect to a frame of reference to get its linear velocity in the same frame. If a is constant over Δt:

v(t+Δt) = a(t+Δt) · Δt + v(t)

By integrating one more time one can get the position of the body. Both of these integrations require knowledge of the initial state of the body (i.e. the initial conditions of the differential equations). If v is constant over Δt:

x(t+Δt) = v(t+Δt) · Δt + x(t)

Similarly, one can integrate the angular velocity ω of the body to obtain its orientation. If ω is constant over Δt:

θ(t+Δt) = ω(t+Δt) · Δt + θ(t)
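Put together, one dead-reckoning step under the piecewise-constant assumption can be sketched as follows (illustrative, per-axis scalars; not the project's code):

```python
def dead_reckoning_step(v, x, theta, a, omega, dt):
    """One integration step, assuming a and omega are constant over dt.
    Velocity is updated first, then position uses the new velocity."""
    v_new = v + a * dt               # v(t+dt) = a * dt + v(t)
    x_new = x + v_new * dt           # x(t+dt) = v(t+dt) * dt + x(t)
    theta_new = theta + omega * dt   # theta(t+dt) = omega * dt + theta(t)
    return v_new, x_new, theta_new
```

The initial values of v, x and theta are exactly the initial conditions mentioned above; any error in them propagates through every subsequent step.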

In practice it is a little bit more complicated than that, since the sensors in the IMU are not perfect and because they sense other physical phenomena than the ones described above. The following equations give a rather accurate model of the IMU, even if it neglects (among others) the non-linearities of the sensors and their temperature drift. This is a group of three vector equations, one for each group of sensors: the 3-axis gyroscope, the 3-axis accelerometer and the 3-axis magnetometer.

ω_imu = gK ω_body + gb + gw
a_imu = aK [Cnb(q)(g + a_body)] + ab + aw
m_imu = mK Cnb(q)(h_Earth + h_dist) + mb + mw

Let us take a closer look at each of these equations separately. The first one is the equation of the gyroscopes:

ω_imu = gK ω_body + gb + gw

where:
• ω_imu: gyroscope raw data
• ω_body: angular velocity of the IMU with respect to the body frame
• gK: gain of the gyroscopes
• gb: bias of the gyroscopes
• gw: noise of the gyroscopes, assumed to be uncorrelated white Gaussian measurement noise
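Ignoring the noise term, this model can be inverted to recover ω_body from the raw data once the gain matrix and bias are known from calibration. A sketch (our own helper, with hypothetical calibration values in the usage below):

```python
import numpy as np

def correct_gyro(omega_imu, K_g, b_g):
    """Invert the gyroscope model omega_imu = K_g @ omega_body + b_g
    (the noise term is neglected): subtract the bias, undo the gain."""
    return np.linalg.solve(K_g, omega_imu - b_g)
```

The same structure applies to the accelerometer and magnetometer equations below, with the extra complication that their "signal" terms (gravity, the Earth field) depend on the unknown orientation.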

The equation of the accelerometers is quite similar, with the noticeable difference that they sense not only the body acceleration but also the Earth's gravity:

a_imu = aK [Cnb(q)(g + a_body)] + ab + aw

where:
• a_imu: accelerometer raw data
• g: gravity acceleration
• a_body: acceleration of the IMU due to specific forces (excluding gravity) with respect to the body frame
• Cnb(q): Direction Cosine Matrix (DCM) for the transformation from ⓝ to ⓑ, given in terms of the orientation quaternion q
• aK: gain of the accelerometers
• ab: bias of the accelerometers
• aw: noise of the accelerometers, assumed to be uncorrelated white Gaussian measurement noise


The Direction Cosine Matrix is the rotation matrix between the two frames of reference, expressed in terms of the quaternion's elements:

Cbn(q) = (1/‖q‖) ·
⎡ q1²+q2²−q3²−q4²   2(q2q3+q4q1)      2(q2q4−q1q3)    ⎤
⎢ 2(q2q3−q4q1)      q1²−q2²+q3²−q4²   2(q3q4+q1q2)    ⎥
⎣ 2(q2q4+q1q3)      2(q3q4−q1q2)      q1²−q2²−q3²+q4² ⎦

Note that a good estimate of the DCM and of the gravity acceleration is needed in order to extract the body acceleration, which is what we want here so we can integrate it twice to get the position.
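For reference, a rotation matrix can be built from the scalar-last unit quaternion [q1, q2, q3, q4] defined earlier. Element ordering and transposition conventions vary between references, so treat this as an illustrative sketch (transpose it for the inverse transformation), not the thesis's exact matrix:

```python
import numpy as np

def quat_to_dcm(q):
    """Rotation matrix from a unit quaternion q = [q1, q2, q3, q4]
    with q4 the scalar part (convention-dependent)."""
    q1, q2, q3, q4 = q
    return np.array([
        [1 - 2*(q2*q2 + q3*q3), 2*(q1*q2 - q3*q4),     2*(q1*q3 + q2*q4)],
        [2*(q1*q2 + q3*q4),     1 - 2*(q1*q1 + q3*q3), 2*(q2*q3 - q1*q4)],
        [2*(q1*q3 - q2*q4),     2*(q2*q3 + q1*q4),     1 - 2*(q1*q1 + q2*q2)],
    ])
```

Since the matrix is built from products of quaternion components, it is a continuous function of q with no Euler-style singularity, which is the property the tracker relies on.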

Finally, the equation of the magnetometers is very similar to the accelerometer one:

m_imu = mK Cnb(q)(h_Earth + h_dist) + mb + mw

where:
• m_imu: magnetometer raw data
• Cnb(q): Direction Cosine Matrix for the transformation from ⓝ to ⓑ, given in terms of the orientation quaternion q
• h_Earth: Earth magnetic field
• h_dist: magnetic field generated by other sources such as electronic devices, actuators, magnets...
• mK: gain of the magnetometers
• mb: bias of the magnetometers
• mw: noise of the magnetometers, assumed to be uncorrelated white Gaussian measurement noise

We would like to integrate a and ω to obtain the orientation and position:

q = [q1, q2, q3, q4]^T
p_n = [x_n, y_n, z_n]^T


Single IMU tracker

Figure 17: A pilot's helmet without the external protective skin, seen from the left side. The yellow module is the head-mounted IMU (image courtesy of Thales T&S)

Naive approach

Orientation integration

The first implementation of the single IMU head tracker was based on the equations presented by Foxlin in [10]. It is a really efficient but somewhat naive implementation, since it does not perform any correction or filtering of the IMU data. Its efficiency comes from small tricks based on a simplification of Rodrigues' formula, which allows the integration of the angular rate provided by the gyroscopes to be performed with a simple matrix multiplication. This trick is detailed in Szeliski's book [14], but here is a short explanation:

R(n, θ) = I + sin θ [n]× + (1 − cos θ)[n]×²   (known as Rodrigues' formula)

where [n]× represents the matrix form of the cross product with the vector n = (nx, ny, nz):

[n]× =
⎡  0   −nz   ny ⎤
⎢  nz   0   −nx ⎥
⎣ −ny   nx   0  ⎦

The product ω = θn of the axis n and the angle θ is a minimal but non-unique representation. However, it is an excellent choice here since, in our application, we run the IMU at 500 Hz, so we have very small rotation angles between integrations.

The fact that we have small angles here also allows us to simplify Rodrigues' formula to:

R(ω) ≈ I + sin θ [n]× ≈ I + θ [n]× =
⎡  1   −ωz   ωy ⎤
⎢  ωz   1   −ωx ⎥
⎣ −ωy   ωx   1  ⎦

The orientation is obtained by integrating the angular velocity using the trick presented before and solving the following first-order linear differential equation:

Ċnb = Cnb R(ω_nb^b)
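The discrete update amounts to repeated multiplication by the small-angle matrix. At 500 Hz the per-step angles are tiny, but the approximation slowly de-orthogonalizes the matrix, so a real implementation would re-orthonormalize periodically. An illustrative sketch:

```python
import numpy as np

def skew(v):
    """Matrix form [v]x of the cross product with v."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def integrate_gyro(C, omega_b, dt):
    """One small-angle orientation step: C <- C (I + [omega * dt]x)."""
    return C @ (np.eye(3) + skew(omega_b * dt))
```

For example, integrating a constant rate of 1 rad/s about z for one second in 500 steps yields a matrix close to a 1 rad yaw rotation, up to the accumulated small-angle error.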

Position integration

The position is then computed by integrating twice the acceleration expressed in the global frame:

v̇_nb^n = Cnb f_nb^b + g^n
ṗ_nb^n = v_nb^n

where

g^n ≈ [0  0  9.8 m·s⁻²]^T

These equations do not take the rotation of the Earth into consideration, since its effect between two samples taken at 500 Hz is really small, and the gyroscopes in the MTi-10 IMU are anyway not sensitive enough to detect a rotation as slow as 18 °/h (0.005 °/s).
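One position-integration step in the navigation frame can be sketched as follows (z-axis pointing down, so gⁿ has a positive z-component; names are ours, for illustration):

```python
import numpy as np

G_N = np.array([0.0, 0.0, 9.8])  # gravity in the navigation frame (z down)

def integrate_position(C_nb, f_b, v_n, p_n, dt):
    """Euler step of v_dot = C f + g and p_dot = v, where f_b is the
    specific force measured by the accelerometers in the body frame."""
    v_new = v_n + (C_nb @ f_b + G_N) * dt
    p_new = p_n + v_new * dt
    return v_new, p_new
```

A stationary, level IMU measures f_b ≈ (0, 0, −9.8), so the gravity term cancels and velocity and position stay put; any orientation error breaks this cancellation, which is exactly the drift discussed in the observations that follow.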

Observations

We wrote a small piece of software to perform these naive integrations in orientation and position and build an intuition of what to expect from the inertial tracker. In the figure below you can see the drift after 1 s, expressed in the body frame, when the IMU is held steady.

Figure 18: Screenshot from the first draft of the tracker software displaying raw IMU data and a naive integration in the B frame (helmet IMU frame)

For an IMU held steady on the desk we obtained a drift of 1.8° and 10 cm in 10 s, which seemed quite convincing. But this assumed that the orientation was always null for the accelerometer integration, regardless of the gyroscope integration. The results were much worse when we implemented the online projection of the gravity vector for gravity removal. This naive tracker failed to reach the required performance level. The orientation drift was contained and more or less satisfactory, but the position drift was huge (several meters over a 2 s interval). This is easily explained by the fact that the orientation estimation error has a very strong impact when translating the accelerations from the IMU frame to the platform frame.

With a simple calculation one can show that an orientation error of 0.5° causes a position estimation error of 8.5 cm after 1 s, because the orientation is used to compute the rotation matrix Cbn, which is itself used to project the accelerations into the navigation frame.
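The order of magnitude can be checked with a back-of-the-envelope sketch: a tilt error ε leaks a spurious horizontal acceleration g·sin ε into the navigation frame, which double integration turns into quadratically growing position drift. This is a simplified model of ours; the exact figure depends on how the orientation error itself evolves over the interval.

```python
import math

def gravity_leak_accel(tilt_error_deg, g=9.81):
    """Spurious horizontal acceleration caused by projecting gravity
    with a slightly wrong orientation estimate."""
    return g * math.sin(math.radians(tilt_error_deg))

def drift_after(tilt_error_deg, t):
    """Position drift from that constant spurious acceleration,
    double-integrated from rest: 0.5 * a_err * t**2."""
    return 0.5 * gravity_leak_accel(tilt_error_deg) * t ** 2
```

For a 0.5° error the leaked acceleration is close to 0.09 m/s², so centimeter-level drift accumulates within the first second and grows fast from there.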


Figure 19: Screenshot from the second draft of the tracker software displaying RAW IMU data and a naive integration in the N frame (platform frame)

The conclusion is straightforward: no good can come from a bad orientation estimate. We will then focus on improving it.

Correcting gyroscopes using gravity and local magnetic field in an Extended Kalman Filter

EKF for the orientation estimation

Our goal is to use an EKF to improve the quality of the inertial tracker. But keeping the latency as low as possible is critical here: it is the very reason we added inertial tracking to the optical tracker (cf. the second part of the report). With this filter we only wish to eliminate the orientation drift.

Shall we try to enhance the angular velocity measurement or try to produce a better estimate of the orientation? In fact it is hard to do anything good by directly processing the data from the gyroscopes. It is much simpler (and intuitive) to correct the orientation estimation. For example, when we are tracking on a fixed platform we know the exact direction of the gravity in the navigation frame. Luckily we already have sensors in the IMU that are capable of measuring the gravity acceleration in static situations. So knowing the direction of the gravity in the body frame should allow us to at least enhance the pitch and roll estimations.

In practice we use the 3-axis accelerometer embedded in the IMU to estimate the orientation of the gravity vector and we check whether it is consistent with the current orientation estimate of the pilot’s head. This is done by checking whether the estimated gravity direction, once transformed from the body frame to the inertial frame, lies along the z-axis of the inertial frame. The main difficulty lies in the fact that, as in most IMUs, the accelerometers in the MTi-10 from Xsens measure gravity and body acceleration together. It is therefore quite hard to separate them to get a good estimate of the gravity direction when the body frame is moving with respect to the inertial frame. The Extended Kalman Filter we implemented (based on Sabatini's paper [15]) is designed to perform this gravity vector estimation but also to quantify and compensate for the accelerometer biases.
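The consistency test itself can be sketched as follows (illustrative Python with a hypothetical quaternion-rotation helper and a placeholder angle threshold; the actual filter performs this check inside the EKF update):

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    # v' = v + w*t + q_vec x t, with t = 2 * (q_vec x v)
    t = (2 * (y * v[2] - z * v[1]),
         2 * (z * v[0] - x * v[2]),
         2 * (x * v[1] - y * v[0]))
    return (v[0] + w * t[0] + (y * t[2] - z * t[1]),
            v[1] + w * t[1] + (z * t[0] - x * t[2]),
            v[2] + w * t[2] + (x * t[1] - y * t[0]))

def gravity_consistent(q_body_to_nav, g_body, max_angle_deg=2.0):
    """Rotate the gravity direction measured in the body frame into the
    navigation frame and check that it lies along the z-axis."""
    gx, gy, gz = quat_rotate(q_body_to_nav, g_body)
    norm = math.sqrt(gx * gx + gy * gy + gz * gz)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, gz / norm))))
    return angle <= max_angle_deg

# With an identity orientation, a measurement along +z is consistent:
ok = gravity_consistent((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 9.81))
```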

However, knowing the orientation of gravity does not give us any information about the correctness of our yaw angle estimate. The yaw angle is by definition a quantification of the rotation around the z-axis of the navigation frame, and the earth gravity is by definition aligned with the z-axis of the inertial frame; these two frames are identical in the fixed-platform version. The most common solution is to use the earth magnetic field to realign the yaw angle estimate.

Many publications tackle the issue of inertial human body orientation tracking using inclinometer and magnetometer data to compensate for the drift of the gyroscopes [16-18]. It is on this particular aspect that our filter differs from the one presented by Sabatini et al., since we cannot really use the magnetometer data inside a metallic structure full of electronics such as a helicopter simulator. But we have information that they could not assume when tracking humans in a general environment: the pilot is seated and the only moving part (with respect to the navigation frame) is his head. So the in-line calibration procedure uses no magnetometers, but the IMU 3-axis accelerometers and an optical tracker relying on the orientation estimation made by fixed cameras. The optical tracker and the data fusion are detailed in the second part of this report.

We will first focus on how to enhance the orientation estimation using a 3-axis accelerometer and a 3-axis magnetometer.

This assumes that the magnetic field is not too disturbed in the tracking environment. That is not the case in our application, but it is still interesting to see how it works, and it does not add much complexity since it is very similar to the gravity-based realignment process of the pitch and roll angles.

Filter Overview

The following schematic presents the Extended Kalman Filter structure used to perform the desired orientation estimate realignment. It reproduces a figure from Sabatini's paper [15], which includes the yaw-angle realignment process using magnetometers, even though we chose not to use it.

[Figure 20 shows the block structure of the filter: the gyroscope data drives the "project ahead" stage, while the accelerometer and magnetic sensor measurements pass through a measurement validation test and an R-adaptation stage; the Jacobian computation feeds the Kalman gain computation, which drives the update of the estimate (MODS).]

Figure 20: Orientation filter structure [15]

The integration of the rate gyroscopes is done using rotation matrix multiplication as presented by Foxlin [10] (and by us in the Naive approach section).

It is in fact a quite simple EKF, since we only deal with non-linearities in the measurement update phase (the f function). The mathematical structure of the filter is summarized below.

① Compute the a priori state estimate:

$$x^-_{k+1} = \Phi(T_s, \omega_k)\, x_k$$

where $x_k$ is the current state, $x^-_{k+1}$ the a priori state estimate and $\Phi(T_s, \omega_k)$ the augmented rotation matrix corresponding to the rotation from the previous sampling time to the last sampling time, based on the gyroscope data.

② Compute the a priori error covariance matrix:

$$P^-_{k+1} = \Phi(T_s, \omega_k)\, P_k\, \Phi(T_s, \omega_k)^T + Q_k$$

where $Q_k$ is the process noise covariance matrix, $P_k$ the current error covariance matrix and $P^-_{k+1}$ the a priori error covariance matrix.

③ Compute the Kalman gain:

$$K_{k+1} = P^-_{k+1} F^T_{k+1} \left(F_{k+1} P^-_{k+1} F^T_{k+1} + R_{k+1}\right)^{-1}$$

where $K_{k+1}$ is the Kalman gain, $F_{k+1}$ the Jacobian matrix (cf. equations below) and $R_{k+1}$ the covariance matrix of the measurement model.

④ Compute the a posteriori state estimate:

$$x_{k+1} = x^-_{k+1} + K_{k+1}\left(z_{k+1} - f(x^-_{k+1})\right)$$

where $x_{k+1}$ is the updated state, $z_{k+1}$ the measurement vector and $f(x^-_{k+1})$ the function that computes what the measurement vector should be if the real state were equal to the a priori state estimate.

⑤ Compute the a posteriori error covariance matrix:

$$P_{k+1} = P^-_{k+1} - K_{k+1} F_{k+1} P^-_{k+1}$$

where $P_{k+1}$ is the a posteriori error covariance matrix.

Figure 21: Orientation EKF algorithm [15]
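The five steps map directly onto code. The sketch below is a minimal, generic illustration in Python/numpy (hypothetical helper names; not the thesis implementation, which stores quaternions and sensor biases in the state):

```python
import numpy as np

def ekf_step(x, P, Phi, Q, z, f, jacobian, R):
    """One iteration of the EKF summarized in Figure 21.
    Phi is the state-transition matrix built from the gyroscope data,
    f the measurement function and jacobian(x) its Jacobian F at x."""
    x_pred = Phi @ x                          # 1) a priori state estimate
    P_pred = Phi @ P @ Phi.T + Q              # 2) a priori error covariance
    F = jacobian(x_pred)
    S = F @ P_pred @ F.T + R
    K = P_pred @ F.T @ np.linalg.inv(S)       # 3) Kalman gain
    x_new = x_pred + K @ (z - f(x_pred))      # 4) a posteriori state estimate
    P_new = P_pred - K @ F @ P_pred           # 5) a posteriori error covariance
    return x_new, P_new

# Tiny 1-dimensional linear example just to exercise the five steps.
x, P = np.array([0.0]), np.array([[1.0]])
x, P = ekf_step(x, P, Phi=np.array([[1.0]]), Q=np.array([[0.01]]),
                z=np.array([1.0]), f=lambda s: s,
                jacobian=lambda s: np.eye(1), R=np.array([[0.1]]))
```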

The state vector is composed of the orientation quaternion, the accelerometers’ biases and magnetometers’ biases:

$$x = (x_1, \cdots, x_{10})^T = (q_1,\, q_2,\, q_3,\, q_4,\, {}^{a}b_x,\, {}^{a}b_y,\, {}^{a}b_z,\, {}^{m}b_x,\, {}^{m}b_y,\, {}^{m}b_z)^T$$

The quaternion is projected ahead during the prediction stage of the filter using the data from the gyroscopes based on the same equation we saw before:

$$q^-_{k+1} = \exp(\Omega[\omega_k]\, T_s)\, q_k
\quad\text{with}\quad
\Omega[\omega_k] = \frac{1}{2}
\begin{bmatrix}
0 & \omega_{z,k} & -\omega_{y,k} & \omega_{x,k} \\
-\omega_{z,k} & 0 & \omega_{x,k} & \omega_{y,k} \\
\omega_{y,k} & -\omega_{x,k} & 0 & \omega_{z,k} \\
-\omega_{x,k} & -\omega_{y,k} & -\omega_{z,k} & 0
\end{bmatrix}$$
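A closed form exists for this exponential because $\Omega[\omega_k]$ is antisymmetric and satisfies $\Omega^2 = -(\|\omega_k\|/2)^2 I$, so $\exp(\Omega T_s) = \cos(\|\omega\|T_s/2)\, I_4 + \frac{\sin(\|\omega\|T_s/2)}{\|\omega\|/2}\, \Omega$. A hedged numpy sketch of this propagation step (scalar-last quaternion, as in the state vector; helper names are ours):

```python
import numpy as np

def omega_matrix(w):
    """Build Omega[w] for angular rate w = (wx, wy, wz), as in the equation above."""
    wx, wy, wz = w
    return 0.5 * np.array([[0.0,  wz, -wy,  wx],
                           [-wz, 0.0,  wx,  wy],
                           [ wy, -wx, 0.0,  wz],
                           [-wx, -wy, -wz, 0.0]])

def propagate(q, w, ts):
    """q_{k+1} = exp(Omega[w] * ts) q_k using the closed-form exponential
    (valid because Omega^2 = -(|w|/2)^2 I)."""
    n = np.linalg.norm(w)
    if n < 1e-12:
        return q
    half = 0.5 * n * ts
    expm = np.cos(half) * np.eye(4) + (np.sin(half) / (0.5 * n)) * omega_matrix(w)
    return expm @ q

q = np.array([0.0, 0.0, 0.0, 1.0])       # identity quaternion (scalar last)
q = propagate(q, (0.0, 0.0, 1.0), 0.01)  # 1 rad/s about z for 10 ms
```

Since the exponential of an antisymmetric matrix is orthogonal, this propagation preserves the quaternion norm, which a naive first-order Euler step does not.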

The measurement vector contains the (calibrated) raw data from the accelerometers and magnetometers.

$$z_{k+1} = \begin{bmatrix} a_{k+1} \\ m_{k+1} \end{bmatrix}$$

The Jacobian F is computed as follows:

$$F_{k+1} = \left.\frac{\partial z_{k+1}}{\partial x_{k+1}}\right|_{x_{k+1}=x^-_{k+1},\; w_{k+1}=0}$$


We saw before that the measurement vector can be expressed as a function of the current orientation, the local gravity and magnetic fields and the bias and noise vectors characterizing the sensors.

$$z_{k+1} =
\begin{bmatrix} C^b_n(q_{k+1}) & 0 \\ 0 & C^b_n(q_{k+1}) \end{bmatrix}
\begin{bmatrix} g \\ h \end{bmatrix}
+ \begin{bmatrix} {}^{a}b_{k+1} \\ {}^{m}b_{k+1} \end{bmatrix}
+ \begin{bmatrix} {}^{a}w_{k+1} \\ {}^{m}w_{k+1} \end{bmatrix}$$

For the sake of readability we will omit the k indexes and the q parameters in the rest of the derivation. For example, the equation above will be written as follows:

$$z =
\begin{bmatrix} C^b_n & 0 \\ 0 & C^b_n \end{bmatrix}
\begin{bmatrix} g \\ h \end{bmatrix}
+ \begin{bmatrix} {}^{a}b \\ {}^{m}b \end{bmatrix}
+ \begin{bmatrix} {}^{a}w \\ {}^{m}w \end{bmatrix}$$

The term at row i and column j of the Jacobian matrix is the partial derivative of the i-th row of the measurement vector, denoted $f_i$, with respect to the j-th term of the state vector $x_j$, evaluated at the a priori state estimate and with zero noise.

$$F_{k+1} =
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_{10}} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_6}{\partial x_1} & \cdots & \frac{\partial f_6}{\partial x_{10}}
\end{bmatrix}_{x_{k+1}=x^-_{k+1},\; w_{k+1}=0}$$

Let us compute the upper-left (first row, first column) term of the Jacobian in detail.

$$\begin{aligned}
\frac{\partial f_1}{\partial x_1}
&= \frac{\partial}{\partial x_1}\left(C^b_n(1,:)\cdot g + {}^{a}b\right) \\
&= \frac{\partial}{\partial x_1}\left(C^b_n(1,:)\right)\cdot g \\
&= \begin{bmatrix} \frac{\partial}{\partial x_1}C^b_n(1,1) & \frac{\partial}{\partial x_1}C^b_n(1,2) & \frac{\partial}{\partial x_1}C^b_n(1,3) \end{bmatrix} \cdot g
\end{aligned}$$

where the derivatives of the first-row elements of the rotation matrix with respect to the first element of the state vector are the following.

$$\begin{aligned}
\frac{\partial}{\partial x_1} C^b_n(1,1)
&= \frac{\partial}{\partial q_1}\,\frac{q_1^2 - q_2^2 - q_3^2 + q_4^2}{\sqrt{q_1^2 + q_2^2 + q_3^2 + q_4^2}} \\
&= \frac{q_1\,(q_1^2 + 3q_2^2 + 3q_3^2 + q_4^2)}{(q_1^2 + q_2^2 + q_3^2 + q_4^2)^{3/2}} \\
&= \frac{q_1\,(q_1^2 + q_2^2 + q_3^2 + q_4^2) + q_1\,(2q_2^2 + 2q_3^2)}{(q_1^2 + q_2^2 + q_3^2 + q_4^2)^{3/2}} \\
&= \frac{1}{\|q\|}\, q_1 \left(1 + \frac{2q_2^2 + 2q_3^2}{\|q\|^2}\right) \\
&= \frac{1}{\|q\|}\, q_1 \left(2 - \frac{q_1^2 + q_2^2 + q_3^2 + q_4^2}{\|q\|^2} + \frac{2q_2^2 + 2q_3^2}{\|q\|^2}\right) \\
&= \frac{1}{\|q\|}\, q_1 \left(2 - \frac{q_1^2 - q_2^2 - q_3^2 + q_4^2}{\|q\|^2}\right) \\
&= \frac{1}{\|q\|}\, q_1 \left(2 - \frac{C^b_n(1,1)}{\|q\|}\right)
\end{aligned}$$

$$\begin{aligned}
\frac{\partial}{\partial x_1} C^b_n(1,2)
&= \frac{\partial}{\partial q_1}\,\frac{2\,(q_1 q_2 + q_3 q_4)}{\|q\|} \\
&= \frac{2\,\left(-q_1 q_3 q_4 + q_2^3 + (q_3^2 + q_4^2)\, q_2\right)}{\|q\|^3} \\
&= \frac{1}{\|q\|}\left(2 q_2 - \frac{C^b_n(1,2)}{\|q\|}\, q_1\right)
\end{aligned}$$

$$\begin{aligned}
\frac{\partial}{\partial x_1} C^b_n(1,3)
&= \frac{\partial}{\partial q_1}\,\frac{2\,(q_1 q_3 - q_2 q_4)}{\|q\|} \\
&= \frac{2\,\left(q_1 q_2 q_4 + q_3^3 + q_2^2 q_3 + q_4^2 q_3\right)}{\|q\|^3} \\
&= \frac{1}{\|q\|}\left(2 q_3 - \frac{C^b_n(1,3)}{\|q\|}\, q_1\right)
\end{aligned}$$

So the upper-left term of the matrix F can be expressed as follows:

$$F_{k+1}(1,1) = \frac{1}{\|q\|}
\begin{bmatrix}
2q_1 - \frac{C^b_n(1,1)}{\|q\|}\, q_1 \\
2q_2 - \frac{C^b_n(1,2)}{\|q\|}\, q_1 \\
2q_3 - \frac{C^b_n(1,3)}{\|q\|}\, q_1
\end{bmatrix}^T \cdot g$$

The calculations of the other terms of the Jacobian are quite similar (and therefore tedious) so we will not explain them in detail here.
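A practical way to validate such hand-derived terms is to compare them against a central finite difference. An illustrative Python check for the $\partial C^b_n(1,1)/\partial q_1$ term derived above (helper names are ours, not from the thesis code):

```python
import math

def C11(q1, q2, q3, q4):
    """C^b_n(1,1) with the quaternion-norm normalization used above."""
    return (q1*q1 - q2*q2 - q3*q3 + q4*q4) / math.sqrt(q1*q1 + q2*q2 + q3*q3 + q4*q4)

def dC11_dq1(q1, q2, q3, q4):
    """Analytic derivative: (1/|q|) q1 (2 - C(1,1)/|q|)."""
    n = math.sqrt(q1*q1 + q2*q2 + q3*q3 + q4*q4)
    return q1 * (2.0 - C11(q1, q2, q3, q4) / n) / n

q = (0.3, -0.2, 0.5, 0.79)
h = 1e-6
numeric = (C11(q[0] + h, *q[1:]) - C11(q[0] - h, *q[1:])) / (2 * h)
analytic = dC11_dq1(*q)
```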

The measurement noise covariance matrix is adapted at run-time to guard against the effects of body motion disturbances on the reliability of an aiding system sensor measurement.

Many matrices in the filter are block-diagonal, so we heavily optimized the multiplications and inversions of these matrices. As a result, the mean execution time for one iteration of the filter dropped from 2 ms to 0.5 ms on the computer we used for designing the system (2.4 GHz Intel Core2 Quad CPU, 8 GB RAM).
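The block-diagonal optimization amounts to operating on the diagonal blocks independently instead of on the full matrices. A minimal numpy illustration of the idea (not the actual optimized code):

```python
import numpy as np

def blockdiag_mult(blocks_a, blocks_b):
    """Multiply two block-diagonal matrices with matching block layout
    by multiplying their diagonal blocks independently (O(k*m^3) instead
    of O((k*m)^3) for k blocks of size m)."""
    return [a @ b for a, b in zip(blocks_a, blocks_b)]

def assemble(blocks):
    """Assemble a dense matrix from its diagonal blocks (for checking only)."""
    n = sum(b.shape[0] for b in blocks)
    m = np.zeros((n, n))
    i = 0
    for b in blocks:
        m[i:i + b.shape[0], i:i + b.shape[0]] = b
        i += b.shape[0]
    return m

# Two 6x6 block-diagonal matrices stored as lists of 3x3 blocks.
rng = np.random.default_rng(0)
A = [rng.standard_normal((3, 3)) for _ in range(2)]
B = [rng.standard_normal((3, 3)) for _ in range(2)]

dense = assemble(A) @ assemble(B)
blockwise = assemble(blockdiag_mult(A, B))
```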

But the EKF is hard to tune and debug, especially after all the optimizations needed to make it run fast enough, and its complexity grows rapidly when the state or measurement vectors are augmented. It is a bit oversized for correcting drifts on a signal we will anyway realign using the optical references.

Simpler tracker with orientation drift correction using gravity

The EKF-based tracker presented before is good, but it is quite complex, hard to tune and computationally expensive.

Furthermore, the IMU calibration data provided by the manufacturer has proved good enough that we do not need to implement gyroscope and accelerometer bias removal structures in our filter. So we decided to extract the key ideas we developed for it and put them in a simpler, more efficient and more reliable algorithm. The simplicity we obtained that way allowed us to come up with new ideas and a better tuning of the parameters. But first things first, let's start with a description of this so-called simplified tracker.

Orientation estimation

R. Mahony [19] showed that computing the orientation correction from the updated orientation, instead of from the previous orientation estimate, introduces less error. So we designed a filter similar to the one presented by Madgwick in [20], but with the enhancement presented above and a validation step before the correction to make sure that the acceleration sensed by the IMU corresponds to gravity alone.


[Figure 22 shows the orientation estimation pipeline. The gyroscope output $\omega_{b,k+1}$ is rotated to the navigation frame as $q_k \otimes \omega_{b,k+1} \otimes q_k^{-1}$ and integrated through the incremental quaternion $(1,\, \omega_x/2f,\, \omega_y/2f,\, \omega_z/2f)$ to produce the first estimate $\hat{q}_{k+1}$. In parallel, the gravity direction $\hat{g}_b$ is estimated from the acceleration history $a_{b,(\cdots k+1)}$ and rotated to the navigation frame as $q_k \otimes \hat{g}_{b,k+1} \otimes q_k^{-1}$; when $\hat{g}_b$ is judged reliable, the correction quaternion $(1,\, \hat{g}_y/2,\, -\hat{g}_x/2,\, 0)$ between the estimated gravity and the true gravity $g_n = [0, 0, g]^T$ is applied to yield the corrected estimate $\tilde{q}_{k+1}$; when it is unreliable, $\hat{q}_{k+1}$ is kept unchanged.]

Figure 22: Orientation estimation process

The orientation estimation of the simplified tracker is done in two steps, in a fashion similar to the one presented by J. Favre in [8]. The first step consists of providing an updated estimate of the pilot’s head orientation through a purely naive integration using the previous orientation estimate and the latest gyroscope data. The second step consists of correcting this first estimate using information about gravity, when available. More precisely, if the IMU is static or moving at a constant linear velocity, we first measure the direction of gravity using the accelerometers. Then we compute the rotation quaternion between the estimated gravity vector and the real gravity vector, which is known to point along the z-axis of the N frame. Finally we rotate the first estimate of the updated orientation quaternion by applying the rotation quaternion we just computed.
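The correction step can be sketched as follows (illustrative Python with scalar-last quaternions; `correction_quaternion` and its exact axis-angle construction are our hypothetical stand-in for the correction quaternion described above):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def correction_quaternion(g_est_n):
    """Quaternion (x, y, z, w) rotating the estimated gravity direction
    (expressed in the navigation frame) onto the true gravity axis [0, 0, 1]."""
    gx, gy, gz = normalize(g_est_n)
    axis = (gy, -gx, 0.0)                   # g_est x z_axis
    s = math.sqrt(gx * gx + gy * gy)        # |axis| = sin(angle)
    if s < 1e-12:
        return (0.0, 0.0, 0.0, 1.0)         # already aligned: identity
    angle = math.atan2(s, gz)               # angle between g_est and z
    ax, ay, az = (axis[0] / s, axis[1] / s, axis[2] / s)
    half = 0.5 * angle
    return (ax * math.sin(half), ay * math.sin(half),
            az * math.sin(half), math.cos(half))

# A gravity estimate slightly tilted toward +x yields a small corrective rotation.
q = correction_quaternion((0.1, 0.0, 9.8))
```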

The gravity estimation validation is done as presented in the figure below. It is an adaptation of the techniques presented by Sabatini [15], Madgwick [20] and Favre [8]: we check whether the absolute value of the modulus of the sensed acceleration minus the earth gravity lies below a given threshold for the last n samples.

[Figure 23 depicts the validation logic: for each of the last n acceleration samples $a_{k-n}, \cdots, a_{k-1}, a_k$, the deviation $\left|\,\|a_i\| - \|g\|\,\right|$ of the acceleration modulus from the gravity modulus $\|g\| = g$ is compared against the threshold $\epsilon_a$; the boolean results are combined through AND gates, and $\hat{g}_b$ is declared reliable only if every sample passes the test.]

Figure 23: Gravity estimation validation process

$\epsilon_a$: maximum deviation of the acceleration modulus from the gravity modulus to consider that the sensed acceleration is pure gravity.

To make sure we do not consider the IMU to be steady when it is not, we apply the same process to the angular velocity: we check whether the modulus of the instantaneous angular velocity lies below another threshold. We then feed the boolean results of this test and of the previous one to an "AND gate", whose output tells us whether we can consider the IMU to be steady in the navigation frame. If it is false, we do not realign the inertial tracker on gravity in that step, as shown in Figure 22.
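A minimal sketch of this combined validation logic (illustrative Python; the ε values are placeholders, not the tuned thresholds):

```python
EPS_A = 0.05      # max | ||a|| - g | in m/s^2  -- placeholder threshold
EPS_W = 0.02      # max ||w|| in rad/s          -- placeholder threshold
G = 9.81

def norm(v):
    return (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5

def imu_is_steady(accel_window, gyro_window):
    """True when every acceleration sample in the window has a modulus close
    to gravity AND every angular-velocity sample is close to zero."""
    accel_ok = all(abs(norm(a) - G) <= EPS_A for a in accel_window)
    gyro_ok = all(norm(w) <= EPS_W for w in gyro_window)
    return accel_ok and gyro_ok

steady = imu_is_steady([(0.0, 0.0, 9.80)] * 5, [(0.0, 0.0, 0.001)] * 5)
```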

The following figure shows the orientation estimation drift with and without realignment on gravity.


[Figure 24 plots the yaw, pitch and roll angles (in degrees) against time (in ms) for three curves: the true value, the inertial tracker without gravity-based correction, and the inertial tracker with gravity-based correction.]

Figure 24: Comparative plot of the inertial orientation tracker with and without realignment on gravity

Position estimation

The position estimation is rather simple. The reason is that we have very little margin since we do not want to introduce any delays in the process. So we just do a naive integration of the accelerometer data using Euler's method.

[Figure 25 shows the position estimation chain: the acceleration $a_{k+1}$ is scaled by $T_S$ and accumulated through a delay element to produce the velocity $v_{k+1}$, which is scaled by $\alpha$ and by $T_S$ and accumulated again to produce the position $p_{k+1}$.]

Figure 25: Position estimation process

$a_{k+1}$: acceleration estimate at k+1; $v_{k+1}$: velocity estimate at k+1; $p_{k+1}$: position estimate at k+1; $\alpha$ = (IMU is not moving) ? $\alpha_K$ : 1.0.

Euler's method can produce good approximations but it is not always practical since it requires a lot of computation. But here we have a lot of computational power at our disposal, and the Euler method has proved to work well at 1 kHz on the lab computer, so we can assume it will not cause any trouble when running the inertial tracker at 500 Hz.

As we saw, to get the position we integrate the acceleration of the pilot’s head twice. This acceleration in the inertial frame is due to the effect of the external forces, excluding gravity. This exclusion is rather problematic. We could just remove a fixed value in the z-axis from the accelerometer data expressed in the navigation frame. But a small deviation between what is sensed by the IMU and this scalar would make the position estimation drift significantly along the z-axis even in static situations. We can enhance this process a bit in these static situations by removing the running average of the accelerometer data instead of a fixed value as depicted in the figure below.
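This adaptive gravity removal can be sketched as follows (illustrative Python; the rotation of the accelerations to the navigation frame is assumed to be done upstream, and the window length is a placeholder):

```python
from collections import deque

G_N = (0.0, 0.0, 9.81)   # fixed gravity vector in the navigation frame

class AccelerationEstimator:
    """Remove gravity from navigation-frame accelerations: subtract the
    running average when the IMU is static, the fixed g vector otherwise."""
    def __init__(self, window=200):
        self.history = deque(maxlen=window)

    def estimate(self, a_nav, imu_is_moving):
        self.history.append(a_nav)
        if imu_is_moving:
            ref = G_N                                  # fixed gravity value
        else:
            n = len(self.history)                      # running average
            ref = tuple(sum(s[i] for s in self.history) / n for i in range(3))
        return tuple(a_nav[i] - ref[i] for i in range(3))

est = AccelerationEstimator()
# Static IMU with a slightly biased z reading: the running average absorbs the bias.
for _ in range(10):
    a = est.estimate((0.0, 0.0, 9.83), imu_is_moving=False)
```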


[Figure 26 shows the acceleration estimation process: the last IMU acceleration $a_{b,k+1}$ is rotated to the navigation frame as $q \otimes a_b \otimes q^{-1}$ using the updated orientation quaternion $q_{k+1}$; when the IMU is moving, the known gravity $g_n = [0, 0, g]^T$ is subtracted ($a_n - g_n$), and when it is not moving, the running average $\bar{a}_n$ of the rotated acceleration history $a_{b,(\cdots k+1)}$ is subtracted instead ($a_n - \bar{a}_n$), yielding the estimate $a_{k+1}$.]

Figure 26: Acceleration estimation process

$a_{b,k+1}$: last IMU acceleration in the b frame; $a_{b,(\cdots k+1)}$: IMU acceleration history in the b frame; $q_{k+1}$: updated orientation quaternion.

In short, the position integration strongly relies on the orientation estimation, which itself relies on the gravity modulus estimation. The gravity was assumed constant over the simulator area, which seems reasonable, but an unexpected problem showed up: the gravity modulus sensed by the IMU depended on the IMU orientation. We measured it for a set of orientations and gathered the results in the table below.

Measured gravity modulus (in m.s⁻²) as a function of the IMU pitch (columns) and roll (rows):

| Roll \ Pitch | -90° | -67.5° | -45° | -22.5° | 0° | 22.5° | 45° | 67.5° | 90° |
| -157.5° | 9.818 | 9.941 | 10.049 | 10.016 | 9.881 | 9.726 | 9.643 | 9.694 | 9.814 |
| -135° | 9.818 | 9.912 | 10.035 | 10.017 | 9.892 | 9.760 | 9.673 | 9.721 | 9.814 |
| -112.5° | 9.818 | 9.883 | 9.965 | 9.954 | 9.877 | 9.774 | 9.717 | 9.748 | 9.814 |
| -90° | 9.818 | 9.854 | 9.860 | 9.851 | 9.814 | 9.771 | 9.758 | 9.775 | 9.814 |
| -67.5° | 9.818 | 9.8085 | 9.747 | 9.736 | 9.743 | 9.775 | 9.811 | 9.8208 | 9.814 |
| -45° | 9.818 | 9.763 | 9.660 | 9.661 | 9.718 | 9.804 | 9.868 | 9.8665 | 9.814 |
| -22.5° | 9.818 | 9.7175 | 9.610 | 9.640 | 9.746 | 9.876 | 9.950 | 9.9123 | 9.814 |
| 0° | 9.818 | 9.672 | 9.606 | 9.672 | 9.818 | 9.962 | 10.019 | 9.958 | 9.814 |
| 22.5° | 9.818 | 9.6988 | 9.636 | 9.727 | 9.886 | 10.027 | 10.065 | 9.9363 | 9.814 |
| 45° | 9.818 | 9.7255 | 9.681 | 9.773 | 9.925 | 10.036 | 10.056 | 9.9145 | 9.814 |
| 67.5° | 9.818 | 9.7523 | 9.716 | 9.783 | 9.892 | 9.980 | 9.997 | 9.8928 | 9.814 |
| 90° | 9.818 | 9.779 | 9.761 | 9.784 | 9.827 | 9.878 | 9.892 | 9.871 | 9.814 |
| 112.5° | 9.818 | 9.8268 | 9.810 | 9.789 | 9.768 | 9.762 | 9.776 | 9.820 | 9.814 |
| 135° | 9.818 | 9.8745 | 9.878 | 9.817 | 9.737 | 9.682 | 9.683 | 9.769 | 9.814 |
| 157.5° | 9.818 | 9.9223 | 9.948 | 9.887 | 9.761 | 9.653 | 9.632 | 9.718 | 9.814 |
| 180° | 9.818 | 9.970 | 10.014 | 9.961 | 9.813 | 9.677 | 9.616 | 9.667 | 9.814 |

Figure 27: Measured gravity modulus for different orientations of the IMU
