Thesis

Facial animation parameter extraction using high-dimensional manifolds

Henrik Ellner

ISY, Linköpings Universitet
LiTH-ISY-EX--06/3774--SE
Examensarbete: 20 p
Level: D
Supervisor: Jörgen Ahlberg, ISY, Linköpings Universitet
Examiner: Robert Forchheimer, ISY, Linköpings Universitet
Linköping: March 2006

Institutionen för Systemteknik, 581 83 Linköping, Sweden
www.ep.liu.se



Abstract

This thesis presents and examines a method that can potentially be used for extracting parameters from a manifold in a space. In the thesis the method is presented, and a potential application is described. The application is determining FAP-values. FAP-values are used for parameterizing faces, which can e.g. be used to compress data when sending video sequences over limited bandwidth.


Acknowledgements

I would like to thank my supervisor Jörgen Ahlberg for the help and support with ideas and writing. I would also like to thank my examiner Robert Forchheimer and Peter Johanson for helping me with ideas. I would also like to thank Ralph Dragon for giving me material and ideas at the early stages of my thesis.


Contents

1 Introduction
  1.1 Problem formulation
  1.2 Analysis by synthesis
  1.3 3D vs 2D
  1.4 Outline

2 A method for parameter approximation
  2.1 The method
  2.2 A simple example
    2.2.1 Interpolation method one
    2.2.2 Interpolation method two
  2.3 Another simple example
  2.4 The complete model

3 Application to face images
  3.1 Prerequisite
  3.2 Estimating local movement of the vertices
  3.3 Estimating the rotation
  3.4 Generating training data

4 Experiments
  4.1 Introduction
  4.2 The simple examples
  4.3 The vertex position
  4.4 Rotation around a known axis
  4.5 Rotation around an "unknown axis"

5 Summary and Conclusion
  5.1 Summary and conclusion
  5.2 Future work

A Principal Component Analysis, PCA
B MPEG-4 FA standard
C Algorithms and program issues
  C.1 Bilinear interpolation
  C.2 Calculating Euler angles from an axis of rotation
  C.3 P5 PGM file format
  C.4 Candide
  C.5 The programs I used


Chapter 1

Introduction

In this introductory chapter our method for parameter extraction is briefly introduced and the outline of the thesis is presented.

1.1 Problem formulation

Sending video sequences over a communication channel requires large bandwidth, since every frame is a new picture to send or receive. This thesis treats the analysis of faces, motivated by the desire to compress the data in video sequences containing faces. We do this by using a proposed method of compression that uses parameters extracted from the face, the MPEG-4 FA standard1. The face's texture and the parameters are enough to reconstruct the video sequence. The compression considered is not restricted to faces, but could be used for any video sequence containing few objects whose types are known beforehand. The parameters used for the compression of facial images can also be used for creating a video sequence of any person's face, or for controlling animated characters. Of course, there are already numerous methods for compressing video sequences. However, the compression techniques used today consider the entire image space, while we will be considering manifolds in the image space, which allows for much stronger data reduction. An example of a case where this kind of compression would be useful is video conversation on cellular telephones over their limited bandwidth, since in a video conference usually the only important part is the face of the other person.

Faces are controlled by muscles, and thus a person can only perform a limited number of different expressions. In contrast, the image space is extremely large for reasonably sized images. Knowing beforehand that we are capturing a video sequence of a face, we can send a 3D model of the face and update it, instead of sending a new image for every frame. Such a model could be a 3D face with muscles that can contract or relax. The parameters to send would be the contraction amplitudes of the different muscles, together with global motion parameters such as translation and rotation. However, this kind of model will not be considered in this thesis, since it is neither general enough nor portable to other faces. Lighting is also a parameter that should be considered when describing a face. However, images of faces under different lighting conditions still constitute a manifold in the image space, so we are still not considering the entire image space and very good compression remains possible. We will not consider lighting conditions as a parameter in this thesis.

1The MPEG-4 FA standard is described in Appendix B.

According to the MPEG-4 FA standard, a face is modelled by its neutral texture (skin), some vertices, and some surfaces (triangles) built from the vertices in 3D. Different expressions are made by moving the vertices (moving vertices simulates the movement of the different muscles in the face) and linearly interpolating the texture over the stretched surfaces2.

This thesis proposes a method for extracting the displacement of the vertices and the displacement and rotation of the surfaces (i.e. both global and local displacement). However, we presume that the size and translation of the head in the picture are known. Although the method could be used to detect faces and their sizes in a picture, this has not been tried in this thesis.

Note that the method is not restricted to face images only, but applies to any problem where you want to find parameters of a manifold within a complex space.

1.2 Analysis by synthesis

Most methods used today for extracting the positions of vertices are based on an approach called analysis by synthesis. This approach works by comparing the image from which we want to extract the parameters (the original image), I_i, with a synthesized image of the same face, I_s(p), where p are the parameters. Starting with a synthesized image with the same parameters as the previous frame (some arbitrary choice of parameters must be made for the very first frame), we compare the two images with some distance measure d(x, y). This can be written as solving argmin_p d(I_i, I_s(p)), which can be done with some iterative method starting from the p value used in the last frame. The method has the drawback that it does not guarantee an optimum: since we do not know the parameter values for the first frame, we might get stuck in a local minimum.
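The loop described above can be sketched as follows. This is not the thesis's implementation (which was in C++ and Matlab); the `synthesize` function below is a hypothetical one-dimensional toy stand-in for the renderer I_s(p):

```python
import numpy as np

def synthesize(p, n=16):
    # hypothetical stand-in for the renderer I_s(p): a 1-D "image"
    # whose bright region ends at position p (toy model, illustration only)
    img = np.zeros(n)
    img[: int(round(p))] = 1.0
    return img

def analysis_by_synthesis(I_orig, p_prev, step=1.0, tol=0.25):
    """Iteratively refine p to reduce d(I_i, I_s(p)), starting from the
    previous frame's parameters. May stop in a local minimum."""
    d = lambda a, b: np.linalg.norm(a - b)
    p = p_prev
    while step >= tol:
        e_here = d(I_orig, synthesize(p))
        moved = False
        for c in (p - step, p + step):
            e = d(I_orig, synthesize(c))
            if e < e_here:
                p, e_here, moved = c, e, True
        if not moved:
            step /= 2  # no neighbour improves: refine the step size
    return p

I_i = synthesize(11.0)                         # "original image"
print(analysis_by_synthesis(I_i, p_prev=8.0))  # converges to 11.0 here
```

Starting from the previous frame's p = 8.0 the search walks downhill to the true value; with a worse starting guess the same loop can stall in a local minimum, which is exactly the drawback noted above.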

1.3 3D vs 2D

The analysis-by-synthesis method does not guarantee an optimal solution, since it quits at a local minimum. It may also not be the most cost-effective approach, since it requires synthesizing many images without any preprocessing. The method this thesis presents uses a two-dimensional approach to the problem, based on PCA, where the eigenbases are calculated in advance. The method is expected to be faster and less likely to get stuck in a local minimum.

1.4 Outline

The second chapter presents the method, free from applications. The third chapter describes the application of the method to images of faces. The fourth chapter presents the results from applying the method to face images for different parameters. The fifth chapter contains a summary, a conclusion and suggestions for future work.


Chapter 2

A method for parameter approximation

In this chapter the method that is used in the following chapters is presented in full.

2.1 The method

Let us consider the case where we have a set of vectors, training data1, with some kind of characteristic, and a vector, test data2, for which we want to determine whether it has the same characteristic as the training data. For example, the training data could be vectors with norm one. We want to compare a test vector to such a set and see if it belongs to the set or not3.

To allow for more complicated problems, consider the case where we have more than one set of training vectors. For example, we have one set of training vectors with norm one and another set with norm two. The problem would be to decide which of the sets the test vector belongs to, or even what norm the test data has. To allow for even more complicated cases, consider the case where the characteristics are not as well defined (as the norm), add uncertainty, and let the space be continuous; this is the problem for which this thesis aspires to give a method. The method is also tested on a real-world problem: determination of Facial Animation Parameters4.

Images are usually represented by matrices, where each element corresponds to one pixel. To apply the method to images we need a vector representation of the images. In this thesis this is done by stacking the matrix columns on each other.

The distance measure I used in this thesis is the norm of the difference of vectors, where the norm is the largest singular value. If a and b are vectors, the distance measure is thus ||a − b||. The reason I used the mentioned norm is that it is the default in Matlab (for vectors it coincides with the Euclidean 2-norm).

1Further on, training data and training vectors will be used synonymously.
2Further on, test data and test vectors will be used synonymously.

3Of course, this question could be answered by calculating the norm of the vector, so this case is of no real importance.

4More about the Facial Animation Parameters can be found in Appendix B.
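As a concrete sketch of the vector representation and the distance measure above (Python/NumPy standing in for the Matlab used in the thesis):

```python
import numpy as np

# an image as a matrix, one element per pixel
img = np.array([[1, 2],
                [3, 4]])

# vector representation: stack the matrix columns on each other
v = img.flatten(order="F")       # column-major stacking -> [1, 3, 2, 4]

# distance measure: norm of the difference of two image vectors
# (for a vector, the largest singular value equals the Euclidean 2-norm)
a = np.array([1.0, 3.0, 2.0, 4.0])
b = np.array([1.0, 3.0, 2.0, 0.0])
print(v, np.linalg.norm(a - b))  # [1 3 2 4] and 4.0
```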


2.2 A simple example

Consider the case where we have a black-and-white picture of six by six pixels. Let the image have the characteristic that it has a vertical border: left of the border all the pixels are black, right of the border all the pixels are white. We want to decide where the border is located for some test vector, i.e. find the value of the parameter p ∈ {0, ..., 6}, where p is the position of the border. This problem could be solved by comparing the distances of the different synthesised images, the training data (the ones corresponding to p = 0, ..., 6), to the test vector; since the images are only six by six pixels, only seven comparisons would be needed. However, we are going to scale up, so this is not a good solution.

Let us instead use fewer images5 to compare with and interpolate to get an estimate of the position of the border. First we consider the case where we only use two bases, and then we generalize to cases with more than two bases. Call the synthesised image with the border to the far left E1 (p = 0), the one with the border to the far right E2 (p = 6) and the original image I. The distance between the two bases and the original image need not be zero (which is the case for I if 1 ≤ p ≤ 5). To find an approximation of the position of the border we calculate which linear combination of the bases has the smallest distance to the original image; the parameter of the linear combination can be mapped to the position of the border. The details are described in interpolation method one below. Another way of calculating an approximation of the border position is to consider the error from comparing the images to the original image as a second-degree polynomial whose value is zero for the wanted parameter value, which can be translated to the position of the border. This is explained in more detail in interpolation method two.

2.2.1 Interpolation method one

This interpolation method minimizes the error of the linear combination of the two bases. The linear combination models a linear transition caused by changing the parameter. In reality the transition is not linear for every pixel in the image, so this is an approximation. A two-dimensional example of this can be seen in figure 2.1.

We use a norm that is induced by a scalar product. Let the error be:

e∗ = min_{0≤t≤1} || t·E1 + (1 − t)·E2 − I ||

Minimizing the norm is equivalent to minimizing its square, so we seek:

argmin_{0≤t≤1} || t·E1 + (1 − t)·E2 − I ||²
= argmin_{0≤t≤1} t²||E1||² + (1 − t)²||E2||² + ||I||² + 2t(1 − t)<E1, E2> − 2t<E1, I> − 2(1 − t)<E2, I>
= argmin_{0≤t≤1} t²(||E1||² + ||E2||² − 2<E1, E2>) − 2t(||E2||² − <E1, E2> + <E1, I> − <E2, I>) + ||E2||² + ||I||² − 2<E2, I>
= argmin_{0≤t≤1} t²||E1 − E2||² − 2t(||E2||² − <E1, E2> + <E1, I> − <E2, I>) + ||E2||² + ||I||² − 2<E2, I>

5For the more complex problems we cannot directly compare the test vectors to single training vectors. Instead we have to see how close the test vector is to a set of training vectors by expressing it in a base for the training vectors. Since this part is a build-up for the more complex problem, the images in this example will be called bases.

Let the t² coefficient be a, the t coefficient be −2b and the constant term be c.

Figure 2.1: Visualisation of interpolation method one.

The a term, being a squared norm, is always positive. We thus seek the argmin over 0 ≤ t ≤ 1 of the function

f(t) = at² − 2bt + c,

which has the derivative f′(t) = 2at − 2b and thus an extremum at t = b/a. Examining the second derivative, f′′(b/a) = 2a > 0, we find that the extremum is a minimum. We thus have:

t = argmin_{0≤t≤1} || t·E1 + (1 − t)·E2 − I || = (||E2||² − <E1, E2> + <E1, I> − <E2, I>) / ||E1 − E2||²,

clamped to the interval [0, 1] (i.e. the value is 0, the expression above, or 1).

Once the t that minimizes the linear combination has been found, we see that the larger t is, the more we use of base E1; the position of the border should thus increase with increasing (1 − t), and the opposite applies for E2. The simplest way to estimate the position of the border is to round 6·(1 − t) to the closest integer. This works well in practice too.
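Interpolation method one on the border example can be sketched as follows (a Python toy reconstruction, not the thesis's Matlab code; `border_image` is a hypothetical helper that builds the synthetic border images):

```python
import numpy as np

def border_image(p, n=6):
    """Hypothetical helper: n-by-n image, black left of border position p."""
    img = np.zeros((n, n))          # 0 = black
    img[:, p:] = 1.0                # 1 = white, right of the border
    return img.flatten(order="F")   # stack columns into a vector

def interp_method_one(E1, E2, I):
    """Closed-form minimiser t = b/a of ||t*E1 + (1-t)*E2 - I||, clamped."""
    a = np.dot(E1 - E2, E1 - E2)    # ||E1 - E2||^2
    b = (np.dot(E2, E2) - np.dot(E1, E2)
         + np.dot(E1, I) - np.dot(E2, I))
    return min(max(b / a, 0.0), 1.0)

E1 = border_image(0)                # border to the far left (all white)
E2 = border_image(6)                # border to the far right (all black)
I = border_image(3)                 # test image, border at p = 3
t = interp_method_one(E1, E2, I)
p_est = round(6 * (1 - t))          # more of E1 (large t) -> smaller p
print(t, p_est)                     # 0.5 and 3
```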

2.2.2 Interpolation method two

The second interpolation method works by seeing the error as a function of a variable t and finding the minimum of that function. Let the error function be e(t), with e(0) = e1 and e(1) = e2, where e1 is the error from expressing the test data in the first base, E1, and e2 is the error from expressing the test data in the second base, E2. See figure 2.2.

Figure 2.2: Visualisation of interpolation method two.

If we had used a base with the correct parameter value, the error, e∗, would be zero. We need to make an approximation: that e(t) can, locally, be approximated by a second-degree polynomial. Let t∗ be the value of t that minimizes e(t); we thus have:

e(t) = at² + bt + c for some a, b and c

The error at t = 0 being e1 gives

e(0) = a·0 + b·0 + c = e1 ⇒ c = e1

The error at t = 1 being e2 gives

e(1) = a + b + e1 = e2    (2.1)

The error at t = t∗ being 0 gives

e(t∗) = at∗² + bt∗ + e1 = 0    (2.2)

The derivative is

e′(t) = 2at + b    (2.3)

Since t∗ is at an extremum (a minimum) we have

e′(t∗) = 2at∗ + b = 0    (2.4)

Subtracting t∗·(2.4) from (2.2) gives:

−at∗² + e1 = 0 ⇒ at∗² = e1    (2.5)

Inserting (2.5) into t∗·(2.4) gives:

bt∗ = −2e1    (2.6)

Inserting (2.6) into t∗²·(2.1) gives:

at∗² − 2e1t∗ = (e2 − e1)t∗²    (2.7)

Inserting (2.5) into (2.7) gives:

e1 − 2e1t∗ + (e1 − e2)t∗² = 0    (2.8)

If e1 = e2 we have t∗ = 1/2. Otherwise we have

t∗² − (2e1/(e1 − e2))t∗ + e1/(e1 − e2) = 0
⇒ t∗ = e1/(e1 − e2) ± √((e1/(e1 − e2))² − e1/(e1 − e2)) = d ± √(d² − d), where d = e1/(e1 − e2)

The term inside the square root is always positive since e1 and e2 ≥ 0. Apparently t∗ has two (or one) solutions, corresponding to one value inside the interval 0 ≤ t ≤ 1 and one value outside that interval (if there is more than one solution); naturally we want a solution of the first kind.

We see that the larger t is, the more we use of base E2, so the position of the border should increase with increasing t. The simplest way to approximate the position of the border is to round 6·t to the closest integer. This works well in practice. However, since interpolation method one works much better, it is the one used in this thesis.
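A minimal sketch of interpolation method two, solving equation (2.8) and picking the root inside [0, 1] (Python stand-in for the thesis's Matlab code):

```python
import math

def interp_method_two(e1, e2):
    """Root of e1 - 2*e1*t + (e1 - e2)*t^2 = 0 lying in [0, 1]."""
    if e1 == e2:
        return 0.5
    d = e1 / (e1 - e2)
    s = math.sqrt(d * d - d)       # non-negative since e1, e2 >= 0
    for t in (d + s, d - s):
        if 0.0 <= t <= 1.0:
            return t
    # fall back to the closest endpoint if rounding pushed t outside [0, 1]
    return min(max(d + s, 0.0), 1.0)

# errors sampled from e(t) = (t - 0.3)^2, whose zero lies at t* = 0.3
print(interp_method_two(e1=0.09, e2=0.49))   # ~0.3
```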

2.3 Another simple example

Consider the same case as in the previous example, but this time the border is not well defined; see figure 2.3. The same method as in the previous example results in an approximate border that minimizes the distance between the test data and an image with the best approximated border. More about the results in chapter 4.

Figure 2.3: Example of an image without a well-defined border.

2.4 The complete model

Consider the general problem of deciding the parameter value of a vector when we have different sets of vectors (training data), where each set contains vectors with the same parameter value. E.g. if we have an image of a face, the parameter could be the amount of happiness that the person is displaying; the different sets would be pictures of people with different amounts of happiness (different between the sets but the same within a set).

To get an approximation of the parameter value we proceed as in the simple examples. First, we collect sets of vectors with different parameter values. Second, we compare the test data to the different sets; the two best matching sets, i.e. those with the smallest distance to the test data, are the ones closest to the correct parameter value. Third, we interpolate between those two bases to get a more precise value for the parameter. See figures 2.4 and 2.5.

Figure 2.4: The interpolation.

Figure 2.5: The interpolation.

If the test data is not well defined, e.g. if we have randomness as in human faces, we need a statistical model of the sets. The statistical model used in this thesis is PCA6.

The problem

Let the test vector be V. We can see V as a function of some parameters pk, where k enumerates the different parameters. We want to calculate the value of pk for some k.

To estimate the parameter pk we have access to sets of training vectors with different values for the parameter pk. Let S_{pk,i} be these sets, where i indexes the sets of training vectors with different values for the parameter pk.

Since we want sets that describe data with the parameter pk well, we need the number of parameters that are to be approximated simultaneously to be small, or that V can be projected so that its dependence on pl, where l ≠ k, is removed. Let us therefore assume that V depends on only one parameter p7.

Finding the two best sets

Performing PCA on all the sets, we get orthogonal base vectors for the different sets. We can therefore project the test vector onto the base vectors. If we reconstruct the image for every set by adding the projections, we can compare the result to the original vector to determine which sets of training data are closest to the test vector and how close they are.

6For more information on PCA, see appendix A.
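The set-selection step can be sketched as follows (a Python/NumPy illustration on toy low-dimensional sets; the thesis worked with image vectors in Matlab). The set whose PCA subspace reconstructs the test vector with the smallest residual is the best match:

```python
import numpy as np

def pca_basis(X, k):
    """Mean and top-k principal directions of the training vectors (rows of X)."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruction_error(v, mean, basis):
    """Norm of the residual after projecting v onto a set's PCA subspace."""
    coords = basis @ (v - mean)
    recon = mean + basis.T @ coords
    return np.linalg.norm(v - recon)

rng = np.random.default_rng(0)
# two training sets living in different (here: disjoint) subspaces of R^5
X1 = np.zeros((20, 5)); X1[:, :2] = rng.normal(size=(20, 2))
X2 = np.zeros((20, 5)); X2[:, 3:] = rng.normal(size=(20, 2))
m1, B1 = pca_basis(X1, 2)
m2, B2 = pca_basis(X2, 2)

v = np.array([1.0, 2.0, 0.0, 0.0, 0.0])   # test vector from set 1's subspace
e1 = reconstruction_error(v, m1, B1)
e2 = reconstruction_error(v, m2, B2)
print(e1 < e2)   # True: set 1 matches the test vector best
```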


Calculating the approximated value of the parameter

Having found the two best bases, we can calculate an approximate value using interpolation method one or two.

The two parameter problem

If we solved the two-parameter problem in the same way as the one-parameter problem, we would need many elements in every set: to calculate one of the parameters we would want at least one element for every value of the other parameter in the training sets, or we would have to do many searches over the different values of the other parameter. Instead we search in different directions in the parameter plane, i.e. we do the same thing as in the one-parameter problem for many different directions. When the best approximations of the vector have been calculated for the different directions, we take the two best approximations and interpolate between them to calculate a better approximation of the parameter values, see figure 2.6.

Figure 2.6: Visualization of the interpolation in the problem with two parameters.

The problem with three parameters

As in the two-parameter problem, doing the same as in the one-parameter problem would result in either many elements in the sets or many searches. So this time we look in different directions in the parameter space, take the three closest approximations and interpolate the parameter values. See figure 2.7.

This time we want to solve:

argmin_{t1,t2} || t1·E1 + t2·E2 + (1 − t1 − t2)·E3 − I ||

with the conditions:

0 ≤ t1,  0 ≤ t2,  t1 + t2 ≤ 1


Figure 2.7: Visualization of the interpolation in the problem with three parameters.

The problem can be rewritten as:

argmin_{t1,t2} || t1·E1 + t2·E2 + (1 − t1 − t2)·E3 − I ||²

Calculating the square gives:

t1²||E1||² + t2²||E2||² + (1 − t1 − t2)²||E3||² + 2t1t2<E1, E2> + 2t1(1 − t1 − t2)<E1, E3> + 2t2(1 − t1 − t2)<E2, E3> − 2t1<E1, I> − 2t2<E2, I> − 2(1 − t1 − t2)<E3, I> + ||I||²

Collecting the different terms we get:

t1²(||E1||² + ||E3||² − 2<E1, E3>) + t2²(||E2||² + ||E3||² − 2<E2, E3>) + 2t1t2(||E3||² + <E1, E2> − <E1, E3> − <E2, E3>) + 2t1(−||E3||² + <E1, E3> − <E1, I> + <E3, I>) + 2t2(−||E3||² + <E2, E3> − <E2, I> + <E3, I>) + ||E3||² + ||I||² − 2<E3, I>

Let the t1² term be a1, the t2² term be a2, the t1t2 term be a3, the t1 term be a4, the t2 term be a5 and the constant term be a6. We then have the problem:

argmin_{t1,t2} (t1²a1 + t2²a2 + 2t1t2a3 + 2t1a4 + 2t2a5 + a6)

To solve this, we consider the function

f(t1, t2) = t1²a1 + t2²a2 + 2t1t2a3 + 2t1a4 + 2t2a5 + a6,

which has the partial derivatives:

∂f/∂t1 = 2a1t1 + 2a3t2 + 2a4

and

∂f/∂t2 = 2a3t1 + 2a2t2 + 2a5


We then have, for an extremum:

2a1t1 + 2a3t2 + 2a4 = 0    (2.9)
2a3t1 + 2a2t2 + 2a5 = 0    (2.10)

From (2.9) we get:

t1 = −(a3t2 + a4)/a1    (2.11)

From (2.10) and (2.11) we get:

−a3²t2/a1 − a3a4/a1 + a2t2 + a5 = 0 ⇒ t2(a3²/a1 − a2) = a5 − a3a4/a1 ⇒ t2 = (a1a5 − a3a4)/(a3² − a1a2)    (2.12)

From (2.11) and (2.12) we get:

t1 = −(a3·(a1a5 − a3a4)/(a3² − a1a2) + a4)/a1 ⇒ t1 = −(a1a3a5 − a3²a4 + a3²a4 − a1a2a4)/(a1(a3² − a1a2)) ⇒ t1 = (a2a4 − a3a5)/(a3² − a1a2)    (2.13)

The solution is thus either on the boundary (t1 + t2 = 1, t1 = 0 or t2 = 0) or at the interior point

(t1, t2) = ((a2a4 − a3a5)/(a3² − a1a2), (a1a5 − a3a4)/(a3² − a1a2))

The first three cases, t1 + t2 = 1, t1 = 0 and t2 = 0, should also be examined. However, doing so is easy and is therefore not shown here. When t1 has a high value, much of E1 is used, and the parameter value is thus close to the parameter value of E1. When t2 has a high value, much of E2 is used, and the parameter value is thus close to the parameter value of E2. When both t1 and t2 are low, much of E3 is used, and the parameter value is thus close to the parameter value of E3. Let the parameter values for E1, E2 and E3 be p1, p2 and p3 respectively, and let the approximated parameter value be p. One example of how the approximated value could be calculated is:

p = t1·p1 + t2·p2 + (1 − (t1 + t2))·p3

As an alternative to considering E1, E2 and E3 at the same time, one could use interpolation method one on E1 and E2 to get an approximation E. Having calculated E, the final step would be to use interpolation method one on E and E3 and translate the values of the interpolation to a parameter value.
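A sketch of the three-parameter interpolation: computing the interior stationary point (t1, t2) and mapping it to a parameter value (a Python illustration with toy vectors; the coefficients are written as inner products of differences, which is algebraically the same as the a1, ..., a5 above):

```python
import numpy as np

def interp_three(E1, E2, E3, I):
    """Stationary point (t1, t2) of ||t1*E1 + t2*E2 + (1 - t1 - t2)*E3 - I||^2."""
    a1 = np.dot(E1 - E3, E1 - E3)      # ||E1||^2 + ||E3||^2 - 2<E1,E3>
    a2 = np.dot(E2 - E3, E2 - E3)
    a3 = np.dot(E1 - E3, E2 - E3)
    a4 = np.dot(E1 - E3, E3 - I)       # -||E3||^2 + <E1,E3> - <E1,I> + <E3,I>
    a5 = np.dot(E2 - E3, E3 - I)
    den = a3 * a3 - a1 * a2
    t1 = (a2 * a4 - a3 * a5) / den
    t2 = (a1 * a5 - a3 * a4) / den
    return t1, t2

E1, E2, E3 = np.eye(3)                 # three toy bases in R^3
I = 0.2 * E1 + 0.3 * E2 + 0.5 * E3     # test vector inside their simplex
t1, t2 = interp_three(E1, E2, E3, I)
p1, p2, p3 = 0.0, 10.0, 20.0           # parameter values of the three bases
p = t1 * p1 + t2 * p2 + (1 - t1 - t2) * p3
print(t1, t2, p)                       # 0.2, 0.3 and p = 13.0
```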


Chapter 3

Application to face images

In this chapter the method is applied to FAP determination.

3.1 Prerequisite

In this chapter we apply the method to images of faces. First, however, some things need to be defined. The MPEG-4 standard, which we follow in this thesis, includes a standard for the parameterization of faces. The model is constituted by vertices, displacements of vertices and a texture. The parameters are the displacements (global and local) of the vertices; some vertices only have global displacement. The displacements are measured in distances relative to features of the face and are called facial animation parameters, or FAPs. The displacement of a vertex is split up into three different parameters: one for the x-direction, one for the y-direction and one for the z-direction. Rotation of the head is defined in Euler angles.1

3.2 Estimating local movement of the vertices

Since we are looking at displacements of specific points on a face, we can crop the image so that it only contains the area in which the vertex moves; thus for most vertices we have separated the problem into x, y and z motion of the vertices. Since we are only looking at the 2D projection of the face, we only have two directions in which the vertices can move: horizontally or vertically in the image. Thus we have the two-parameter problem described in the previous chapter. Using FAPs directly creates a problem, since different faces have different positions for the corresponding vertex. Using a set based on relative motion of the vertices would result in a set where the vertex is at different positions within the same set. This could trick the method into choosing the wrong set, because the vertex was at the right position even though the FAP value was not the same. Consider e.g. the position of the left corner of the mouth: since people have differently sized mouths, a small mouth with a large displacement of that vertex may look the same as a large mouth with no displacement. Thus, instead of using FAPs as parameters we use the positions of the vertices. Once we have estimated the position, we can translate it to FAPs. See figures 3.1 and 3.2 for an example of a moving vertex at the corner of the mouth.

1More about the MPEG-4 FA standard and Euler angles in appendix B.

Figure 3.1: Relaxed mouth. Figure 3.2: Small displacement of left corner lip.

3.3 Estimating the rotation

Since Euler angles are not very convenient to work with, we will estimate the rotation and the axis of rotation instead; these are easily translated to Euler angles via quaternions2. The axis and the amplitude of the rotation require three parameters, and we therefore have the three-parameter problem described in the previous chapter, where the rotations around the x, y and z directions are the parameters.
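A sketch of the axis-angle to Euler conversion via a quaternion (Python; the ZYX yaw-pitch-roll convention used here is one common choice, and may differ from the convention assumed by the MPEG-4 FA standard):

```python
import math

def axis_angle_to_euler(axis, angle):
    """Axis-angle -> quaternion -> ZYX Euler angles (yaw, pitch, roll)."""
    ux, uy, uz = axis
    n = math.sqrt(ux * ux + uy * uy + uz * uz)
    ux, uy, uz = ux / n, uy / n, uz / n       # normalise the axis
    # quaternion for a rotation of `angle` radians about the unit axis
    w = math.cos(angle / 2)
    s = math.sin(angle / 2)
    x, y, z = s * ux, s * uy, s * uz
    # rotation-matrix entries needed for the ZYX extraction
    r00 = 1 - 2 * (y * y + z * z)
    r10 = 2 * (x * y + w * z)
    r20 = 2 * (x * z - w * y)
    r21 = 2 * (y * z + w * x)
    r22 = 1 - 2 * (x * x + y * y)
    yaw = math.atan2(r10, r00)
    pitch = math.asin(max(-1.0, min(1.0, -r20)))
    roll = math.atan2(r21, r22)
    return yaw, pitch, roll

# a 30-degree rotation about the z-axis is pure yaw
print(axis_angle_to_euler((0, 0, 1), math.radians(30)))
```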

3.4 Generating training data

To use the method, training data must be collected. In the case of faces, the training data is pictures of different people's faces. In this thesis the faces of 26 different people were used to create the bases. Generating different parameter values was done with an application using Candide3, where the AUs were redefined to behave like FAPs.

2See appendix C.


Chapter 4

Experiments

In this chapter the results from the different experiments are presented.

4.1 Introduction

Four experiments have been performed. The first experiment concerned the simple example problems introduced in the method chapter. The second experiment aimed at deciding the position of a vertex for FAP determination. The third experiment estimated the amplitude of rotation around a known axis. The fourth experiment estimated the x, y and z rotation of a head. All programs used were written in C++ or Matlab1. All calculations were made in Matlab on a computer with a 64-bit AMD 3200+ processor.

4.2 The simple examples

The simple examples were black-and-white pictures. The first simple example had a vertical border: left of the border all pixels were black, right of the border all pixels were white. I used the method to find the position of the border. The second simple example was also a black-and-white picture with a border, where the pixels to the left were black and the pixels to the right were white; this time, Gaussian noise was added to the position of the border for every row in the picture.

Example with well defined border

An example of this case, with 6x6 pixels, can be seen in figure 4.1.

Example without well defined border

An example of this case, with 100x100 pixels, can be seen in figure 4.2.

1More about the programs can be found in appendix C.


Figure 4.1: Example of an image with a well-defined border.

Figure 4.2: Example of an image without a well-defined border, and its approximated image with border.

Results

Approximating the border for images of size 6 by 6, 25 by 25, 50 by 50 and 100 by 100 pixels always gave the correct position, even when using only two bases. The bases used were the ones corresponding to an image with the border to the far left (a white image) and an image with the border to the far right (a black image). I also examined how scaling up the problem affects the computation time. For images of size 6 by 6, 25 by 25, 50 by 50 and 100 by 100 pixels, the time spent calculating, using the maximum number of samples (i.e. 6, 25, 50 and 100 respectively), was around 0.015, 0.110, 0.578 and 4.703 seconds respectively; see figure 4.3.

4.3 The vertex position

Using the method for detecting the positions of different vertices yielded such poor results that there is no point in showing any graphs. A reason for the poor results could be that this problem is very sensitive to the number of images used, and that 26 images are too few. I examined how the result changed if the face used as test data was included in the set of training data (this was examined for the left corner-lip vertex). Since this was done extremely late in this thesis, I did not have time to examine it in depth. However, the result was that the biggest error was 0.0106 from the correct value, where 0 is no displacement and 0.2 is full displacement. In this case I used a base for every 0.5 value of the parameter, and every 0.025 value of the parameter was approximated.


Figure 4.3: Time spent using the maximal number of sample bases.

4.4 Rotation around a known axis

I applied the method to the problem of determining the amount of rotation around the axes passing through (0,0,0) and (1,0,0), (0,1,0), (1,6,0) and (1,3,0). The error in angle approximation for two different faces can be seen in figures 4.4 through 4.7 (the same two faces were used for the different directions of rotation). In the plots, the y-axis is the error for the rotation with the amplitude given by the x-axis; both axes are measured in degrees. Rotation around (1,0,0) required, for acceptable results, that all the faces were centered in the image and that the nose and the eyes were placed at the same height in the unrotated picture (the same resizing values were used for all pictures of the same face). Setting the average gray-scale value of the face to 0.5, where 0 is black and 1 is white, was also needed. Rotation around (0,1,0) required that the face was bounded by the picture. Rotation around (1,3,0) and (1,6,0) required the same normalization as rotation around (0,1,0) for the best angle approximation. All these transformations required resizing the images, which was done with bilinear interpolation2. I used a base for every 15 degrees of rotation.
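The bilinear interpolation used for the resizing (appendix C.1) can be sketched as follows (a straightforward Python version, assuming a grayscale image and corner-aligned sampling):

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Resize a grayscale image with bilinear interpolation."""
    h, w = img.shape
    out = np.zeros((new_h, new_w))
    for i in range(new_h):
        for j in range(new_w):
            # map the output pixel back into source coordinates
            y = i * (h - 1) / max(new_h - 1, 1)
            x = j * (w - 1) / max(new_w - 1, 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            # weighted average of the four surrounding source pixels
            out[i, j] = ((1 - dy) * (1 - dx) * img[y0, x0]
                         + (1 - dy) * dx * img[y0, x1]
                         + dy * (1 - dx) * img[y1, x0]
                         + dy * dx * img[y1, x1])
    return out

small = np.array([[0.0, 1.0],
                  [2.0, 3.0]])
big = bilinear_resize(small, 3, 3)
print(big[1, 1])   # 1.5, the average of the four corners
```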

4.5 Rotation around an "unknown axis"

For this problem I used only three directions for identification: (0,1,0), (1,6,0) and (1,3,0), where the direction of the correct rotation was (1,12,0). When performing the experiment described in the previous section, I found that in general different normalizations were needed for different directions of rotation (however, (0,1,0), (1,3,0) and (1,6,0) had the same normalization). For each axis, its best normalization was used. However, when I interpolated between two axes I used a normalization that placed the head in the middle of the image and resized it so that the nose and the eyes were at the same height. The reason I used (1,3,0) was to see if the wrong base would ever be identified as one of the best bases.

Figure 4.4: Rotation around (0,1,0). Figure 4.5: Rotation around (1,0,0).

Figure 4.6: Rotation around (1,3,0). Figure 4.7: Rotation around (1,6,0).

The results were that sometimes the wrong base was identified. Even if (1,3,0) was identified as one of the best bases, I only used (0,1,0) and (1,6,0) for the interpolation of the correct value. I used one base per 15 degrees of rotation. The result was that the biggest amplitude difference was about eleven degrees. The largest direction error occurred when the method estimated the direction with (1,6,0) or (0,1,0). See figure 4.8 for a more detailed view. The top part of the figure shows the found direction; the lower part shows the error in amplitude. Notice that for the lower graph the x-value corresponds to the order in which the results were calculated: the first 13 correspond to the first face, and the number yields the correct angle when multiplied by 7.5 degrees; for the last 13 the same applies if you subtract 13 from the number. In the top part of the figure the correct direction is drawn as a dashed line; the remaining lines correspond to the found directions. Computation time was unacceptably long.


Chapter 5

Summary and Conclusion

In this chapter a summary and suggestions for future work are presented.

5.1

Summary and conclusion

The method consists of fitting a vector to different statistical models, each built from a set of vectors sharing the same parameter value. If the number of samples is high enough, a linear interpolation between the two closest-fitting bases yields an estimate of the vector's parameter value. Testing this on images of faces, trying to identify some FAP values, I got varying results, as seen in the experiments chapter. The result was good for simple images, even when scaling up; see the results from the simple example in the experiments chapter. However, when the complexity increased, the results got worse, as seen in the results from identifying the different rotations. The worst result was obtained when the difference between the sets was small, as in the case of identifying vertex positions. Therefore, I think that either the method should be used on simpler examples, or a stronger statistical model than PCA should be used, e.g. kernel PCA. I only used 26 different images for the statistical models of the bases with different parameter values. The result might improve if more images were used. The fact that the result improved considerably when the face whose displacement we are estimating was included in the bases seems to corroborate this.
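The fitting-and-interpolation scheme summarized above could be sketched like this; measuring goodness of fit by the projection residual and the particular weighting rule are my assumptions, not necessarily what the experiments used, and `estimate_parameter` is a hypothetical name:

```python
import numpy as np

def estimate_parameter(x, bases, values):
    """Fit x to each orthonormal basis (columns of bases[i], trained on
    samples with parameter value values[i]); linearly interpolate the
    values of the two best-fitting bases, weighting each basis by the
    other one's residual so the better fit gets the larger weight."""
    errs = [np.linalg.norm(x - B @ (B.T @ x)) for B in bases]
    i, j = np.argsort(errs)[:2]
    wi, wj = errs[j], errs[i]            # smaller error -> larger weight
    return (wi * values[i] + wj * values[j]) / (wi + wj)

# Toy check: x lies exactly between two one-dimensional bases,
# so the estimate lands midway between their parameter values.
bases = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
est = estimate_parameter(np.array([1.0, 1.0]), bases, [0.0, 10.0])
```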

In the case where we use a model of a face to compress the information in video sequences, we already have the model of the face whose parameter values we want to estimate. If we use this model to generate bases, the result is quite good, as in the vertex position example. Using bases containing the face whose values we want to estimate would probably give good results for facial animation too.

5.2

Future work

Future work could involve examining how including, in the bases, the face whose parameter values we want to estimate would affect the result, i.e. calculation time and estimation error. If the method is too time consuming, using the result from the previous frame would limit the possible parameter values greatly; thus only the first image would require a long processing time. Other future work could examine how other statistical models would affect the result of the method. Applying the method to objects other than faces is also an interesting continuation of this work.


Bibliography

[1] Igor S. Pandzic et al. (2002), MPEG-4 Facial Animation: The Standard, Implementations, and Applications, John Wiley & Sons Ltd, Chichester, UK.

[2] Michael T. Heath (2002), Scientific Computing: An Introductory Survey, second edition, McGraw-Hill, NY.

[3] Jörgen Ahlberg (2002), Model-based Coding—Extraction, Coding and Evaluation of Face Model Parameters, Dissertation No. 761, Linköping University.

[4] Jacob Ström (2002), Model-Based Head Tracking and Coding, Dissertation No. 733, Linköping University.

[5] Magnus Borga et al., A Unified Approach to PCA, PLS, MLR and CCA, Technical report LiTH-ISY-R-1992, Linköping University.

[6] Jörgen Ahlberg (2001), CANDIDE-3—An Updated Parameterized Face, Technical report LiTH-ISY-R-2326, Linköping University.

[7] Dante Treglia (2002), Game Programming Gems 3, Charles River Media.

[8] Tom Hammersley (accessed 2006-03-01), Bilinear Interpolation of Texture Maps, http://www.gamedev.net/reference/articles/article810.asp

[9] PGM (accessed 2006-03-01), http://netpbm.sourceforge.net/doc/pgm.html

[10] Face and 2-D Mesh Animation in MPEG-4 (accessed 2006-03-01), http://www.chiariglione.org/mpeg/tutorials/papers/icj-mpeg4-si/08-SNHC visual paper/8-SNHC visual paper.htm


Appendix A

Principal Component Analysis, PCA

In writing this part of the appendix I have used ideas from [3], [4] and [5]. If you have an N-dimensional stochastic vector of some distribution and want to approximate it with a linear combination of M < N vectors, you can use PCA. PCA describes the vector using only M base vectors. These base vectors are calculated so that the projection of the vector onto the base has the highest variance; in other words, PCA calculates a base in which the vector is most likely to have a good description. In our world most people wake up at different times, their cars usually have different colors, and almost everyone has two legs. If we are to describe a person in this world, we would tell the time the person wakes up, then the color of his car, and finally how many legs the person has, thus maximizing the information (naturally, if the individual is missing one leg, this information would be very descriptive; however, since such people are few, omitting this fact makes the listener guess wrong only about a few people). If we could tell only two things, we would tell the wake-up time and the color of the car; if we could tell only one thing, we would tell the wake-up time. The listener, knowing how this world works, can thus make a good guess about the individual. Performing PCA on some sample vectors and telling PCA how many characteristics it may use, i.e. the number of eigenbases, yields the base of characteristics with the highest variance, i.e. the most information.

How it works

Let us consider a vector X of dimension N with zero mean (we can always obtain zero mean by instead considering the vector X − E(X)), belonging to some class, e.g. the class of faces. The covariance matrix of X is denoted Cxx. Note that Cxx is non-defective, i.e. its eigenvectors are linearly independent, since it is real and symmetric (table 4.1, page 173 in Scientific Computing [2]). Let the eigenvalues and normalized eigenvectors of Cxx be λ1, λ2, ..., λN and φ1, φ2, ..., φN respectively, where the λi are sorted in descending order and the eigenvalue and eigenvector with the same index form an eigenpair.

First we want to find a normalized direction ϕ1 in which ϕ1'X has the highest variance; we thus want to solve:

argmax_ϕ1 E((ϕ1'X)(ϕ1'X))

Since ϕ1'X is a scalar we have ϕ1'X = X'ϕ1, so the problem can be rewritten as:

argmax_ϕ1 E(ϕ1'XX'ϕ1) = argmax_ϕ1 ϕ1'E(XX')ϕ1 = argmax_ϕ1 ϕ1'Cxxϕ1

Since Cxx is non-defective we can write the vector ϕ1 as a linear combination of the eigenvectors:

ϕ1 = a1φ1 + a2φ2 + ... + aNφN, where a1² + a2² + ... + aN² = 1

Carrying out the multiplication ϕ1'Cxxϕ1 we get:

ϕ1'Cxxϕ1 = a1²λ1 + a2²λ2 + ... + aN²λN

Since λ1 is the largest of the λs we get:

argmax_ϕ1 E(ϕ1'XX'ϕ1) = argmax_ϕ1 ϕ1'Cxxϕ1 = φ1

Since we want the next normalized base vector ϕ2 to give as much new information as possible, we impose the condition:

E((ϕ1'X)(ϕ2'X)) = 0

Let us examine this condition further:

E((ϕ1'X)(ϕ2'X)) = E(ϕ1'XX'ϕ2) = ϕ1'Cxxϕ2 = 0

We claim that this means that ϕ2 has no component along φ1 (provided λ1 ≠ 0). Suppose the opposite; then:

ϕ2 = b1φ1 + b2φ2 + ... + bNφN, where b1² + b2² + ... + bN² = 1 and b1 ≠ 0

Carrying out the multiplication, using that ϕ1 = φ1 is an eigenvector and that the φi are orthonormal:

ϕ1'Cxxϕ2 = b1λ1 ≠ 0

So the claim is valid. We thus have:

ϕ2 = c2φ2 + ... + cNφN, where c2² + c3² + ... + cN² = 1

Carrying out the multiplication ϕ2'Cxxϕ2 we get:

ϕ2'Cxxϕ2 = c2²λ2 + ... + cN²λN

Since λ2 is the largest of the remaining eigenvalues we get:

argmax_ϕ2 ϕ2'Cxxϕ2 = φ2

Continuing like this, adding the conditions ϕi'Cxxϕj = 0 for i < j, we see that the base we are seeking consists of the eigenvectors of Cxx.
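As a quick numerical sanity check of this result (my own illustration, not part of the thesis), the eigenvector belonging to the largest eigenvalue of a sample covariance should beat any random direction in projected variance:

```python
import numpy as np

rng = np.random.default_rng(0)
# Zero-mean anisotropic samples: three times more spread along x than y.
X = rng.standard_normal((5000, 2)) * np.array([3.0, 1.0])
X -= X.mean(axis=0)

C = X.T @ X / (len(X) - 1)            # sample covariance
eigvals, eigvecs = np.linalg.eigh(C)  # eigh returns ascending eigenvalues
phi1 = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue

var_phi1 = np.var(X @ phi1)           # variance of the projection onto phi1
# No random unit direction should give a larger projected variance.
for _ in range(100):
    d = rng.standard_normal(2)
    d /= np.linalg.norm(d)
    assert np.var(X @ d) <= var_phi1 + 1e-9
```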


Some practical complications

Since we generally do not know the distribution of our vector X, i.e. E(X) and Cxx are unknown, we have to approximate them. To do this we use training vectors of the class, x1, x2, ..., xK. Let a tilde, ˜, mark that we are calculating approximations. We then have:

Ẽ(X) = (1/K) (x1 + x2 + ... + xK) and C̃xx = (1/(K − 1)) (x1x1' + x2x2' + ... + xKxK')

where C̃xx can be written in matrix notation as:

C̃xx = (1/(K − 1)) AA', where A = [x1 x2 ... xK]

Solving the eigenvector problem for AA'

We want to find the eigenvectors of AA', but since the vectors correspond to pictures, their length N is quite large, so AA' has very many elements, N × N. However, since we only have a few training pictures, K, the matrix A'A is relatively small, K × K. Luckily the eigenvectors of the latter matrix give us what we need, as is shown next.

Let us consider the eigenvalues and eigenvectors of A'A: let λ be an eigenvalue and ξ an eigenvector; we then have:

A'Aξ = λξ ⇒ AA'(Aξ) = λ(Aξ)

Thus λ is an eigenvalue of both A'A and AA', and Aξ is an eigenvector of AA'. Next we show that the important eigenvectors, i.e. those with non-zero eigenvalues, are obtained from the eigenvectors of A'A. Consider the SVD of A:

A = UΣV' ⇒ A' = VΣ'U' ⇒ A'A = VΣ'ΣV' and AA' = UΣΣ'U'

Since Σ has non-zero elements only on the diagonal, Σ'Σ and ΣΣ' also have non-zero elements only along their diagonals, and both have the same number of non-zero diagonal elements with the same values (the number of zeros on the diagonal will differ unless A is square). The diagonal elements are the eigenvalues, and thus the eigenvectors corresponding to non-zero eigenvalues are obtained for both A'A and AA'. That the diagonal elements of ΣΣ' and Σ'Σ are eigenvalues can be seen by multiplying the SVD of A'A with a column of V and the SVD of AA' with a column of U.
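The computational trick above can be checked numerically; the sizes below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 1000, 5                       # pixels per image, training images
A = rng.standard_normal((N, K))      # columns are centered training vectors

# Eigen-decompose the small K x K matrix A'A instead of the N x N AA'.
lams, xis = np.linalg.eigh(A.T @ A)

# Each small eigenvector xi maps to the eigenvector A @ xi of AA'.
V = A @ xis
V /= np.linalg.norm(V, axis=0)

# Verify the eigenpair property (AA') v = lambda v for every pair.
residual = max(np.linalg.norm(A @ (A.T @ v) - lam * v)
               for lam, v in zip(lams, V.T))
```

Only a 5×5 eigenproblem is solved, yet all eigenvectors of the 1000×1000 matrix with non-zero eigenvalues are recovered.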


Appendix B

MPEG-4 FA standard

In 1999 the Moving Picture Experts Group's Facial Animation standard, MPEG-4 FA, became an international standard. The standard defines a neutral state, FPs, FAPUs and FAPs¹.

Neutral State

A neutral state is defined in [1] as:

• Gaze is in the direction of the z-axis.

• All face muscles are relaxed.

• Eyelids are tangent to the iris.

• The pupil is one third the diameter of the iris.

• Lips are in contact; the line of the lips is horizontal and at the same height at the lip corners.

• The mouth is closed and the upper teeth touch the lower ones.

• The tongue is flat and horizontal, with the tip of the tongue touching the boundary between upper and lower teeth.

Feature Point, FP

The MPEG-4 FA standard defines 84 FPs. FPs are spatial points, vertices, that correspond to different points on a face. The FPs on a model should be located according to figure B.1; the figure is copied from [10], which in turn copied it from ISO/IEC IS 14496-2 Visual, 1999.

¹The standard defines more than this, such as FATs (FATs are used for obtaining a higher level of detail; since FAPs only correspond to motion of feature points, creases and other effects will not show in the face unless you use FATs), etc.


Facial Animation Parameter Unit, FAPU

Human faces come in different sizes and shapes. Since the standard should be able to describe expressions uniformly for all faces, i.e. the FAP values for a happy face should be the same regardless of which face we are using, the MPEG-4 FA standard defines FAPUs. The FAPUs are defined as fractions of distances between key facial features, such as the mouth width. An example of the neutral state and the FAPUs can be found in figure B.2; the figure is copied from [10], which in turn copied it from ISO/IEC IS 14496-2 Visual, 1999.

Facial Animation Parameter, FAP

The FAPs are based on the study of minimal perceptible actions and are closely related to muscle actions. For each FAP the standard defines which FP it moves, the direction in which the FP moves, and how far the FP moves for every increment of the FAP value. The incremental change is relative to a FAPU. More information about the MPEG-4 FA standard can be found in [1].
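As a tiny illustration of the displacement rule, with made-up numbers (the FAPU and FAP values below are hypothetical, not taken from the standard):

```python
def fp_displacement(fap_value, fapu):
    """Displacement of a feature point along its FAP's direction:
    each FAP increment moves the point one FAPU-scaled step."""
    return fap_value * fapu

# Two faces with different mouth widths get different absolute
# displacements from the same FAP value, which is the point of FAPUs.
small_face = fp_displacement(100, 0.04)  # FAPU from a narrow mouth (made up)
large_face = fp_displacement(100, 0.06)  # FAPU from a wide mouth (made up)
```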


Appendix C

Algorithms and program issues

C.1

Bilinear interpolation

Material for this section is taken from [8]. Changing the size of an image is done with some interpolation method. The most common interpolation methods are nearest neighbour, bilinear and bicubic; nearest neighbour is the simplest and bicubic the most complex. Since I have used bilinear interpolation on my images, I will describe it. I will assume that the pictures are grayscale; with the RGB color scheme we would do the same, interpolating one color component at a time.

Let x and y be the relative position of a pixel, that is, the x and y coordinates of the pixel in the new image divided by its width and height respectively. Let the width of the old image be W and its height H. Then the pixel value for the new image is found at position (x·W, y·H) of the old image; call these coordinates X and Y respectively. Let col(x, y) be the pixel value at position (x, y) for integer x and y, and let colB(x, y) be the bilinearly interpolated pixel value at (x, y), where x and y need not be integers.

To make life easier we define some parameters. Let:

x0 = floor(X), y0 = floor(Y), x1 = x0 + 1, y1 = y0 + 1,
1 − dx = X − x0 and 1 − dy = Y − y0

We then have:

1 − dx = X − x0
⇒ X = 1 − dx + x0
⇒ X = 1 − dx + x0 + dx·x0 − dx·x0
⇒ X = dx·x0 + (1 − dx)·(x0 + 1)
⇒ X = dx·x0 + (1 − dx)·x1

Doing the same for Y gives:

Y = dy·y0 + (1 − dy)·y1

Since x0, y0, x1 and y1 are integers, we know the grayscale value for any pixel indexed by them. Since (X, Y) is boxed in by (x0, y0), (x1, y0), (x0, y1) and (x1, y1), we get a good approximation from a linear interpolation of these four points:

colB(x, y) = dy·(dx·col(x0, y0) + (1 − dx)·col(x1, y0)) + (1 − dy)·(dx·col(x0, y1) + (1 − dx)·col(x1, y1))
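The interpolation above can be sketched as a resize routine; the endpoint-aligned coordinate mapping is my own choice of convention, while the weights follow the derivation's dx = 1 − (X − x0), so dx multiplies the x0 sample:

```python
import numpy as np

def resize_bilinear(img, new_h, new_w):
    """Resize a 2-D grayscale array with bilinear interpolation,
    using X = dx*x0 + (1-dx)*x1 with dx = 1 - (X - x0)."""
    H, W = img.shape
    out = np.empty((new_h, new_w))
    for yi in range(new_h):
        for xi in range(new_w):
            # Map the new pixel into old-image coordinates (endpoints align).
            X = xi * (W - 1) / max(new_w - 1, 1)
            Y = yi * (H - 1) / max(new_h - 1, 1)
            x0, y0 = int(X), int(Y)
            x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
            dx, dy = 1.0 - (X - x0), 1.0 - (Y - y0)
            out[yi, xi] = (dy * (dx * img[y0, x0] + (1 - dx) * img[y0, x1])
                           + (1 - dy) * (dx * img[y1, x0] + (1 - dx) * img[y1, x1]))
    return out

# A 2x2 horizontal ramp widened to three columns: the middle column
# is interpolated halfway between the left and right values.
ramp = np.array([[0.0, 1.0], [0.0, 1.0]])
wide = resize_bilinear(ramp, 2, 3)
```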

C.2

Calculating Euler angles from an axis of rotation

In writing this part of the appendix I have used ideas and results from [7].

Quaternions are an extension of the complex numbers. Let q be a quaternion; then q can be written as:

q = qx·i + qy·j + qz·k + qw

or, in vector notation:

q = (qx, qy, qz, qw)'

Addition and multiplication of quaternions are performed in the same manner as for complex numbers, where the following rules apply:

i·i = −1, j·j = −1, k·k = −1

and:

i·j = k, j·i = −k
j·k = i, k·j = −i
k·i = j, i·k = −j

The norm of q is:

|q| = sqrt(qx² + qy² + qz² + qw²)

The conjugate of q is:

q* = (−qx, −qy, −qz, qw)'

The inverse of q is:

q⁻¹ = q*/|q|²

If (X, Y, Z) is a point in space and we want to rotate it θ radians around the axis passing through (0, 0, 0) and (t, u, v), we can do this using quaternions. Let

V = (X, Y, Z, 0)' and q = (t·sin(θ/2), u·sin(θ/2), v·sin(θ/2), cos(θ/2))'

where the axis (t, u, v) is normalized. From the equations

v = qVq⁻¹, or v = qVq* if |q| = 1,

we get the rotated x-value from the first element of v, the rotated y-value from the second element and the rotated z-value from the third. Calculating which matrix operation q·V·q⁻¹ corresponds to, the quaternion rotation can be written as:

[ 1 − 2(qy² + qz²)   2(qxqy − qwqz)    2(qwqy + qxqz)    0 ] [ X ]
[ 2(qxqy + qwqz)     1 − 2(qx² + qz²)  2(qyqz − qwqx)    0 ] [ Y ]
[ 2(qxqz − qwqy)     2(qyqz + qwqx)    1 − 2(qx² + qy²)  0 ] [ Z ]
[ 0                  0                 0                 1 ] [ 0 ]
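The rotation v = qVq* can also be sketched directly in code; the (x, y, z, w) component order follows the text, while the helper names are mine:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (x, y, z, w)."""
    px, py, pz, pw = p
    qx, qy, qz, qw = q
    return np.array([
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
        pw * qw - px * qx - py * qy - pz * qz,
    ])

def rotate(point, axis, theta):
    """Rotate a 3-D point theta radians around an axis through the
    origin, via v = q V q* with a unit quaternion q."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    s, c = np.sin(theta / 2), np.cos(theta / 2)
    q = np.array([axis[0] * s, axis[1] * s, axis[2] * s, c])
    q_conj = np.array([-q[0], -q[1], -q[2], q[3]])
    V = np.array([point[0], point[1], point[2], 0.0])
    return qmul(qmul(q, V), q_conj)[:3]

# Rotating (1, 0, 0) by 90 degrees around the z-axis lands on (0, 1, 0).
p = rotate([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], np.pi / 2)
```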

If we consider a rotation using Euler angles we have:

[ 1      0        0       ] [ cos(θ)   0   sin(θ) ] [ cos(ψ)  −sin(ψ)  0 ]
[ 0   cos(ϕ)   −sin(ϕ)    ] [   0      1     0    ] [ sin(ψ)   cos(ψ)  0 ]
[ 0   sin(ϕ)    cos(ϕ)    ] [ −sin(θ)  0   cos(θ) ] [   0        0     1 ]

When concatenating the Euler rotations we get:

[ cos(θ)cos(ψ)                          −cos(θ)sin(ψ)                         sin(θ)        ]
[ sin(ϕ)sin(θ)cos(ψ) + cos(ϕ)sin(ψ)     cos(ϕ)cos(ψ) − sin(ϕ)sin(θ)sin(ψ)    −sin(ϕ)cos(θ) ]
[ sin(ϕ)sin(ψ) − cos(ϕ)sin(θ)cos(ψ)     cos(ϕ)sin(θ)sin(ψ) + sin(ϕ)cos(ψ)    cos(ϕ)cos(θ)  ]

Since we want to translate the rotation around an axis into Euler rotations, we compare the 3×3 matrix in the top left corner of the quaternion rotation with the concatenated Euler rotation matrix to identify the Euler angles. The element at (2,3) divided by the element at (3,3) gives the equation for ϕ, the element at (1,3) gives the equation for θ, and the element at (1,2) divided by the element at (1,1) gives the equation for ψ:

ϕ = atan(−2(qyqz − qwqx) / (1 − 2(qx² + qy²)))
θ = asin(2(qwqy + qxqz))
ψ = atan(−2(qxqy − qwqz) / (1 − 2(qy² + qz²)))

C.3

P5 PGM file format

The programs developed for this thesis use the P5 PGM file format, so the P5 format is described in this section. The first row of a PGM file specifies the type of PGM file; for P5 this row contains P5. The second row contains two numbers separated by a blank space: the width and the height, both in ASCII format. The third row contains the largest gray-scale value, also in ASCII; it must be larger than zero and smaller than 65536. A PGM file may also contain comments: a comment starts with a # appearing before the maximum gray-scale value and runs to the end of the line, and should be ignored. The remaining rows contain a raster from top to bottom, with every pixel in byte format. More information about the PGM format can be found at [9].
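A minimal P5 header parser following the description above (a sketch of mine, not code from the thesis; it assumes a maxval below 256, i.e. one byte per pixel):

```python
def read_p5_header(data):
    """Parse a P5 PGM header from bytes: magic, width, height, maxval.
    A '#' before the maxval starts a comment running to end of line.
    Returns (width, height, maxval, offset-of-pixel-data)."""
    tokens, i = [], 0
    while len(tokens) < 4:
        ch = data[i:i + 1]
        if ch.isspace():
            i += 1
        elif ch == b'#':                     # comment: skip to end of line
            i = data.index(b'\n', i) + 1
        else:
            j = i
            while j < len(data) and not data[j:j + 1].isspace():
                j += 1
            tokens.append(data[i:j])
            i = j
    assert tokens[0] == b'P5'
    w, h, maxval = (int(t) for t in tokens[1:])
    return w, h, maxval, i + 1               # one whitespace after maxval

# A tiny 4x3 image with a comment line in the header.
header = b"P5\n# c\n4 3\n255\n"
w, h, maxval, off = read_p5_header(header + bytes(12))
```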


C.4

Candide

Candide was developed in 1987 at Linköping University, and the version I used, referred to as Candide I in [6], is a slight modification of the model from 1987. Since Candide I was developed before the MPEG-4 FA standard, it contains neither FAPs nor the FPs specified by the standard. Candide I uses 11 action units, AUs. Instead of moving just one vertex in one direction, AUs are movements of sets of vertices; e.g. one AU corresponds to how much a face smiles. AUs are not scaled to any FAPU, so they are not as portable as FAPs. The Candide description of a face is implemented with an image of the face and the face's wire frame model file, WFM file. The WFM file contains a description of all the vertices in the wire frame model, how the vertices are connected, and the definitions of the AUs. More information about Candide can be found in [6]. When working on this thesis I had access to a CD containing faces and their WFM files; training data was generated from these files.

C.5

The programs I used

I used xproject, which can be found at http://www.bk.isy.liu.se/candide/. While working on this thesis I wrote a couple of programs. One of them changes Candide models to allow for new definitions of the AUs, i.e. to redefine or add AUs. I wrote a number of Matlab files that test the method. I also created a Windows version of xproject that reads from a file, for quicker and smoother picture generation.

m-files

The program that tests the method is split up into several m-files. I also created matlab files for generating eigenbases and normalising pictures.

The Windows version of xproject

I developed a Windows program that can be used to create images of faces with different AU values, given that we have the images of the faces in the P5 PGM format and their WFM files. All information needed for generating modified images with this program is specified in the data.tex file. How to use the program is described in the program's readme.txt file.


Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a period of 25 years from the date of publication barring exceptional circumstances. The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/

Upphovsrätt

This document is held available on the Internet - or its possible future replacement - for a period of 25 years from the date of publication barring exceptional circumstances. Access to the document implies permission for anyone to read, to download, to print out single copies for personal use, and to use it unchanged for non-commercial research and for teaching. Transfer of the copyright at a later date cannot revoke this permission. All other use of the document requires the consent of the copyright holder. To guarantee authenticity, security and accessibility, there are solutions of a technical and administrative nature. The moral rights of the author include the right to be named as the author to the extent required by good practice when the document is used as described above, as well as protection against the document being altered or presented in such a form or context as to damage the author's literary or artistic reputation or distinctive character. For additional information about Linköping University Electronic Press, see the publisher's home page: http://www.ep.liu.se/

© 2006, Henrik Ellner
