
Department of Computer and Information Science

Final thesis

Navigation in 2.5D spaces using Natural User Interfaces

by

Avraam Mavridis

LIU-IDA/LITH-EX-A--15/005--SE

2015-03-05

Linköpings universitet


Supervisor: Erik Berglund
Examiner: Erik Berglund

Abstract  

 

Natural User Interfaces (NUI) are system interfaces that aim to make human-computer interaction more "natural". Both the academic and industry sectors have for years developed new ways to interact with machines. As these interaction techniques evolve to support more and more complex communication between a human and a machine, new challenges arise. The theory part of this work describes the challenges of developing NUIs from the perspective of the developers and designers of those systems, and also discusses methods of evaluating the system under development. In addition, for the aim of the thesis a prototype video game has been developed in which three different interaction methods have been encapsulated. The goal of the prototype was to explore possible NUIs that can be used in 2.5D spaces. The interaction techniques that have been developed are evaluated by a small group of users using a questionnaire, and the results are presented at the end of this document.

Acknowledgments  

 

First of all I would like to thank my supervisor Erik Berglund for his help and the advice he gave me during the execution of my thesis project.

I would also like to thank the people from the Berlin Game Development Meetup Group who helped me with the evaluation part of my work.

Finally, I would like to thank all the professors I had during my studies in Sweden for giving me valuable knowledge for my career, and my classmates and friends who made my MSc studies in Sweden a great experience.

Last but not least, I would like to thank my family for supporting me during my studies.

Table  of  Contents  

Chapter 1
  1. Introduction
  1.1 Motivation
  1.2 Purpose
  1.3 Limitations
  1.4 Structure of the document

Chapter 2
  2. Tools
  2.1 Kinect and Unity3d

Chapter 3
  3. Natural User Interfaces

Chapter 4
  4. Techniques of evaluating gameplay experience
  4.1 Introduction
  4.2 Evaluation of movement methods

Chapter 5
  5. Movement methods
  5.1 Introduction
  5.2 Method 1
  5.3 Method 2
  5.4 Method 3

Chapter 6
  6.1 Discussion
  6.2 Future work

Bibliography

Chapter 1

1. Introduction

1.1  Motivation    

Most research and publications have focused on using motion sensors in 3D virtual worlds. The aim of my thesis is to determine the general challenges and impacts of using motion sensors in 2.5D virtual worlds in terms of user experience/interaction (UX/UI). I will determine and evaluate various methods of navigating in 2.5D spaces using natural interfaces, discover the challenges, and propose solutions to the various problems that are presented. For the purpose of the thesis a game was designed using Microsoft Kinect and Unity3D.

The purpose of a natural interface is to provide an effective way of interaction to the end user, to be invisible, and to respond continuously to complex user interactions. The user should be able to learn to interact with the machine easily and effectively, and the natural interface should be designed in a way that helps the user quickly move from a beginner level to a skilled level of use.

UX evaluation methods that have been proposed in academic journals or are used by the gaming industry will be used to measure the user experience.

1.2  Purpose      

This project is performed as a Master's thesis work in Computer Science at Linköping University, Sweden. It is performed at the request of Therese Kristoffer Publishing AB.

1.3  Limitations    

The game has been tested with a small group of users, so the evaluation of the proposed navigation methods is based on their opinions. We should also take into consideration that age and gender are factors that may affect the opinion of the end user.

 

1.4  Structure  of  the  document        

In Chapter 2 the tools that are used for the implementation of the thesis are described. In Chapter 3 the theoretical background of Natural User Interfaces is described based on the literature. In Chapter 4 the theoretical background behind UX evaluation in the game industry is described. In Chapter 5 the implemented methods are described along with the limitations and evaluation results for each of them. In Chapter 6 there is a final evaluation and comparison of the methods, followed by a short discussion of potential future work.


Chapter  2  

 

2.  Tools  

2.1  Kinect  and  Unity3d    

In this chapter I will describe what the Kinect is, how it works, and how its SDK can be used along with Unity3D.

2.1.1  Microsoft  Kinect  

 

The Kinect is a motion sensing device which consists of an RGB camera, a depth sensor, and a number of microphones that cooperate in tandem (figure 1). This setup provides 3D motion capture, voice recognition, and facial recognition. The first version of Kinect was released in November 2010 for the XBOX 360. In February 2012 a version for Windows was released.

 

 

Figure  1.  Kinect  

 

Kinect uses a decision tree to identify the data for a specific type of body; the nodes in this tree are labeled with body part names (Jana 2012). Kinect's SDK provides 20 joint points of a human skeleton, as shown in figure 2 below.

 

Figure 2. Kinect's Joint Points

Table 1. Kinect's joint points mapping

Joint Point       Enumeration
Hip Center        0
Spine             1
Shoulder Center   2
Head              3
Shoulder Left     4
Elbow Left        5
Wrist Left        6
Hand Left         7
Shoulder Right    8
Elbow Right       9
Wrist Right       10
Hand Right        11
Hip Left          12
Knee Left         13
Ankle Left        14
Foot Left         15
Hip Right         16
Knee Right        17
Ankle Right       18
Foot Right        19

For the needs of the game that I developed, I wrapped the joint points into a namespace called HumanBones and named them appropriately:

    namespace HumanBones
    {
        // Enumeration of the 20 Kinect joint points, in the SDK's order (table 1).
        public enum Bones
        {
            HipCenter = 0,
            Spine,
            ShoulderCenter,
            Head,
            ShoulderLeft,
            ElbowLeft,
            WristLeft,
            HandLeft,
            ShoulderRight,
            ElbowRight,
            WristRight,
            HandRight,
            HipLeft,
            KneeLeft,
            AnkleLeft,
            FootLeft,
            HipRight,
            KneeRight,
            AnkleRight,
            FootRight,
            PositionCount
        }

        public class BonesIndex
        {
            public static int getBoneIndex(Bones bone)
            {
                return (int)bone;
            }
        }
    }

The Kinect uses the same approach as most depth-sensing systems. A signal is sent by the Kinect's emitter, the signal is reflected by the user's body, and it is received by the sensor. The returned signal is analyzed by the Kinect's middleware; in this way the Kinect is able to measure the distance between the sensor and the user's body.


 

Figure  3.  
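To make the distance measurement concrete, the sketch below shows how an application might read per-pixel distances from the Kinect SDK 1.x depth stream. The frame and event types are those of the official SDK; the class and handler names are mine, and the bit layout follows the SDK's documented packing of player index and depth:

    // Minimal sketch (Kinect SDK 1.x): reading per-pixel distances from the depth stream.
    // The raw 16-bit value packs a player index into its low bits; shifting by
    // DepthImageFrame.PlayerIndexBitmaskWidth leaves the distance in millimeters.
    using Microsoft.Kinect;

    class DepthReader
    {
        private short[] pixelData;

        public void Start(KinectSensor sensor)
        {
            sensor.DepthStream.Enable();
            sensor.DepthFrameReady += OnDepthFrameReady;
            sensor.Start();
        }

        private void OnDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
        {
            using (DepthImageFrame frame = e.OpenDepthImageFrame())
            {
                if (frame == null) return;
                if (pixelData == null) pixelData = new short[frame.PixelDataLength];
                frame.CopyPixelDataTo(pixelData);

                // Distance (in millimeters) of the pixel at the center of the frame.
                int center = pixelData.Length / 2;
                int distanceMm = pixelData[center] >> DepthImageFrame.PlayerIndexBitmaskWidth;
            }
        }
    }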

 

Kinect's middleware is also responsible for skeletal tracking. The way the Kinect achieves that is by segmenting the user's body into a skeleton with a series of body data joints. These segments are then exposed to developers through the Kinect's SDK. The following diagram shows the layers that make up the Kinect's SDK:

 

 

Figure  4  
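As an illustration of what these layers expose, a minimal sketch of reading one joint of the first tracked skeleton through the official SDK (outside Unity) might look like this; the event wiring is assumed to be set up elsewhere:

    // Sketch (Kinect SDK 1.x): reading the hip-center joint of the first tracked skeleton.
    using System.Linq;
    using Microsoft.Kinect;

    static class SkeletonSample
    {
        public static void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                Skeleton tracked = skeletons.FirstOrDefault(
                    s => s.TrackingState == SkeletonTrackingState.Tracked);
                if (tracked == null) return;

                // Joint positions are in meters, relative to the sensor.
                SkeletonPoint hip = tracked.Joints[JointType.HipCenter].Position;
                float x = hip.X, y = hip.Y, z = hip.Z;
            }
        }
    }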

 

Human body tracking and motion capturing have been used in a wide range of applications (J. Shotton, 2011) (Capture, 2014), including gaming, security, human-computer interaction, and health care. In the game industry a wide range of studios have published software or hardware that uses body tracking techniques to entertain users: there are more than 100 games for Nintendo's Wii (Wii games, 2014) and more than 50 titles for Microsoft's Kinect (List of titles for Kinect, 2014). In Singapore the government has financed studies that investigate the use of Kinect in a public safety and homeland security context (Verton, 2013). Many researchers have examined the potential use of Kinect in the medical and health care sector; Roanna Lun published a survey in which the potential uses of Kinect in health care are examined (Roanna Lun, 2014).

Kinect has been used by many researchers. Andrea Sanna and Fabrizio Lamberti used Kinect to let animators control virtual characters in real time (Andrea Sanna, 2013). Robert T. Held from the University of California and Ankit Gupta from the University of Washington developed a system that produces 3D animation using physical objects; the system allows users to create 3D animations using everyday objects (Robert T. Held, 2012).

2.1.2  Alternatives  

 

Commercial alternatives to Kinect have been produced by other manufacturers. ASUS has developed the Xtion Pro, a motion sensing solution specifically designed for developers that offers gesture and body detection (ASUS, 2014); it is based on PrimeSense's NiTE middleware (Wikipedia-PrimeSense, 2014). PrimeSense has also developed its own solution called PrimeSense Carmine. Sony has made various attempts in the motion sensing field. The PlayStation Eye is a digital camera that supports gesture recognition and processes images taken by the camera using computer vision techniques; it also supports voice commands through a microphone array (PlayStation-Eye, 2003). The PlayStation Camera for the PlayStation 4 is the successor of the PlayStation Eye; it has two cameras with lenses that support depth sensing and motion tracking (Playstation-Camera, 2014). PS Move is a motion sensing game controller; it has a light-emitting ball at its top that helps the PlayStation identify the depth and location of the users (PS-Move, 2009). In academia, Pranav Mistry, a PhD candidate at the MIT Media Lab, has developed SixthSense, a wearable device that supports gesture recognition (SixthSense, 2001).


2.1.3  Unity3D  

 

Unity is a game creation system that includes a game engine and an IDE, launched in 2005 (Unity3D, 2005); since then it has come to support a range of platforms including Windows, Linux, iOS, and Android. Unity has been used along with motion capturing sensors in both academic and commercial settings: Boffswana, a creative company located in Melbourne, used Kinect with Unity3D to develop a 3D face tracking system (Boffswana, 2013), and Unity is the main tool used to develop games for Nintendo's Wii. Kinect can be used with Unity3D through an SDK wrapper (Kinect-Wrapper, 2013).
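In Unity, the wrapper exposes the tracked joints as a two-dimensional bonePos array indexed by player and joint, which is how the snippets in Chapter 5 read positions. A minimal sketch of hooking it up in a scene follows; the SkeletonWrapper component name and the field wiring are my assumptions about the wrapper, not something this document verifies:

    // Sketch: reading a joint through the Kinect-Unity wrapper from a scene script.
    using UnityEngine;
    using HumanBones;

    public class WrapperExample : MonoBehaviour
    {
        public SkeletonWrapper sw;  // wrapper component, assigned in the scene (assumed name)

        void Update()
        {
            // Position (in meters) of the first tracked player's hip center.
            Vector3 hip = sw.bonePos[0, BonesIndex.getBoneIndex(Bones.HipCenter)];
            Debug.Log("Hip center: " + hip);
        }
    }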

 

2.1.4  2.5D  spaces  

 

The term "2.5D", also referred to as pseudo-3D, is used by the game industry to describe graphical projections in which the game's graphics are rendered in 3D but the gameplay is restricted to two dimensions.
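In Unity terms, one common way to get this behaviour is to render a full 3D scene while freezing the physics so the character stays on a two-dimensional gameplay plane. A minimal sketch, assuming the gameplay plane is the ground (x-z) plane as in the maze game of Chapter 5:

    // Sketch: a 3D-rendered character whose physics are locked to a 2D gameplay plane.
    using UnityEngine;

    public class TwoPointFiveDCharacter : MonoBehaviour
    {
        void Start()
        {
            // Gameplay stays two-dimensional: no vertical translation, and
            // rotation only around the vertical (y) axis.
            rigidbody.constraints = RigidbodyConstraints.FreezePositionY
                                  | RigidbodyConstraints.FreezeRotationX
                                  | RigidbodyConstraints.FreezeRotationZ;
        }
    }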

Chapter  3  

 

3.  Natural  User  Interfaces  

 

The term Natural User Interface is used by developers to refer to interfaces between human and machine that are invisible through their use and remain invisible as the user learns more complex interactions with the machine. The goal of natural user interfaces is to mimic tools that humans use in the "real world". The term "natural" is used to describe properties that do not belong strictly to the product but help the users interact with it. NUIs try to use modern input technology devices and techniques to take advantage of the users' capabilities and fulfill their needs. The first attempt to commercialize a product with NUI capabilities was made by Apple in 1993 with the release of the MessagePad (MessagePad, 1994), a handwriting recognition device for the Newton OS. Since then there have been several efforts in the field and many companies have tried to commercialize devices with NUI capabilities; some well-known products are the Nintendo Wii, Google Project Glass, and Microsoft Kinect. Daniel Wigdor and Dennis Wixon in their book (Daniel Wigdor, Dennis Wixon, 2011) define the main design guidelines that a NUI must follow:

 

1.   They   must   create   an   experience   that   feels   natural   to   both   novice  and  expert  users  

 

2.  They  must  create  an  experience  that  the  users  can  feel  like  an   extension  of  their  body  

 

3. The user interface must consider the context and take advantage of it.

 

4.  Avoid  copying  existing  user  interface  patterns.    

Another important point worth mentioning is that the design of a NUI can mimic some activity with which the user is already comfortable. Usually, when a new type of user interface comes to the market, designers and developers try to exploit its capabilities using techniques that they already know. Many times this leads to complex interfaces that users are not able to use, mostly because the developers and designers relied on implementation techniques and approaches that worked in the past for other types of input devices. To avoid cases in which the user is confused about how to use an interface, developers and designers should almost forget past interaction approaches and try to identify the uniqueness of the new interface: exploit its capabilities, address the challenges that the new interface brings, understand the most cardinal mechanics, and implement the most needed interactions first before moving to more complex concepts. The development team should also take into consideration the constraints of the context and environment inside which the system will be used, since these may be barriers to a successful NUI implementation.

 

As noticed by Gideon Steinberg (Steinberg, 2012), users are very impatient: they want interfaces with short response times, and they do not want to wait until they are able to start a new interaction.

Chapter  4  

 

4. Techniques of evaluating gameplay experience

4.1  Introduction  

 

The great explosion in the field of game development has led to the need to understand why some games are more successful than others, even if they have lower quality graphics or poor sound effects. Scientists and professionals from the game industry have tried for years to find out the ingredients that lead to a successful game release and increase the number of sales or active users. Studies have shown that gameplay is an important factor for the satisfaction of the players. There are many definitions of the term gameplay. Sid Meier defines gameplay as "a series of interesting choices" (Rollings & Morris, 1999), Lennart Nacke describes it as "the interactive gaming process of the player with the game" (Nacke, 2009), while Andrew Rollings and Ernest Adams define gameplay as "one or more causally linked series of challenges in a simulated environment" (Adams Ernest, 2003). In short, the term gameplay means the specific ways in which players interact with the game: the patterns, the goals, and the challenges that are used in a virtual world to arouse the interest of the users and prompt them to act. There are mainly three components of gameplay:

Manipulation rules: the set of rules that determine what the player can do inside the game, i.e. which actions are allowed.

Goal rules: the set of goals inside the game, i.e. which goals the user should achieve.

Metarules: the set of actions that the user can do to modify the game.

 

In my game implementation, the ability of the user to navigate inside a 2.5D virtual space constitutes the manipulation rules. The need of the player to escape from the maze while avoiding the guards is the set of goal rules. The ability of the user to change the movement technique, i.e. the way he/she interacts with the NUI to navigate the avatar, constitutes the metarules of the game.

 

There is a great need to understand the elements that constitute a successful gameplay implementation: an implementation that can anticipate the player's thoughts, stir his/her sensations, lead him/her to take action, and engage his/her feelings. The game industry tries to find ways to keep the interest in a game with more complex and unique gameplay implementations. More and more games try to actively include the player in the construction or development of the gameplay as the game story proceeds, building on their previous experiences and trying to fulfill their expectations. The context inside which a game takes place affects the opinion of a player about the gameplay; e.g. playing a Kinect game in a small room can be an unpleasant activity, while playing the same game in a larger environment can be much more interesting. In addition, the weight of the gameplay in the satisfaction of the player varies among the different game genres. The challenges should fit the player's skills and evolve with them: challenges that are too easy may lead a player to lose interest, while tasks that are too difficult may lead to an unpleasant experience. The purpose should be to always keep a balance between what the player is able to do and the level of difficulty of the barriers to achieving the game's goals. A good gameplay implementation can increase the player's euphoria and satisfaction when he/she manages to overcome the barriers.

 

Torben Grodal (Grodal, 2009) clearly notes the importance of the relationship between the skills of the player and the challenges of the game: "When beginning a new behavior or learning a new environment, we may feel that we have many options that depend on our own choices. However, as we learn those behaviors and environments, we may even feel that we are just alienated robots that follow the commands of society or our own fixed compulsions. To play video games provides a similar variation in our experience of interactivity". In their research, Ermi and Mäyrä (Ermi, 2003) tried to list the various factors that affect the gameplay experience; the figure below presents their findings:


 

Figure  5  

 

Ermi and Mäyrä (Laura Ermi, 2005) argued that a gameplay experience consists of three dimensions: sensory immersion (the graphics and sound of the game), challenge-based immersion, and imaginative immersion (the feelings of the player because of the story of the game). They proposed a model (the SCI model) to evaluate the gameplay experience of a game; the model has a questionnaire with 30 questions on a 5-point Likert scale that address the various aspects of the three dimensions the model suggests, and they graded various games based on that model.


 

Figure  6  

 

For decades the evaluation of digital games focused on identifying software bugs or rating the media aspects of the product, like the sound and the graphics. The gameplay evaluation was mainly done by people who were involved in the process of developing the game. Nacke and Drachen in their work (Lennart Nacke, 2010) noted the importance of evaluating a game using people outside of the team that designs and develops it. Developers and designers of the game are more aware of the types of actions they are allowed to perform, and they have spent hours testing the game and balancing its variables, which affects the whole gameplay experience. They also pointed out the importance of the context in which the evaluation takes place: factors like time of day, player's age, presence of other players, and health condition affect the player's experience; e.g. playing a sports game on Kinect with other players is more competitive than playing the exact same game in single-player mode. The figure below underlines the difference between the experience that the game designer has and the gameplay experience of a player based on the context that surrounds him.


 

Figure  7  

   

Classical methodologies for user experience evaluation are not easily applicable to digital games, since factors like effectiveness in task completion or task-level satisfaction can have a completely different meaning in that context. In addition, evaluating the user experience in digital games is mostly evaluating the emotions of the players, something that has psychological and physiological aspects that are difficult to address objectively.

 

Researchers have proposed various methods for measuring the user's satisfaction in digital games; the following are some examples:

• Questionnaire: Simple forms with questions that are filled in by the users at the end of their session, trying to identify the satisfaction of the user based on a scale (e.g. negative, positive, neutral).

• Eye tracking: Used to measure the attention levels of a user during a specified action or task.

• Interviews: Discussions with the users about their experience after the end of the game session, trying to gather user feedback.

• RITE method: A method designed and executed by Dennis Wixon at Microsoft Labs in which there is a defined test script that the users have to execute while they engage in a verbal protocol (think aloud) (Wixon, 2003).

• Advanced sensor techniques: Use of sensors to measure various physical and physiological characteristics of the participant while he/she is playing the game, such as electromyography (electrical activation of muscles) and electroencephalography (brain wave measurement).

 

4.2  Evaluation  of  movement  methods  

 

For the evaluation of the movement methods, which are the core of the gameplay, I used the questionnaire method. My questionnaire is a variation of the System Usability Scale (SUS) (Brooke, 1996) and has 15 questions that try to identify the emotions and the level of satisfaction of the player: 5 questions for each movement method. The evaluations took place during meetups of the Game Developers Berlin meetup group. The user group is small (8 people participated and answered the questionnaire). The answers are based on a 5-value scale ranging from Strongly agree to Strongly disagree. The statements that were put to the players are the following:

 

1. I  found  the  method  unnecessarily  complex  

2. I  found  the  steering  navigation  technique  complex.  

3. I  found  the  forward  motion  navigation  technique  complex.  

4. I managed to learn this navigation method very quickly

5. I  found  the  navigation  and  steering  techniques  well  integrated    
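The per-statement tallies reported in Chapter 5 can be produced mechanically from the raw answers; a minimal sketch (the encoding of the five categories as indices 0 to 4 is mine):

    // Sketch: tallying 5-point Likert answers into per-statement counts,
    // like the tables in Chapter 5. answers[participant, statement] holds
    // a category index: 0 = Strongly agree ... 4 = Strongly disagree.
    static class LikertTally
    {
        public static int[,] Tally(int[,] answers)
        {
            int statements = answers.GetLength(1);
            int[,] counts = new int[statements, 5];
            for (int p = 0; p < answers.GetLength(0); p++)
                for (int s = 0; s < statements; s++)
                    counts[s, answers[p, s]]++;
            return counts;
        }
    }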

 

During the evaluation of the methods the users were free to tweak the various variables of the game; the position of the on-top camera and the lighting could be changed. In method 1 the players were also able to change the rotation factor of the avatar. In method 2 the players were able to change the speed factor of the avatar, while in method 3 there was a circle inside which no force was applied to the avatar, and the players were able to change the radius of this circle.
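In Unity such tweakable variables can be plain public fields on a behaviour; a sketch of the adjustable parameters described above (the field names and all default values other than the 0.6 radius mentioned in section 5.4 are mine; the prototype exposed them through its own in-game UI, as the figures in Chapter 5 show):

    // Sketch: the user-adjustable parameters of the three methods in one place.
    using UnityEngine;

    public class NavigationSettings : MonoBehaviour
    {
        public float rotationFactor = 1.0f;  // method 1: scales torque per unit of hip movement
        public float speedFactor = 1.0f;     // methods 1-3: scales the forward force
        public float circleRadius = 0.6f;    // method 3: radius of the no-force circle
        public float maximumSpeed = 2.0f;    // method 3: velocity cap for the avatar
    }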


Chapter  5  

5.  Movement  methods  

5.1  Introduction  

 

For the purpose of this thesis I developed and examined three different methods of character navigation in a 2.5D world. The first method uses the hip center position of the human body on the x-axis for the character steering and the movement of the user's hands for the forward movement. The second method uses the position of the hip center relative to a virtual center point for the steering and the hands' movement for the forward movement. The third method uses the same technique for the character steering, but instead of the hands, the distance from the center is used for the forward movement.

5.1.1 Movement and steering techniques that are used in the game industry

 

Navigation with Kinect is used mostly in three game categories: action games (first-person shooters, third-person shooters), adventure games, and above all puzzle games. The majority of the published titles do not involve navigating an avatar through a virtual environment; instead, the player's body is used to handle an avatar that is almost static in the depth dimension. Games like that are Fruit Ninja for Kinect, Dance Central, and Zumba Fitness. There are a few games that try to use all the capabilities of Kinect extensively. Call of Duty: Black Ops uses Kinect for avatar driving: the players have to extend their hands in front of them for forward motion, while the positions of the shoulder joint points are used for steering. In Harry Potter for Kinect the rotation of the avatar is based on the player's shoulder position on the x-axis. In Modern Warfare the forward motion of the avatar is based on the position of the knee joint points.

     

5.1.2 Implementation of the application

For the implementation of the game I used the Kinect SDK 1.7. The avatar was designed using Blender 2.7 and the source code is written in C#.

 

5.2  Method  1  

5.2.1  Implementation  

 

In the first method we apply torque to the character based on the movement of the user on the x-axis. I calculate the difference between the user's position in one frame and his/her position in the next frame (figure 8). The more the user has moved on the x-axis between two frames, the more torque we apply to the character.

 

 

Figure  8  

 

To find the user's position I am using one of the Kinect's joint points, the Hip Center:

    // Current x position of the hip-center joint, read through the Kinect-Unity wrapper.
    NewHipXPosition = sw.bonePos[0, (int)Bones.HipCenter].x;

The reason I chose the Hip Center instead of the feet joint points is that there are cases where those joint points are not visible to the Kinect, e.g. when the user is too close to the sensor. I then apply torque to the character based on the difference between the values of HipCenter.X in one frame and the next:

    // Torque around the y-axis, proportional to how far the hip moved along x.
    rigidbody.AddTorque(new Vector3(0, rotationFactor * (NewHipXPosition - OldHipPosition), 0));

If the user hasn't moved significantly, I set the character's angular velocity to zero, forcing it to stop rotating. For the purpose of the thesis and for my experiments I used a rotationFactor in my equation; this factor is multiplied by the difference of the user's position and can be set by the user (figure 9). People of different ages or genders may need a different rotation factor to feel that the rotation of the character is natural; it can be set from the user interface of my application:

   

 

Figure  9.  Rotation  factor  
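Putting the steering pieces above together, the per-frame logic can be sketched as a single Update step; sw is the Kinect-Unity wrapper used in the snippets above, and the exact cutoff for "no significant movement" is my assumption:

    // Sketch: Method 1 steering in one Update step.
    public SkeletonWrapper sw;                       // Kinect-Unity wrapper (assumed component name)
    public float rotationFactor = 1.0f;              // user-adjustable, see figure 9
    private float OldHipPosition;
    private const float MovementThreshold = 0.01f;   // "no significant movement" cutoff (assumed)

    void Update()
    {
        float NewHipXPosition = sw.bonePos[0, (int)Bones.HipCenter].x;
        float deltaX = NewHipXPosition - OldHipPosition;

        if (Mathf.Abs(deltaX) > MovementThreshold)
        {
            // More lateral movement between two frames means more torque.
            rigidbody.AddTorque(new Vector3(0, rotationFactor * deltaX, 0));
        }
        else
        {
            // No significant movement: stop the character from rotating.
            rigidbody.angularVelocity = Vector3.zero;
        }

        OldHipPosition = NewHipXPosition;
    }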

 

For the forward movement, I first calculate the difference between the position of the left hand in one frame and its position in the next frame (figure 10), and the same for the right hand; then I take the maximum of the distances the left and right hands have covered:

   


 

Figure  10  

 

To find the position of the user's hands I am using two of the Kinect's joint points, the left hand (number 7 in figure 2) and the right hand (number 11 in figure 2):

    // Vertical (y) positions of the two hand joints.
    NewLeftHandPosition = sw.bonePos[0, (int)Bones.HandLeft].y;
    NewRightHandPosition = sw.bonePos[0, (int)Bones.HandRight].y;

Then I calculate the distance that each hand has covered between two frames. I am using the Math.Abs method of the C# Math library to get the absolute value, since the distance should not be negative:

    DifferenceBetweenOldAndNewRightHandPosition = Math.Abs(OldRightHandPosition - NewRightHandPosition);
    DifferenceBetweenOldAndNewLeftHandPosition = Math.Abs(OldLeftHandPosition - NewLeftHandPosition);

And then I apply force to the character in the direction in which it is facing:

    // The faster-moving hand drives the forward force, scaled by the speed factor.
    float forceByHands = Math.Max(DifferenceBetweenOldAndNewRightHandPosition, DifferenceBetweenOldAndNewLeftHandPosition);
    rigidbody.AddForce(rigidbody.transform.TransformDirection(new Vector3(0, 0, 1) * forceByHands * speedFactor));


There is also a speed factor (figure 11) that can be set by the user; this is used for the same reason as the rotation factor, as explained previously:

 

 

Figure  11.  Speed  factor    

In the table below (table 2) we can see the relationship between the force applied to the character (rigidbody) because of the hands' movements and its velocity magnitude:

Table 2. Force applied by the hands and the resulting velocity magnitude, per frame

Frame   Force by hands   Velocity's magnitude
1       0.004368067      0.08351061
2       0.004853427      0.05248121
3       0.005118072      0.03636492
4       0.01362312       0.02803884
5       0.01036048       0.03737772
6       0.006374955      0.0371998
7       0.004050791      0.0306194
8       0.003748119      0.01715235
9       0.002634585      0.02093103
10      0.000703216      0.0156817
11      0.005183101      0.009677171
12      0.01326829       0.01011825
13      0.02649879       0.01269093
14      0.02998668       0.05013624
15      0.06377602       0.07620776
16      0.09469002       0.1455334
17      0.1358536        0.2336963
18      0.1018448        0.4058163
19      0.0128082        0.4759087
20      0.1417847        0.2925246

 

Chart  1.  Relationship  between  the  force  applied  by  hands  and  the  velocity’s  magnitude    

As we can see from the graph above (chart 1), increasing the force from the hands' movement leads to an increase of the rigidbody's velocity. At the 16th frame the user stopped moving his hands, but the rigidbody still has some velocity magnitude, which starts decreasing; at frame 19 the player starts moving his hands again, but the velocity magnitude does not increase because the rigidbody has collided with another game object.

 

5.2.2  Limitations  and  constraints  of  the  method  

 

The most significant problem of this method is the rotation of the character when the user is at the edges of Kinect's visible field, e.g. when the user is at the left edge (figure 12) of the visible field and he/she wants to rotate the character to the right (figure 13).


 

Figure  12.  Player's  body  at  the  left  edge  of  Kinect's  visible  field  

 

 

Figure  13  

 

5.2.3  Evaluation  of  the  method  from  the  users  

 

Most players found the method complex; in particular, they noticed that the steering technique is uncomfortable and unusual. Below are the answers to the evaluation questions:

                                                                 Strongly agree   Agree   Neutral   Disagree   Strongly disagree
I found the method unnecessarily complex                                5           2        1          0             0
I found the steering navigation technique complex                       6           1        1          0             0
I found the forward motion navigation technique complex                 0           1        2          2             3
I managed to learn this navigation method very quickly                  0           1        3          2             2
I found the navigation and steering techniques well integrated          0           0        2          3             3

[Chart: distribution of the answers above]


5.3  Method  2    

5.3.1  Implementation  

In the second method I apply torque to the avatar based on the position of the user with respect to the center point. The center point is where HipCenter.z and HipCenter.x are equal to zero. To calculate the rotation angle I used the simple formula:

    AngleInRadians = arcsin(opposite side / hypotenuse)

Figure 14

I used Math.Asin from the C# Math library to calculate the arcsine of the rotation angle; the function returns the angle in radians. To convert the radians to degrees I used the simple formula:

 

    AngleInDegrees = AngleInRadians * (180 / π)
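As a quick worked example (the numbers are mine): for a user standing at HipCenter.x = 0.5 and HipCenter.z = 0.5, the hypotenuse is sqrt(0.5^2 + 0.5^2) ≈ 0.707, so AngleInRadians = arcsin(0.5 / 0.707) = π/4 and AngleInDegrees = 45. Since both coordinates are positive, the avatar's heading becomes 90 - 45 = 45 degrees, matching the first case in the code below.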

The avatar is rotated by the same number of degrees as the user, in the same direction. To achieve that I am using the Quaternion.Euler method of Unity3D. The method returns a rotation that rotates z degrees around the z-axis, x degrees around the x-axis, and y degrees around the y-axis. For example:

    rotation = Quaternion.Euler(new Vector3(30, 60, 90));

is a rotation of 30 degrees around the x-axis, 60 degrees around the y-axis, and 90 degrees around the z-axis.

 

    double hypotenusePower2 = Math.Pow(NewHipXPosition, 2) + Math.Pow(NewHipZetPosition, 2);
    double hypotenuse = Math.Sqrt(hypotenusePower2);
    double angle = Math.Asin(NewHipZetPosition / hypotenuse);
    double degrees = angle * (180 / Math.PI);

    // Pick the avatar's heading according to the quadrant the user is standing in.
    if (NewHipZetPosition > 0 && NewHipXPosition > 0)
    {
        rigidbody.transform.rotation = Quaternion.Euler(0, (float)(90 - Math.Abs(degrees)), 0);
    }
    else if (NewHipZetPosition < 0 && NewHipXPosition > 0)
    {
        rigidbody.transform.rotation = Quaternion.Euler(0, (float)(90 + Math.Abs(degrees)), 0);
    }
    else if (NewHipZetPosition < 0 && NewHipXPosition < 0)
    {
        rigidbody.transform.rotation = Quaternion.Euler(0, (float)(-90 - Math.Abs(degrees)), 0);
    }
    else if (NewHipZetPosition > 0 && NewHipXPosition < 0)
    {
        rigidbody.transform.rotation = Quaternion.Euler(0, (float)(-90 + Math.Abs(degrees)), 0);
    }

For the forward movement of the avatar I am using the same technique as described previously in the first method. There is also an option to specify the speed factor, which defines the rate at which the avatar's speed increases.

   

 


5.3.2  Limitations  and  constraints  of  the  method    

As noticed during the evaluation of the method, there were many cases where the player moved outside of Kinect's visible field; the player kept moving but the avatar remained static, something that led to confusion. The following figure shows such a case:

 

 

Figure  16  

 

The   determination   of   the   value   of   the   speed   factor   and   the   position  of  the  camera  may  also  affect  the  evaluation  results.  

 

5.3.3  Evaluation  of  the  method  from  the  users  

 

Most players found the rotation method feasible, but during the discussion with them after the session they mentioned that the speed factor and the position of the camera may have affected their opinion about the method. Below are the answers to the evaluation questions:

   

                                                                 Strongly agree   Agree   Neutral   Disagree   Strongly disagree
I found the method unnecessarily complex                                0           1        2          2             3
I found the steering navigation technique complex                       0           0        2          2             4
I found the forward motion navigation technique complex                 0           1        3          2             2
I managed to learn this navigation method very quickly                  2           3        2          1             0
I found the navigation and steering techniques well integrated          0           1        4          2             1

[Chart: distribution of the answers above]


5.4  Method  3  

5.4.1  Implementation  

 

In the third method I apply torque to the avatar based on the position of the user with respect to the center point. To calculate the rotation angle I used the same formula as in method 2. In this method the user does not use his/her hands for the movement; the speed of the forward movement is calculated based on his/her distance from the center of Kinect's view. In addition, there is a circle inside which no force is applied to the avatar.

 

 

Figure  17  

   

The user is able to set the radius of the inner circle:

 

Figure  18  

The user is also able to set the maximum speed of the avatar, similarly to method 2. Force is applied to the avatar only if the user is outside the inner circle and the speed of the avatar is less than the defined maximum speed.

 

    // `force` is the user's distance from the center point, i.e. the hypotenuse
    // computed as in method 2. This defining line is my reconstruction; the
    // original listing starts at the if-statement.
    float force = (float)Math.Sqrt(Math.Pow(NewHipXPosition, 2) + Math.Pow(NewHipZetPosition, 2));

    if (force > PlayerController.circleradious)
    {
        // Apply forward force only while the avatar is below the maximum speed.
        if (player.rigidbody.velocity.magnitude < maximumSpeed)
        {
            rigidbody.AddForce(rigidbody.transform.TransformDirection((new Vector3(0, 0, 1)) * speedFactor * force));
        }
    }

Below we can see how the avatar's velocity magnitude increases. The radius of the inner circle is 0.6: when the user's distance from the center is less than 0.6, no force is applied to the avatar and its speed is zero. When the user is outside the circle, force starts to be applied to the avatar and its speed starts increasing.

Frame   Distance from center   Avatar's velocity magnitude
1       0.116574               0
2       0.125786               0
3       0.245694               0
4       0.356584               0
5       0.534539               0
6       0.9842399              0.00844736
7       0.9657335              0.00845916
8       1.028793               0.092231
9       1.0231                 1.2212
10      1.01696                1.345
11      1.01765                1.5231

[Chart: distance from center and avatar speed per frame]


5.4.2  Limitations  and  constraints  of  the  method    

This method has the same limitations as method 2 with regard to the steering of the avatar, and it was also difficult for the users to determine the optimal speed factor for their session. Another difficulty was determining the radius of the circle inside which no force was applied to the avatar.

   

5.4.3  Evaluation  of  the  method  from  the  users  

 

Most players found the rotation technique of this method feasible, while it was difficult for them to determine the optimal speed factor and the radius of the "no-force" circle. Below are the answers to the evaluation questions:

   

 

                                                                 Strongly agree   Agree   Neutral   Disagree   Strongly disagree
I found the method unnecessarily complex                                1           2        2          2             1
I found the steering navigation technique complex                       0           0        3          1             4
I found the forward motion navigation technique complex                 0           2        2          2             2
I managed to learn this navigation method very quickly                  0           1        2          3             2
I found the navigation and steering techniques well integrated          0           0        2          2             4

[Chart: distribution of the answers above]


Chapter  6  

6.1  Discussion  

 

In general, based on the evaluation results, we can notice that the users felt more comfortable using the second method; more precisely, only one user felt that the method was unnecessarily complex.

 

   

 

The users felt that the steering technique in which the avatar was rotated using the position of their body relative to the center was more natural, while for forward motion the technique in which force was applied to the avatar based on how fast the player was moving his hands (method 1 and method 2) was more comfortable for them.

The evaluation group was small and mostly consisted of people who had previous experience using Kinect or similar NUIs; these facts may have influenced their judgment.

[Chart: answers to "I found the method unnecessarily complex" for methods 1, 2 and 3]

6.2 Future Work


There are several aspects that can be studied and implemented in future work. Firstly, it would be interesting to evaluate the same techniques in different age groups. The evaluation group of my study consisted of men between 18 and 30 years old. It would be important to evaluate the same techniques with older and younger people, both men and women, since studies have shown that demographics are an important factor in the perception of a UI.

 

Another aspect that can change is the evaluation technique. I used a Likert-like questionnaire; however, different evaluation techniques may lead to different results.

 

During the sessions the users were able to change the various variables of the system. A study could be done to investigate the optimal values of these variables and, with micro-optimizations, find out at which values of these variables each method feels most "natural" to the users. In addition, it was noticed that some users were more willing to customize the system to their needs than others. An evaluation of the system could be done during which no customization is allowed; moreover, the factors that lead some users to want to customize the parameters, in contrast with others who want to use the system expecting that it is already optimal for them, can be a subject of investigation.

 

New techniques for steering and forward motion can be implemented and evaluated, e.g. rotation of the avatar based on the difference between the hips' left and right joint points, or forward motion of the avatar based on the movement of the knee joint points.

 

Beyond the theoretical part, it could be interesting to investigate the same techniques using other NUIs and compare their results with the results from Kinect. Moreover, there are experimental implementations that combine more than one Kinect; a study of the same techniques could be done using two Kinects, one in front of the player and another one behind him. In that way the visible field increases and the results can be more accurate, and therefore the opinion of users about a technique could change dramatically.

References
