
DOCTORAL THESIS

Department of Computer Science, Electrical and Space Engineering
Division of EISLAB

Haptic Navigation Aids for the

Visually Impaired

Daniel Innala Ahlmark

ISSN 1402-1544
ISBN 978-91-7583-605-8 (print)
ISBN 978-91-7583-606-5 (pdf)
Luleå University of Technology 2016



Haptic Navigation Aids for the

Visually Impaired

Daniel Innala Ahlmark

Dept. of Computer Science, Electrical and Space Engineering

Luleå University of Technology

Luleå, Sweden

Supervisors:

Kalevi Hyyppä, Jan van Deventer, Ulrik Röijezon

European Union Structural Funds


ISSN 1402-1544

ISBN 978-91-7583-605-8 (print)
ISBN 978-91-7583-606-5 (pdf)
Luleå 2016


To my mother


Abstract

Assistive technologies have improved the situation in society for visually impaired individuals. The rapid development over the last few decades has made both work and education much more accessible. Despite this, moving about independently is still a major challenge, one that at worst can lead to isolation and a decreased quality of life.

To aid in this task, there are devices that help avoid obstacles (notably the white cane), as well as navigation aids such as accessible GPS devices. The white cane is the quintessential aid and is much appreciated, but solutions trying to convey distance and direction to obstacles further away have not made a big impact among the visually impaired. One fundamental challenge is how to present such information non-visually. Sounds and synthetic speech are typically utilised, but feedback through the sense of touch (haptics) is also used, often in the form of vibrations. Haptic feedback is appealing because it does not block or distort sounds from the environment that are important for non-visual navigation. Additionally, touch is a natural channel for information about surrounding objects, something the white cane so successfully utilises.

This doctoral thesis explores the question above by presenting the development and evaluations of different types of haptic navigation aids. The goal has been to attain a simple user experience that mimics that of the white cane. The idea is that a navigation aid able to do this should have a fair chance of being successful on the market. The evaluations of the developed prototypes have primarily been qualitative, focusing on judging the feasibility of the developed solutions. They have been evaluated at a very early stage, with visually impaired study participants.

Results from the evaluations indicate that haptic feedback can lead to solutions that are both easy to understand and use. Since the evaluations were done at an early stage in the development, the participants have also provided valuable feedback regarding design and functionality. They have also noted many scenarios throughout their daily lives where such navigation aids would be of use.

The thesis documents these results, together with ideas and thoughts that have emerged and been tested during the development process. This information contributes to the body of knowledge on different means of conveying information about surrounding objects non-visually.


Contents

Abstract v

Contents vii

Acknowledgements xi

Summary of Included Papers xiii

List of Figures xvii

Part I

1

Chapter 1 – Introduction 3

1.1 Overview – Five Years of Questions . . . 3

1.1.1 The Beginning . . . 3

1.1.2 Next Steps . . . 5

1.1.3 The Second Prototype . . . 6

1.1.4 The LaserNavigator . . . 7

1.1.5 Two Trials . . . 8

1.1.6 The Finish Line? . . . 9

1.2 Aims, Contributions and Delimitations . . . 10

1.3 Terminology . . . 10

1.4 Thesis Structure . . . 11

Chapter 2 – Background 13

2.1 Visual Impairments and Assistive Technologies . . . 13

2.1.1 Navigation . . . 14

2.2 Perception, Proprioception and Haptics . . . 15

2.2.1 Spatial Perception . . . 15

2.2.2 The Sense of Touch and Proprioception . . . 16

2.2.3 Haptic Feedback Technologies . . . 17

Chapter 3 – Related Work 19

3.1 Navigation Aids . . . 19

3.1.1 GPS Devices and Smartphone Applications . . . 19


3.1.3 Sensory Substitution Systems . . . 21

3.1.4 Prepared Environment Solutions . . . 23

3.1.5 Location Fingerprinting . . . 24

3.2 Scientific Studies Involving Visually Impaired Participants . . . 24

Chapter 4 – The Virtual White Cane 27

4.1 Overview . . . 27

4.2 Software . . . 28

4.2.1 Haptic Rendering . . . 29

4.3 Field Trial . . . 31

Chapter 5 – LaserNavigator 33

5.1 Overview . . . 33

5.2 Hardware . . . 34

5.3 Software . . . 35

5.3.1 Additional Features and Miscellaneous Notes . . . 36

5.3.2 Manual Length Adjustment . . . 36

5.4 Haptic Feedback . . . 37

5.4.1 Simple Feedback . . . 37

5.4.2 Complex Feedback . . . 37

5.5 Algorithms . . . 38

5.6 Evaluations . . . 39

Chapter 6 – Discussion 41

Chapter 7 – Conclusions 45

References 47

Part II

51

Paper A – Presentation of Spatial Information in Navigation Aids for the Visually Impaired 53

1 Introduction . . . 55

2 Methods . . . 56

3 Non-visual Spatial Perception . . . 57

4 Navigation Aids . . . 58

4.1 Haptic Feedback . . . 58

4.2 Auditory Feedback . . . 59

5 Discussion . . . 59

6 Conclusions . . . 61

Paper B – Obstacle Avoidance Using Haptics and a Laser Rangefinder 67

1 Introduction . . . 69


2 Related Work . . . 71

3 The Virtual White Cane . . . 71

3.1 Hardware . . . 72

3.2 Software Architecture . . . 73

3.3 Dynamic Haptic Feedback . . . 76

4 Field Trial . . . 76

5 Conclusions . . . 78

5.1 Future Work . . . 78

Paper C – An Initial Field Trial of a Haptic Navigation System for Persons with a Visual Impairment 83

1 Introduction . . . 85

1.1 Delimitations . . . 87

2 Methods . . . 87

2.1 Participants . . . 87

2.2 Test Set-up . . . 87

2.3 Field trial . . . 88

2.4 Interviews . . . 89

2.5 Data analysis . . . 89

3 Results . . . 89

3.1 Findings from the interviews . . . 90

4 Discussion . . . 92

Paper D – A Haptic Navigation Aid for the Visually Impaired – Part 1: Indoor Evaluation of the LaserNavigator 97

1 Introduction . . . 99

1.1 Purpose . . . 102

2 Methods . . . 102

2.1 Participants . . . 102

2.2 Test Environment . . . 103

2.3 Task . . . 103

2.4 Observations . . . 104

2.5 Interviews . . . 105

3 Results . . . 105

3.1 Observations . . . 105

3.2 Interviews . . . 108

4 Discussion . . . 110

4.1 Daniel’s Comments . . . 112

Paper E – A Haptic Navigation Aid for the Visually Impaired – Part 2: Outdoor Evaluation of the LaserNavigator 115

1 Introduction . . . 117

1.1 Purpose . . . 119

2 Methods . . . 121

2.1 Participants . . . 121


2.3 Observations And Interviews . . . 121

3 Results . . . 122

3.1 Observations . . . 122

3.2 Interviews . . . 123

4 Discussion . . . 124

4.1 Daniel’s Comments . . . 125

Paper F – Developing a Laser Navigation Aid for Persons with Visual Impairment 129

1 Introduction . . . 131

2 Navigation Aid Review . . . 132

3 Laser Navigators . . . 134

3.1 LaserNavigator Evaluations . . . 136

4 Discussion . . . 140

4.1 Intuitive Navigation Aid . . . 141

4.2 Sensor Integration . . . 141

4.3 System Integration . . . 142

4.4 Three Research Paths . . . 143

5 Conclusions . . . 143


Acknowledgements

This doctoral thesis describes five years of work with navigation aids for visually impaired individuals. The work has been carried out at the Department of Computer Science, Electrical and Space Engineering at Luleå University of Technology. I wish to thank Centrum för medicinsk teknik och fysik (CMTF) for financial support, provided through the European Union.

The multidisciplinary nature of the project has allowed me to work with many different people with diverse backgrounds. This has been a great catalyst for creativity, and has made the work much more fun, interesting and meaningful.

First and foremost, I want to thank my principal supervisor Kalevi Hyyppä, whose great skill, knowledge and creativity have been key assets for the project from start to finish. For me, his ever-present support and assistance have been a large comfort in a world that, to a new doctoral student, can at times be both harsh and confusing. I would also like to thank my assistant supervisors: Håkan Fredriksson, Jan van Deventer and Ulrik Röijezon. They have brought fresh views to the project and have helped make the results both broader in scope and richer in detail.

Further, Maria Prellwitz, Jenny Röding and Lars Nyberg were instrumental in the work with the first evaluation and its associated article; a great experience and learning process. Maria has continued to aid the qualitative analysis process in the later evaluations. I am grateful for that as the articles are far more interesting now than they would otherwise have been.

I am also grateful to Mikael Larsmark, Henrik Mäkitaavola and Andreas Lindner for their work on the LaserNavigator. Further, I would like to acknowledge the support of teachers and other staff at the university who have helped me on the sometimes winding path that started 11 years ago and now comes to an end in the form of this dissertation. Thank you!

Luleå, May 2016
Daniel Innala Ahlmark


Summary of Included Papers

Paper A – Presentation of Spatial Information in Navigation Aids for the Visually Impaired

Daniel Innala Ahlmark and Kalevi Hyyppä

Published in: Journal of Assistive Technologies, 9(3), 2015, pp. 174–181.

Purpose: The purpose of this article is to present some guidelines on how different means of information presentation can be used when conveying spatial information non-visually. The aim is to further the understanding of the qualities navigation aids for visually impaired individuals should possess.

Design/methodology/approach: A background in non-visual spatial perception is provided, and existing commercial and non-commercial navigation aids are examined from a user interaction perspective, based on how individuals with a visual impairment perceive and understand space.

Findings: The discussions on non-visual spatial perception and navigation aids lead to some user interaction design suggestions.

Originality/value: This paper examines navigation aids from the perspective of non-visual spatial perception. The presented design suggestions can serve as basic guidelines for the design of such solutions.

Paper B – Obstacle Avoidance Using Haptics and a Laser Rangefinder

Daniel Innala Ahlmark, Håkan Fredriksson and Kalevi Hyyppä

Published in: Proceedings of the 2013 Workshop on Advanced Robotics and its Social Impacts, Tokyo, Japan.

In its current form, the white cane has been used by visually impaired people for almost a century. It is one of the most basic yet useful navigation aids, mainly because of its simplicity and intuitive usage. For people who have a motion impairment in addition to a visual one, requiring a wheelchair or a walker, the white cane is impractical, leading to human assistance being a necessity. This paper presents the prototype of a virtual white cane using a laser rangefinder to scan the environment and a haptic interface to “poke” at obstacles several meters ahead, without physical contact with the obstacle. By using a haptic interface, the interaction is very similar to how a regular white cane is used. This paper also presents the results from an initial field trial conducted with six people with a visual impairment.

Paper C – An Initial Field Trial of a Haptic Navigation System for Persons with a Visual Impairment

Daniel Innala Ahlmark, Maria Prellwitz, Jenny Röding, Lars Nyberg and Kalevi Hyyppä

Published in: Journal of Assistive Technologies, 9(4), 2015, pp. 199–206.

Purpose: The purpose of the presented field trial was to describe conceptions of feasibility of a haptic navigation system for persons with a visual impairment.

Design/methodology/approach: Six persons with a visual impairment who were white cane users were tasked with traversing a predetermined route in a corridor environment using the haptic navigation system. To see whether white cane experience translated to using the system, the participants received no prior training. The procedures were video-recorded, and the participants were interviewed about their conceptions of using the system. The interviews were analyzed using content analysis, where inductively generated codes that emerged from the data were clustered together and formulated into categories.

Findings: The participants quickly figured out how to use the system, and soon adopted their own usage technique. Despite this, locating objects was difficult. The interviews highlighted the desire to be able to feel at a distance, with several scenarios presented to illustrate current problems. The participants noted that their previous white cane experience helped, but that it nevertheless would take a lot of practice to master using this system. The potential for the device to increase security in unfamiliar environments was mentioned. Practical problems with the prototype were also discussed, notably the lack of auditory feedback.

Originality/value: One novel aspect of this field trial is the way it was carried out. Prior training was intentionally not provided, which means that the findings reflect immediate user experiences. The findings confirm the value of being able to perceive things beyond the range of the white cane; at the same time, the participants expressed concerns about that ability. Another key feature is that the prototype should be seen as a navigation aid rather than an obstacle avoidance device, despite the interaction similarities with the white cane. As such, the intent is not to replace the white cane as a primary means of detecting obstacles.


Paper D – A Haptic Navigation Aid for the Visually Impaired – Part 1: Indoor Evaluation of the LaserNavigator

Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, George Nikolakopoulos, Jan van Deventer, Kalevi Hyyppä

To be submitted.

Navigation ability in individuals with a visual impairment is diminished as it is largely mediated by vision. Navigation aids based on technology have been developed for decades, although to this day most of them have not reached a wide impact and use among the visually impaired. This paper presents a first evaluation of the LaserNavigator, a newly developed prototype built to work like a “virtual white cane” with an easily adjustable length. This length is automatically set based on the distance from the user’s body to the handheld LaserNavigator. The study participants went through three attempts at a predetermined task carried out in an indoor makeshift room. The task was to locate a randomly positioned door opening. During the task, the participants’ movements were recorded both on video and by a motion capture system. After the trial, the participants were interviewed about their conceptions of usability of the device. Results from observations and interviews show potential for this kind of device, but also highlight many practical issues with the present prototype. The device helped in locating the door opening, but it was too heavy, and the idea of automatic length adjustment was difficult to get used to with the short practice time provided. The participants also identified scenarios where such a device would be useful.

Paper E – A Haptic Navigation Aid for the Visually Impaired – Part 2: Outdoor Evaluation of the LaserNavigator

Daniel Innala Ahlmark, Maria Prellwitz, Ulrik Röijezon, Jan van Deventer, Kalevi Hyyppä

To be submitted.

Negotiating the outdoors can be a difficult challenge for individuals who are visually impaired. The environment is dynamic, which at times can make even the familiar route unfamiliar. This article presents the second part of the evaluation of the LaserNavigator, a newly developed prototype built to work like a “virtual white cane” with an easily adjustable length. The user can quickly adjust this length from a few metres up to 50 m. The intended use of the device is as a navigation aid, helping with perceiving distant landmarks needed to e.g. cross an open space and reach the right destination. This second evaluation was carried out in an outdoor environment, with the same participants who partook in the indoor study, described in part one of the series. The participants used the LaserNavigator while walking a rectangular route among a cluster of buildings. The walks were filmed, and after the trial the participants were interviewed about their conceptions of using the device. The results indicate that while the device is designed with the white cane in mind, one can learn to see the device as something different. An example of this difference is that the LaserNavigator enables keeping track of buildings on both sides of a street. The device was seen as most useful in familiar environments, and in particular when crossing open spaces or walking along e.g. a building or a fence. The prototype was too heavy, and all participants requested some feedback on how they were pointing the device, as they all had difficulties with holding it horizontally.

Paper F – Developing a Laser Navigation Aid for Persons with Visual Impairment

Jan van Deventer, Daniel Innala Ahlmark, Kalevi Hyyppä

To be submitted.

This article presents the development of a new navigation aid for visually impaired persons (VIPs) that uses a laser rangefinder and electronic proprioception to convey the VIPs’ physical surroundings. It is denominated the LaserNavigator. In addition to the technical contributions, an essential result is a set of reflections leading to what an “intuitive” handheld navigation aid for VIPs could be. These reflections are influenced by field trials in which VIPs have evaluated the LaserNavigator indoors and outdoors. The trials divulged technology-centric misconceptions regarding how VIPs use the device to sense the environment and how that physical environment information should be provided back to the user. The set of reflections relies on a literature review of other navigation aids, which provides interesting insights into what is possible when combining different concepts.


List of Figures

1.1 The Novint Falcon haptic interface and the SICK LMS111 laser rangefinder, used in the first prototype. . . . 4

1.2 A picture of the second prototype: the LaserNavigator. . . . 8

3.1 The UltraCane, a white cane augmented with ultrasonic sensors and haptic feedback. . . . 21

3.2 A picture of the Miniguide, a handheld ultrasonic mobility aid with haptic feedback. . . . 22

4.1 This figure shows the virtual white cane on the MICA (Mobile Internet Connected Assistant) wheelchair. . . . 28

4.2 The Novint Falcon haptic display. . . . 29

4.3 A simple environment (a) is scanned to produce data, plotted in (b). These data are used to produce the model depicted in (c). . . . 30

5.1 A picture of the latest version of the LaserNavigator. The primary components are the laser rangefinder (1), the ultrasound sensor (2), the loudspeaker (3), and the button under a spring (4) used in manual length adjustment mode to adjust the “cane length”. . . . 34

5.2 Basic architecture diagram showing the various components of the LaserNavigator and how they communicate with each other. . . . 35

B.1 The virtual white cane. This figure depicts the system currently set up on the MICA wheelchair. . . . 70

B.2 The Novint Falcon, joystick and SICK LMS111. . . . 73

B.3 The X3D scenegraph. This diagram shows the nodes of the scene and the relationship among them. The transform (data) node is passed as a reference to the Python script (described below). Note that nodes containing configuration information or lighting settings are omitted. . . . 74

B.4 The ith wall segment, internally composed of two triangles. . . . 75

B.5 The virtual white cane as mounted on a movable table. The left hand is used to steer the table while the right hand probes the environment through the haptic interface. . . . 77

B.6 The virtual white cane in use. This is a screenshot of the application depicting a corner of an office, with a door being slightly open. The user’s “cane tip”, represented by the white sphere, is exploring this door. . . . 79

Falcon haptic interface is used with the right hand to feel where walls and obstacles are located. The white sphere visible on the computer screen is a representation of the position of the grip of the haptic interface. The grip can be moved freely as long as the white sphere does not touch any obstacle, at which point forces are generated to counteract further movement “into” the obstacle. . . . 88

D.1 A photo of the LaserNavigator, showing the laser rangefinder (1), ultrasound sensor (2) and the loudspeaker (3). . . . 100

D.2 The two reflectors (spherical and cube corner) used alternately to improve the body–device measurements. . . . 100

D.3 A picture of the makeshift room as viewed from outside the entrance door. . . . 103

D.4 One of the researchers (Daniel) trying out the trial task. The entrance door is visible in the figure. . . . 104

D.5 Movement tracks for each participant and attempt, obtained by the reflector markers on the sternum. The entrance door is marked by the point labelled start, and the target door is the other point, door. Note that the start point appears inside the room because the motion capture cameras were unable to see part of the walk. Additionally, attempt 3 by participant B does not show the walk back to the entrance door due to a data corruption issue. . . . 106

D.6 This figure shows the three attempts of participant B, with the additional red line indicating the position of the LaserNavigator. Note that attempt 3 is incomplete due to data corruption. . . . 107

E.1 A picture of the LaserNavigator, showing the laser rangefinder (1), the ultrasound sensor (2), the loudspeaker (3), and the button under a spring (4) used for adjusting the “cane length”. . . . 119

E.2 The tactile model used by the participants to familiarise themselves with the route. The route starts at (1) and is represented by a thread. Using the walls of buildings (B1) and (B2) as references, the participants walked towards (2), where they found a few downward stairs lined by a fence. Turning 90 degrees to the right and continuing, following the wall of building (B2), the next point of interest was at (3). Here, another fence on the right side could be used as a reference when taking the soft 90-degree turn. The path from (3) to (6) is through an alley lined with sparsely spaced trees. Along this path, the participants encountered the two simulated crossings (4) and (5), in addition to the bus stop (B5). At (6) there was a large snowdrift whose presence guided the participants into the next 90-degree turn. Building B4 was the cue to perform yet another turn, and then walk straight back to the starting point (1), located just past the end of (B3). . . . 120

E.3 This figure shows three images captured from the videos. From left to right, these were captured: just before reaching (6); just before (5), with one of the makeshift traffic light poles visible on the right; between (3) and (4). . . . 120

F.1 Indoor evaluation. Motion capture cameras at the top with unique reflective identifier on chest, head, LaserNavigator and white cane. Door 3 is closed. . . . 137

F.2 Paths (black) taken by the three participants (one per row) over three indoor trials. The red line shows how they used the LaserNavigator. . . . 138

F.3 Model of the outdoor trial environment. . . . 140


Part I


Chapter 1

Introduction

“The only thing worse than being blind is having sight but no vision.” Helen Keller

1.1 Overview – Five Years of Questions

This section presents my personal chronicle of events spanning from my master’s thesis to this dissertation. The purpose is to give a lightweight introduction, and to highlight the underlying thought process and steps that are often not visible in scientific writing. Being my personal story, this section also serves to outline my own contributions to a project that is really a true team effort.

1.1.1 The Beginning

What does a doctoral student do?

Some time during the later period of my computer science studies, I found myself completely open to the idea of a post-degree continuation in research. Back then I only had a general idea of what that meant, so questions such as the one above naturally formed in my mind.

In engineering studies, you quickly encounter the idea of breaking down problems into smaller pieces which when solved will allow you to solve the larger problem. This not only allows you to tackle more manageable pieces one at a time, but also makes it possible to distribute subproblems across a team of people. This is thanks to the hierarchical nature of things.

So, what about research? A doctoral student does research in order to become an independent researcher. I had started asking around, and the preceding answer was the one I often received. Still, I was not satisfied; you do research, but what does that mean exactly? To answer my original question, I now had to answer a subquestion. The hierarchical nature of things shows up.


After some more inquiry I had a clearer perspective, and knew that doctoral studies were something I would be interested in pursuing. I had figured out that the idea was to work on some project, focusing on some very specific problem, solving it, and writing a lot about it. Thus, when I graduated from the master of science programme, I thought I had a pretty good idea of what lay ahead. I did not.

While asking around at the department, I soon met my soon-to-be principal supervisor, and came to hear of a project that immediately sparked great interest in me. This project was called the Sighted Wheelchair, and the idea was to enable visually impaired individuals to drive a powered wheelchair using the sense of touch to detect obstacles. At this time, the project had already started, and an initial proof-of-concept system had been developed and was just about to be tested. The system scanned its surrounding environment with a laser rangefinder and allowed the user to perceive these scans by touch. My first connection to the project was as a tester of that prototype.

One day while walking through the campus corridors I passed by a team doing something curious – not an unusual encounter at a university. Then from behind me I heard someone call “excuse me”, after which followed a conversation ending with me enthusiastically saying something along the lines of “I would love to”. This was the first time I encountered a haptic interface (the Novint Falcon), and the experience was amazing. Here was a device through which you could experience computer-generated content; a device that was like a computer screen for the hand. The Falcon was originally marketed as a gaming device, although it seemed not to cause the great excitement in that market one might have initially expected. Shortly after my encounter with the Falcon, in February of 2011, I joined the project as a research engineer with the task of further developing the software.

Figure 1.1: The Novint Falcon haptic interface and the SICK LMS111 laser rangefinder, used in the first prototype.

With great eagerness I started looking for the pieces I needed for the software puzzle. That picturesque metaphor is hinting that yet again this was a case of subproblem management. The laser rangefinder would continually scan the environment, and this needed to be reflected both in a graphical model that was displayed on a screen, and in a haptic model that the user would probe with the Falcon. The biggest challenge was to find a good way to present a haptic model that would constantly be changing. A situation can happen where, as the user pushes the handle of the Falcon towards a dynamically changing object, the object might change in a way that leaves the probing position inside the object rather than on the outside surface. This is a known issue with these kinds of haptic interfaces, and is at its core a consequence of one intriguing idea: the user is using the same modality (i.e. touch) for both input (moving the handle) and output (having the handle pushed on by the device). The outcome is described in section 4.2.1.

After a couple of months the first prototype was ready. A short video is available online which shows the system in use [1]. Were we done? An issue had been identified, and a solution had been presented. As a first prototype, there were naturally many practical issues that would need to be dealt with before the system could be put into production. Nevertheless, a solution to the original issue was presented. At this point I started noticing one big difference between research problems and problems you might encounter in an undergraduate textbook: you do not have the solutions manual. This is obvious, for if you did, the problem simply would not be a new one. There is another important consequence of this though: you usually do not have the solutions manual for the subproblems you divide your problem into either. This leads to more questions, which, in order to be answered, inevitably lead to even more questions. It started occurring to me just how deep one can look into a problem that at first glance seems very simple. At the time I had only worked with this for a few months, but I would have years ahead to look into the problems.

The initial prototype was completed, but was it any good? The scientifically apt question we wanted to investigate was whether the user interaction was deemed feasible by the users. More specifically, is haptics a feasible choice to present spatial information non-visually? The Falcon made it possible to “hit” objects in a way that much resembles how a white cane is used, and as such we thought it valuable to look into whether experienced white cane users could use this system without much training. With these questions in mind we decided to perform a qualitative evaluation with visually impaired participants. Details about that event can be found in paper C.

1.1.2 Next Steps

The evaluation showed that haptics indeed seemed to be a good way to convey information about one’s surroundings. After all, the white cane, which is so ubiquitous among the visually impaired, is a haptic device, albeit a very basic one. Its basic nature is also what makes it successful, as it is easy to learn, easy to use, and easy to trust.

The initial prototype had its drawbacks, most notably that the majority of visually impaired individuals who would benefit from such a navigation aid do not use a wheelchair. This led us in the direction of a portable prototype, later manifesting in the form of the LaserNavigator. While writing articles about the first prototype and attending conferences (notably a robotics conference in Tokyo), I was also involved in developing software for the next prototype.

The team working on the project had grown and changed, but the core ideas were still the same: we wanted to make a portable device with a user interface retaining the simplicity of the first prototype. Unfortunately for us, Newton’s third law makes creating such an interface a challenge. For the user to feel a force (as was the case with the first prototype), there has to be an equal but opposite force; the device cannot push your hand back unless it has something to push against. In the case of the first prototype, the whole system was mounted on a wheelchair, meaning the haptic robot could push on the user’s hand by remaining stationary. Fortunately, such directed force feedback is not the only way to provide haptic stimuli, but would any other kind be comparable, or only a bad compromise? Also, a laser rangefinder able to automatically scan the room, as used on the wheelchair, would be far too bulky for a portable device, which points to another problem: how, and what, to measure? At this point, the number of new unresolved issues had grown to the extent where they could easily make up another doctoral project or two. It would seem that a major part of a doctoral student’s work is to pose new and relevant questions. The hierarchical nature of things strikes again, making sure that every completed puzzle is shattered – broken down into much smaller pieces than before.

1.1.3 The Second Prototype

To quickly test some ideas, we used a smartphone connected to an Arduino electronics prototyping board. The phone’s built-in vibrator initially served as the haptic feedback, and the Arduino board was connected to a laser rangefinder, albeit not a scanning one, as a distance measurement unit. Vibration actuators are not uncommon in navigation aids (see e.g. the Miniguide in section 3.1), although in such devices ultrasound is typically used to measure distances. The notion of using ultrasound in those cases is perfectly legitimate, given that the purpose of many such devices is to alert the user to the presence of, and often the distance to, nearby obstacles. Our core idea was a bit different, and makes ultrasound a poor option.

Imagine taking a quick peek inside a room. This short glance already gives enough information to be able to move about the room safely. Our brains have superb processing abilities for this very purpose, making the task of moving about safely effortless. Without vision, exploring a room needs to be facilitated through audition and touch, where the auditory landscape (the soundscape) can provide the big picture while moving about and feeling objects are the key to the details. The equivalent of the quick peek is a far more laborious process of systematically moving about and building the mental view of the room piece by piece. The white cane extends the reach of the arm, but at the cost of surface details, though even with the cane the range is limited compared to the eyes. The idea of providing a “longer cane” seemed a perfect fit for a laser rangefinder. The unit we first used was able to measure distances up to 50 m, 12 times per second, with a maximum error of about 10 cm at 10 m.

With a hardware platform ready, the next issue was how to use the vibration actuator effectively. A very interesting stage of the project followed wherein I experimented with different ideas. My goal was to find the way that felt most promising, so that we later could perform another evaluation with target users.

In the typical case of this kind of vibration feedback, some parameter is varied depending on the measured distance, the most commonly used ones being frequency or burst frequency. One novel thing which made my experiments even more interesting was yet another parameter in the equation: another distance measurement.

Because of the great range of the laser rangefinder (50 m), the system would have a use outside in large open spaces, but how do we provide meaningful feedback for such ranges while still retaining the ability to discriminate closer objects? Another physicist comes to mind here, Heisenberg, as it seems we would have to choose one at the cost of the other. Commercial navigation aids such as the UltraCane and the Miniguide (see 3.1) have a button or switch that allows the user to set a maximum distance that the device reacts to (think “virtual cane length”), but we opted for a completely different approach.

On the device and facing the user, an ultrasound sensor is mounted. This continually measures the distance from the device to the closest point on the user’s body. Instead of using a button or switch to set a maximum distance, we could now use the body–device measurement, which meant that the user could vary said length simply by moving the arm closer to or further away from their body. A physical analogy would be a very long telescopic white cane whose length would automatically be varied when the user moves it further away from or closer to their body. This way, when the user wants to examine close objects, they hold the device close, whereas if they want to detect distant buildings for example, they would reach out far with the device.

Having this additional parameter begged the question of how to relate it to the “cane length”. A simple solution that turned out to be quite acceptable indoors is to multiply the body–device distance by 10, meaning that if the user holds the device 50 cm out from their body, they would have a 5 m long “virtual cane”.
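The mapping from arm extension to virtual cane length is simple enough to sketch in a few lines of Python. The fragment below only illustrates the idea described above; the function names, the explicit scale constant and the 50 m clamp are assumptions made for the sketch, not the prototype’s actual code.

LENGTH_SCALE = 10.0   # body-device distance (m) -> virtual cane length (m)
MAX_RANGE_M = 50.0    # upper bound given by the laser rangefinder's range

def virtual_cane_length(body_device_m: float) -> float:
    """Map how far the device is held from the body to a virtual cane length."""
    return min(body_device_m * LENGTH_SCALE, MAX_RANGE_M)

def object_within_cane(laser_distance_m: float, body_device_m: float) -> bool:
    """True if the laser measurement falls inside the current virtual cane."""
    return laser_distance_m <= virtual_cane_length(body_device_m)

# Holding the device 0.5 m from the body gives a 5 m "cane"; an object
# measured at 3.2 m would then trigger feedback, whereas one at 8 m would not.
assert virtual_cane_length(0.5) == 5.0
assert object_within_cane(3.2, 0.5) and not object_within_cane(8.0, 0.5)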

Having the body–device distance provides another interesting opportunity. Instead of trying to convey the actual device–object distance with vibrations, we could let the user infer that based on how far they held out their arm. Similarly, when hitting an object with a white cane, the distance to the object is established by knowing the length of the cane and how far away from the body it is held. This idea was intriguing, and prompted me to look further into the way human beings know where their limbs are without looking at them, known as proprioception.

The experiments led me to several alternatives which I found feasible. Those could be divided into two categories: simple and complex feedback. In simple feedback mode, the vibrations only signal the presence of an object, whereas complex mode tries to convey the distance to said object as well. I personally prefer the elegance of simple feedback, because it behaves very much like a physical cane. In this case, the distance to the object is inferred from the length of the cane and how far out it is held.
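As a rough illustration of the two categories, consider the sketch below. The simple mode only reports presence within the virtual cane, while the complex mode also encodes distance; the burst-rate mapping shown for the complex mode is one plausible choice (burst frequency varied with distance, as mentioned earlier), not the exact scheme implemented in the prototype.

def simple_feedback(laser_distance_m: float, cane_length_m: float) -> bool:
    """Simple mode: vibrate only when something is within the virtual cane."""
    return laser_distance_m <= cane_length_m

def complex_feedback(laser_distance_m: float, cane_length_m: float,
                     min_hz: float = 2.0, max_hz: float = 20.0) -> float:
    """Complex mode: additionally encode distance as a vibration burst rate (Hz).

    Closer objects give faster bursts; a return value of 0.0 means no feedback.
    """
    if cane_length_m <= 0.0 or laser_distance_m > cane_length_m:
        return 0.0
    # Normalise the distance to 0 (at the hand) .. 1 (at the cane tip).
    t = max(0.0, min(1.0, laser_distance_m / cane_length_m))
    return max_hz - t * (max_hz - min_hz)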

1.1.4 The LaserNavigator

The next step of the development process was to skip the phone and build something custom. The phone added considerable weight to the system, and the control of the vibrator was limited. With a lot of help from many people, we soon swapped the phone for a custom microcontroller and vibration actuator. At this point there were some issues to attend to: the update rate of 12 Hz for the laser felt too slow, and the spin-up time of the vibrator was too significant. Fortunately, an updated laser rangefinder had become available, featuring an update rate of 32 times per second. As for vibrations, we attached a small loudspeaker on which the user places their finger. Speakers have an insignificant reaction time, and the increased update rate of the laser provided a much better experience.

I implemented the different feedback techniques I had previously developed, and through testing concluded that I was still in favour of the simpler alternatives. My justification for this is that simple haptic feedback is intuitive, something we are used to. Voice guidance from GPS devices is similarly intuitive, as we can draw upon experiences gained from communicating with fellow human beings. Note that simple feedback techniques do not necessarily equate to a rich experience. Feedback can be made highly complex, providing a lot more information, but at the cost of requiring much more training to use efficiently. If we accept a long training period, it may seem that a complex solution is always better, but we need to look at other factors such as enjoyment. Is the system fun to use? As a thought experiment, consider this art metaphor. Imagine looking at a beautiful painting. Now, we can use a camera to capture that painting with accurate colours and very high resolution. Then we could take this information and convey it by a series of audio frequencies, corresponding to the colours. This way, we have reproduced the painting, but it probably does not sound as beautiful as it looks. This is not the fault of the camera being too bad; the impressions from our senses have evolved to be far more than the sensory information itself. Such a sensory substitution device would make a beautiful painting accessible to someone who has never seen, but there likely is better “music” out there.

1.1.5 Two Trials

Figure 1.2: A picture of the second prototype: the LaserNavigator.

In the late autumn of 2015, trial time was once again upon us. This time, we wanted to perform two trials: one initial indoor trial as a first feasibility check and a first opportunity for potential users to influence development, then a more elaborate outdoor trial to see how the LaserNavigator would work in a more practical setting.

The indoor task was finding doorways, and was carried out in the Field Robotics Lab (FROST Lab) at the department. A makeshift rectangular room with walls and doors was constructed inside the lab, and the participants had to find and walk to a randomly chosen open door. The task turned out to be more difficult and time-consuming than we had expected, with more training needed than was provided. Additionally, we received a lot of feedback regarding the LaserNavigator itself and its potential uses in everyday situations. One big decision made after that trial was to dismiss automatic length adjustment in favour of a manual mode, controlled by a button. We noticed that the automatic mode was difficult to grasp, and a manual mode where the length is fixed during use behaves more like a real cane, and should thus be easier to understand. Additionally, the automatic mode leads to a compressed depth perception, which is manageable with practice, but is not intuitive. The modified LaserNavigator has a button, which when pressed will set the length based on the body–device distance. Additionally, a certain number of feedback “ticks” are given to tell the user roughly how long the “virtual cane” is.
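A small sketch of the manual mode just described: a button press latches the current body–device distance into a fixed cane length, and a few feedback ticks announce roughly how long it is. The one-tick-per-metre convention and the timing values are assumptions made for illustration only.

import time

class ManualCaneLength:
    def __init__(self, scale: float = 10.0, max_length_m: float = 50.0):
        self.scale = scale                # body-device distance -> cane length
        self.max_length_m = max_length_m  # limited by the laser rangefinder
        self.length_m = 5.0               # the currently fixed "virtual cane"

    def on_button_press(self, body_device_m: float, tick) -> int:
        """Latch a new cane length and emit ticks indicating its rough size."""
        self.length_m = min(body_device_m * self.scale, self.max_length_m)
        n_ticks = max(1, round(self.length_m))  # assumed: about one tick per metre
        for _ in range(n_ticks):
            tick()              # e.g. a short pulse on the loudspeaker
            time.sleep(0.05)
        return n_ticks

    def object_within_cane(self, laser_distance_m: float) -> bool:
        """During use the length stays fixed, like a real (if very long) cane."""
        return laser_distance_m <= self.length_m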

With the improved LaserNavigator, it was soon time for the outdoor trial. The participants who had performed the indoor trial partook in this new test, which consisted of walking a closed path among a cluster of buildings on campus. Before the actual walks, the participants got some time to familiarise themselves with the environment with the help of a tactile model. All participants liked the changes made to the LaserNavigator, and one participant in particular really enjoyed the experience and the ability to use a “very long cane” to keep to the path. One aspect which I find intriguing surfaced during this trial, namely the distinction between an obstacle avoidance device and a navigation aid, and how these two kinds of devices are linked. During the project, this is something we have given much thought. The distinction becomes blurred when one has access to great range, where some objects are used as a guiding reference and would not otherwise be seen as an obstacle to go up to and poke with the white cane. From both observations and interviews from this latest trial, it appears that the participants went through this kind of thought process. At first, the device was seen as a “new kind of white cane”. It seems that the device is first seen as a white cane with mostly limitations, but is later reinterpreted as a navigation aid, at which point possibilities surface. Given the design choice of trying to mimic the interaction with a white cane, it is perhaps not surprising that the participants thought of the device in terms of a cane, with implied limitations. The fact that this familiarity can be utilised is encouraging, as it can lead to an easier-to-learn device. The challenge then is to go beyond the familiar concept and realise that the device is something more – something different from a white cane.

1.1.6 The Finish Line?

The evaluations of the LaserNavigator mark the final part of my time as a doctoral student. When I started working on this project, it was like having a small eternity ahead. In hindsight, it is easy to see just how small this “eternity” was, and it is time to reflect on what has been accomplished. This project has contributed to the body of knowledge concerning navigation aids from the perspective of potential users. At the start of the project, when I scoured the scientific literature on this subject, I was left with some concerns about the lack of answers to some basic questions, and the often not so prominent user participation. Over the years, we have obtained knowledge based on very early trials with potential target users. In particular, the final trial shows that it is possible to mimic the interaction of a white cane, but use it for a different purpose. The interviews have also given us many ideas of what constitutes a good navigation aid.

Finally, we can ask: “are we done yet?”

The hierarchical nature of things assures that there is always the next challenge to tackle, and that the answer to the above question might never be “yes”. It is just like athletics class at school when running around the oval track. I remember times when, exhausted, I’d reached the finish line and heard, “and you thought you were done?”

Let us hope that in this case, the track is an inward spiral.

1.2 Aims, Contributions and Delimitations

The aim of the work described in this thesis was to further the understanding of how spatial information should be presented non-visually. To investigate this, navigation aids have been developed, and subsequently evaluated, with visually impaired individuals. This can be formulated as the following research questions:

• How should spatial information be presented non-visually?

• What can feasibly be done with current haptic feedback technologies?

• What are users’ conceptions of such technologies?

The main contributions of this thesis are in the field of user interaction, more specifically on the problem of how to convey spatial information non-visually, primarily to visually impaired individuals. While this thesis focuses on navigation aids for the visually impaired, they are not the only group that benefits from this work. Non-visual interaction focused on navigation is of interest to e.g. firefighters as well, who can end up in situations where they have to find their way around in a smoke-filled building. In addition, advances in non-visual interaction in general are useful for anyone on the move. Oulasvirta et al. [2] note that when mobile, cognitive resources are allocated to monitoring and reacting to contextual events, leading to interactions being done in short bursts with interruptions in between. In particular, vision is typically occupied with navigation and obstacle avoidance (not to mention driving a car), so using a mobile device simultaneously may lead to accidents.

While the focus for this work has been on haptic solutions, other sensory channels (notably audition) are also relevant for navigation aids. Haptics is an appealing choice for the specific task of conveying object location information, and audition has important drawbacks in this regard (see chapter 2 and paper A).

In numerous places throughout this text, both commercial and prototype navigation aids are mentioned. These do not form an exhaustive list, but are chosen based on the novelty they bring to the discussion, be it an interaction or functionality aspect.

1.3 Terminology

Terms regarding visual impairment, as well as disabilities in a more general sense, are many and subject to change over time. For example, the term handicapped might be offensive today, despite the fact that the term itself originated as a replacement for other terms. Throughout this text, visual impairment and visually impaired are used.


They refer specifically to the underlying problem, the impairment, and this may in turn be the reason for a disability.

A further challenge is classifying degrees of visual impairment. Terms such as blind, low vision, partially sighted and mobility vision are troublesome as they are not clearly defined. Such definitions are not easily established even if objective eye measurements are used. For this thesis, precise judgement of visual ability (acuity) is not important, but the categorisation is. In a navigation context, the key piece of information is how vision aids the navigation task. A person who can see some light has an advantage over a person unable to see light, and a person able to discern close objects has further advantages. Throughout this thesis, and unless otherwise stated, visually impaired is used to denote an individual or group of individuals with a disadvantage in a navigation context compared to what is considered normal sight.

1.4 Thesis Structure

The thesis is organised as follows:

Part I

• Chapter 1 contains a personal chronicle of events, some notes on terminology, and scope of the thesis.

• Chapter 2 provides a background on visual impairment and non-visual navigation. It also discusses the physiological systems relevant to this task, as well as haptic feedback technologies.

• Chapter 3 discusses non-visual spatial perception and existing navigation aids, both commercial and research prototypes.

• Chapter 4 describes the Virtual White Cane and the conducted evaluation.

• Chapter 5 is about the LaserNavigator and the two associated evaluations.

• Chapter 6 discusses results and the research questions formulated in this chapter.

• Chapter 7 concludes the first part of the thesis.

Part II

• Paper A discusses non-visual spatial perception in a navigation context, and pro-poses some interaction design guidelines.

• Paper B describes the Virtual White Cane in more detail.

• Paper C is about the Virtual White Cane field trial.


• Paper D is the first part in a series of two about the LaserNavigator. This paper focuses on the first indoor trial.

• Paper E is the second part regarding evaluating the LaserNavigator, this time in an outdoor setting.

• Paper F presents a summary and reflections on the development and evaluation process for the entire project.


Chapter 2

Background

2.1 Visual Impairments and Assistive Technologies

Vision is a primary sense in many tasks, so it comes as no surprise that losing it has a large impact on an individual’s life. The World Health Organization (WHO) maintains a so-called fact sheet containing estimates of the number of visually impaired individuals and the nature of their impairments. The October 2013 fact sheet [3] estimates the total number of people with any kind of visual impairment at 285 million, and that figure is not likely to decrease as the world population gets older. Fortunately, WHO notes that visual impairments as a result of infectious diseases are decreasing, and that as many as 80% of cases could be cured or avoided.

Thankfully, assistive technology has played, and continues to play, an important role in making sure that visually impaired people are able to take part in society and live more independently. Louis Braille brought reading to the visually impaired community, and a couple of hundred years later people are using his system, together with synthetic speech and screen magnification, to read web pages and write doctoral theses. Devices that talk or make other sounds are abundant today, ranging from bank ATMs to thermometers, egg timers and liquid level indicators to put on cups. Despite all of these successful innovations, there is still no solution for independent navigation that has reached a wide impact [4]. Such a solution would help visually impaired people move about independently, which should improve their quality of life. A technological solution could either replace or complement the age-old solution: the white cane.

It has likely been known for a long time that poking at objects with a stick is a good idea. The white cane, as it is known today, got its current appearance about a hundred years ago, although canes of various forms have presumably been used for centuries. Visually impaired individuals rely extensively on touch, and a cane is a natural extension of the arm. It is easy to learn, easy to use, and if it breaks you immediately know it. These characteristics have made sure that the cane has stood the test of time. Despite being close to perfect at what it does, notifying the user of close-by obstacles, the white cane is also very limited. Because of its short range, it does not aid significantly in navigation.

2.1.1 Navigation

Navigating independently in unfamiliar environments is a challenge for visually impaired individuals. The difficulty of going to new places independently might decrease participation in society and can have a negative impact on personal quality of life [5]. The degree to which this affects a certain individual is a very personal matter, though. Some are adventurous and quite successful in overcoming many challenges, while others might not even wish to try. The point is that people who are visually impaired are at a disadvantage to begin with.

The emphasis on unfamiliar environments is intentional, as it is possible to learn how to negotiate well-known environments with confidence and security. Even so, the world is a dynamic place, and some day the familiar environment might have changed in such a way as to be unfamiliar. As an example, this happens in areas that have a lot of snow during the winters.

Navigation is difficult without sight as the bulk of cues necessary for the task are visual in nature. This is especially true outdoors, where useful landmarks include specific buildings and street signs. Inside buildings there are a lot of landmarks that are available without sight, such as the structure of the building (walls, corridors, floors), changes in floor material and environmental sounds. Even so, if the building is unfamiliar, any signs and maps that may be found inside are usually not accessible without sight.

There are two parts to the navigation problem: firstly, the current position needs to be known; secondly, the way to go needs to be determined. There are various ways to identify the current position, but one way to think about them is as fingerprints. A location is identified by some unique feature, such as a specific building nearby. Without sight, it is usually difficult to tell a building from any other, and so other landmarks obtained through local exploration may be necessary to establish the current location. The next problem, knowing where to go, can then be described as knowing how to move through a series of locations to reach the final location. This requires relating locations to one another in space. Vision is excellent at doing this because of its long range. It is often possible to directly see the next location. This is not possible without sight, at least not directly. The range of touch is too limited, while sound, although having a much greater range, does not often provide unique enough fingerprints of locations. The solution to this problem is to use one’s own movements to relate locations to one another in space. Unfortunately, human beings are not very good at determining their position solely based on their own movements [6]. Without vision to correct for this inaccuracy, visually impaired individuals must instead have many identifiable locations close to each other. Consider the task of getting from a certain building to another (visible) building. With sight there is usually no need for any intermediate steps between those. On the contrary, the same route without sight will likely consist of multiple landmarks (typically intersections and turns). Additionally, a means to avoid obstacles along the way is necessary.

Vision solves both by being able to look at distant landmarks as well as close-by obstacles. The white cane, on the other hand, is an obstacle avoidance device working at close proximity to the user. An obstacle avoidance device which possesses a great reach could address this issue, as well as aid in navigation. The prototypes presented in this thesis provide an extended reach, limited only by the specifications of the range sensors.
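To make the two-part view of the navigation problem concrete, the sketch below models a route as an ordered series of locations, each recognised by a local “fingerprint”. It is purely a conceptual illustration; the class, the example landmarks and the matching rule are invented for this sketch and do not describe any system presented in this thesis.

from dataclasses import dataclass
from typing import List

@dataclass
class Location:
    name: str
    fingerprint: str  # a feature recognisable through local, non-visual exploration

class Route:
    """A route as a series of locations that must be recognised one by one."""

    def __init__(self, locations: List[Location]):
        self.locations = locations
        self.index = 0  # which location is expected next

    def observe(self, fingerprint: str) -> str:
        if self.index >= len(self.locations):
            return "arrived"
        if fingerprint == self.locations[self.index].fingerprint:
            self.index += 1
            if self.index == len(self.locations):
                return "arrived"
            return "next: " + self.locations[self.index].name
        return "keep exploring"

# A sighted walker may need only the start and the (visible) goal; a
# non-visual route typically needs many intermediate landmarks instead.
route = Route([Location("corner", "tactile paving ends"),
               Location("crossing", "traffic light pole"),
               Location("entrance", "door recess")])
print(route.observe("tactile paving ends"))  # -> "next: crossing"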

2.2 Perception, Proprioception and Haptics

This section gives a brief introduction to the physiological systems that are relevant for this work. Firstly, spatial perception is discussed in the context of navigation. Secondly, the sense of touch is explained in the sections on proprioception and haptics.

2.2.1 Spatial Perception

Spatial perception, or more broadly spatial cognition, is concerned with how we perceive and understand the space around us. This entails being able to gather information about the immediate environment (e.g. seeing where objects are in a room), and organising this into a mental representation, often referred to as a cognitive map.

Vision plays a major role in gathering spatial information. A single glance about a room can provide all necessary knowledge about where objects are located as well as many properties of said objects. Furthermore, thanks to the way vision has evolved, this process is almost instantaneous and completely effortless.

For individuals not possessing sufficient vision to gather this information, spatial knowledge can be very challenging to acquire. A natural question to pose is whether blind individuals have a decreased spatial ability, as the primary means of gathering such knowledge is diminished. Fortunately, this can be tested by using a knowledge acquisition phase that does not depend on vision. Schmidt et al. [7] did this by verbally describing environments to both sighted and blind individuals. The participants were then tested on their knowledge of this environment. While the researchers found that the average performance in the blind group was worse than in the sighted group, they also noticed that those blind individuals who were more independent in their daily lives (walking about by themselves) performed as well as their sighted peers. This suggests that the mental resources and tools for spatial perception are not inherently tied to vision. It also highlights the importance of spatial training for the visually impaired.

A hundred years ago, this kind of training was not usually provided. In fact, it was even questioned whether blind people could perceive space at all. Lotze [8] expressed the opinion that space is inherently a visual phenomenon, incomprehensible without vision. This extremely negative view was perhaps not as odd back then, when blind people were not encouraged to develop spatial skills, but it is absurd today, when we see blind people walking about on the streets by themselves. The question remains: how do they do it?

In their article A Review of Haptic Spatial Abilities in the Blind [9], Morash et al. give a more detailed historical account as well as an overview of contemporary studies.


To fill the gap that the missing vision creates, audition and touch are used. Audition is surprisingly capable, and some people become very proficient in using it (see e.g. [10]). Note that audition is not only used to judge the position of objects that make sounds, but also that of silent objects. This ability, often referred to as human echolocation, can provide some of the missing information about the environment, a typical example being knowing where nearby walls are.

The presence of a large piece of furniture in a room can be inferred from the way its physical properties affect sounds in the room, but aside from its presence, there is usually nothing auditory revealing that it is a bookcase. Many things can be logically inferred, but to get a direct experience it may be necessary to use the sense of touch. While audition can provide some of the large-scale spatial knowledge, touch can give the details about objects. Note that unlike with vision, exploring a room by touch implies walking around in the room, which means that the relationships among objects have to be maintained through this self-movement.

2.2.2 The Sense of Touch and Proprioception

What is the sense of touch? The answer to that question is not as readily apparent as for vision or audition. What we colloquially refer to as touch is in fact many types of stimuli, often coinciding. As an example, consider an object such as a mobile phone. Tactile mechanoreceptors (specialised nerve endings) in the skin enable the feeling of texture; the screen might feel smoother than the back. Thermoreceptors mediate the feeling of temperature; the phone might feel warmer when its battery is being charged. Proprioceptors, found mostly in muscles and joints, tell the brain where parts of the body are located; by holding the phone you know how big it is, even without looking at it, and you feel its weight. These and a few other receptors combine to create our sense of touch.

Touch can provide much of the detail that vision can and audition cannot. Because of this, touch is key in such diverse tasks as reading braille and finding a door. In particular, proprioception is crucial when walking with a white cane. The cane behaves like an extended arm, and even though the user does not touch objects directly, the proprioceptive and auditory feedback it provides is often enough to give a good characterisation of the surface. Proprioception (from the Latin proprius, meaning one's own, and perception) is our sense of where our body parts are and how they move. A detailed description, along with a historical account, can be found in a review article by Proske and Gandevia [11].

Inside muscles, sensory receptors known as muscle spindles detect muscle length and changes in muscle length. This information, along with data obtained from receptors in joints and tendons, is conveyed through the central nervous system (CNS) to the brain, where it is processed and perceived consciously or unconsciously. Similarly, receptors in our inner ears (collectively known as the vestibular system) detect rotations and accelerations of the head.

Another important aspect of touch is tactile sensation, mediated by nerve cells (notably mechanoreceptors) spread throughout the skin. While proprioception can be used to get a grasp of the position and size of an object, it is through those mechanoreceptors that one can perceive the texture and shape of the object. The next section discusses the technologies that make use of these physiological systems: haptics.

2.2.3 Haptic Feedback Technologies

Haptic (from the Greek haptós) literally means ’to grasp something’, and the field of haptic technology is often referred to simply as haptics. Incorporating haptics into products is nothing new, yet for a long time throughout its brief history, such technologies were only available in very specific applications. Early examples include conveying external forces through springs and masses to aircraft pilots, or remotely operating a robotic arm handling hazardous materials in a nuclear plant. A summary of the historical development of haptic technologies can be found in a paper by Stone [12].

One of the most common encounters people have with haptic feedback today is with their mobile phones demanding attention by vibrating. Such units are often electromechanical systems, either unbalanced electric motors (known as Eccentric Rotating Mass (ERM) actuators), or mass-and-spring systems (referred to as Linear Resonant Actuators (LRAs)). For a summary and comparison of these and other types of vibration actuators, see [13].
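
As a rough sketch of the difference (with assumed, not device-specific, parameters): an ERM is typically driven with a DC level or PWM duty cycle, which couples vibration frequency and amplitude, whereas an LRA is driven with an alternating signal at its mechanical resonance, so that the amplitude can be varied independently of the frequency.

    import math

    def erm_drive(intensity, max_speed_hz=200.0):
        """ERM sketch: the requested intensity (0..1) sets a PWM duty cycle;
        the rotation speed, and with it both vibration frequency and amplitude,
        follows that duty cycle (assumed to be roughly linear here)."""
        duty = min(max(intensity, 0.0), 1.0)
        vibration_hz = duty * max_speed_hz
        return duty, vibration_hz

    def lra_sample(intensity, t, resonance_hz=175.0):
        """LRA sketch: an AC drive at the (assumed) resonant frequency; only the
        amplitude changes with the requested intensity, the frequency stays fixed."""
        amplitude = min(max(intensity, 0.0), 1.0)
        return amplitude * math.sin(2.0 * math.pi * resonance_hz * t)

    print(erm_drive(0.5))                    # half intensity: 50 % duty, ~100 Hz vibration
    print(round(lra_sample(0.5, 0.001), 3))  # one sample of the 175 Hz drive signal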

Haptic interfaces similar to the one used by the Virtual White Cane (see papers B and C) have found their place in surgical simulations (e.g. the Moog Simodont Dental Trainer [14]), but should also be of interest to the visually impaired community. Such interfaces work by letting the user move a handle around in a certain workspace volume, with the interface able to exert forces on the handle depending on its position. This means that it is possible to experience a three-dimensional model by tracing its contours with the handle. Note that this mechanism provides true directional force feedback, as opposed to just vibrating.
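
How such an interface can make a virtual surface feel solid is commonly explained with penalty-based rendering: when the handle position penetrates the surface, a spring-like restoring force proportional to the penetration depth is commanded to the motors. The sketch below illustrates the idea for a single flat wall; the stiffness and wall position are arbitrary assumptions, not values from the Virtual White Cane, and a real device would evaluate this in a fast control loop.

    def wall_force(handle_pos, wall_x=0.10, stiffness=800.0):
        """Penalty-based rendering of a flat wall at x = wall_x (metres).

        handle_pos: (x, y, z) position of the haptic handle in metres.
        Returns the force vector (newtons) to command: zero in free space,
        and a spring force pushing the handle back out when it has
        penetrated the wall."""
        x, y, z = handle_pos
        penetration = x - wall_x
        if penetration <= 0.0:
            return (0.0, 0.0, 0.0)                    # free space: no force
        return (-stiffness * penetration, 0.0, 0.0)   # push the handle back out

    # Example: handle 5 mm inside the wall gives a 4 N restoring force
    print(wall_force((0.105, 0.0, 0.0)))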

For the visually impaired, one key development was the refreshable braille display, which made the equivalent transition from static paper to a dynamic computer screen. Haptics has also been incorporated into navigation aids (see section 3.1), typically in the form of vibration actuators giving important alerts such as the presence of an obstacle or a direction change. One major advantage of using haptic feedback in a navigation aid is that it does not interfere with audio from the environment. A drawback is that while complex information can be conveyed haptically, doing so efficiently is difficult and would likely require much training on behalf of the users.

While touch input in the form of touchscreens is now common, the corresponding output side is still missing. Work on tactile displays is ongoing, and is at a stage where many innovative ideas are being tested (see e.g. [15, 16, 17]). When mature, this technology will be very significant for the visually impaired, perhaps as significant as the braille display.


Chapter 3

Related Work

3.1 Navigation Aids

Over the last decades, many attempts have been made at creating navigation aids for the visually impaired. These come in numerous forms and serve a variety of functions, and are alternatively known as electronic travel aids (ETAs) or orientation and mobility (ORM) aids. While there have been many innovative ideas, no device to date has become as ubiquitous as the white cane, with most attaining only minor impact among the visually impaired. A 2007 UK survey [4] of 1428 visually impaired individuals showed that only 2% of them used any kind of electronic travel aid, yet almost half (48%) of the participants expressed that they had some difficulty going out by themselves.

Below is an overview of some navigation aids, both research prototypes and commercial products.

3.1.1 GPS Devices and Smartphone Applications

The Global Positioning System (GPS) has since its military infancy reached widespread public use. It may thus not come as a surprise that much effort has been put into bringing this technology to visually impaired individuals. Perhaps the most successful GPS devices offering completely non-visual interaction are those in the Trekker family of products by Humanware (e.g. the Trekker Breeze [18]). The most basic use of such devices is to simply walk about with the device turned on, whereby it announces street names, intersections and nearby points of interest by synthetic speech. Additionally, typical turn-by-turn guidance is also possible, and some special functions are provided. Examples include “Where am I?”, which describes the user’s location relative to nearby streets and points of interest such as shops, and a retrace function allowing users to retrace their steps back to a known point on the route where they went astray.
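
Functionally, “Where am I?” can be thought of as a nearest-neighbour query over stored points of interest. The following sketch (with invented names and coordinates, and a plain great-circle distance) is only meant to illustrate the idea; an actual product would also consider the street network and the user's heading.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS84 coordinates."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def where_am_i(position, points_of_interest, max_dist_m=200):
        """Return spoken-style descriptions of nearby points of interest."""
        lat, lon = position
        nearby = []
        for name, (plat, plon) in points_of_interest.items():
            d = haversine_m(lat, lon, plat, plon)
            if d <= max_dist_m:
                nearby.append((d, name))
        return [f"{name}, about {int(round(d, -1))} metres away"
                for d, name in sorted(nearby)]

    # Hypothetical data for illustration only
    pois = {"Main Street bus stop": (65.5848, 22.1547),
            "Corner shop": (65.5855, 22.1560)}
    for line in where_am_i((65.5850, 22.1550), pois):
        print(line)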

The advent of accessible smartphones led to apps specifically designed for visually impaired users. GPS applications such as Ariadne [19] are available, providing many of the features of the Trekker family of products mentioned above. Another such solution that has generated much attention recently is BlindSquare [20]. The surge of interest in this app may be due to the fact that, unlike Ariadne and typical GPS solutions, BlindSquare uses crowdsourced data from OpenStreetMap and FourSquare. The use of these services makes the app into a “Wikipedia for maps” where user contribution is key to success. This overcomes one of the fundamental limitations of most GPS systems: the use of static data. Additionally, BlindSquare tries to overcome the limitations of using GPS indoors by instead placing Bluetooth beacons with relevant information throughout the building. The team has demonstrated this usage in a shopping centre, where the beacons contain information about the stores and other relevant landmarks such as escalators and elevators. BlindSquare and other similarly connected solutions have large potential, but users must understand the implications of open data.
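
The beacon approach can be pictured as a lookup from beacon identifiers to stored announcements, where the beacon that currently appears strongest is the one read out. The sketch below is purely illustrative: the identifiers, signal strengths and messages are invented, and a real deployment would have to filter the notoriously noisy received signal strength (RSSI) values over time.

    # Hypothetical beacon database: identifier -> announcement
    BEACONS = {
        "beacon-entrance": "Main entrance, information desk to the right",
        "beacon-escalator": "Escalator to the second floor",
        "beacon-shop-12": "Grocery store, opening hours 8 to 21",
    }

    def announce(scans, min_rssi=-80):
        """Pick the announcement of the strongest (least negative RSSI) known beacon.

        scans: dict of beacon identifier -> RSSI in dBm from one Bluetooth scan.
        Returns the announcement string, or None if no known beacon is close enough."""
        candidates = [(rssi, bid) for bid, rssi in scans.items()
                      if bid in BEACONS and rssi >= min_rssi]
        if not candidates:
            return None
        _, best = max(candidates)
        return BEACONS[best]

    print(announce({"beacon-escalator": -72, "beacon-shop-12": -85, "unknown": -60}))
    # prints "Escalator to the second floor"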

3.1.2 Devices Sensing the Surrounding Environment

As an alternative to relying on stored maps, devices can use sensors to acquire essential information about the environment surrounding the user. Challenges with such approaches include what information should be collected and how, and also the manner in which said information is presented to the user. Many such devices are designed to alert the user of obstacles beyond the reach of the white cane.

Typically, sensing solutions utilise ultrasonic sensors to measure the distance to nearby objects. Such devices come in the form of extensions to the white cane (e.g. UltraCane [21], figure 3.1) or standalone complementary units such as Miniguide [22], shown in figure 3.2. Both of these have a selectable maximum range (up to 4 m for UltraCane and 8 m for Miniguide) beyond which objects are not reported. Similarly, both devices convey the distance by vibrating in short bursts whose frequencies vary with the measured distance. A major difference between the two is that UltraCane has two vibration actuators as well as two ultrasound sensors, one measuring forwards while the other alerts the user to obstacles at head height. An important property of ultrasound sensors is the beam spread, which may or may not be advantageous depending on what is desired. They are excellent for alerting the user to the presence of objects, but are a poor choice if detailed information is desired. In such cases, optical sensors are a better option.
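
The distance-to-vibration mapping used by these devices can be pictured as an inverse relation between the measured range and the pause between vibration bursts, with readings beyond the selected maximum range suppressed entirely. The interval values in the sketch below are assumptions for illustration and are not taken from either product.

    def burst_interval_s(distance_m, max_range_m=4.0,
                         min_interval=0.05, max_interval=0.5):
        """Map a measured distance to the pause between vibration bursts.

        Close obstacles give rapid bursts (min_interval), obstacles near the
        selected maximum range give slow bursts (max_interval), and anything
        beyond the maximum range gives no vibration at all (None)."""
        if distance_m > max_range_m:
            return None                      # out of range: stay silent
        fraction = max(0.0, distance_m) / max_range_m
        return min_interval + fraction * (max_interval - min_interval)

    for d in (0.5, 2.0, 3.9, 6.0):
        print(f"{d} m -> {burst_interval_s(d)}")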

Besides ultrasound, optical systems are used, albeit less frequently. One example is Teletact [23], which uses a triangulation approach: a laser diode emits light that is reflected off obstacles and detected at different angles by an array of photodetectors. The distance is conveyed by a series of vibration actuators or by musical tones. Advantages of optical sensors include accuracy and range. Another advantage is the insignificant beam divergence, which makes it possible to determine directions precisely.
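
In a triangulation sensor of this kind, the range follows from simple geometry: with a known baseline between the laser and the detector, and a known detector focal length, the position at which the reflected spot lands on the detector array determines the distance. Under a common small-angle model the relation is d ≈ b·f / x, where b is the baseline, f the focal length and x the spot offset on the array. The numbers in the sketch below are invented for illustration.

    def triangulation_distance_m(spot_offset_m, baseline_m=0.05, focal_length_m=0.02):
        """Idealised laser triangulation: distance from the spot offset on the
        detector array (all values in metres). Larger offsets mean closer obstacles."""
        if spot_offset_m <= 0.0:
            return float("inf")   # spot on the optical axis: effectively infinite range
        return baseline_m * focal_length_m / spot_offset_m

    # A spot 0.5 mm off-axis with a 5 cm baseline and 2 cm focal length -> 2 m
    print(triangulation_distance_m(0.0005))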

Another device in this category deserving special mention is CyARM [24] – not because of its sensor approach but because of the way feedback is handled. Instead of the typical vibrations, CyARM has a wire connecting the device to the body of the user. The tension of this wire can be controlled by the device, meaning that the user can feel the handle come to a stop as they try to move it “into” an obstacle, much like using a white cane.

Figure 3.1: The UltraCane, a white cane augmented with ultrasonic sensors and haptic feedback.

3.1.3 Sensory Substitution Systems

The brain has a remarkable way of adapting to accommodate new circumstances. This ability, neuroplasticity, makes one wonder how large these adaptations can be. In the 1970s, Bach-y-Rita [25] devised a tactile-visual sensory substitution system (TVSS) where pictures taken by a video camera were transformed into “tactile images” displayed by a matrix of vibration actuators worn by the user. Initial reports seemed very promising, and more recently a similar system (except that the actuators are placed on the tongue) was commercialised as BrainPort [26]. Despite seemingly incredible results, we can ask why these solutions have not taken off. Lenay et al. [27] wrote on this:

“However, once the initial flush of enthusiasm has passed, it is legitimate to ask why these devices, first developed in the 1960’s, have not passed into general widespread use in the daily life of the blind community. Paradoxically, an analysis of the possible reasons for this relative failure raises some of the most interesting questions concerning these devices. One way of addressing this question is to critically discuss the very term “sensory substitution”, which carries with it the ambiguity, and even the illusory aspect, of the aim of these techniques.” — Lenay et al. [27]

Figure 3.2: A picture of the Miniguide, a handheld ultrasonic mobility aid with haptic feedback.

They further note that while sensory substitution is a good term from a publicity and marketing perspective, it unfortunately is misleading in many ways. One issue the authors raise is whether one can properly call it substitution. The term seems to imply that a simple transformation of stimuli from one sense to another can bring with it all
