Interaction Design - by the protocol
Combining user-centered design methods for finding user needs in a time-constrained environment
Christoffer Svanberg Anton Westman
[email protected] [email protected]
DEGREE PROJECT IN HUMAN-COMPUTER INTERACTION AND EDUCATIONAL SCIENCE AT THE PROGRAM OF MASTER OF SCIENCE IN ENGINEERING AND OF EDUCATION (CL) IN MATHEMATICS AND INFORMATION TECHNOLOGY AND COMPUTER SCIENCE (MADA).
Royal Institute of Technology (KTH) & Stockholm University (SU) Stockholm 2015
KTH School: School of Computer Science and Communication (CSC) Employer: Center For Technology in Medicine and Health (CTMH)
Examiner KTH (CSC): Jan Gulliksen
Supervisor KTH (CSC): Vincent Lewandowski
Supervisor SU: Tanja Pelz-Wall
Supervisor CTMH: Mikael Hillmering
ABSTRACT
Today a large number of different health care computer systems are in use. However, according to recent studies, many of them lack the necessary usability. Within Nordic pediatric cancer care, analogue treatment protocols on paper are currently used as a complement to the digital medical records and prescription systems. In these protocols, doctors and nurses note information regarding the patient's treatment.
Comments and changes are noted in the margin, which sometimes makes the protocol messy and difficult to grasp. Since several people are involved in the handling of the treatment protocols, it occasionally happens that the protocol disappears for periods of time. We had two aims with this project. The first was to examine and map requirements for a usable interactive treatment plan for acute lymphoblastic leukemia (ALL). The second was to investigate whether our suggested combination of methods would be sufficient to acquire these requirements in a setting where the users, i.e. physicians, were time-constrained.
Based on a large variety of theories and methods from educational science and research in human-computer interaction, we have conducted a qualitative study, iterating a combination of user-centered design methods, with a revision of the requirements as well as the design following each iteration. The requirements analysis was performed in close collaboration with the doctors at the Astrid Lindgren Children's Hospital, Karolinska University Hospital, Stockholm, Sweden.
Our results indicate that by using a combination of methods from usability engineering and participatory design, a well-defined list of requirements from the doctors could be identified, which might be sufficient to develop an interactive prototype for a digital treatment protocol. In addition, we found that our method enabled an exchange of knowledge between the designers and the users.
In conclusion, these combined methods were suitable for enhancing the software designer's understanding of the user needs in this time-constrained environment.
Keywords:
User-centered design, usability, participatory design, time-constrained users, evaluation methods, health care, interaction design, prototyping, iterative design, human-computer interaction.
ABBREVIATIONS/ACRONYMS
ALB = Astrid Lindgren Children's Hospital, Karolinska University Hospital, Stockholm, Sweden
ALL = Acute Lymphoblastic Leukemia
AW = Anton Westman
BSA = Body Surface Area
CIF = Clinical Innovation Fellowship
CS = Christoffer Svanberg
CTMH = Center for Technology in Medicine and Health
HCI = Human-Computer Interaction
ISO = International Organization for Standardization
KTH = Royal Institute of Technology
NOPHO = Nordic Society of Pediatric Hematology and Oncology
PDZ = Proximal Development Zone
Px = Prototype x, where x is 1, 2, 3
SU = Stockholm University
UCD = User-Centered Design
WBC = White Blood Cell count
ACKNOWLEDGMENTS
We would like to thank our supervisors Tanja Pelz-Wall (Stockholm University, SU), Mikael Hillmering (Clinical Innovation Fellowship, CIF, Center for Technology in Medicine and Health, CTMH) and Vincent Lewandowski (Royal Institute of Technology, KTH) for their invaluable support throughout the project.
We would also like to thank Johan Malmros and Stefan Söderhäll, both chief physicians, and their colleagues at the oncology department at Astrid Lindgren
Children’s Hospital, Karolinska University Hospital, for their hospitality and their help with all our medical questions.
A special thanks goes to Linda Svensson (CTMH), for her energizing aura and her efforts in the process of launching the thesis as a real project.
Additionally, we would like to thank Conny Westman and Per Nordberg for their insights into essay writing, as well as our girlfriends Sofia Rytterlund and Sofie Berglöf for their unconditional support through the long days and nights of designing, testing, evaluating, analyzing and writing.
Finally, we would like to thank all our other friends and family members for
participating in the endless hallway tests and for their support throughout the project.
TABLE OF CONTENTS
INTRODUCTION
BACKGROUND
PROBLEM DEFINITION
AIM AND PURPOSE
RESEARCH QUESTIONS
DELIMITATIONS
DISTRIBUTION OF WORK
THEORY
USER-CENTERED SYSTEM DESIGN
User-centered design
Usability and user experience
Design principles
Educational theories
RESEARCH AND DESIGN METHODS
Research through design
Participatory design
Usability engineering
Prototyping
Paper prototype
Computer prototype
Information gathering techniques and user evaluation methods
Qualitative and quantitative methods
Observations
Ethnographic research
Think-Aloud
Participants
Interviews
A user-centered research through design approach
METHODS AND IMPLEMENTATION
STUDY DESIGN AND SETTINGS
Ethical guidelines
Implementation
The prototype
User evaluation
Test leader
Observer
The interviews
The iterations
Iteration 0
Iteration 1
Iteration 2
Iteration 3
Validity, reliability and generalizability
ITERATION 0 - INTERVIEWS, OBSERVATIONS AND HEURISTIC EVALUATION
Interviews
Heuristic evaluation of the paper protocol
Proposed list of requirements
ITERATION 1 - USER TEST AND INTERVIEWS
Main observations from think-aloud evaluations
Graded questionnaire
Comparative and open-ended questions
Proposed list of requirements
ITERATION 2 - USER TEST AND INTERVIEWS
Main observations from think-aloud evaluations
Graded questionnaire
Comparative and open-ended questions
Proposed list of requirements
ITERATION 3 - USER TEST AND INTERVIEWS
Main observations from think-aloud evaluations
Graded questionnaire
Comparative and open-ended questions
Final list of requirements
General requirements
Data requirements
Functional requirements
Results per test group
DISCUSSION
LIST OF REQUIREMENTS
COMBINING METHODS
A SIMPLE INFORMATION GATHERING MODEL
OUR SUGGESTED MODEL
The different parts of the model
INFLUENTIAL FACTORS AND SUGGESTED FUTURE RESEARCH
CONCLUSION
REFERENCES
APPENDIX
APPENDIX 1 - NIELSEN'S HEURISTICS
APPENDIX 2 - USER TASKS
Iteration 1
Iteration 2
Iteration 3
APPENDIX 3 - QUESTIONS
Graded questionnaire
Open questions
APPENDIX 4 - ONE PAGE IN THE ANALOGUE TREATMENT PROTOCOL
INTRODUCTION
Health care professionals spend a significant part of their working time in front of the computer, administrating patient issues, rather than spending time with the patients (Forsberg, 2014; Larkin & Kelliher, 2011). Many countries, including Sweden, have in the last decades invested in new information technologies, such as digital medical records (Bossen, 2006; Fonville, Choe, Oldham, & Kientz, 2010; Larkin & Kelliher, 2011; Lundgren, Stiernstedt, & Olofsson, 2014; Reuss, Keller, Naef, Hunziker, & Furler, 2007; Tang & Carpendale, 2008). Larkin and Kelliher (2011), as well as Lundgren et al. (2014) and Reuss et al. (2007), all point out how new systems often lack the necessary usability to be helpful to the professionals. This also tends to be the situation at the Astrid Lindgren Children's Hospital (ALB), Karolinska University Hospital, in Stockholm, Sweden. The doctors at ALB express that the current computer system is neither intuitive nor easy to use. Therefore, the use of analogue treatment protocols on paper, as complements to the digital medical records and prescription systems, is still needed in order to get a satisfactory overview of the patient's medical situation. The current system with analogue treatment protocols is incoherent and unsafe, according to the doctors at ALB. Several international studies report that health care professionals often choose to use paper documents, rather than the computer-based systems, to handle patient data and perform certain tasks (Chen, 2010; Larkin & Kelliher, 2011; Luff, Heath, & Greatbatch, 1992; Tang & Carpendale, 2008).
Some reports imply that a user-centered design (UCD) approach might be preferable when designing health care systems (Ammenwerth, Buchauer, Bludau, & Haux, 2000; Bossen, 2006; Fonville et al., 2010; Larkin & Kelliher, 2011). Some researchers, e.g. Thimbleby (2007a, 2007b), state that UCD is necessary but not sufficient when designing safety-critical systems. Several research articles focus on the interaction between patients and health care professionals and are hence often based on minimally intrusive observations and interviews. Studies with a higher degree of user inclusion, working with e.g. participatory design, have also proved to be successful (Ammenwerth et al., 2000; Larkin & Kelliher, 2011).
BACKGROUND
In the process of implementing a new system there will often be some sort of conflict about how the system should be implemented. The main issue is that the designer often lacks sufficient understanding of how the users actually work with the current one. This might result in a design, and subsequently an implementation, of something that might not work well with the users' daily work, or that is hard for them to learn. This is the case at ALB, where the main computer system for handling patient and treatment data for oncology treatments has neither been tested with, nor evaluated by, the health care professionals working with the system.
PROBLEM DEFINITION
At Astrid Lindgren Children's Hospital, Karolinska University Hospital, children with oncologic diseases are diagnosed and treated. ALB is one of the pediatric centers within the collaborative Nordic Society of Pediatric Hematology and Oncology (NOPHO). Here, some of the medical treatment protocols are paper journals. On these, doctors and nurses note information regarding the patient's medical treatment. Comments and changes are noted in the margin, which, as mentioned, sometimes makes the protocol difficult to grasp, see appendix 4.
Since several people are involved in the handling of the treatment protocols, the protocol occasionally disappears for short periods of time. This allows for misunderstandings and poses a risk to patient safety. During an observation at the pediatric oncology department at ALB, a wish for easier handling of journals and a need for an interactive treatment protocol were identified.
As a part of our master thesis, and on behalf of our employer, the Clinical Innovation Fellowship (CIF) at the Center for Technology in Medicine and Health (CTMH), we were given the opportunity to develop an IT tool for chemotherapy to aid the health care professionals who treat children with cancer.
Since we lack knowledge in the medical field, and the health professionals, i.e. the users of the treatment protocol, have little or no knowledge of the design process, we needed to find a method in which all skills could enrich the process. Therefore we have chosen a qualitative user-centered approach for this study.
AIM AND PURPOSE
The aim of this project is to examine and map the requirements for an interactive treatment plan for ALL treatment at ALB.
The purpose of this study is to enhance the understanding of the needs of the pediatricians at ALB regarding documenting, systematizing and handling patient data. This lays the foundation for the development of a prototype for a digital treatment protocol to aid the health care professionals in their daily work.
RESEARCH QUESTIONS
1. What are the treating health care professionals' requirements concerning an improved, usable and interactive way of documenting and handling patient data?
2. In a time-constrained population, such as physicians at a university hospital, how can a combination of methods from user-centered design (UCD) be used to enhance the software designer's understanding of the users' real needs for improved documentation and systematized patient data while designing a safety-critical computer system?
DELIMITATIONS
The range of topics that could be included in this project is vast. This thesis is part of a joint education program between Stockholm University (SU) and the Royal Institute of Technology (KTH), with a focus on both computer science and educational science. We will therefore focus more on finding user needs than on a technical implementation.
DISTRIBUTION OF WORK
This study has been conducted by Christoffer Svanberg and Anton Westman, last-year Master's students at SU and KTH. Although the report is written cooperatively, the workload during design and evaluations has been somewhat divided. While AW served as test leader in the user testing, CS had the role of observer and was also responsible for the follow-up interviews. Both authors were present during all test sessions.
THEORY
There is a broad range of topics within interaction design and educational science, and these topics are constantly changing. This chapter does not aim to provide a generic view of either field, but rather summarizes some aspects of interaction design and educational science that we have found relevant for our project, in order to provide a theoretical background that will serve as a framework for our implementation.
Historically, the extent of user involvement in systems engineering, where the "user" is defined as a person who interacts with a product (ISO, 2010), has been quite low (Gulliksen & Göransson, 2002). For example, when the first computer software was written, the developers followed what is now called the code-and-fix model, in which they simply started coding and then fixed the problems they found in their code. As software development projects grew larger and more extensive, thorough requirements analysis, the division of development into phases, and feedback between these phases became more and more important. With this, user involvement, to varying extents, has become increasingly important for designing usable computer software (Gulliksen & Göransson, 2002).
The "discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and the study of major phenomena surrounding them" (Larusdottir, 2012) is referred to as Human-Computer Interaction (HCI).
Some authors, e.g. Preece and Nielsen, are recurrently referenced in the literature on HCI, interaction design and usability. One has to bear in mind that many of their texts on the subject were written about 20 years ago and that the field of interaction design has changed significantly since then. Still, they present some key ideas that, as we see it, form the foundation of the subject.
Preece et al. (2007, p. 17) define interaction design as the process of "designing interactive products to support the way people communicate and interact in their everyday and working lives".
USER-CENTERED SYSTEM DESIGN
Selander and Kress (2010) state that design should not only be seen as the forming of ideas, concepts and patterns to create a new product, but also as the combining of functional and aesthetic aspects. They talk about interactive design, an approach to design where the designer works in cooperation with the intended user. According to them, this approach is implicitly a critique of the prevailing order.
Gulliksen and Göransson (2002, p. 15) state that it is "well worth spending time on developing computer support which are efficient, stable, minimizing errors, etc.". They concretize this by presenting examples of how money can be saved by making computer systems more usable. The amounts presented are, although roughly estimated, immense. The 2013 CHAOS report ("CHAOS MANIFESTO"), presented by the Standish Group, states that the success rate of projects including IT systems in 2012 was about 39%. In the report, success is defined as a system being delivered on time, within budget and with all required features and functions. One success factor in these successful projects is stated to be a focus on the real user needs. Gulliksen and Göransson (2002) state that in many cases no more than one percent of the development effort is devoted to usability-related work.
C.-M. Karat (1993, p. 89) writes that "Eighty percent of software life cycle costs occur after the product is released, in the maintenance phase. Of that work, 80 % is due to unmet or unseen user requirements; only 20 % of this is due to bugs or reliability problems". One can hence draw the conclusion that the requirement analysis is a crucial part of the development process and that the users have an important role in the design.
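Taken at face value, Karat's figures compound: 0.80 × 0.80 = 0.64, i.e. roughly two thirds of the total life cycle cost would be traceable to unmet or unseen user requirements. This is our own back-of-the-envelope reading rather than a figure stated by Karat, but it illustrates the weight the requirement analysis carries.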
USER-CENTERED DESIGN
Rubin and Chisnell (2011, p. 15) describe user-centered design (UCD) as an "evolutionary process whereby the final product is shaped over time". Karat, in Gulliksen and Göransson (2002, p. 101), describes the term as the "...label under which to continue to gather our knowledge of how to develop usable systems…". The core of UCD is hence the design of usable systems or products with an active involvement of the intended user. Rubin and Chisnell (2011) state that trial and error, discovery and refinement are important parts of an optimal design.
ISO (2010) 9241-210 notes that, in practice, the term user-centered design is often used synonymously with human-centered design, even though the latter also includes stakeholders other than the users. The ISO definition of human-centered design is the "approach to systems design and development that aims to make interactive systems more usable by focusing on the use of the system and applying human factors/ergonomics and usability knowledge and techniques". This report will consistently use the term user-centered design (UCD).
There are several life cycle models for how software design can be conducted. What the more recent models have in common is the notion that the development should be iterative. Preece et al. (2007) present a simple model for an iterative process, see figure 1.
FIGURE 1 - A SIMPLE MODEL FOR AN ITERATIVE PROCESS
Larusdottir (2012) presents, in a similar manner, four important activities that, according to ISO (2010) 9241-210, shall take place during software development. Each activity should build on the others, and the activities should be iterated where appropriate. The four activities, illustrated schematically in the sketch after the list, are:
• Understand and specify the context of use
• Specify the user requirements
• Produce design solutions to meet these requirements
• Evaluate the designs against requirements
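As a schematic illustration of how these four activities relate, the sketch below arranges them as a loop that repeats until the evaluation no longer flags unmet requirements. This is our own hypothetical sketch; the function names, the data and the stop criterion are placeholder stand-ins for the activities described above, not an implementation from the ISO standard or from this study.

```python
# Hypothetical sketch of the four ISO 9241-210 activities arranged as an
# iterative loop. All names and data below are illustrative placeholders.

def understand_context_of_use(field_notes):
    # e.g. condense observations and interviews into a context description
    return {"users": "physicians", "setting": "pediatric oncology ward", "notes": field_notes}

def specify_user_requirements(context):
    # e.g. a proposed list of requirements derived from the context of use
    return ["overview of the whole treatment", "traceable changes"]

def produce_design_solution(requirements):
    # e.g. a paper or computer prototype intended to cover the requirements
    return {"prototype": "P1", "covers": list(requirements)}

def evaluate_against_requirements(design, requirements):
    # e.g. think-aloud tests; here naively checked against what the design covers
    return {req: req in design["covers"] for req in requirements}

def iterate(field_notes, max_iterations=3):
    design = None
    for _ in range(max_iterations):
        context = understand_context_of_use(field_notes)
        requirements = specify_user_requirements(context)
        design = produce_design_solution(requirements)
        findings = evaluate_against_requirements(design, requirements)
        if all(findings.values()):  # stop when no requirement is left unmet
            break
    return design

print(iterate("notes from iteration 0"))
```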
USABILITY AND USER EXPERIENCE
Usability is a well-used term in the field of HCI. There are several definitions of the term, depending on the subfield of HCI in which it is used.
Preece et al. (2007, pp. 443-444) use six goals to define usability. These are, in no particular order:
• Effectiveness
• Efficiency
• Safety
• Utility
• Learnability
• Memorability
Nielsen (2012) defines usability as a quality attribute composed of five, quite similar, quality components:
• Learnability
• Efficiency
• Memorability
• Errors
• Satisfaction
Preece's and Nielsen's definitions coincide with regard to learnability, memorability and efficiency, and they share the notion of the importance of safety, or few errors. They do, however, differ with regard to effectiveness, utility and satisfaction.
The goal, or quality component, learnability concerns how easily a first-time user can use a new system to accomplish basic tasks. Memorability is about how easy it is for a user to remember how to use a system and to reestablish proficiency after having been away from it for some time. Efficiency deals with how quickly the user can perform her tasks once the system has been learned. Regarding the safety goal and the error component, Preece and Nielsen have similar, but not identical, definitions.
Preece's (2007) safety goal addresses the system's role in "protecting the user from dangerous conditions and undesirable situations", both in the sense of the real danger of unintentionally carrying out unwanted actions and in the sense of the fears users might perceive about the consequences of such actions. Nielsen's error quality component refers to the error rate. From here the definitions start to differ. While Nielsen's definition includes the satisfaction component, which addresses how pleasant the user finds the use of the system, Preece instead includes this among the subjective user experience goals, see figure 2.
ISO (2010) 9241-210 defines user experience as:
person's perceptions and responses resulting from the use and/or anticipated use of a product, system or service.
There are also three notes in the ISO standard to explain user experience further.
NOTE 1 User experience includes all the users' emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors and accomplishments that occur before, during and after use.
NOTE 2 User experience is a consequence of brand image, presentation, functionality, system performance, interactive behavior and assistive capabilities of the interactive system, the user's internal and physical state resulting from prior experiences, attitudes, skills and personality, and the context of use.
NOTE 3 Usability, when interpreted from the perspective of the users' personal goals, can include the kind of perceptual and emotional aspects typically associated with user experience. Usability criteria can be used to assess aspects of user experience.
FIGURE 2 - PREECE'S USABILITY AND USER EXPERIENCE GOALS (Preece et al., 2007)
Nielsen (2012), on the other hand, has excluded utility from the usability definition. Both authors refer to utility as the functionality of the system.
Nielsen regards utility as a quality attribute as important as usability, and the combination of the two is what he refers to as usefulness, see figure 3.
What these two definitions have in common is the notion that a computer system or product that is hard to use, hard to learn, or that has major security issues, is not usable.
According to the ISO (1998) 9241-‐11 standard usability is defined as:
The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use
effectiveness is defined as
the accuracy and completeness with which users achieve specified goals
efficiency as
the resources expended in relation to the accuracy and completeness with which users achieve goals
and satisfaction as
the freedom from discomfort, and positive attitudes towards the use of the product.
Further the context of use is defined as:
Users, tasks, equipment (hardware, software and materials), and the physical and social environments in which a product is used.
FIGURE 3 – NIELSEN’S ACCEPTABILITY (Nielsen, 1993)
Gulliksen and Göransson (2002) prefer the ISO definition since they find it concrete and since it allows for measurability, which in turn allows for comparison of the usability of products. This definition of usability includes both the traditionally measurable aspects of effectiveness and efficiency and the subjective satisfaction. Gulliksen and Göransson also claim that the ISO standard includes the functionality. As we see it, the ISO definition can hence be seen as a combination of the two definitions mentioned above.
We have chosen to include all three usability definitions, since this thesis is partly an educational work and since we find that the first two definitions emphasize the importance of learnability and memorability in a way the ISO standard does not.
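To illustrate the measurability that Gulliksen and Göransson point to, the sketch below shows one way effectiveness and efficiency could be computed from task-based test data. It is a hypothetical example of our own: the data, the names and the simplifications (completion rate for effectiveness, mean time per completed task for efficiency) are not taken from the study, and satisfaction would typically be collected separately, e.g. through a graded questionnaire.

```python
from dataclasses import dataclass

# Hypothetical sketch: deriving two of the ISO 9241-11 measures from
# task-based test data. The numbers and structure are illustrative only.

@dataclass
class TaskResult:
    completed: bool   # did the participant achieve the specified goal?
    seconds: float    # time spent on the task

results = [
    TaskResult(True, 95.0),
    TaskResult(True, 140.0),
    TaskResult(False, 300.0),  # the participant gave up
    TaskResult(True, 110.0),
]

# Effectiveness: accuracy and completeness, here simplified to completion rate.
effectiveness = sum(r.completed for r in results) / len(results)

# Efficiency: resources expended in relation to goals achieved,
# here simplified to the mean time per successfully completed task.
completed_times = [r.seconds for r in results if r.completed]
efficiency = sum(completed_times) / len(completed_times)

print(f"Effectiveness: {effectiveness:.0%}")           # 75%
print(f"Efficiency: {efficiency:.0f} s per completed task")
```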
DESIGN PRINCIPLES
Design principles can be used to conceptualize usability and to aid the designer when designing a usable system. There are several different sets of guidelines for user interface design. The most widely used are probably Jakob Nielsen's (1995) heuristics: ten broad rules of thumb. These can be found in appendix 1.
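As an illustration of how such heuristics can be applied in a heuristic evaluation, the sketch below shows one possible way of recording findings, pairing each observed problem with the heuristic it violates and a severity rating on the 0 to 4 scale Nielsen suggests. Both the structure and the example findings are invented for illustration; they are not results from this study.

```python
from dataclasses import dataclass

# Hypothetical sketch of how heuristic-evaluation findings could be recorded.
# The structure and the example findings below are illustrative only.

@dataclass
class Finding:
    heuristic: str   # e.g. "Error prevention" (one of Nielsen's ten heuristics)
    problem: str     # the observed usability problem
    severity: int    # 0 (not a problem) to 4 (usability catastrophe)

findings = [
    Finding("Consistency and standards",
            "Dose changes are written free-form in the margin", 3),
    Finding("Error prevention",
            "Nothing prevents two conflicting notes on the same treatment day", 4),
]

# Summarize with the most severe problems first, so they are addressed early.
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{f.severity}] {f.heuristic}: {f.problem}")
```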
EDUCATIONAL THEORIES
According to Vygotskij (2007/1934), a learning process is promoted by finding a person's current understanding and starting from there. Vygotskij calls this point of understanding, and the levels just above it, the proximal development zone (PDZ). All learning should begin at the currently achieved knowledge level.
According to Vygotskij, all communication implies generalization. Words are abstractions, which symbolize some kind of meaning we want to communicate. A word is much more than just a description of a phenomenon; it rather represents a class of phenomena.
Säljö (2010) states that all actions are situated in social contexts and that the contexts define which interpretations, actions and reactions are adequate. This is also stated by Illeris (2007), who claims that the learning situation is part of the learning.
When discussing the interaction between didactics and design, Tore West, in Rostvall (2008), highlights the importance of using symbols that are known to the user. He uses the buttons on a washing machine as an example. Even though he had used his fair share of washing machines, and had never really had any problems using them, he found himself in a situation where the machine was not understandable. West draws parallels to Peirce's icon category, in which the icon resembles what it represents.
Peirce has two more categories: index and symbol. An index refers to what it represents, like smoke indicating fire. The symbol category does not need to have a universal connection to the thing it refers to, but rather relies on more or less systematic conventions. West argues that every expression, in order to be understood, needs to get its meaning from a surrounding system of symbols.
Dewey writes about the importance of learning from what you already know and of working with the things you are trying to learn. This approach, referred to by some of his followers as learning by doing, is well known in pedagogics (Hartman & Lundberg, 2004). It is also supported by recent design research on how to implement a new computer system in a medical environment (Larkin & Kelliher, 2011). Dewey states that the learning process and the goal of the learning are the same thing (Hartman & Lundberg, 2004). This goes hand in hand with Selander and Kress's (2010) view that learning itself cannot be seen, only the signs of it.
Dewey explains that learning which is not based on previous knowledge might result in a lacking sense of real-life connection. To exemplify this, he describes a visit to a school in a town near the Mississippi river, where the teachers told him that the children had been astounded to find out that the river in their books had anything to do with the one outside their windows.
RESEARCH AND DESIGN METHODS
During this project a research through design approach, with parts from usability engineering and participatory design, has been used. This chapter aims to present the set of methods used to gather information on how, when and by whom the treatment protocols are used.
RESEARCH THROUGH DESIGN
There is no agreed-upon definition of research through design (Gaver, 2012). Some researchers (Zimmerman, Evenson, & Forlizzi, 2007; Zimmerman & Forlizzi, 2008; Zimmerman, Stolterman, & Forlizzi, 2010) see this as a problem and try to define guidelines. Zimmerman and Forlizzi (2008, p. 5), for instance, state that there are two approaches to research through design. The first is philosophical, which means that
researchers begin with a specific philosophical stance that they wish to either investigate or embody through a process of making.
The other approach is called grounded. By taking this approach the
design researchers focus on real world problems by making things that force both a concrete framing of the problem and an articulation of a specific, preferred state that is the intended outcome of situating the solution in a context of use.
Zimmerman et al. (2007, p. 8) also suggest a definition of research through design by comparing it to research for commercial purposes. They state two differences. The first is the intent of the research:
In this way design researchers focus on making the right things, while design practitioners focus on making commercially successful things.
Secondly, the artifacts produced should "demonstrate significant invention" and be "novel integrations of theory, technology, user need, and context", not modifications of what can already be found commercially or in the research literature (Zimmerman et al., 2007, p. 8).
Gaver (2012), on the other hand, argues that just as there is no single right design, there should not be too many rules set up for research through design. In short, if you use design to do research, you are doing research through design.
PARTICIPATORY DESIGN
One way to really utilize the users' skill set, knowledge and reactions to the design is to include the users in the decision making as part of the design team. Gulliksen and Göransson (2002, p. 123) describe this as the most extreme form of user involvement, as it relies more "on cooperation than formal descriptions". In this approach to UCD, which originated in Scandinavia during the 1960s, focus is put on the users' routines as well as on the use of "low level mock-ups instead of formal specifications", allowing the users to experiment with prototypes in a natural environment (Gulliksen & Göransson, 2002, p. 124). Simulating the work situation has proved to be a way for the designers to better understand the actual work of the user (Preece et al., 2007). Rubin and Chisnell (2011, p. 17) do, however, point out that there might be a danger that the user gets "too close to the design team" and might "withhold important concerns and criticism".
USABILITY ENGINEERING
Usability engineering is an approach to interaction design in which formal and verifiable usability criteria are specified in advance and used to assess a product. Gulliksen and Göransson (2002, p. 122) state that this method might put "too much focus on analysis and evaluation and being insufficient when it comes to more pragmatic design solutions". As the name implies, this approach focuses more on engineering than on user participation. Users are involved in the empirical evaluations. Heuristic evaluations, in which a product is assessed against design principles, are widely used for analytic evaluations since they are cost-efficient, easy to use and easy to communicate.
PROTOTYPING
There are several reasons to build prototypes prior to creating the real software. Two main reasons are that a prototype is much cheaper to build, and much easier to change, than a full system.
Selander and Kress (2010) mention that knowledge can be represented not only by models and verbal descriptions but also by actions and interactions with other people or objects. A representation does not truly correspond with the truth but rather recognizes distinctive features in specific contexts. Different people can hence have different representations of the same thing. Some aspects might be taken for granted in one representation while they are described more fully in another. Some aspects can be more prominent and others more hidden. Selander and Kress state that learning can be seen as the difference in an individual's representations at different times.
In order to give the user a perception of "the real thing", a simulation can be used. In simulations, prominent features are chosen to give a meaningful representation of the world. These features, which are clearly related to each other, are then given a form that is clear for the context, all in order to enable interaction with artifacts or people in the simulated situation.
A simulation differs from a description in that, in a simulation, several senses are used, which gives a "tactile experience" of another kind than a scientific description does (Selander & Kress, 2010, p. 38). Selander and Kress state that both simulations and descriptions can be used as representations, but that the consequences, for the one using the representation, differ.
PAPER PROTOTYPE
A fast and inexpensive way of presenting a design idea to yourself and others is in the form of a paper prototype, either as a single sketch or as a combination of sketches or models. This can in turn lead to new ideas (Nielsen, 2003). Paper prototypes are used for their low cost, ease of use, possibility of early introduction in the design process and swift modification, and can, as Löwgren and Stolterman (2007) put it, lead to a divergence of ideas. During software design they can be used to capture critical information about the users' expectations of functionality and design before any code is written.
Thus, the use of sketches is not only for the designer but also a simple way to include the user in the design process. The use of paper, or other low-level media, helps to focus on the different functions of the interface rather than on trivial or future features, such as colors or the exact placement of objects. The users will be focused on finding out whether the system can do what is required and whether the functions are easy to find.
COMPUTER PROTOTYPE
When a good understanding has been achieved through the use of paper prototypes, it is possible to move on to high-fidelity prototypes. One reason to use computer prototypes is to let the user experience a sense of the real system. Starting with a computer prototype at an early stage of the project may, however, cost considerable time and work, since every idea the user expresses has to be recoded. An overdeveloped prototype might also give the misconception that the design process has progressed further than it actually has, which can lead to stagnation in the creative process.
INFORMATION GATHERING TECHNIQUES AND USER EVALUATION METHODS
TABLE 1 - DIFFERENT METHODS FOR INFORMATION GATHERING (HOLME & SOLVANG, 1991, P. 85)

Situation                        | Non-verbal acts                    | Verbal acts (speech)                  | Verbal acts (writing)
Informal context                 | Participatory observation          | Conversation, informants              | Letters, articles, biographies
Informal and structured context  | Observation with fixed categories  |                                       | Text analysis with fixed categories
Formal and unstructured context  | Systematic observation             | Interview with open-ended questions   | Survey with open-ended questions
Formal and structured context    | Controlled experiment              | Interview with closed-ended questions | Survey with closed-ended questions
Holme and Solvang (1991) list some methods for information gathering and the kinds of situations in which they can be used, see table 1. They are of the opinion that there is no single best method, but rather that a mix of the available methods might be preferred. If the different methods give the same result, this implies that the result is not due to the method but rather that the information collected is valid. If the methods do not give similar results, this implies that the result needs to be reinterpreted or that the methods need to be refined.
Gulliksen and Göransson (2002) also promote the use of a combination of methods, since different evaluation methods reveal different usability issues and thereby facilitate the evaluation of a system's usability.
QUALITATIVE AND QUANTITATIVE METHODS
Social science generally divides methodological approaches into qualitative and quantitative methods. What distinguishes them is basically the way data is gathered and how the outcomes are presented. Qualitative methods benefit from proximity to the information source and focus on getting a deep understanding of the problem and its context rather than on evaluating for general validity. Quantitative methods are characterized by the use of pre-determined quantifiable categories in order to conduct formalized analysis, thereby risking the loss of valuable information that a deeper inquiry might have yielded (Holme & Solvang, 1991).
McCall and Simmons, in Holme and Solvang (1991, p. 69), state that “qualitative method can be seen as a collective term for an approach that in larger or smaller extent combine the following five techniques: direct observation, participatory observation, informant and respondent interviews and analysis of sources”.
OBSERVATIONS
According to Preece et al. (2007), observations are useful during the whole
development process. Observations can be used both early in the design to help the designers understand the users’ needs and later in the design to see if the product, or current prototype, meets these needs.
Sjøberg (2005, p. 388) states that "without observations no scientific thoughts can be evaluated, no old ideas can be discarded and no new ideas can emerge". The results from an observation depend on the mindset of the observer concerning theories, experience, concepts and intentions, but also on the full context surrounding the observation. This notion is supported by Madsen (1994), who states that an educational leader must be aware that he or she may have taken things for granted that are not actually given.
There are first- and second-order observations. In a first-order observation, observing is the observer's primary task and receives his or her full focus; the test leader and the observer are then two different people. A second-order observation occurs when the observer has another primary task, e.g. leading a test session (Bjørndal, 2009).
Another thing that needs to be considered when doing observations is the observer's memory capacity and the fact that the mind might add or lose information. It is therefore important to organize and start analyzing the collected data as soon as possible. Remembering everything that happens can be a problem when doing observations. One way to overcome this is to take notes directly as things happen; a drawback is that the observer might miss things while writing. This can be avoided by using some kind of recording device, video or audio, preferably in combination with e.g. the think-aloud technique mentioned below (Bjørndal, 2009).
ETHNOGRAPHIC RESEARCH
To be able to make design decisions throughout the development cycle, without always having to consult the users, it is important to know who the users are, what their goals with the product or the enhancement are, and in what context they are going to accomplish these goals. In order to understand this, qualitative, ethnographic research can be performed, in which one observes the users as they carry out their normal activities (Rubin & Chisnell, 2011). Preece et al. (2007) describe how users might be so familiar with their surroundings and daily tasks that they forget to mention them during interviews. Observing the user in her work context might also reveal what really happens, rather than how things are supposed to happen according to formal descriptions.
THINK-ALOUD
A widely used evaluation method is the think-aloud method. As the name suggests, it requires the users to say out loud what they are thinking and doing, and also to explain why. Nielsen (2012) defines it as follows:
In a thinking aloud test, you ask test participants to use the system while continuously thinking out loud — that is, simply verbalizing their thoughts as they move through the user interface.
The think-aloud method is, even though it might feel quite unnatural to the user, an easy way of obtaining plenty of information. The cost is low since the method is easy to learn and use; in fact, according to Nielsen, it is even hard to fail. It is important, however, to be careful not to put words or ideas in the test user's mouth while trying to extract information.