
DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2020

Interaction Principles of 3D World Editors in Mobile Phones with Focus on the User Experience

VINCENT ERIK WONG

KTH
SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE


Sammanfattning

3D world editors are used extensively in mobile applications for manipulating virtual worlds, but there is no general design framework for the best interaction principles for world editors on mobile phones. By studying interaction principles, concepts, and functions used in consumer-oriented world editors, this study focuses on the development and design of a prototype, after which the interaction principles are evaluated, divided into the following use cases: selection, placement, and manipulation. The results from a heuristic evaluation, supported by user tests, show that a majority prefers the following combination of interaction principles for a better user experience: an organized and simplified shop interface for selection, a grid-system over the free-system for placement of virtual objects, and a combination of tap and hold for manipulation of virtual objects in the 3D world editor prototype for mobile phones created for this study.


Interaction Principles of 3D World Editors in Mobile Phones with Focus on the User Experience

Vincent Erik Wong

vwong@kth.se

School of Electrical Engineering and Computer Science (EECS), KTH Royal Institute of Technology

Stockholm, Sweden

Abstract

3D world editors are widely used in mobile applications for manipulation of the virtual world; however, there is no design framework regarding best-practice interaction principles for 3D world editors on mobile phones. By looking at methods, concepts, and functions used in general consumer world editors, this research focused on the development and design of a prototype, which was then used to evaluate interaction principles divided into the following use cases: selection, placement, and manipulation. The results from a heuristic evaluation backed up by user evaluations showed that a majority prefers the following combination of interaction principles in terms of user experience: an organized and simplified shop interface for selection, a grid-system over the free-system for placement of virtual objects, and a combination of tap and hold to manipulate virtual objects in the 3D world editor prototype for mobile phones created in this research.

1 Introduction

Virtual world editors, also known as world editors, are often implemented both within the video game industry (e.g., in the game genres of real-time strategy and real-life simulation) and outside of the game world (e.g., interior-design applications). The purpose of using world editors is to provide the user with a simplified sense of control over customization in the virtual world. The simpler the interaction is, while maximizing degrees of customization, the better the user experience (UX) and the success rate are [1].

3D modeling software such as Autodesk Maya (https://www.autodesk.com/products/maya/) and Blender (https://www.blender.org/) requires great levels of expertise that cannot be developed in a short amount of time. These software tools are most commonly used to create 3D world scenarios and 3D models of the virtual world. The task of creating and modifying the 3D virtual world can be quite tedious; even the simplest tasks of translation and rotation can be hard to comprehend, as they require considerable background knowledge, e.g., technical jargon, key combinations for different functions, and mapping from 2D to 3D (from a 2D computer display to the 3D virtual world). Most of these modeling tools tend to overload the user interface, which is not an optimal solution in terms of UX, especially for small touch-based screen devices (i.e., iPhones for this research).

Broadly, the use-case scenario for 3D world editors is a simplified version of the modeling software (e.g., AutoCAD, Maya, and Blender). World editors are not designed for users to invent something novel; rather, they are designed to manipulate already existing items provided within the software, to customize and decorate the virtual world in an effortless and seamless way.

One main line of thought regarding 3D world editors is to incorporate a grid-system into the virtual world. A grid-system in the context of 3D world editors acts as a guideline that assists users by snapping virtual objects into the correct position in the virtual world. Grid-systems provide users with a smaller window of error for placing objects compared to a free-system. On the other hand, a free-system provides more precision and less restriction for users. Both techniques have their advantages and disadvantages.

For small-screen devices like the iPhone, simplifying interactions and minimizing the degree of manipulation for the 3D world editor is very important. Many applications that use a 3D world editor employ a grid-system; however, there is no agreed-upon standardized framework for 3D world editors. The purpose of this research is to compile a framework of interaction principles with regard to the user journey and use cases for 3D world editors on mobile phones, with a focus on the overall UX. The interaction principles include the use cases selection, placement, and manipulation of virtual objects.

This research is supported by a Swedish company called Friend Factory AB that is working to release a social-media application called Frever. The Frever app is primarily developed for Apple iPhone devices and targeted towards teenagers, which is why this research focuses on 3D world editors on mobile phones. The resultant 3D world editor prototype, developed from this research, will become the foundation for the Frever app's world editor.


2 Research Question

What are the best practice interaction principles for 3D world editors on mobile phones from a user experience perspective?

3 Related Work

This research has two main areas of focus: UX and development. The UX phase studies user interaction approaches of 3D world editors on mobile phones. The development phase covers the research and principles of 3D world editors.

3.1 User Experience

UX research comes from the field of Human-Computer Interaction, where the focus moves beyond a traditional emphasis on usability [11]. By examining usability, research has focused primarily on efficiency, satisfaction, error, memorability, and learnability [24]. However, UX research also studies users' behavioral perceptions (e.g., visual, touch, and smell) and positive emotions (e.g., joy, fun, and pride) [12].

The aforementioned UX principles have previously been researched in the context of 3D world editors [1, 5, 13–15, 20, 22].

Ali et al. studied UX in the UI of The Sims 3 [1], where the study was divided into three parts to measure different aspects of the game and background information, namely players' demographic information, game engagement, and UX of the UI (the latter two elements employed a questionnaire using a five-point Likert scale). Towards the end of the study, overall thoughts and opinions of users were noted via a semi-structured interview. In this format, participants are asked a series of questions that were prepared prior to the interview [7].

3.1.1 Defining User Journey

At the early stages of application creation, it is paramount that developers have a well-defined user journey. Mapping out the user journey gives developers both a clear overview of the use case of the app and a script to follow for development, according to Endmann and Keßner [8].

Merrick and Maher argued that virtual worlds, often seen in simulation games, e.g., The Sims (https://www.ea.com/sv-se/games/the-sims), or virtual world applications, e.g., Active Worlds (https://www.activeworlds.com/), have an open-ended user journey [21]. This refers to the notion that the user journey of 3D world editors does not follow a predetermined storyline, but rather allows the user to manipulate app elements indefinitely. In open-ended simulation games, there is not a singular, predefined way to complete the simulation. Therefore, focus should be shifted towards the user journey of the use cases.

The current study has divided interactions of 3D world editors into four use cases [22]: navigation, selection, manipulation, and system control. The first three refer to the navigation and orientation, selection, and manipulation of virtual objects within the virtual world, respectively. Specifically, the manipulation use case can be divided into the subcategories of translation and rotation [15]. The final use case, system control, refers to the human-computer interaction at the core of UX research. These four use cases make up the user journey for world editors. While Mine's research focused on 3D world editors in virtual reality, the use cases and user journey for 3D world editors on mobile phones are essentially the same, as they involve the same concept of manipulating a virtual world.

3.1.2 Heuristic Evaluation

Nielsen and Molich introduced Heuristic Evaluation (HE) as a method for identifying issues regarding the design and user experience of UI designs [23, 26]. Nielsen later developed a set of ten heuristics [25] as a guideline for evaluation.

Evaluators in HE are often referred to as experts, and their number is typically kept between three and five [23, 26]. Thus, the efficacy of the HE is heavily dependent on the expertise of the chosen evaluators [19]. Huy and Vanthanh created another set of heuristics for the evaluation of mobile apps from three different viewpoints: the developer's, the user's, and the service/content provider's [18]. These heuristics were designed for the evaluation of mobile phones, as mobile phones are limited by smaller screens than other devices and are often used in conjunction with touch-based interactions, as mentioned by Gómez et al. [29]. Gómez et al., similar to Huy and Vanthanh, created another set of heuristics for mobile UIs.

3.1.3 Empirical Evaluation

Unlike heuristic evaluation, empirical evaluation focuses on actual end users rather than experts [7]. While there are certain financial advantages to conducting a heuristic evaluation with a small sample (user) population, it is best that empirical evaluation utilizes a more representative, large sample size.

Bowman et al. utilized the empirical evaluation method of a testbed to study the difference between a set of nine interaction principles for selection and manipulation of virtual objects in virtual reality 3D world editors [5]. The testbed was divided into two tasks (selection and manipulation) and both tasks involved three within-subject variables [7]. Quantitative data were the focus of this research to evaluate the usability of the different interaction principles, where the tasks of selection and manipulation were timed respectively [5]. Hrimech et al. conducted similar research to Bowman et al. [5], but only for three interaction principles for selection and manipulation of virtual objects in a virtual reality 3D world editor [15]. Unlike Bowman et al., this study focused more on the UX rather than usability. Instead of collecting quantitative data in terms of time, Hrimech et al. focused on qualitative data in the form of statement scores on a seven-point Likert scale. A combination of both qualitative and quantitative data types is good practice and broadly used within UX research [2].

3.2 Development of Interaction Principles

3.2.1 Drag and Drop

In 1984, Apple introduced the drag and drop interaction [27] to ease movement, copying, and deletion of files on their personal Macintosh computers. Later, the drag and drop method became incorporated in 3D world editors to ease user interactions, e.g., translation of virtual objects. From a user experience perspective, there are many elements that must be considered to implement the drag and drop feature, most of which deal with visual feedback. Bill Scott listed fifteen interaction moments that he found most important to consider for UX design, e.g., mouse hover, mouse down, drag initiated, and drag leaves original location [27]. These fifteen interaction moments benefit users' ability to interact with the virtual world in a seamless and intuitive way, e.g., when a drag leaves the original location, the virtual object is updated according to the position of the mouse.

3.2.2 Snap-dragging

In 1986, Bier introduced the snap-dragging function for computers to simplify the task of drawing, translating, rotating, and scaling 2D figures (i.e., lines and shapes) by snapping to aligned lines, shapes, and vertices [4]. The snap-dragging function can be viewed as an extension of a grid-system or as an alternative constraint-based system. Bier then applied the same concept of snap-dragging to 3D environments [3].

As described by Bier, the snap-dragging function is a combination of three interactive techniques: gravity, alignment objects, and interactive transformation. The gravity technique snaps to points, curves, and surfaces of alignment objects. Alignment objects act as guidelines that define the reference points that objects can snap to. Finally, interactive transformation tracks motion and automatically snaps the object to specified vertex points.

The intention of the snap-dragging function in 3D was to provide users with simplified interactions to create and modify 3D virtual worlds. Specifically, snap-dragging minimizes information overload for users by decreasing complex UI design and interactions. The snap-dragging function has been widely used in 3D world editors [9, 13, 20, 28].

3.2.3 Constraints

The manipulation of 3D environments using 2D screens and input devices, such as a mouse and keyboard, can be quite tedious. This manipulation becomes even more complicated when the input device is touch-based with a small screen, i.e., smartphones. 3D world editing requires technical skills and knowledge in a field that most people do not have. Even the core interaction of translating objects in world editors can be very complex. Houde demonstrated this point in her 1992 work [14] by reducing the degrees of freedom when manipulating virtual objects relative to their position. Houde utilized resting planes and gravity constraints to prevent objects that are supposed to be stationary on the ground from floating mid-air, by eliminating one axis.

Bukowski and Séquin [6] took Houde's technique a step further by introducing the concept of object association: the ability to stack objects onto one another (e.g., placing a plant on a table). Goesele and Stuerzlinger [10] advanced Bukowski and Séquin's concept by defining constraints through defining areas. The first defining area is called the offer area, the surface area on which other objects can be placed. Oftentimes, the offer area is a flat surface with normals facing upwards. The second defining area is called the binding area, the area where objects are bound to offer areas. Binding areas are typically at the bottom of an object.

Holm et al. [13] used object association to develop their concept of the hierarchy tree. The hierarchy tree resembles the inheritance tree of C++, where the distance of the branch defines the compatibility of an object to snap onto other objects (e.g., a vase between a table and a bed would snap to the table even if the bed is closer in distance, because the table is closer in the class hierarchy).
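To illustrate the offer-area/binding-area constraint in code, a minimal sketch is given below; the class name, the upward-normal test, and the tolerance value are assumptions for illustration rather than the cited authors' implementations.

```csharp
using UnityEngine;

// Minimal sketch of the offer-area / binding-area idea: an object may be
// placed on a surface only if the surface faces (roughly) upwards, and the
// object binds with the bottom of its bounds. All parameters are illustrative.
public static class ObjectAssociation
{
    // A surface counts as an offer area when its normal points mostly upwards.
    public static bool IsOfferArea(Vector3 surfaceNormal, float tolerance = 0.9f)
    {
        return Vector3.Dot(surfaceNormal.normalized, Vector3.up) >= tolerance;
    }

    // Snap the bound object so that its binding area (the bottom of its
    // renderer bounds) rests on the given point of the offer surface.
    public static void BindToOffer(Transform boundObject, Renderer boundRenderer, Vector3 surfacePoint)
    {
        float halfHeight = boundRenderer.bounds.extents.y;
        boundObject.position = surfacePoint + Vector3.up * halfHeight;
    }
}
```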

3.2.4 Collision Detection

In 3D virtual worlds, collision detection plays an important role in users' perceived realism [16]. Specifically, the purpose of collision detection is to keep two or more virtual objects from overlapping one another. Collision detection can be divided into two phases: the broad phase followed by the narrow phase [17]. The broad phase reduces the number of computations by collision culling, a calculation of the expected movement of moving objects. In the narrow phase, the collision computation is done for virtual objects in proximity to one another. Most commonly, axis-aligned bounding boxes (AABB) are used to simplify the computations of collision detection. A bounding box in 3D, as the name suggests, is a volume in the form of a box encapsulating the triangular mesh of the virtual object. The more intricate, precise, and power-consuming method is the computation of overlapping triangular meshes. Static bounding boxes are preferred, as continuously updating the bounding box increases the computation time [16]. Unity's built-in collision detection utilizes both the mesh and AABB techniques. The AABB technique in Unity handles 3D primitive shapes such as boxes, spheres, and capsules, while the mesh collider detects collisions between 3D meshes [9].
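As a reference point for the AABB idea, a minimal sketch of an axis-aligned overlap test follows; the Aabb struct is an illustrative stand-in, not Unity's internal representation.

```csharp
using UnityEngine;

// Minimal sketch of an axis-aligned bounding box (AABB) overlap test:
// two boxes overlap exactly when their intervals overlap on all three axes.
public struct Aabb
{
    public Vector3 Min; // smallest corner
    public Vector3 Max; // largest corner

    public static bool Overlaps(Aabb a, Aabb b)
    {
        return a.Min.x <= b.Max.x && a.Max.x >= b.Min.x
            && a.Min.y <= b.Max.y && a.Max.y >= b.Min.y
            && a.Min.z <= b.Max.z && a.Max.z >= b.Min.z;
    }
}
```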

3.2.5 Visual Indication

A visual indicator is typically implemented in a 3D world editor as visual feedback for the user to signify selection when the user selects (clicks) a virtual object in the virtual world. These visual indications are implemented differently between software. In some cases, visual indicators look like a bounding box that encapsulates the virtual object; this was also implemented by Goesele and Stuerzlinger [10]. An alternative visual indication in 3D world editors, used by Kovalčík et al., is changing the shader color upon selection of the virtual object [20].
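A minimal sketch of the color-change style of selection feedback is shown below; the SelectionHighlight component and the particular highlight color are assumptions for illustration.

```csharp
using UnityEngine;

// Minimal sketch: signal selection by swapping the object's material color,
// similar in spirit to the shader-color indication described above.
public class SelectionHighlight : MonoBehaviour
{
    public Color highlightColor = new Color(0.6f, 0.3f, 0.8f); // illustrative purple
    private Color originalColor;
    private Renderer cachedRenderer;

    private void Awake()
    {
        cachedRenderer = GetComponent<Renderer>();
        originalColor = cachedRenderer.material.color;
    }

    public void SetSelected(bool selected)
    {
        // Restore the original color when the object is deselected.
        cachedRenderer.material.color = selected ? highlightColor : originalColor;
    }
}
```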

4 Method & Implementation

In this section, both the method and the implementation are presented together in chronological order, following the whole research process from beginning to end. Phase I covers a background market analysis of the most used interaction principles for 3D world editors on mobile phones and the most common user journey observed from the set of analyzed apps (see Figure 2). The user journey and three frameworks (see Figures 3, 4, and 5, respectively) were then designed accordingly and evaluated by five experts following the principles of heuristic evaluation. Phase II focuses on the development of the 3D world editor prototype implemented with the analyzed interaction principles following the user journey. Only two frameworks were developed, as one of the frameworks was eliminated in the process of the heuristic evaluation. Phase III covers the end-user evaluation of the 3D world editor prototype.

Figure 1. Overview of the implementation process

4.1 Phase I. Background & Concept Design

4.1.1 Market Analysis

A market analysis was conducted to find the most commonly used interaction principles amongst mobile apps embedded with a world editor component. A total of 23 applications (see Table 1) found on the Apple App Store were analyzed to identify the most common interaction principles (see Table 2) and the most common user journey regarding the world editors (see Figure 2). The apps were discovered by analyzing the top-rated apps and the respectively suggested similar apps in different categories, e.g., social, simulation, and interior design. The following table presents the discovered and analyzed apps, divided into 2D, 2.5D, and 3D. 2D: apps in two dimensions. 2.5D: apps with semi-three-dimensional virtual objects. 3D: apps in three dimensions with adjustable camera view.

Mobile apps: Boo, Design Home, Designer City, Family Guy, Forge of Empires, Hay Day, Highrise, Home Design, Home Street, Megapolis, Minecraft, My Cafe, Planner 5D, Roblox, Rollercoaster Tycoon Touch, Room Planner, SimCity BuildIt, Simpsons Tapped Out, Swedish Home Planner, The Sims Mobile, Virtual Families 2, Vlogger Go Viral, Zepeto.

Table 1. The 23 apps analyzed, presented in alphabetical order; in the analysis they were categorized as 2D, 2.5D, or 3D.

4.1.2 User Journey

The four use case interactions by Mine [22] were used as a foundation to find the most common user journey by observing the 23 apps. The use case interactions turned out to be selection, placement, and manipulation; these use case interactions could then be grouped together to form the following repetitive three-step user journey.

Figure 2. Step by step, the most common user journey for world editors across the 23 apps.


By identifying the most common user journey, a basic framework for 3D world editors could be designed. In line with the steps of the repetitive interaction cycle, users can select, place, and manipulate virtual objects in the virtual world. Specifically, they can select by using a shop with a filtering method of categories, e.g., chair, bed, and table; place virtual objects in the 3D virtual world; and manipulate a virtual object by rotating or removing the object from the scene.

Three frameworks were designed to examine the effects of a grid, free, and predefined system on the user experience.

Specifically, framework I was based on the most used interaction principles, namely the filtered shop, grid-system, and hold-manipulation interaction principles. Beyond the selection step of step I (i.e., the shop), framework II followed the second most used interaction principles, namely the free-system and tap-manipulation interaction principles. Lastly, framework III was designed after the most unique observed interaction: the predefined interaction, where each type of virtual object can only be placed in a predefined area.

Based on the observed user journey and the analysis of the most used and most unique interaction principles, the following three frameworks became the basis for the heuristic evaluations:

Figure 3. Framework I, grid-system and hold to edit virtual objects.

Figure 4. Framework II, free-system and tap-manipulation of virtual objects.

Figure 5. Framework III, eliminated as it scored worst out of the three.

The following table presents the interactions observed in the 23 apps, the total number of apps implementing each interaction, and a short description per interaction principle.

Interaction Principle | Total | Definition
Shop | 18 | Categorized shop with filtration
Inventory | 7 | Inventory list, usually following the stack principle (first in, last out)
Predefined room | 5 | Designated areas for specific furniture types
Grid-system | 14 | A grid that assists users with the action of placement
Free-system | 9 | Full control over placement
Collision detection | 14 | Prevents virtual objects from intersecting one another
Tap-manipulation | 10 | Tap on a virtual object to initiate manipulation mode
Hold-manipulation | 13 | Hold on a virtual object to initiate manipulation mode
Drag and drop | 18 | Translate by dragging
Tap and drop | 5 | Translate by tapping
Scaling | 3 | Size manipulation
Color | 5 | Color manipulation
Remove | 22 | Remove virtual object
Rotation | 15 | Rotation of virtual object

Table 2. Observed interactions from the analysis of 23 apps, divided into three sections for better distinction.


4.1.3 Heuristic Evaluation

Since the designed frameworks (Figures 3, 4, and 5) are more conceptual and the set of ten heuristics developed by Nielsen was designed for evaluating UI designs on desktop [25], certain changes were made for the applied use of a 3D world editor (e.g., ease of use and functionality). The following set of heuristics used in this research was inspired by Nielsen's set of ten heuristics [25] and the work of Huy and Vanthanh [18]:

Heuristic | Description
Visibility of system status | The framework should keep users informed about what is going on, through appropriate feedback.
User control and freedom | The framework provides the users with suitable controls and ways to customize their own home/room.
Error prevention | The framework prevents errors from occurring, such as stacking objects on each other.
Flexibility and efficiency of use | The framework entails flexibility and efficiency in room design.
Customization/Personalization | The framework entails opportunity for customization/personalization.
Ease of use | The framework entails an easy interaction.
Functionality | The framework entails functionalities suitable for the intended outcome.
Non-limitation | Limitations from the user's perspective.
Continuity | Will keep users continuously re-designing their room.

Table 3. Description of the heuristics. The first five heuristics come from Nielsen's original ten heuristics, the other four from Huy and Vanthanh's research. The descriptions have been adapted to fit the purpose of this research.

With the set of heuristics and the three frameworks finalized, five evaluators were invited to individual interviews to validate, suggest improvements to, and rank the three frameworks. Four of the five interviews were held through online video calls with Google Hangouts, whereas one was conducted in person. Additionally, the evaluators came from various backgrounds, with occupations ranging from UX Designer to Game Art Director.

The interviews followed a semi-structured interview style [7], where the issues were first presented to the evaluators, followed by a presentation of the findings (see Table 2). Throughout the entire interview, it was paramount that the experts maintained an open dialogue with the principal investigator. The interview concluded once the experts had rated each framework across each heuristic category. Specifically, these ratings were given on a five-point scale.

After the heuristic evaluations, it was safe to eliminate the framework with the lowest score, framework III, from the study. The focus was then shifted towards frameworks I and II for development and user testing. Frameworks I and II mainly differ in their step II (placement). Thus, the two frameworks could be compared from the standpoint of a grid-system versus a free-system design. The grid-system is when the editor presents a grid upon selection to assist the user interactions of, e.g., translation and rotation in increments. Alternatively, a free-system design gives the user full control of translation to the decimal, which provides more accurate placing.

The following is a summary of the average scores from the heuristic evaluation by the five experts.

Heuristic | I | II | III
Visibility of system status | 4 | 3 | 4
User control and freedom | 4 | 4 | 1
Error prevention | 5 | 2 | 5
Flexibility and efficiency of use | 3 | 3 | 2
Customization/Personalization | 4 | 5 | 2
Ease of use | 5 | 3 | 5
Functionality | 4 | 4 | 3
Non-limitation | 4 | 3 | 1
Continuity | 5 | 4 | 3
Accumulated score | 8 | 4 | 3

Table 4. Summary of the average scores from the heuristic evaluation of frameworks I, II, and III by the five experts.

The experts were asked to express their opinions regarding the frameworks, divided into the three steps. When it came to step I (selection), the general consensus was that a well-categorized UI with a filtering option would be beneficial for large sets of virtual objects; this was also observed for the apps with a stacked inventory list, where finding specific virtual objects became tedious. For step II (placement), the experts expressed that the free-system would yield a feeling of customization, while the grid-system gives clear guidelines and is more suited for the intended audience. Another noteworthy opinion was that both frameworks have fairly simple interactions and the threshold to learn is minimal (i.e., regardless of which system is used for placement of virtual objects, there will not be a huge difference, rather a subjective preference). The grid-system will benefit users who yearn for guidelines and less precision, while the free-system will benefit users who prefer more control over the 3D world editor. When it comes to accessing the manipulation mode for virtual objects, the experts' opinions were divided: some argued that tap is better than hold to initiate manipulation mode, as it is a quicker interaction, while others argued that the interaction of holding down prevents errors, which can occur if a user accidentally taps on a virtual object.

Lastly, the three frameworks were also ranked against one another by the experts, from one as the most favorable framework to three as the least favorable framework. Framework I was ranked as first choice by three experts, followed closely by Framework II, and Framework III was ranked the worst by all five experts.

4.2 Phase II. Development of the 3D World Editor Prototype

The 3D world editor was developed in Unity (v. 2019.3.0f3) targeted towards the iPhone X (iOS v. 13.3.1, resolution: 2436-by-1125 pixels). Apart from Unity's built-in renderer, which was substituted with the Universal Render Pipeline (v. 7.1.6, the Frever app standard), all other standard libraries and packages were unchanged. To build from Unity to the iPhone, Xcode (v. 11.2.1) was used.

4.2.1 Assets

3D assets in terms of 3D models, shaders, and materials were bought in the Unity Asset Store to eliminate time spent on designing 3D models and to focus on the user interaction. To reduce the time it took to design 3D models, the amount of furniture, i.e., virtual objects, was restricted to four pieces of furniture per second-level category, e.g., bed, sofa, cabinet, table, chair, frames, table props, standing lamp, and plants. First-level categories included furniture, wallpaper, and flooring.

As the virtual objects were bought online, a one-by-one cleaning of the assets was done to remove any unwanted and unnecessary scripts and components. All types of furniture had to be assigned an empty parent GameObject to fix the error of pivot positioning (i.e., keeping furniture from different asset packages to the same standard). Most importantly, the addition and adjustment of the ObjectInfo script and a BoxCollider component was done to handle everything regarding the virtual object: collision detection, translation, rotation, removal, and change of shader color. The BoxCollider component is essential for collision detection in Unity (explained further under collision detection).

Lastly, all virtual objects were assigned a representative tag (second-level category) and layer (wall, floor, furniture, frame). Layers in Unity are used to assist with culling for the physics engine, which is used in collision detection.
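The thesis does not list the ObjectInfo script itself; purely as an illustration of the kind of per-object component described above, a hypothetical minimal sketch could look like the following (the field names and the enum are assumptions, not the prototype's actual code).

```csharp
using UnityEngine;

// Hypothetical sketch of a per-object component along the lines of the
// ObjectInfo script described above: it holds category metadata and caches
// the BoxCollider and Renderer used for collision checks and color feedback.
[RequireComponent(typeof(BoxCollider))]
public class ObjectInfo : MonoBehaviour
{
    public enum PlacementLayer { Floor, Wall, Furniture, Frame } // illustrative categories

    public string secondLevelCategory;   // e.g., "bed", "sofa", "table"
    public PlacementLayer placement;     // where this object may be placed

    public BoxCollider Box { get; private set; }
    public Renderer Body { get; private set; }

    private void Awake()
    {
        Box = GetComponent<BoxCollider>();
        Body = GetComponentInChildren<Renderer>();
    }
}
```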

4.2.2 Interactions

The principle of the snap-dragging function by Bier [3, 4] was implemented for Framework I (grid-system), where virtual objects snap along the axes (depending on the type of virtual object, furniture or frame) when dragged. The drag and drop interaction principle [27] was implemented in both frameworks as a result of the original market analysis. The drag and drop interaction principle was implemented to avoid errors of misplacement when translating virtual objects, which can occur with tap and drop.

Upon selection of virtual objects, a change of color was implemented as done by Kovalčík et al. [20]. The color chosen to differentiate a selected virtual object from an arbitrary virtual object was a shade of purple, as no virtual object had a similar color scheme.
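As a rough illustration of how such drag and drop translation can be wired up on a touch screen, the sketch below raycasts the touch position onto the floor and moves the selected object there, optionally snapping to the grid for Framework I; the "Floor" layer name, the toggle, and the reuse of the GridSnap helper from the earlier sketch are assumptions.

```csharp
using UnityEngine;

// Minimal sketch: drag and drop translation of the selected object by
// raycasting the touch position onto the floor. The "Floor" layer name,
// the useGridSnap toggle, and the GridSnap helper are illustrative assumptions.
public class DragTranslate : MonoBehaviour
{
    public Transform selectedObject;   // object currently in manipulation mode
    public bool useGridSnap = true;    // Framework I (grid) vs. Framework II (free)

    private void Update()
    {
        if (selectedObject == null || Input.touchCount != 1) return;

        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Moved) return;

        // Project the finger position into the 3D scene and hit only the floor.
        Ray ray = Camera.main.ScreenPointToRay(touch.position);
        if (Physics.Raycast(ray, out RaycastHit hit, 100f, LayerMask.GetMask("Floor")))
        {
            Vector3 target = hit.point;
            if (useGridSnap)
                target = GridSnap.SnapToGrid(target); // snap in one-unit increments
            selectedObject.position = new Vector3(target.x, selectedObject.position.y, target.z);
        }
    }
}
```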

4.2.3 Camera

For this particular study, the interaction of the camera was neglected in order to focus solely on virtual object selection and manipulation; however, some work was done. Camera interaction is described as the interaction principle of navigation by Mine (see chapter 3.1.1). With Unity's built-in camera, it is easy to change the camera view; for this prototype, a perspective camera was used. A script was added to the camera so that rotation and zoom were handled by simple touch interactions: pinch movements to zoom, and left/right swipes to rotate. The camera was constantly looking towards the middle of the room and there was no other way to move it, so when rotating, it orbited around the middle.
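A minimal sketch of such a touch-controlled orbit camera is given below; the rotation and zoom speeds, the distance limits, and the fixed pivot at the room's center are assumptions for illustration, not the prototype's actual script.

```csharp
using UnityEngine;

// Minimal sketch: the camera orbits a fixed pivot (the middle of the room),
// rotating on horizontal one-finger swipes and zooming on two-finger pinches.
// Speeds, zoom limits, and the pivot itself are illustrative assumptions.
public class OrbitCamera : MonoBehaviour
{
    public Transform pivot;            // center of the room
    public float rotateSpeed = 0.2f;   // degrees per pixel of swipe
    public float zoomSpeed = 0.01f;    // distance change per pixel of pinch
    public float minDistance = 2f;
    public float maxDistance = 15f;

    private void LateUpdate()
    {
        if (Input.touchCount == 1)
        {
            // One finger: swipe left/right to rotate around the pivot.
            float deltaX = Input.GetTouch(0).deltaPosition.x;
            transform.RotateAround(pivot.position, Vector3.up, deltaX * rotateSpeed);
        }
        else if (Input.touchCount == 2)
        {
            // Two fingers: the change in finger separation drives the zoom.
            Touch a = Input.GetTouch(0);
            Touch b = Input.GetTouch(1);
            float current = Vector2.Distance(a.position, b.position);
            float previous = Vector2.Distance(a.position - a.deltaPosition, b.position - b.deltaPosition);
            float distance = Vector3.Distance(transform.position, pivot.position);
            distance = Mathf.Clamp(distance - (current - previous) * zoomSpeed, minDistance, maxDistance);
            transform.position = pivot.position + (transform.position - pivot.position).normalized * distance;
        }
        transform.LookAt(pivot); // always face the middle of the room
    }
}
```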

4.2.4 Constraints

The concept of a resting plane, introduced by Houde [14], was implemented: furniture was bound to the floor by reducing translation in one axis. Hanging items, such as frames, were exposed to all three axes but constrained to be attached to walls (depending on the rotation of the wall in 3D space, one axis would be turned off). Furthermore, the underlying concept of object association, coined by Bukowski and Séquin [6], was also exposed in the 3D world editor, giving users the opportunity to stack smaller objects on tables. The action of stacking objects was restricted to only allow placement of the smaller objects in the middle of the table, regardless of the framework.
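A minimal sketch of how such per-type axis constraints might be expressed is shown below; the helper names and the way the wall normal is passed in are assumptions for illustration, not the prototype's actual code.

```csharp
using UnityEngine;

// Minimal sketch of resting-plane style constraints: floor furniture keeps a
// fixed height (one axis removed), while wall-hung frames are pinned to the
// wall plane along the wall's normal. Parameter names are illustrative.
public static class PlacementConstraints
{
    // Furniture rests on the floor: ignore the requested vertical coordinate.
    public static Vector3 ConstrainToFloor(Vector3 requested, float floorHeight = 0f)
    {
        return new Vector3(requested.x, floorHeight, requested.z);
    }

    // Frames stay attached to a wall: remove any movement along the wall's normal.
    public static Vector3 ConstrainToWall(Vector3 requested, Vector3 wallPoint, Vector3 wallNormal)
    {
        Vector3 offset = requested - wallPoint;
        Vector3 alongNormal = Vector3.Project(offset, wallNormal.normalized);
        return requested - alongNormal; // keep the object in the wall plane
    }
}
```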

4.2.5 Collision Detection

Unity's built-in collision detection methods include collision between meshes and the AABB method. Therefore, no further implementations were developed, as computational complexity was not a necessity or focus of this research.


Unity's OverlapBox function, which is part of the AABB method, was utilized and implemented for the purpose of this research. The function was called each time the selected virtual object was translated, checking for any other virtual object overlapping it. If the function detected an overlap, the selected virtual object changed shader color to red. If no collision was detected, the color switched to green until the Done or Cancel button was pressed. After either of these buttons was pressed, the virtual object changed back to its original color scheme.
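A minimal sketch of this kind of check, built on Unity's Physics.OverlapBox, is shown below; the layer mask, the specific colors, and the CheckPlacement method name are assumptions for illustration.

```csharp
using UnityEngine;

// Minimal sketch: after each translation, test the selected object's box
// against other colliders with Physics.OverlapBox and color it red (blocked)
// or green (free). The layer mask and colors are illustrative assumptions.
public class PlacementCollisionCheck : MonoBehaviour
{
    public LayerMask furnitureMask;       // layers containing other furniture
    public Renderer body;                 // renderer used for the color feedback
    private BoxCollider box;

    private void Awake()
    {
        box = GetComponent<BoxCollider>();
        if (body == null) body = GetComponentInChildren<Renderer>();
    }

    // Call this each time the selected object has been moved.
    public bool CheckPlacement()
    {
        Collider[] hits = Physics.OverlapBox(
            box.bounds.center, box.bounds.extents, transform.rotation, furnitureMask);

        bool blocked = false;
        foreach (Collider hit in hits)
        {
            if (hit != box) { blocked = true; break; } // ignore the object's own collider
        }

        body.material.color = blocked ? Color.red : Color.green;
        return !blocked;
    }
}
```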

4.2.6 Shop UI

The UI for the shop was created with Unity's canvas system. The UI components consisted of buttons, text, and panels, with some panels employing Unity's ScrollRect script for scrollable panels. Part of this research is aimed at designing an extension prototype of the Frever app; therefore, the UI for the shop followed the design principles of the Frever app. Coincidentally, the design principles used in the Frever app correlated with the market analysis of categorized shops seen in Table 2. For the final result, see Figure 6b.

4.3 Phase III. User Study

4.3.1 Background

After development of all the aforementioned interactions, methods, and functions, a user study was designed to test frameworks I and II. The tests were held individually, just like the heuristic evaluation sessions, either through online calls with screen sharing of the phone or in person (three online, seven in person). All user studies were screen recorded.

4.3.2 Tasks

There were eight tasks in total, four per framework. The first task of each framework was identical; however, the remaining three tasks differed depending on the framework being tested. The order in which the frameworks, and their corresponding sets of tasks, were presented was counterbalanced across the ten users to minimize any bias from order of presentation [7]. Specifically, five users received framework I before II, while the other five received framework II before I.

(1) Task 1 evaluated how users found specific virtual objects using the UI-based shop. (2) Task 2 examined the placement of virtual objects. (3) Task 3 investigated users' ability to initiate manipulation mode within the virtual world, and the manipulations of rotation, translation, and deletion. Lastly, (4) task 4 presented users with a pre-furnished room where they were asked to manipulate items within the environment without guidance. After the completion of each task, users were asked to fill out an online questionnaire about their experience completing the task. Specifically, the questionnaire allowed users to rate statements on a five-point Likert scale ranging from strongly disagree to strongly agree. This online questionnaire was used to gain numerical statement data to further investigate the UX per interaction principle in depth. After all the tasks were completed, a short semi-structured interview was conducted to acquire qualitative data that would inform the interpretation of the quantitative data.

4.3.3 Pilot Study

Before the study began, a pilot study was conducted to gauge the approximate time needed to complete the user study. The pilot study also acted as an opportunity to find what was working and what needed to be fixed. At this stage of development, some task descriptions and questionnaire text had to be rephrased for clarity. Additionally, one task was removed from the task compilation as it made the overall test too long (originally 50 minutes, reduced to 30 minutes), allowing for the possibility of user fatigue, which impacted the results of the pilot study.

4.3.4 Users' Thoughts & Observations

As with the heuristic evaluation, the users' commentary from the interviews was logged, analyzed, and compiled. Compared to the heuristic evaluation, where the opinions were on a conceptual level, the users' commentary was based on the actual prototype of the 3D world editor, which means that the users' opinions were specific to the prototype. Notable commentary from two users regarding step I was to make use of generic thumbnails when filtering by category, as they believed the category buttons were too small and a thumbnail would yield a quicker overview of the content of the category. For step II, the overall consensus was similar to what the experts expressed: the grid-system made it easier to place virtual objects, but the free-system provides more precision and the perception of freedom to place as envisioned. Two users expressed that they would prefer a smaller grid if the grid-system were used, as that would provide precision similar to the free-system, but with guidelines and the snap function making planning and interaction easier. Overall, 8 out of 10 users expressed that they would prefer the grid-system over the free-system. For step III, the general opinion was once again divided: half of the users preferred hold-manipulation over tap-manipulation, as it prevents users from accidentally initiating manipulation of the wrong object. The other group thought that hold-manipulation made it more tedious to make changes; this could also be due to the duration of the hold (0.5 seconds). The most notable commentary from a user was that their first intuition is to tap, while holding would be the interaction principle to initiate a more advanced manipulation mode (e.g., tap to translate virtual objects; hold to initiate rotation and removal of virtual objects).


4.4 Final Results

4.4.1 UI

The following figures (Figures 6a and 6b) depict the final UI design for the Frever app and the user study. In Figure 6a, the next button takes the user to the next task, while the i button shows the task description. In Figure 6b, we see different elements of the shop UI: (1) opens the shop; (2) the scrollable horizontal furniture filtering; (3) the scrollable horizontal panel of virtual objects; (4) the expansion button that expands the furniture panel; and (5) the scrollable horizontal panel for first-level categories (i.e., furniture, wallpaper, and flooring).

(a) Task description. (b) Shop UI

Figure 6. (a) and (b) demonstrate how the app was designed for the user studies.

Figure 7 demonstrates the interaction of the grid-system and the free-system, respectively. In Figure 7a, a white grid is displayed over the floor to assist users in planning and placing virtual objects. Figure 7b shows the UI of the free-system, without a grid for planning assistance. The UI panel at the bottom presents the actions available in manipulation mode: rotation and removal. The Cancel button on the left is used to cancel or undo any changes made in manipulation mode. Lastly, the Done button is used to approve any manipulation made to the selected green (visual indication [20]) virtual object.

(a) Grid-system (b) Free-system

Figure 7. (a) and (b) showcase the final manipulation mode UI. The slider is for rotation of the virtual object.

4.4.2 Quantitative Data

By timing the completion of each task, the following two graphs (Figures 8 and 9) and two tables (Tables 5 and 6) were obtained. This analysis was done to further investigate whether there was a significant difference between the two frameworks, or whether differences in the time spent to complete a task were due to familiarity (i.e., iterations) or simply chance. The graphs represent the average time for the ten users to complete each task: Figure 8 between frameworks, Figure 9 between iterations. The two tables present the results of testing for a significant difference using the t-test statistical model: Table 5 between frameworks, Table 6 between iterations.
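For reference, and assuming a paired test (which matches the within-subject design, though the thesis does not state the exact variant used), the t statistic for comparing the two frameworks' completion times takes the following form, where $d_i$ is user $i$'s time difference between the two conditions:

```latex
% Paired t-test on completion times (assumed variant, n = 10 users):
t = \frac{\bar{d}}{s_d / \sqrt{n}}, \qquad
\bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i, \qquad
s_d = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} \left(d_i - \bar{d}\right)^2}
```

with $n - 1 = 9$ degrees of freedom; the reported P values would then come from comparing $t$ against that distribution.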

Figure 8. Average time of completion

Statistical t-test calculations on the times of completion for the two frameworks show that there is no significant difference between the two frameworks (see Table 5 below); the observed differences are attributable to chance.


  | Task 1 | Task 2 | Task 3 | Task 4
P | 0.824 | 0.459 | 0.239 | 0.795

Table 5. t-test between frameworks: if the P value is below 0.05, there is a significant difference between the frameworks.

Figure 9. Average time to complete the tasks where the order of framework was not considered.

The table below shows that there is a significant difference in the time to complete a task between the first and second iteration, regardless of the framework. Task 4 was the only task with an open-ended description, where users could decide whether they wanted to spend more or less time.

  | Task 1 | Task 2 | Task 3 | Task 4
P | 0.001 | 0.000 | 0.000 | 0.152

Table 6. t-test between iterations: if the P value is below 0.05, there is a significant difference between the first and second iteration.

5 Discussion

The initial question this research aimed to answer is: What are the best practice interaction principles for 3D world editors on mobile phones from a user experience perspective?

In the following, one can determine the extent to which there are certain best practice interaction principles that should be used in 3D world editors for mobile phones. Additionally, limitations of the research and areas of further research are discussed.

5.1 Selection

Firstly, the shop UI was developed according to the most used interaction principle for the selection of virtual objects found in the analysis (implemented in 18 of the 23 apps, see Table 2); coincidentally, this interaction principle was similar to the Frever app's shop UI. Most of the 23 apps designed the virtual object selection section as a shop where users had the opportunity to buy virtual objects. Additionally, it was beneficial to employ the Frever app's shop UI because it cut development time drastically. The majority of the 23 apps had a filtering function to filter between different categories and a thumbnail of the virtual objects. Looking at the final user study, it is safe to conclude that the shop was intuitive and easy to follow; however, the users suggested having representative thumbnails for the categories instead of a tiny text tag bar above the shop panel, which would have provided a quicker overview of what the shop had to offer. Apart from that, all users found what they were supposed to look for and used the filtering function without instructions.

5.2 Placement

Regarding step II, the placement interaction, the three frameworks provided distinct interactions in terms of placing virtual objects; specifically, they offered a grid, free, and predefined system, respectively. By examining the interaction principle analysis table (Table 2), one could assume that the predefined system would have had a better ranking amongst the five experts. However, the experts in the heuristic evaluation saw frameworks I and II as two very similar systems, with framework I providing greater simplicity in the placement of virtual objects. The end users expressed similar thoughts, believing that a grid-system was more intuitive and easier to manipulate than a free-system. In short, a grid-system gave an instant visualization of where a virtual object would land, whereas a free-system compromised such ease for greater precision. One end user suggested the capability to switch modes, between a grid- and a free-system, in order to give users precision or ease on their own terms.

5.3 Manipulation

For tap versus hold interactions, there were mixed reviews amongst experts and end users, as both actions work in similar ways. Based on observations and verbal suggestions made by both experts and end users, it can be determined that a combination of both tap and hold interactions would be the most appropriate option. Specifically, a combination of tap and hold refers to tap to translate, and hold to enter more advanced manipulation options such as rotation and deletion. This conclusion is based on observations of users accidentally entering manipulation mode for the wrong virtual object.

5.4 Quantitative result

When it comes to efficiency or usability, there was no significant difference between the frameworks. However, by examining Figure 9 and Table 6, one can see a significant difference in time of completion between the first time testing the app and the second time, regardless of the framework. This indicates that the 3D world editor prototype developed in this research has an easy-to-learn UX design, resulting in better user performance after practice, regardless of framework. Thus, neither framework contributes to greater efficiency of 3D world editors, but familiarity with the app does.

5.5 3D World editors

World editors in general are very constrained in terms of interaction, independent of the input device. Traditionally, for computers, the UI tends to be quite complex and cluttered, as mentioned by Ali et al. [1]. A large share of the research interest in 3D world editors is, unfortunately, in virtual reality (as seen in chapter 3). What distinguishes these two input devices from mobile phones is that, on a phone, everything from the virtual world to the input device is within the palm of one's hand.

There are disadvantages when it comes to mobile phones, as the gateway to the virtual world and the input device share the same surface. While 3D world editors on computers can clutter the UI with, e.g., buttons and information, mobile phones tend to simplify as much as possible, as the screen is smaller. This leads to simpler UI design by reducing the amount of text and buttons, which this research attempted by placing all UI elements at the bottom of the screen.

Virtual reality, on the other hand, allows the user to be fully immersed and to physically move in the virtual world. Again, mobile phones have to simplify interaction principles and make things work with minimum effort on a small screen; an example in this research is locking the camera position to the middle of the room to avoid unwanted translation in 3D space, constraining the 3D world editor as much as possible.

6 Method Critique

The most efficient collision detection method in Unity is the OnCollisionEnter callback; however, a problem arose when designing this prototype because Unity does not call this function if the virtual object is not being moved by Unity's physics engine. Therefore, the OverlapBox function was used instead. Using OverlapBox resulted in a different problem, however, as this function was called after each translation and had to re-calculate the bounding box for each call. This resulted in a minor fps drop as more computational power was used on the phone; however, it was never mentioned by the users, only noticed by the author (developer) himself.

The grid-system was implemented so that virtual objects were translated in increments of one unit (one Unity unit is supposedly equivalent to one meter in real life), which resulted in a limitation of placement.

Due to Covid-19, adjustments had to be made in terms of test users. Therefore, only a small number of users were invited to the study, and the study had to be conducted under a within-subject experimental model. This minimized the opportunity for more thorough and isolated tests, which was specifically observed as users became confused when the same tasks resurfaced again. The study would benefit from a between-subject model with a bigger user population, which would allow for more isolated tasks; i.e., there were no isolated tasks for the rotation interaction for different-sized virtual objects, as some occupy more grid area than others.

7 Future Work

For future work, implementing a smaller grid-system (smaller than one unit) would be of interest. The smaller the grid, the more similar the grid-system becomes to the free-system. There must be a threshold for how small or big the grid size has to be to make a significant or insignificant difference in terms of UX.

The interactions analyzed in this research were based upon the market analysis; other interaction principles utilizing, e.g., augmented reality (AR) technology or the accelerometer may change the outcome and be a better practice to implement for mobile phones.

For the evaluation of the prototype, this research solely focused on the iPhone X and teenagers from Sweden. In the future, a broader evaluation in terms of devices with different resolutions, sizes, and operating systems, as well as users of different ages and cultures, would give a broader understanding of whether the choice of interaction principles is universal or depends on device, age, and culture.

8 Conclusion

As there are interaction principles in 3D world editors not covered in this research, and the number of subjects was minimal, no general conclusion can be drawn for all 3D world editors on mobile phones. However, for this particular research and the interaction principles observed, it is safe to conclude that the best practice interaction principles for this prototype of a 3D world editor on mobile phones are the combination of a well organized, categorized shop for the selection of virtual objects; a grid-system (with smaller increments) for the placement of virtual objects; and, for the manipulation use case, drag and drop for translation and hold-manipulation (less than 0.5 seconds, with visual feedback) for initiating manipulation mode (rotation and deletion) of virtual objects.

Acknowledgments

Thanks to the whole team at Friend Factory AB. For my time at KTH, I want to thank Max Turpeinen, Mario Romero, and specifically Björn Thuresson. Last but not least, thank you Erica M. Williams.

References

[1] Nazlena Mohamad Ali, Siti Zahidah Abdullah, Juhana Salim, and Hyowon Lee. 2013. Exploring user experience in game interface: a case study of The Sims 3. The Computer Games Journal 2, 1 (2013), 6–18.

[2] Javier A Bargas-Avila and Kasper Hornbæk. 2011. Old wine in new bottles or novel challenges: a critical analysis of empirical studies of user experience. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2689–2698.

[3] Eric A Bier. 1990. Snap-dragging in three dimensions. ACM SIGGRAPH Computer Graphics 24, 2 (1990), 193–204.

[4] Eric A Bier and Maureen C Stone. 1986. Snap-dragging. ACM SIGGRAPH Computer Graphics 20, 4 (1986), 233–240.

[5] Doug A Bowman, Donald B Johnson, and Larry F Hodges. 2001. Testbed evaluation of virtual environment interaction techniques. Presence: Teleoperators & Virtual Environments 10, 1 (2001), 75–95.

[6] Richard W Bukowski and Carlo H Séquin. 1995. Object associations: a simple and practical approach to virtual 3D manipulation. In Proceedings of the 1995 Symposium on Interactive 3D Graphics. 131–ff.

[7] Dana Chisnell and Jeffrey Rubin. 2008. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. Computer Bookshops Limited (2008).

[8] Anja Endmann and Daniela Keßner. 2016. User Journey Mapping – A Method in User Experience Design. i-com 15, 1 (2016), 105–110.

[9] Will Goldstone. 2011. Unity 3.x Game Development Essentials. Packt Publishing Ltd, Chapter 1, 16–17.

[10] Michael Goesele, Wolfgang Stuerzlinger, et al. 1999. Semantic constraints for scene manipulation. In Proceedings of the Spring Conference on Computer Graphics '99 (Budmerice, Slovak Republic). Citeseer.

[11] Marc Hassenzahl. 2018. The thing and I: understanding the relationship between user and product. In Funology 2. Springer, 301–313.

[12] Marc Hassenzahl and Noam Tractinsky. 2006. User experience - a research agenda. Behaviour & Information Technology 25, 2 (2006), 91–97.

[13] Roland Holm, Erwin Stauder, Roland Wagner, Markus Priglinger, and Jens Volkert. 2002. A combined immersive and desktop authoring tool for virtual environments. In Proceedings IEEE Virtual Reality 2002. IEEE, 93–100.

[14] Stephanie Houde. 1992. Iterative design of an interface for easy 3-D direct manipulation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 135–142.

[15] Hamid Hrimech, Leila Alem, and Frederic Merienne. 2011. How 3D interaction metaphors affect user experience in collaborative virtual environment. Advances in Human-Computer Interaction 2011 (2011).

[16] Rui Huang. 2012. Optimizing collision detection in 3D games with model attribute and Bounding Boxes. In 2012 IEEE Symposium on Electrical & Electronics Engineering (EEESYM). IEEE, 589–591.

[17] Philip M Hubbard. 1993. Interactive collision detection. In Proceedings of 1993 IEEE Research Properties in Virtual Reality Symposium. IEEE, 24–31.

[18] Ngu Phuc Huy and Do Vanthanh. 2012. Evaluation of mobile app paradigms. In Proceedings of the 10th International Conference on Advances in Mobile Computing & Multimedia. 25–30.

[19] Laurie Kantner and Stephanie Rosenbaum. 1997. Usability studies of WWW sites: Heuristic evaluation vs. laboratory testing. In Proceedings of the 15th Annual International Conference on Computer Documentation. 153–160.

[20] Vít Kovalčík, Jan Flasar, and Jiří Sochor. 2007. Extensible approach to the virtual worlds editing. In Proceedings of the 5th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa. 31–37.

[21] Kathryn Elizabeth Merrick and Mary Lou Maher. 2007. Motivated reinforcement learning for adaptive characters in open-ended simulation games. In Proceedings of the International Conference on Advances in Computer Entertainment Technology. 127–134.

[22] Mark Mine. 1995. ISAAC: A virtual environment tool for the interactive construction of virtual worlds. UNC Chapel Hill Computer Science Technical Report TR95-020 (1995).

[23] Rolf Molich and Jakob Nielsen. 1990. Improving a human-computer dialogue. Commun. ACM 33, 3 (1990), 338–348.

[24] Jakob Nielsen. 1994. Usability Engineering. Morgan Kaufmann, Chapter 2.

[25] Jakob Nielsen. 1994. Usability inspection methods. In Conference Companion on Human Factors in Computing Systems. 413–414.

[26] Jakob Nielsen and Rolf Molich. 1990. Heuristic evaluation of user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 249–256.

[27] Bill Scott and Theresa Neil. 2009. Designing Web Interfaces: Principles and Patterns for Rich Interactions. O'Reilly Media, Inc.

[28] Graham Smith and Wolfgang Stuerzlinger. 2001. Integration of constraints into a VR environment. In VRIC '01: Proceedings of the Virtual Reality International Conference. 103–110.

[29] Rosa Yáñez Gómez, Daniel Cascado Caballero, and José-Luis Sevillano. 2014. Heuristic evaluation on mobile interfaces: A new checklist. The Scientific World Journal 2014 (2014).

www.kth.se

TRITA-EECS-EX-2020:494
