An Empirical Study of Task Support in 3D Information Visualizations

Ulrika Wiss
Department of Computer Science
Luleå University of Technology
S-971 87 Luleå, Sweden
e-mail: ulrika@cdt.luth.se

David A. Carr
Department of Computer and Information Science
Linköping University
S-581 83 Linköping, Sweden
e-mail: davca@ida.liu.se

Abstract

There is still little knowledge about what factors are important for the usability of a 3D user interface. We have performed a comparative study of three 3D information visualizations as a step towards a better understanding of this. The study involved 25 volunteer subjects, performing three different tasks with the visualizations.

The results of the study indicate that local and global overview is an overwhelmingly important factor. We also find that custom navigation is crucial in 3D user interfaces. Finally, the study raises the question of what types of tasks a 3D user interface is best suited for.

1 Introduction

Lately, technological advances in computer graphics have made 3-dimensional (3D) information visualization feasible on personal computers. Meanwhile, the World-Wide Web (WWW) has made a vast amount of information available to individuals. In parallel with these developments, a number of 3D information visualizations have been invented by both researchers and commercial software developers. But the research community has still made very few comparative studies of these visualizations.

The study described in this paper aims to explore what factors are important for task support in a 3D information visualization. Three 3D information visualizations were studied: the Cam Tree [11], the Information Cube [9] and an Information Landscape [1, 14].

These information visualizations all visualize hierarchical data. In our study, we chose to visualize a hierarchical file system, which is familiar to most computer users.

The three user tasks in our study were based on the seven high-level information visualization tasks as defined by Shneiderman [13]:

Overview: Gain an overview of the entire collection.

Zoom: Zoom in on items of interest.

Filter: Filter out uninteresting items.

Details-on-Demand: Select an item or group and get details when needed.

Relate: View relationships among items.

History: Keep a history of actions to support undo, replay, and progressive refinement.

Extract: Allow extraction of sub-collections and of the query parameters.

When designing tasks for the study, we refined three of these tasks into lower-level subtasks relevant to the hierarchical file system domain. Based on the Zoom task, we designed a Search task where subjects needed to zoom into the visualization to find directories. Based on the Relate task, we designed a Count task where subjects counted the number of files in directories. And finally, we designed a Compare task based on Shneiderman's Overview task, where users compared two directories and selected the largest one.

We first describe the three visualizations used in the study. The study setup, including tasks, data set, subjects, and procedures, is then described. After this, we go on to report on the statistical analysis of the results. These results are then discussed, and conclusions about important factors are drawn. Finally, we review some related work in the area and point out future work.

2 The Three Visualizations

In order to compare task support between the visualizations, all three 3D information visualizations used in our study visualize the same type of data: hierarchies. The chosen visualizations all differ in the way they visualize the hierarchical data. The Information Landscape is a "2.5D" visualization, meaning that the tree layout is restricted to a flat surface. The Cam Tree is more of a true 3D visualization, using all dimensions to lay out the tree. Differing from the two others, the Information Cube does not display the data in a traditional tree structure. It instead fully exploits the possibilities of 3D space by representing the hierarchy as nested cubes.


2.1 The Cam Tree

The Cam Tree visualizes hierarchies as trees of labeled rectangular shapes representing nodes and leaves, interconnected by lines. Each subtree is laid out as a cone with its root at the top of the cone and the children along the cone base. (Another version of the same idea, the Cone Tree, instead lays out the tree vertically.) Rectangles and cones are semi-transparent in order to reduce problems with occlusion. Figures 6 and 3 show zoomed-in views of the Cam Tree in our implementation.

Interactions with the Cam Tree include rotation of the tree when a node or leaf has been selected with the mouse. This brings the path to the selected rectangle closest to the user and highlights the rectangles on that path. It is also possible to prune the tree via a control panel with buttons.

Our implementation of the Cam Tree does not include any independent rotation of sub-trees, nor any pruning operations. Also, in the cited paper [11] the design included the shadow of the tree placed underneath the tree. We did not implement this since its importance was said to be small.

2.2 The Information Cube

The Information Cube uses semi-transparent, nested cubes to represent leaves or internal nodes. The parent-child relationships are represented by nesting child cubes inside their parent cubes, scaling cubes to enclose the contained cubes. Textual labels are displayed on cube surfaces. Color and degree of transparency indicate the currently selected cube. Figure 5 shows a zoomed-in view of the Information Cube in our implementation.

The original system is designed for use with special virtual reality equipment. The output can either be displayed stereoscopically or monoscopically with head motion tracking. The input device allows a model of the user's hand to be displayed within the visualization for grabbing, rotating, and pointing.

Our implementation includes no virtual reality equipment or stereoscopic display techniques. Consequently, the model of the user's hand and the associated affordances are not present in our implementation.

2.3 The Information Landscape

Our implementation of the Information Landscape is based on the File System Navigator (fsn) [14] from Silicon Graphics and the Harmony Information Landscape [1]. These two information visualizations are similar, basically differing only in how they are used by the surrounding application.

In the Information Landscape, internal nodes are represented as pedestal shapes standing on a flat surface with lines connecting pedestals to form a tree. Leaves are represented as box shapes standing on the pedestals. This makes the pedestal cross-section proportional to the number of leaf children. The height of the boxes encodes an attribute such as the size of the data element represented by the box. In this "2.5D" visualization, the pedestals are restricted to a flat surface in 3D space. Only the box height makes use of the third dimension. Figure 4 shows a zoomed-in view of the Information Landscape in our implementation.

Interactions in the Information Landscape include selecting boxes or pedestals with the mouse and "flying" up to a viewpoint close to the selected element. As with the two other visualizations, our implementation of the Information Landscape does not include this custom navigation feature.

3 Study Setup

We designed three tasks based on Shneiderman's [13] seven high-level information visualization tasks. The three tasks were all related to the application domain in question (file system) and were expected to reveal different factors affecting the task support of a 3D information visualization.

Task 1: Search. The subjects were instructed to find two named directories and click on them in the visualization. This task is based on Shneiderman's Zoom task, since we expected that subjects would need to zoom into the visualization to look for the directories, and then zoom up to the directory to click on it.

We thought performance on this task would be affected by the degree to which the visualization supported a global context while zooming in on a directory. The expectation was that the Cam Tree would provide this best, while the Information Cube and the Information Landscape would suffer from the lack of global context.

Task 2: Count. The subjects were instructed to count the number of files in each of the two directories they found in the previous task. This task is based on Shneiderman's Relate task, since users had to understand parent-child relationships in the visualizations.

Here, we expected that performance would be affected by the way the visualization helps the user separate directories and files. The Information Landscape was expected to give the best support for this, and the Cam Tree the least.

Task 3: Compare. The subjects were instructed to compare the two directories and select the one that contained the most descendants (i.e., files, subdirectories, files in subdirectories, etc.). This task was based on Shneiderman's Overview task. The subjects had to gain a (local) overview of each directory to discover how many children it contained, and/or a (global) overview of the two directories simultaneously to compare them with less detailed inspection.

Level  Directories  Files
  0         1          0
  1         3          5
  2         7         10
  3         4         25
  4         0         20

Table 1: Data set description

The performance for this task was expected to be affected by the support for getting an overview of the number of children in a directory. Here, the Information Cube was thought to give the least support since children on lower levels are occluded.

We implemented a visualization generator as described in [17]. The visualization generator takes hierarchical data as input and creates any of the three different 3D visualizations in VRML2.0 format. As described in Section 2, the VRML2.0 implementations do not include any of the custom interaction or navigation features found in the original implementations. Instead, we rely entirely on the VRML browser's interface for navigation and interaction. The VRML browser used for the study was CosmoPlayer2.0 from SGI. It offers all the basic navigation functions, such as moving in all directions, rotating, undoing movements, and moving to pre-set viewpoints. By eliminating all custom navigation, we ensured that the three visualizations were used under similar conditions in the study. We also hoped to get an indication of the importance of custom navigation.

For this study, we generated six different data sets, representing files and directories in a hierarchical file system. All six data sets were some permutation of the description in Table 1. The two directories involved in the three tasks were both located on level 2 in the tree.

Using these six data sets as input to the visualization generator, we created one Cam Tree visualization, one Information Cube visualization, and one Information Landscape visualization with each data set.
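The kind of data set described in Table 1 can be sketched in code. This is a minimal illustration, not the generator of [17]: the function name, the path scheme, and the random choice of parent directories are our own assumptions; only the per-level counts of directories and files come from Table 1.

```python
import random

# Directory/file counts per tree level, taken from Table 1.
LEVELS = [(1, 0), (3, 5), (7, 10), (4, 25), (0, 20)]  # (directories, files)

def generate_dataset(seed):
    """Build one random hierarchy whose per-level counts match Table 1.

    Returns a list of (path, kind) tuples, where kind is 'dir' or 'file'.
    Directories and files on each level are attached to randomly chosen
    parent directories on the level above.
    """
    rng = random.Random(seed)
    nodes = [("root", "dir")]          # level 0: the single root directory
    parents = ["root"]                 # directories on the previous level
    for level, (n_dirs, n_files) in enumerate(LEVELS[1:], start=1):
        new_parents = []
        for i in range(n_dirs):
            parent = rng.choice(parents)
            path = f"{parent}/d{level}_{i}"
            nodes.append((path, "dir"))
            new_parents.append(path)
        for i in range(n_files):
            parent = rng.choice(parents)
            nodes.append((f"{parent}/f{level}_{i}", "file"))
        parents = new_parents
    return nodes
```

Under this reading, each of the six data sets would correspond to a different random permutation of parent assignments (a different seed), while the level profile of Table 1 stays fixed.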

The user interface for this study consisted of a num- ber of WWW pages generated by CGI scripts. Each WWW page contained a visualization, displayed in an embedded CosmoPlayer browser, and a Java applet that gave instructions and registered times and answers for the three tasks.

Each subject performed the three tasks six times, twice with each visualization. The six data sets were randomly assigned so that no data set was seen twice by the subject. The order in which the visualizations were presented was also randomized, but the subjects always performed the tasks twice in a row with one visualization before moving on to the next. All randomization was done at runtime by the CGI scripts, and the results of the randomization were registered together with the task results.
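The randomization scheme above can be sketched as follows. This is a hypothetical reconstruction of what the CGI scripts did, not their actual code; the function name and the returned trial format are our own.

```python
import random

def assign_conditions(seed):
    """Assign the six data sets to a per-subject random visualization order.

    Each visualization appears as a block of two consecutive trials, and no
    data set is used twice for the same subject. Returns a list of six
    (visualization, dataset) trials in presentation order.
    """
    rng = random.Random(seed)
    datasets = rng.sample(range(1, 7), 6)   # random permutation of data sets 1-6
    vis_order = rng.sample(["Cam Tree", "Information Cube",
                            "Information Landscape"], 3)
    # Two consecutive trials per visualization, each with a distinct data set.
    return [(vis, datasets[2 * i + j])
            for i, vis in enumerate(vis_order) for j in range(2)]
```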

The subjects for the study were mainly recruited from the student body at Luleå University of Technology. Information about the study was distributed via email and flyers. The ages of the 25 subjects ranged from 19 to 35 years, with a median age of 22 years.

Four were female. Three of the subjects stated that they had been using computers for 1-3 years, six of them for 3-6 years, and the majority, 16 of the subjects, for more than six years. Most subjects also used computers frequently; 19 of them stated that they use computers more than 10 hours a week, and none that they use computers less than one hour per week. When asked if they had ever used any software with 3D graphics, seven answered no, and the remaining 18 answered yes.

Each subject spent about 1-1.5 hours on the study, including about 20 minutes of practice tasks to make sure the user was familiar with CosmoPlayer, the visualizations, and the three tasks. During the study, time per task and error frequency were recorded via the Java applet. The study leader also made notes of the subjects' behavior and comments. If the subject had not finished a task within five minutes, the task timed out and the subject was instructed to move on to the next task. Subjects were at all times allowed to abort a task if they felt they could not perform it without guessing.

After the subject finished all the tasks, a questionnaire form was displayed. It contained background questions about the subject, such as age and computer experience. The subjects were also given the opportunity to rate the visualizations on the questionnaire.

Each of the three visualizations was given a rating from 1 to 7 on four different scales:

1. Was the visualization good or bad?

2. Was the visualization easy to use or difficult to use?

3. Was the visualization boring or stimulating?

4. Was the visualization aesthetically pleasing or not?

4 Statistical Analysis

Due to technical problems during the study, results from two of the subjects were removed from the analysis. The remaining results were analyzed in two ways: ANOVA for the task times, and Chi-Square analysis for the error frequencies.

4.1 Analysis of Task Times

                      Search  Count  Compare
Info Land               20.4   42.8     24.7
Cam Tree                67.7   88.7     46.4
Info Cube, censored    219.4  246.8    166.0
Info Cube, simulated   226.4  291.3    160.6
Simulated values          18     20        7

Table 2: Mean task times, seconds

A total of 45 tasks were either timed out (at 5 minutes) or aborted by the user. All of these were tasks using the Information Cube. We first assigned a time of 5 minutes to these 45 tasks for our analysis. This, however, caused a skewed distribution of the data, not the near-normal distribution needed for the ANOVA. We therefore replaced this censored data with simulated data, using the following procedure:

1. From the non-censored data, randomly select the number of times needed.

2. Find the median of the non-censored data.

3. Add the median to each of the randomly selected times.

4. Replace the censored data with these times.
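The four-step procedure can be expressed directly in code. This is a sketch under one assumption the text leaves open: whether the random selection in step 1 is done with or without replacement (we draw with replacement here). The function name is ours.

```python
import random
import statistics

def simulate_censored(observed, n_censored, seed=None):
    """Replace censored task times following the paper's procedure:
    (1) randomly select the needed number of times from the non-censored
    data, (2) find the median of the non-censored data, (3) add the median
    to each selected time, (4) return these as the replacement values.
    """
    rng = random.Random(seed)
    median = statistics.median(observed)
    draws = [rng.choice(observed) for _ in range(n_censored)]  # with replacement
    return [t + median for t in draws]
```

Because every simulated value is a real observation shifted up by the median, the replacements always land in the upper range of the distribution, which is what one wants for trials known to have exceeded the timeout.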

Figure 1 shows means from the simulated data in graphical form. Table 2 shows mean times for all three tasks. For the Information Cube, two means are shown: the mean including censored (5-minute) values for timeouts and aborts, and the mean including simulated data (as described above). The simulated data was used in the ANOVA. The table also displays the number of simulated values for each task. As can be seen, only seven values needed to be simulated for the Compare task. This explains the fact that the simulated mean for this task is lower than the censored mean. The random sampling procedure resulted in some lower-range values that had a strong overall effect on the small number of simulated values.

We found that the variance was considerably larger for the Information Cube than for the two other visualizations. The ANOVA requires stable variances, so to achieve this we transformed the values with a logarithm function before doing the ANOVA.
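The effect of the logarithm transform can be illustrated with synthetic, roughly lognormal task times. The parameters below are illustrative only, not the study's data; they merely give the Information Cube group a much larger spread, as observed in the study.

```python
import math
import random
import statistics

# Synthetic task times (seconds): exp of a normal variable, so the raw
# times are lognormal and the log-transformed times are normal.
rng = random.Random(0)
times = {
    "Info Land": [math.exp(rng.gauss(3.0, 0.4)) for _ in range(46)],
    "Cam Tree":  [math.exp(rng.gauss(4.2, 0.4)) for _ in range(46)],
    "Info Cube": [math.exp(rng.gauss(5.3, 0.8)) for _ in range(46)],
}

# Group variances on the raw scale and after the log transform.
raw_var = {k: statistics.variance(v) for k, v in times.items()}
log_var = {k: statistics.variance([math.log(t) for t in v])
           for k, v in times.items()}
# On the raw scale the Information Cube variance dwarfs the others; after
# the log transform the three group variances are of comparable size, so
# the equal-variance assumption of the ANOVA becomes reasonable.
```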

ANOVA tables for the three tasks can be found in Tables 3, 4, and 5. To block out unwanted effects, the ANOVA included effects of the different datasets and individual subjects. We also included the sequence of the visualization, i.e., a number from one to six signifying whether this was the first, second, etc., time that the subject performed the task. The visualization effect is in that way cleared of any contribution from the other effects.

Looking at the "P-Value" column in the ANOVA tables, we see that the p-value for the visualization effect is in all cases well below 1%. This means that we can say, with 99% confidence, that there is a significant difference between the times to perform the tasks with the three visualizations. We also performed confidence interval analysis on the data, at the 95% level. This analysis confirmed that for all tasks, the mean times for the different visualizations are distinctly different. The mean time is lowest for the Information Landscape, higher for the Cam Tree, and highest for the Information Cube.

Figure 1: Mean task times

MAIN EFFECTS   F-Ratio  P-Value
dataset           2.24   0.0560
subject           1.52   0.0844
visualization   202.78   0.0000
sequence          1.36   0.2467

Table 3: ANOVA table, Search Task

We also see that the subject effect is significant for the Count and Compare tasks. These two tasks required the subjects to navigate a lot. Some of the subjects managed navigation quite well, while others had difficulties with the CosmoPlayer navigation controls.

For the Compare task, effects of dataset and sequence are also significant. In four of our six datasets, one of the two directories contained no subdirectory. It was those datasets that showed the lowest mean task times. Many of the subjects did not need to count files and directories for these datasets, but instead immediately chose the directory without a subdirectory. The sequence effect is very interesting. The lowest means were found for the second, fourth, and sixth visualization. This means that the subjects performed the tasks faster the second time around with each type of visualization. Our observations and the comments from the subjects give a possible explanation for this. The subjects learned to remember the relative sizes of the directories from the Count task, and used this knowledge in the Compare task. The first time with each visualization subjects were not able to do this, but the second time they had grown more familiar with the visualization and could use this strategy to perform the task faster.


MAIN EFFECTS   F-Ratio  P-Value
dataset           1.23   0.3022
subject           3.66   0.0000
visualization   259.51   0.0000
sequence          1.17   0.3280

Table 4: ANOVA table, Count Task

MAIN EFFECTS   F-Ratio  P-Value
dataset           4.69   0.0007
subject           2.05   0.0091
visualization   100.11   0.0000
sequence          3.83   0.0033

Table 5: ANOVA table, Compare Task

           Info Land   Cam Tree    Info Cube
No Error      46          46          28
Error      0 (0/0/0)   0 (0/0/0)  18 (0/17/1)

Table 6: Error Frequency Table (erroneous answers/timeouts/aborts), Search task.

           Info Land   Cam Tree    Info Cube
No Error      46          36          10
Error      0 (0/0/0)  10 (10/0/0) 35 (15/17/3)
Skipped        0           0           1

Table 7: Error Frequency Table (erroneous answers/timeouts/aborts), Count task.

           Info Land   Cam Tree    Info Cube
No Error      46          44          33
Error      0 (0/0/0)   0 (0/0/0)   9 (2/4/3)
Skipped        0           2           4

Table 8: Error Frequency Table (erroneous answers/timeouts/aborts), Compare task.

Figure 2: Subjects' ratings of the visualizations

4.2 Analysis of Error Frequencies

As errors, we counted erroneous answers (not applicable for the Search task), aborts, and timeouts. Since the Chi-Square Test requires the expected frequencies for each cell to be at least 5, we summed the three types of errors into one row. In Tables 6, 7, and 8, the numbers of erroneous answers, timeouts, and aborts are shown within parentheses next to the sum of errors.

For the Count and Compare tasks, the tables also contain a row labeled "Skipped". The numbers in this row represent tasks that were accidentally skipped by the subjects. For the Compare task, this row also includes four tasks for which subjects stated that they accidentally selected the wrong directory while navigating.

The Chi-Square Test was performed at the 99% level of confidence. For all three tasks, the Chi-Square Test showed that the error frequencies were indeed related to the visualization used to perform the task.
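The Chi-Square computation can be reproduced from the published frequencies. Using Table 6 (Search task) as an example, a plain Pearson chi-square statistic on the summed rows already shows the effect:

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand  # expected frequency
            stat += (obs - exp) ** 2 / exp
    return stat

# Table 6 (Search task): columns are Info Land, Cam Tree, Info Cube.
table6 = [[46, 46, 28],   # No Error
          [0, 0, 18]]     # Error
stat = chi_square(table6)
# Every expected error frequency is 6.0 (>= 5, as the test requires), and
# with 2 degrees of freedom the 99% critical value is 9.21. The statistic
# (41.4) far exceeds it, so error frequency depends on the visualization.
```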

5 Discussion

So, were our initial expectations as described in Section 3 met? Overall, the answer is no. We can, however, see that some of the factors we anticipated did affect performance. This is further discussed in Section 5.1 below.

But, as the statistical analysis of the results shows, the Information Cube performed worst for all tasks, and the Information Landscape performed best. Looking at the subjects' ratings of the visualizations (Figure 2), we see that they were overall most satisfied with the Information Landscape, and least satisfied with the Information Cube. Our observations during the study also confirm this.


Figure 3: Separating files and directories in the Cam Tree

It seems that factors we had not taken into account, or had not expected to have such an influence, played an important role in the usability of the visualizations. Our results indicate that the most influential factor was overview. This can be further refined into local and global overview, as described in Sections 5.2 and 5.3. Navigation also played an important role in our results, as described in Section 5.4.

5.1 Initial Expectations

For the Search task, we expected both the Information Landscape and the Information Cube to suffer due to lack of global context while zooming. This was, however, true only for the Information Cube. In the Information Landscape, subjects either did not need to zoom to find the directories, or they managed to find their way around the visualization without the global context.

Our initial expectation for the Count task was that the Cam Tree makes it difficult to separate files from directories. This is illustrated in Figure 3. The two rectangles labeled "work" and "java" are directories. The only indication of this in the visualization is that there are cones attached to their right side. Looking at the error frequencies for the Count task, we can note that this is the only task where there were errors with the Cam Tree. We believe that this indicates that our initial expectation was actually correct, but we believe that the effect of overview (described below) was a more powerful factor.

For the Compare task, our initial expectation was that the Information Cube would perform worst since children on lower levels were occluded. The Information Cube did indeed perform worst, but not more so in this task than in the two others, which leads us to believe that once again it was more the lack of local and global overview that affected the performance.

Figure 4: Local overview in the Information Landscape

5.2 Local Overview

When examining a directory in detail, subjects had to use different strategies with the three visualizations. These strategies varied due to the degree of "3D-ness" in the visualizations as described in Section 2.

The Information Landscape is the "least" 3D of the visualizations. Here, a directory could often be examined without too much navigation. As a matter of fact, a majority of the subjects figured out that if they flew up above the Information Landscape and tilted down, the tree of directories and files virtually became a 2D visualization (Figure 4). This made counting files and directories faster and easier, since subjects did not need to move around in 3D space to look at objects from different angles.

The Information Cube is the "most" 3D of the visualizations. To examine a directory, subjects had to look into a directory cube from different angles to be able to see all the contained cubes. This was made even more difficult by the fact that adjacent cubes would block subjects' line of sight at times, as illustrated in Figure 5. Several of the subjects explicitly wished for the possibility to filter out all other cubes from the view.

The degree of "3D-ness" in the Cam Tree is lower than in the Information Cube but higher than in the Information Landscape. Consequently, strategies to examine a directory varied. Subjects were at times able to get an overview of the contents of a directory from one single position, but just as often they needed to navigate around the tree to make sure they had seen everything in the subtree cone (Figure 6). Adjacent cones blocking the line of sight was not as big a problem here, but when we informed subjects about the pruning function in the original implementation of the Cam Tree, most of them said that it would have made the tasks easier.


Figure 5: Local overview in the Information Cube

Figure 6: Local overview in the Cam Tree

5.3 Global Overview

In addition to experiencing a lack of local overview when studying a directory in detail, subjects also suffered disorientation and lack of global overview when navigating within the visualizations. The nature of our tasks, which all involved two directories in a file system, made it necessary for the subjects to navigate between the directories to perform the task.

We observed that the subjects had great difficulties with this in the Information Cube. The lack of a global context caused them to get lost frequently, head back to the preset viewpoint outside the root cube, and then have difficulties locating the directories again since they were occluded by surrounding cubes. Some subjects explicitly asked for an overview map so that they would be able to know where they were.

Contrary to our expectations, subjects did not experience this loss of global context when using the Information Landscape. They always had a sense of "up" and "down" and could quickly find their way again on the few occasions they got lost. (We must, however, remember that the data sets used in our study were relatively small. 3D visualizations are often claimed to be useful for large data sets, so we cannot say whether this result would have been different with, say, 100 times more data.)

In the Cam Tree, the horizontal orientation of the tree seemed to provide a similar aid to orientation. Interestingly enough, two of the subjects rotated the entire tree to a vertical position (like a Cone Tree) and stated that the global and local overview improved.

5.4 Navigation

As previously mentioned, no custom navigation was available to the subjects in the study. The CosmoPlayer navigation controls were not easy to master for most of the subjects, in spite of the practice tasks performed. It is indeed possible that efficient custom navigation might have improved the performance for those visualizations that required a lot of navigation to overcome the lack of global context.

After the study was finished, we informed the subjects about the navigation features offered by the original implementations of the visualizations, and asked them if they thought these would have helped them in performing the tasks. For the Information Landscape, most subjects thought that it was easy enough as it was. For the Information Cube, most subjects thought it would be interesting to try the virtual reality equipment, but no one seemed convinced that this would help alleviate the lack of global context. Only for the Cam Tree did the subjects reply that features such as independent rotation of the cones and pruning of the tree would have made a big difference.


5.5 Conclusion

To summarize, our study shows that the possibility to get a good local and global overview is the single most important factor in supporting the types of tasks that we studied. One way to overcome a lack of overview is, as suggested by some of our subjects, to provide an overview map of some kind.

Custom navigation might also be a partial solution to the problems presented, but it is not clear how this custom navigation should work. It is clear, however, that the closer we get to a "true 3D" visualization, the more important it is to have an efficient navigation method.

6 Related Work

6.1 Related Visualization Designs

Information visualization designs similar to the Cam Tree include the work of Munzner and Burchard [8], which displays directed graphs with cycles (such as the WWW) in hyperbolic space. The SeeNet3D information visualization application [4] visualizes global networks on a sphere and local networks on a map image.

The Bead [3] system is similar to the Information Landscape. It displays bibliographic data with documents as cubes interconnected by triangles in a landscape-like space.

Nested designs, such as the Information Cube, are more uncommon. Feiner and Beshers [5] present the n-Vision system for visualizing multi-dimensional data (more than 3 dimensions). The visualization consists of nested 3D coordinate systems with axes. The Web Forager and the WebBook [2] adopt a metaphorical nesting, where WWW pages are contained in books, which in turn can be contained in bookshelves placed in a 3D room.

Another type of 3D information visualization design (not represented in our selection) is raised surfaces, where information is displayed on a surface that can be raised towards the user to provide extra detail. An example of this is the Document Lens [10], which is used for laying out pages of a document on a rectangular surface. 3DPS (3-Dimensional Pliable Surfaces) [12] is another example of this type of design, an information visualization application for distortion-based display of maps and graphs.

6.2 Related Studies

Previous 3D user interface studies concentrate on lower-level cognitive aspects. The work of Hubona, Shirah and Fout [6] suggests that users' understanding of a 3D structure improves when they can manipulate the structure. The work of Ware and Franck [16] indicates that displaying data in three dimensions instead of two can make it easier for users to understand the data. Our study complements this approach by looking at a higher-level aspect: task support. Task support in 2D information visualizations has been studied (see for example [15]), but task-oriented comparative studies of 3D information visualizations are still scarce.

7 Future Work

As can be expected, this study has raised more questions than it has answered. Foremost is the question of the actual usefulness of 3D user interfaces. We believe that for the types of tasks and data sets that we have used in this study, a 3D user interface is probably not preferable to a 2D user interface. But can 3D be useful for other types of tasks? For huge data sets? In order to make use of the possibilities offered to us by 3D user interfaces, these issues should be addressed. The results from the subjects' ratings of the visualizations point to an interesting angle on this (Figure 2). The ratings for aesthetics and stimulation were not as different between the visualizations as the ratings for good and easy to use. So perhaps 3D user interfaces are most suitable in applications that are more about exploration and long-term learning, where stimulating, aesthetically pleasing user interfaces can be expected to be important?

It is clear that navigation is crucial in 3D user interfaces. It is also clear that "one fits all" is not true: the navigation features must be adapted to the user interface at hand and, more importantly, to the user tasks. Navigation in 3D and Virtual Reality is being researched, but the importance of tailoring the navigation to the application, tasks, and users should be explored more.

8 Acknowledgments

We would like to thank the 25 subjects for their participation in the study. Thanks also to Iva Tzankova, Quality Technology and Statistics, Luleå University of Technology, for guidance in the design and statistical analysis of the study. We also owe thanks to Brian Johnson for making available his doctoral dissertation [7], which has been helpful in designing and analyzing our study.

This work was partly performed under NUTEK grant number P10552-1.

References

[1] Keith Andrews. Visualizing Cyberspace: Information Visualization in the Harmony Internet Browser. In Proceedings of Information Visualization, pages 97–104. IEEE, 1995.

[2] Stuart K Card, George G Robertson, and William York. The WebBook and the Web Forager: An Information Workspace for the World-Wide Web. In Proceedings of CHI'96, pages 111–117. ACM, 1996.

[3] Matthew Chalmers, Robert Ingram, and Christoph Pfranger. Adding Imageability Features to Information Displays. In Proceedings of UIST'96, pages 33–39. ACM, 1996.

[4] Kenneth C Cox, Stephen G Eick, and Taosong He. 3D Geographic Network Displays. Sigmod Record, 25(4):50–54, December 1996.

[5] Steven Feiner and Clifford Beshers. Worlds within Worlds: Metaphors for Exploring n-Dimensional Virtual Worlds. In Proceedings of UIST'90, pages 76–83. ACM, 1990.

[6] Geoffrey S Hubona, Gregory W Shirah, and David G Fout. 3D Object Recognition with Motion. In Extended Abstracts of CHI'97, pages 345–346. ACM, 1997.

[7] Brian Johnson. Treemaps: Visualizing Hierarchical and Categorical Data. PhD thesis, University of Maryland, USA, 1993.

[8] Tamara Munzner and Paul Burchard. Visualizing the Structure of the World Wide Web in 3D Hyperbolic Space. In Proceedings of VRML '95, pages 33–38. ACM, 1995.

[9] Jun Rekimoto and Mark Green. The Information Cube: Using Transparency in 3D Information Visualization. In Proceedings of the Third Annual Workshop on Information Technologies & Systems (WITS'93), pages 125–132, 1993.

[10] George G Robertson and Jock D Mackinlay. The Document Lens. In Proceedings of UIST'93, pages 101–108. ACM, 1993.

[11] George G Robertson, Jock D Mackinlay, and Stuart K Card. Cone Trees: Animated 3D Visualizations of Hierarchical Information. In Proceedings of SIGCHI'91, pages 189–194. ACM, 1991.

[12] M Sheelagh T Carpendale, David J Cowperthwaite, and F David Fracchia. 3-Dimensional Pliable Surfaces: For the Effective Presentation of Visual Information. In Proceedings of UIST'95, pages 217–226. ACM, 1995.

[13] Ben Shneiderman. The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. In Proceedings of 1996 IEEE Visual Languages, pages 336–343. IEEE, 1996.

[14] J Tesler and S Strasnick. FSN: 3D Information Landscapes, 1992. Man page entry for an unsupported but publicly released system from Silicon Graphics, Inc.

[15] David Turo and Brian Johnson. Improving the Visualization of Hierarchies with Treemaps: Design Issues and Experimentation. In Proceedings of Visualization '92, pages 124–131, Boston, MA, October 1992.

[16] Colin Ware and Glenn Franck. Viewing a Graph in a Virtual Reality Display is Three Times as Good as a 2D Diagram. In Proceedings of 1994 IEEE Visual Languages, pages 182–183. IEEE, 1994.

[17] Ulrika Wiss, David Carr, and Hakan Jonsson. Evaluating Three-Dimensional Information Visualization Designs: A Case Study of Three Designs. In Proceedings of 1998 IEEE Conference on Information Visualization, IV'98, pages 137–144, London, England, July 1998. IEEE.
