SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2020

Training novice robot operators to complete simple industrial tasks by using a VR training program

HAISHENG YU

KTH ROYAL INSTITUTE OF TECHNOLOGY


Examiner: Tino Weinkauf
Supervisor: Mario Romero
Co-supervisor: Andrea de Giorgio

June 2020


Training novice robot operators to complete simple industrial tasks by using a VR training program

Haisheng Yu
KTH Royal Institute of Technology, Stockholm, Sweden
haisheng@kth.se

ABSTRACT

This paper studies a Virtual Reality (VR) training program for novice industrial robot operators. The VR training program is built with Unity and the HTC Vive. The paper presents the results of comparative experiments in which novices learn to operate virtual and physical robots to complete simple industrial production tasks. The results include their time to complete the tasks, the task pass rate, and the results of a questionnaire survey, as evidence of their learning efficiency and user satisfaction. Finally, through data analysis, we compare the impact that the two methods, the VR training program and the conventional physical robot training method, have on novices. The experimental results show that novices who use the VR training program first show a high degree of user satisfaction, and they can more quickly and efficiently master the knowledge of manipulating industrial robots and apply it in practice. The findings of this article also show that the use of VR technology in industrial production improves efficiency and is a feasible and reliable method that still has room for improvement.

KEYWORDS

Virtual Reality, Industrial Robot, Industrial Production, VR training, HTC Vive, Unity

1 INTRODUCTION

The term Virtual Reality (VR) first appeared in the science fiction short story "Pygmalion's Spectacles" by Stanley G. Weinbaum, published in 1935. The short story describes in detail a VR system based on olfactory, tactile, and holographic goggles [1]. Before 2010, VR technology saw no significant improvement; advances were limited to custom-designed hardware and software in research laboratories. In 2010, when the talented youngster Palmer Luckey brought his product Oculus to the world, VR technology ushered in a new era of development [2].

The core concept of VR technology is the simulation of physical reality. This technology builds a virtual world by using computer simulation to generate a three-dimensional space. It provides users with visual sensory stimulation, making them feel as if they were immersed in the situation and able to observe and interact with objects in three-dimensional space without the restriction of a limited viewport such as a computer screen. This benefit means VR technology can implement many things that seem impossible in physical reality, and it gives VR broad development prospects in many fields. For the current work, we focus in particular on manufacturing.

1.1 Motivation

Industrial production provides sufficient supplies for daily human life and brings significant benefits to the development of society. With the development of technology, mechanical production has gradually begun to replace human production. Fully automated machine production dramatically increases efficiency and reduces costs. Even so, industrial production is still inseparable from human manipulation: production procedures need to be programmed according to each situation.

Furthermore, mechanical maintenance requires human intervention, and some high-end production tasks still need to be executed manually. These cases require constant training of skilled workers. However, it is time-consuming and challenging to train novices to perform robot programming and master the skills to manipulate robots. The conventional method is to use robot programming software such as ABB's RobotStudio and Siemens' RobCAD. A slightly more embodied technique for training the robot and the novice is to use the teaching pendant, a pointing device that allows a human operator to guide the machine. Although these tools are powerful, their functions are uneven, they are complex to operate, and they are not compatible with different models of robots; some are software-based secondary developments that are expensive to use. Another issue with robot manipulation is human operator safety. During practice on the actual machine, even if the robot moves slowly, it is still massive and powerful. Dangers can occur after pauses and stops during the robot's movement. Even if operators can predict the motion trajectory, an external signal may change the operation and produce unexpected motion without any warning. Therefore, we need a solution that is compatible with various robot models, is more cost-effective, and lets a novice learn robot programming and master the skills of manipulating robots more efficiently, with less effort, and in a safer environment.


Enterprises and scholars in the field of robotics have continually tried to integrate VR technology into industrial production, and have developed various products. However, research on how to use VR technology to train novice operators, and on how the application of VR technology in industrial production specifically affects these novice operators, seems to be minimal. This may seem trivial, but it is a top priority that needs to be considered. VR technology is still evolving, so using it to simulate a virtual robot for training novices must have certain limitations and inconsistencies compared to operating a physical robot (PR). Therefore, research is needed to study the impact of using VR technology in comparison with conventional methods to train newcomers. Through such research, we can learn how to make the VR robot consistent with the PR robot and avoid factors that would mislead novices. This motivates creating a VR-based training program for industrial robots and letting novice operators use both the VR robot and the PR robot to complete the same tasks for comparison. Based on this motivation, I raise the following questions. When training a beginner with a virtual simulated robot in an immersive environment, what difference does VR make compared with the regular method? If we train a novice to complete some simple tasks such as sketching or pick and place tasks, which manner is more efficient, physical or virtual? Which way is more likely to be accepted? I hypothesize that novices using the VR training program can master the operating skills of industrial robots faster and have a higher task pass rate than those trained by conventional methods, and that they will also prefer the VR-based training method.

1.2 Research Question

Based on the motivation above, I set the research question of this paper as follows: in the process of training novice industrial robot programmers to perform sketching and pick and place tasks, what impact does a virtual reality training program have on the user's learning efficiency and satisfaction level compared with conventional methods?

2 RELATED WORK

The advancement of science brings the integration of different theories and technologies. Nowadays, VR as an auxiliary technology is widely used with industrial robots in industrial production. The key performance indicators of industrial production are cost, efficiency, and safety. Therefore, the functions and program design of industrial robots have continuously improved, and significant achievements have been made, especially where VR technology is used as an aid to robot programming. The results of past years prove that VR technology has a profound influence on the technological innovation of industrial robot programming. First of all, the immersive nature of VR allows people to observe things in three-dimensional space instantly and without restriction. Compared with design based on 2D graphics, people can simulate the robot to avoid possible problems during the design of a new robot structure. Also, the use of VR can significantly simplify operation steps and avoid repeated operations. These benefits make it easier to train industrial robots for a new production process, and they may save the time and cost of training new employees. Besides, employee safety and machine maintenance are two essential aspects of ensuring the productivity of the industry. Researchers have integrated VR into teleoperation technology to make this happen. By programming and manipulating VR robots that are digital twins of the physical robots, operators ensure that the same tasks are accomplished through the same set of actions. This provides a safe work cell for both robots and users. It also gives technical support for remotely manipulating robots, which allows users to control or maintain robots anywhere and anytime. In this section, the history of using VR technology as an aid in industrial production is introduced, followed by some relevant works.

2.1 Operator Training

Robot systems are often complex and sometimes difficult to operate, especially when one needs to control multiple robots to accomplish combined tasks. This requires a skilled worker who has mastered the system to make precise settings. Correspondingly, training the operators of these complex robot systems is time-consuming and costly. Nadine E. Miner and Sharon A. Stansfield presented a VR-based robotic control system that can provide high-level, task-oriented, real-time training for novice users [3]. The VR system simulates behaviors to enhance the training effect on operators. The operators interact with this VR system through task-level voice commands and an immersive stereo viewer. The immersive stereo viewer provides users with a full range of free and intuitive perspectives. The automated voice commands contain two functions: voice recognition and audio feedback. A voice guide gives a detailed tutorial on robot construction, features, and operations. Users operate the VR robot by giving voice commands; the VR system identifies the command and provides feedback. This VR system offers an intuitive and interactive simulation interface for control of, and training on, complex robotic systems; most novice users can grasp the operation of the system quickly.

T. Takahashi and H. Ogata designed a VR-based robot teaching interface that provides novice operators with a simple way to manipulate the robot [4]. The computer generates assembly tasks based on a virtual workspace and recognizes the operator's movements as robot task-level operations by using a finite automaton. The system translates the recognized activities into manipulator-level commands, and the real robot executes the assembly tasks according to these commands.

2.2 Program and Design


L. Flückiger presents an innovative user interface for high-level control of robot manipulators [6]. A VR interface supports the user in interacting with the manipulator and presents a 3D graphical interface. It is a simple, intuitive tool that can control any mechanical structure without requiring any programming skill. The user describes the geometric properties of the manipulator; then inverse kinematics is automatically calculated in real time to move any part of the robot through 3D input devices. The system can also provide users with decision support when problems are encountered.

Currently, a flexible work cell in which humans and robots can safely interact and collaborate has become a trend in manufacturing. VR is a useful tool to simulate such a complex system with a high level of immersion. To optimize the interaction between human and robot, Luigi Gammieri, Marco Schumann, et al. developed a VR model that allows controlling the end effector in the task space via inverse kinematics (IK) and the joints in the joint space via forward kinematics (FK) algorithms [7]. It gives the possibility of coupling the real robot with a completely new virtual scene, and this model can be considered an optimization of already existing algorithms. With this model, the different connections between the robot and the virtual model can be improved further. It could be possible to drive the real robot starting from the virtual scene, which can be helpful for teleoperation and the teaching of the robot.

2.3 Teleoperation

Traditional methods of robot teaching require human demonstrators to program with a teaching pendant, which is sometimes a complicated and time-consuming exercise. Thus, Yang Xu, Chenguang Yang, et al. proposed a novel method based on teleoperation, which allows a demonstrator to train the robot intuitively [8]. In this method, they used a Kinect to control the robot in V-REP through human body motion. With the support of an RBF network, the robot can reproduce the trajectory through learning and training.

Michail Theofanidis, Saif Iftekar Sayed, et al. described a novel teleoperation interface system to program a 4-DoF (Degree of Freedom) industrial robotic arm [9]. They used a Leap Motion instead of general VR equipment and gesture recognition instead of controllers, which makes the program more convenient and smarter for users. A series of comparison tasks showed that participants took less time to complete the experimental tasks when directly interacting with the real robot.

2.3.1 Assembly and Maintenance. Sankar Jayaram et al. presented the concepts behind a VR-based virtual assembly system in their paper [10]. The authors noticed that VR tools are demonstrating their value in assisting the designer in creating designs that are more 'assemblable.' Full implementations of this virtual assembly technology can significantly reduce design cycle time, redesign efforts, and design prototypes. They also present four benefits that virtual assembly can bring to manufacturing: it can reduce product development and fabrication time, it provides more advanced design methods and tools, it improves product design (quality, reliability, accuracy, etc.), and it reduces costs.

Nowadays, robot fault diagnosis and maintenance are generally done by teleoperation, and one of the methods is vibration diagnosis. However, it still has weaknesses, for example in accuracy. M. Bellamine et al. try to improve on this issue in their paper [11]. The resulting efficiency depends on the accuracy of a matching step, which the operator performs with the mouse by adjusting the 3D model over the image taken by the camera. According to the results, their system shows several advantages: it is not influenced by the delay that occurs in real-time control, and it is safe for both the operators and the equipment. The operator can correct his operations to avoid adverse effects, and the remote collection of vibration data is assured. However, the resulting accuracy depends on the precision of the operator's manipulation; errors arise once the matching result is poor.

In industrial production, workers' working environment and safety are crucial. However, in actual production, workers sometimes have to work in noisy or risk-filled environments and perform repetitive motions or carry heavy loads, which may cause repetitive strain injury. One solution is to use robotics to assist workers. To further enhance safety, F. Hernoux et al. provide a pre-collision algorithm using VR tools that aims to detect collisions and prevent them before any injury occurs [12]. A Kinect 2 is used in their application to capture people's actions and translate the motions into machine directives, which allows the robot to learn and repeat the trajectory following people's steps. The Kinect 2 also records the distance between the operator and the robot; the robot slows down its movements to prevent collisions when the distance reaches a preset minimum value.

There are also multiple studies and products using Augmented Reality (AR) to assist industrial production. The fields of use include programming and design [13][14], training [15], teleoperation [16], and assembly and maintenance [17][18]. One advantage of AR compared to VR is its low cost. For example, in [15], the only requirement is to download the app onto a teaching pendant; all instructions can be completed with this pendant. Although AR is inexpensive and can create an intuitive visual effect in a physical reality environment, it does not provide the high degree of immersion and freedom that VR does.

3 METHOD

This chapter introduces in detail all the knowledge, technologies, and software involved in this project, including the program design of the virtual and real-world industrial robots, the experimental task design, and the user study design.

3.1 Program Design

To approach the research question of this paper, the experiment needs to be designed under two conditions: a physical reality (PR) environment and a VR environment. Both conditions require the same type of robot with the same functions; the robot model used in this research is the ABB IRB 120 industrial robot.


Designed for flexible and compact production, the robot provides an excellent solution for material handling and assembly applications. It is compact and lightweight, with superior control and path accuracy [19].

Figure 1: ABB IRB 120 Industrial Robot [20].

The ABB IRB 120 weighs only 25 kg; its light weight means it can be mounted virtually anywhere at any angle without restriction. The 6-axis design makes it flexible, and it has a best-in-class working range [19]. The ABB IRB 120 can reach 580 mm horizontally and 112 mm below its base, with a rotation range of up to 165 degrees.

Figure 2: Working range of the ABB IRB 120 industrial robot [21].

Fig. 3 and the following paragraphs explain the six axes of the ABB IRB 120 robot in detail.

Figure 3: The name and position of the 6-axis for ABB IRB 120 Robot.

Axis name                      Direction of rotation   Working range     Maximum speed
No.1 Axis Rotation             Y-axis                  +165° to -165°    250°/s
No.2 Axis Vertical Arm         Z-axis                  +110° to -110°    250°/s
No.3 Axis Cross Arm            Z-axis                  +70° to -110°     250°/s
No.4 Axis Wrist                X-axis                  +160° to -160°    320°/s
No.5 Axis Wrist Pendulum       Z-axis                  +120° to -120°    320°/s
No.6 Axis Wrist Transmission   X-axis                  +400° to -400°    420°/s

Table 1: The direction of rotation, working range, and maximum speed parameters of the six axes of the ABB IRB 120 robot.

The ABB IRB 120 robot has six axes: No.1 Axis Rotation, No.2 Axis Vertical Arm, No.3 Axis Cross Arm, No.4 Axis Wrist, No.5 Axis Wrist Pendulum, and No.6 Axis Wrist Transmission. Their positions can be checked in Fig. 3. Each axis rotates about a fixed coordinate axis, and each axis has its own fixed rotation range. The coordinated rotation of the six axes enables the robot's range of motion to reach the area shown in Fig. 2. Their parameters can be found in Table 1.

There are two ways to program the motion of the ABB robot: online programming and offline programming.

3.1.2 Online programming. Online programming stops the robot from its productive work and switches it into a "programming mode." The users can then create or update the program while the robot is online. It is mainly aimed at tasks that require the users to physically move the robot around to program it. It provides systems that make the programming process easy for non-programmers. There are mainly two different types of online programming: teach pendant programming and lead-through (hand-guiding).

• Teach pendant programming – A teach pendant is a device that plugs directly into the robot. Using its interface, users can move the robot to the desired positions and record each movement. It provides two types of input methods: text-based input and graphical input. The first allows users to program in the manufacturer's programming language. The second is more straightforward than text-based input: users program the robot by using a light pen to tap the buttons and options on the screen.

• Lead-through or hand-guiding – The operator physically guides the robot arm along the desired path while the system records the positions, which the robot can then replay.


3.1.3 Offline programming. Offline programming is the most widely used robot programming method in industrial production. It usually uses professional robot programming software such as RobotStudio [22] to program the robot for tasks. It allows the user to edit the next task program while the robot is performing a task. When the program is ready, users can download it onto the robot and debug it; then the robot can continue its productive work. Offline programming can reduce downtime, speed up robot integration, and continually improve the robot's program without impacting productivity during the whole industrial production process. It mainly contains two types: text-based programming and graphical offline programming.

• Text-based programming – This is the most traditional method of programming: users write the code offline in a text editor and download it to the robot after completing it. Users can also access more of the robot's functionality by using the manufacturer's programming language. However, it requires many debugging tests for each step.

• Graphical offline programming – This method provides a simulated robot programming interface. Users can program the robot as if they were moving a real robot; it combines the advantages of both lead-through and teach pendant programming, with the added benefit of being an offline method.

Considering the technical level of the subjects, the difficulty of the tasks, the maintenance of the robot, and personal safety, teach pendant programming appears to be the best option for this research.

3.2 Virtual Reality Robot

The virtual robot uses the same type of robot as the physical robot, rendered as a 3D model. This 3D model was downloaded from ABB's official website [23]. The model was re-textured and rigged in Blender, and the program used in this research was built in Unity with the support of the SteamVR plugin and HTC Vive.

3.2.1 Blender and rigging. Blender is a free and powerful computer graphics software toolset [24], usually used for 3D modeling of objects; it gives objects more usability and authenticity through texturing, rigging, rendering, animating, and other operations. In 3D modeling, rigging is the process of making objects movable [25]. We can use this technique to construct a series of 'bones' and joints for the robot. Each bone has a three-dimensional transformation from the default bind pose (including its position, scale, and orientation) and an optional parent bone, and bones are connected by joints; thus, all the bones form a hierarchy. When a bone or joint is moved, the bones and joints associated with it are adjusted accordingly. By giving the robot a skeleton in Blender and setting it up in Unity, we can simulate the motion of an industrial robot.
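To make the rigged model behave like the real arm, each joint can be driven about its own local axis and clamped to the working range from Table 1. The following is a minimal C# sketch of such a joint component, assuming Unity's standard Transform hierarchy propagates each joint's rotation to its child bones; the component name and fields are illustrative, not the thesis's actual code.

```csharp
using UnityEngine;

// Illustrative sketch (not ViRobot's actual code): one joint of the rigged
// IRB 120 model. Unity's Transform hierarchy mirrors the bone hierarchy
// built in Blender, so rotating this joint automatically carries every
// child bone, down to the tool tip.
public class RobotJoint : MonoBehaviour
{
    public Vector3 localAxis = Vector3.up; // rotation axis (cf. Table 1)
    public float minAngle = -165f;         // working range (cf. Table 1)
    public float maxAngle = 165f;
    public float maxSpeed = 250f;          // degrees per second (cf. Table 1)

    private float currentAngle;

    // Move toward a target angle while respecting the joint's
    // working range and maximum rotation speed.
    public void Drive(float targetAngle)
    {
        targetAngle = Mathf.Clamp(targetAngle, minAngle, maxAngle);
        currentAngle = Mathf.MoveTowards(
            currentAngle, targetAngle, maxSpeed * Time.deltaTime);
        transform.localRotation = Quaternion.AngleAxis(currentAngle, localAxis);
    }
}
```

Because each joint only rotates in its own local frame, chaining six such joints in a parent-child hierarchy reproduces the coordinated motion described above.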

3.2.2 Unity. Unity was used as the engine of choice to develop the virtual robot program in this research. Unity is a fully integrated professional game engine developed by Unity Technologies. Its features allow developers to focus on the design of the game and ignore the implementation of the underlying technology, achieving rapid development. Unity supports different Software Development Kits and plugins; the SteamVR plugin is the one used in this research. C#, a programming language supported by Unity, was used to develop the program.

3.2.3 SteamVR plugin and HTC Vive. The SteamVR plugin is a Unity plugin developed by Valve to provide a smooth application programming interface between Unity and the SteamVR platform. SteamVR is a platform that provides the interface to connect VR hardware devices and software. The HTC Vive is a VR device developed by HTC and Valve. It uses "room-scale" tracking technology, allowing the user to move in 3D space and interact with the environment [26]. It contains a headset, two wireless handheld controllers, and two Lighthouse base stations. It provides a meaningful way to interact with robots and complete the task objectives in this research.

3.3 VR robot program introduction

The program provided in this research is called 'ViRobot.' The prototype was developed by the author and other members of a course the author attended; the author continued to develop this prototype in this research, optimizing it and adding new features and systems.

3.3.1 VR robot features. The VR robot has two functions – drawing and pinching. The user operates the VR robot by using the two wireless handheld controllers. The left-hand controller has a laser-beam projection function and is the primary way for users to interact with the user interface in the virtual environment. The user presses and holds the trigger button of the right-hand controller to unlock the VR robot; this controls the movement of the VR robot and sends the movement direction and position to it as the user drags the right-hand controller through space. The VR robot then makes the corresponding movement according to the received movement data. The user can control the VR robot so that the pen model held by the robot's clipper touches the paper on the desktop to perform drawing operations. Pressing the grip button on the right-hand controller opens and closes the VR robot's clipper, thereby controlling the VR robot to perform gripping operations. The trackpad of the right-hand controller contains a list of options. The user switches options by pressing the up and down face buttons on the trackpad to make the robot switch between different working modes. These options include free rotation mode, play mode, record mode, and pose mode (a sketch of this control mapping follows the mode descriptions below).

The functions of these modes are described below:

Free rotation mode: Free rotation mode is the default mode for robot motion. In this mode, all six axes of the robot are active: the robot can not only move horizontally and vertically but also rotate clockwise and counterclockwise. The user can switch the mode by pressing the menu button on the right-hand controller, putting it into locked rotation mode. In that mode, the rotation of the robot's No.4 axis is limited, so it can only move in the horizontal or vertical direction. This feature improves the accuracy and stability of the robot in drawing and grasping operations.


Figure 4: Two wireless hand controllers in ViRobot. The left-hand controller has a green laser beam used to interact with the UI in the virtual environment; the right-hand controller has a list of options on the touchpad used to switch between the robot's different modes.

Record mode: In this mode, the user's manual control of the robot is recorded. The user can then use play mode to make the robot repeat the last recorded operation and observe the robot repeatedly.

Pose mode: This mode is the default operation mode that controls the movement of the robot. Users can move the robot freely in this mode and let it hold different poses.
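The control mapping described above can be sketched as follows, assuming the SteamVR 2.x action-based input API; the action names, the component, and the mode-cycling details are assumptions rather than ViRobot's actual implementation.

```csharp
using UnityEngine;
using Valve.VR; // SteamVR Unity plugin, action-based input (2.x)

public enum RobotMode { FreeRotation, Play, Record, Pose }

// Illustrative sketch of the right-hand controller mapping described above.
public class ViRobotInput : MonoBehaviour
{
    // Boolean actions bound in the SteamVR binding UI (assumed bindings).
    public SteamVR_Action_Boolean trigger;      // hold to unlock/move the robot
    public SteamVR_Action_Boolean grip;         // toggle the clipper
    public SteamVR_Action_Boolean trackpadUp;   // next working mode
    public SteamVR_Action_Boolean trackpadDown; // previous working mode

    public RobotMode mode = RobotMode.Pose;
    private bool clipperClosed;

    void Update()
    {
        SteamVR_Input_Sources hand = SteamVR_Input_Sources.RightHand;
        int count = System.Enum.GetValues(typeof(RobotMode)).Length;

        // Trackpad up/down cycles through the four working modes.
        if (trackpadUp.GetStateDown(hand))
            mode = (RobotMode)(((int)mode + 1) % count);
        if (trackpadDown.GetStateDown(hand))
            mode = (RobotMode)(((int)mode + count - 1) % count);

        // The grip button opens and closes the clipper.
        if (grip.GetStateDown(hand))
            clipperClosed = !clipperClosed;

        // While the trigger is held, the robot is unlocked and follows the
        // right-hand controller's position (movement is handled elsewhere).
        bool robotUnlocked = trigger.GetState(hand);
    }
}
```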

3.3.2 Guiding system. The VR training program also provides a guidance system. Users interact with the system to learn and practice how to operate the VR robot and complete designated teaching tasks. The system mainly includes a three-dimensional interface in the virtual environment and a showing platform surrounding the VR robot.

Three-dimensional initial interface: The user enters this interface when the ViRobot program starts. The interface mainly contains two virtual buttons with which the user can interact, corresponding to the two features of the VR robot – drawing and pinching. The user interacts with the buttons by using the two hand controllers, and the selected button turns blue. The player starts learning the robot by pressing the trigger button to select and enter the corresponding scene. This interface is shown in Fig. 5.

Figure 5: Start menu of ViRobot; select the robot features by using the hand controller to interact with the virtual buttons.

Showing platform: The showing platform consists of four screens – the main screen, help screen, camera screen, and menu screen. Each screen has virtual buttons with which users can interact. The specific functions and uses of each screen are explained in detail below.

Main screen: The main screen consists of a play screen and a play bar. The play screen contains a video tutorial that teaches users how to use the controllers to operate the VR robot. It also includes an introductory video of the tasks that the user needs to complete in the current scene. The play bar provides three interactive virtual buttons; users can press them with the left-hand controller to play, pause, and stop the instructional videos, and can thus watch the instructional and task introduction videos repeatedly.

Figure 6: The user watched the task description video and used the left-hand controller to pause the video.
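A play bar like this can be wired to Unity's built-in VideoPlayer component. The sketch below shows one plausible setup; the component and field names are illustrative assumptions, not ViRobot's actual code.

```csharp
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.Video;

// Illustrative sketch: three UI buttons drive the tutorial video.
// The left-hand controller's laser beam is what "clicks" these buttons.
public class PlayBar : MonoBehaviour
{
    public VideoPlayer tutorialVideo; // video shown on the play screen
    public Button playButton;
    public Button pauseButton;
    public Button stopButton;

    void Start()
    {
        playButton.onClick.AddListener(tutorialVideo.Play);
        pauseButton.onClick.AddListener(tutorialVideo.Pause);
        stopButton.onClick.AddListener(tutorialVideo.Stop);
    }
}
```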

Help screen: The help menu contains six virtual buttons for interaction: the robot introduction button, how to start button, blue zone area button, collision warning button, voice button, and turn on/off area button.

Figure 7: The display of the help menu interface.

• Robot introduction button: This button contains an audio file; the audio introduces users to the type of robot used in this project.

• How to start button: This button contains an audio file. It instructs users on how to use the main screen to learn robot operation knowledge and to obtain the tasks that need to be completed in the current scene.


• Blue zone area button: The audio in this button describes the working area of the ABB IRB 120 robot, allowing users to display or hide the spatial prompts in the virtual environment by pressing the turn on/off button. At the same time, it emphasizes to the user that when operating a PR robot, they should stay outside the robot's work area at all times to ensure their safety.

• Collision warning button: The audio contained in this button emphasizes that when the user controls the robot to contact the desktop, they should slow the robot down and control it carefully, so as not to let the robot collide with the desktop and cause damage. At the same time, when the user moves the VR robot too close to the desktop, the system pops up a striking collision warning in a conspicuous location to keep the user alert during the whole process (a sketch of this proximity check follows this list).

Figure 8: When the VR robot is too close to the desktop, the system will pop up a collision warning.

• Voice button: Pressing this button will immediately stop the audio tutorial.

• Turn on/off area button: This button shows and hides the work area visualization of the ABB IRB 120 robot in the virtual environment. Fig. 9 shows this display effect.
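The collision warning amounts to a per-frame proximity check between the tool tip and the desktop. A minimal sketch, assuming the warning fires on the pen tip's height above the table plane (field names and the threshold are illustrative assumptions):

```csharp
using UnityEngine;

// Illustrative sketch of the collision warning: every frame, check how far
// the pen tip is above the desktop and show the pop-up from Fig. 8 when it
// gets too close.
public class CollisionWarning : MonoBehaviour
{
    public Transform penTip;              // tool held by the robot's clipper
    public Transform desktopSurface;      // top plane of the work table
    public GameObject warningPopup;       // the pop-up shown in Fig. 8
    public float warningDistance = 0.02f; // meters above the desktop

    void Update()
    {
        // Height of the pen tip above the desktop plane.
        float height = penTip.position.y - desktopSurface.position.y;

        // Show the warning whenever the robot is dangerously close.
        warningPopup.SetActive(height < warningDistance);
    }
}
```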

Camera screen: The camera screen contains three interactive buttons – Camera 1, Camera 2, and Camera 3. These three buttons provide three different viewing positions; players can switch the screen to different cameras to observe how the robot works from separate locations. Camera 1 provides an observation angle embedded inside the robot, which allows users to follow the work progress from the robot's perspective. Camera 2 provides an observation angle directly in front of the robot, and Camera 3 provides an observation angle directly above the robot. Fig. 10 shows the specific display effect.
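Switching viewpoints can be done by enabling exactly one Unity Camera at a time. A minimal sketch with illustrative names; ViRobot's actual implementation may differ:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Illustrative sketch of the camera screen: one Unity Camera per viewpoint
// (inside the robot, in front, above), selected by one button each.
public class CameraScreen : MonoBehaviour
{
    public Camera[] cameras; // Camera 1, Camera 2, Camera 3
    public Button[] buttons; // one virtual button per camera

    void Start()
    {
        for (int i = 0; i < buttons.Length; i++)
        {
            int index = i; // capture the loop variable for the closure
            buttons[i].onClick.AddListener(() => Select(index));
        }
        Select(0);
    }

    // Enable only the chosen camera so its view fills the camera screen.
    void Select(int index)
    {
        for (int i = 0; i < cameras.Length; i++)
            cameras[i].enabled = (i == index);
    }
}
```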

Menu screen: The menu screen contains a series of virtual buttons used to switch between different scenes. The user can reload the current scene, move to the previous or next scene, and return to the initial interface scene by using the menu screen. Fig. 11 shows the interface of the menu screen.
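Scene navigation of this kind maps naturally onto Unity's SceneManager, assuming each task scene is ordered by build index; the sketch below is illustrative, not the thesis's code:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;

// Illustrative sketch of the menu screen: reload, previous, next, and a
// return to the initial interface (assumed to be scene 0 in build order).
public class MenuScreen : MonoBehaviour
{
    public Button reloadButton, previousButton, nextButton, homeButton;

    void Start()
    {
        reloadButton.onClick.AddListener(() => Load(0));
        previousButton.onClick.AddListener(() => Load(-1));
        nextButton.onClick.AddListener(() => Load(+1));
        homeButton.onClick.AddListener(() => SceneManager.LoadScene(0));
    }

    // Load the scene at an offset from the current one; offset 0 reloads.
    void Load(int offset)
    {
        int current = SceneManager.GetActiveScene().buildIndex;
        SceneManager.LoadScene(current + offset);
    }
}
```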

4 USER STUDY DESIGN

In this section, I describe how the user study was designed in detail. To approach the research question, the participants of the experiment need to meet at least the first of the following conditions.

Figure 9: The first picture shows the working area closed, and the second picture shows the working area on display.


Figure 11: Menu screen’s interface display.

• Participants should have no experience with industrial robot operation.

• Participants should have no experience with VR (not necessary, but preferable).

4.1 Tasks design

Two types of tasks were designed corresponding to the two features of the robots: drawing tasks and pick and place tasks. The tasks aim to collect experimental data and user feedback on the subjects' learning efficiency and satisfaction level with the design and experience of each task.

The participants were assigned to four groups (Group 1, Group 2, Group 3, and Group 4). Each group contains two sets of experiments – drawing and pick and place. Each experiment consists of a VR part and a PR part, and each part includes several tasks of increasing difficulty. For example, the VR part of the drawing experiment in Group 1 has two types of tasks. Task one introduces basic operations to familiarize the participants with how to operate the robot to complete the task. Task two increases the difficulty relative to the previous one; if subjects fully understand the content of task one, they will quickly understand the tips covered in task two and complete it as fast as possible. Each participant can only participate in one experiment, but learns to use both types of robots within it.

To counterbalance the tasks assigned to each group, the control variable and the independent variable between groups need to be considered. The relationship between the variables and the group tasks is shown in Fig. 12.

The control variable is the task that users need to complete, and the independent variable is the learning order of the two training methods (the virtual training method and the conventional training method). Task 1 represents the drawing task, and task 2 represents the pick and place task. Task 1.1 and task 1.2 represent two difficulty levels of the same task, the second being harder than the first. The letter A represents learning the virtual training method, and the letter B represents learning the conventional training method; their order indicates which method is learned first. As shown in Fig. 12, when Group 1 is compared with Group 2, the tasks are the same while the learning order differs. When Group 1 is compared with Group 3, the tasks differ while the learning order of the two types of robot is the same.

Figure 12: The relationship between variables and group tasks.

4.2 Drawing task

In this task, participants need to find a correct path through a maze from the start point to the end point, and operate the two types of robots, each holding a pen, to draw the route. There is no time limit for completing this task, but a standard time is provided. The standard time is the average completion time over several runs by a person skilled in operating the robot.

The task consists of two parts: the practice part and the experiment part. In the practice part, users learn how to operate the two types of robots and are required to complete a simple exercise (see Fig. 13) before continuing to the experimental part. The experiment part contains an ordinary task – task 1 (see Fig. 14) – and two higher-difficulty tasks – task 1.1 and task 1.2. Participants need to draw the right path of task 1 first, and then move on to the higher-difficulty task corresponding to their grouping. For task 1.1, participants are required to draw the right path through the maze shown in Fig. 15. Compared to task 1.1, the graphics contained in task 1.2 increase the difficulty of the task; participants need to acquire more skills to pass it. The maze is shown in Fig. 16.

4.3 Pick and place task


Figure 13: Schematic of the exercise task.

Figure 14: Picture used for drawing task 1.

For this task, there are two designed placement areas; the two regions have the same size and are separated by a certain distance. A block is placed in the middle between the two areas. The participants are required to operate the robot to pick and place the block between the two areas, repeat this several times, and finally put the block back in the central position between the two regions.

The task also includes two difficulty levels – task 2.1 and task 2.2. Task 2.1 contains four scenes whose difficulty gradually increases. Participants are required to pick and place the block between the two placement areas and repeat the operation four times. The distance between the two placement areas and the size of each area change across these four small experiments; their changing rules can be found in Fig. 17 and 18. In scene 1, the distance between the two placement areas is short, and the areas are large. In scene 2, the interval between the two placement areas is unchanged, but the areas change from the large size to the middle size. In scene 3, the distance between the two placement areas becomes larger, and the areas return to the large size. In scene 4, the distance is the same as in scene 3, and the areas change to the middle size.

Figure 15: Picture used for the drawing task 1.1.


The smaller and farther away a target is, the harder it is to hit. From the task design rules in the previous paragraph, we can see that the distance and size of the placement areas in the pick and place task are designed according to this law. I present some hypotheses here to see whether they can be proven in the results section: the greater the distance between the placement areas, the longer participants will take to complete the task; the smaller the placement area, the harder it is for participants to place the block accurately inside it; but as the number of operations increases, the accuracy will gradually increase.

Figure 17: Area spacing property description.

Figure 18: Area size property description.

4.4 Judging Criteria

Some evaluation criteria were developed to measure participants' learning efficiency and satisfaction level. For learning efficiency, we need to collect and analyze the participants' task completion times and task pass rates. A recommended time is used to measure the task pass rate and learning efficiency; its value is determined by the average time the author took to complete the relevant task, and each task has its own recommended time. Comparing it with a participant's task completion time shows whether the requirement is met. We then record how many tasks each participant passed and divide by the total number of tasks to obtain the task pass rate.

A scoring rule is an additional standard specifically set to measure the learning efficiency of the pick and place tasks. It gives a 0 to 2 rating based on how much of the block is placed within the placement area; Table 2 details the scoring rules. Participants get a score each time they place the block in a placement area; the scores of all placements in a scene are averaged to get the final score of the scene. The final learning efficiency of a participant is determined by the participant's task completion time, task pass rate, and total score: if the participant takes less time, has a higher pass rate, and earns a larger total score, then the participant's learning efficiency is higher.

However, the satisfaction level of participants cannot be directly obtained from the experimental data, so it is collected through a questionnaire survey. Participants fill out a questionnaire after completing the experiment. The survey contains a series of questions that help the author collect user experience and feedback on each step of the experiment, including the evaluation of the robot's functional design, the evaluation of the guiding system design, the practicality of the VR robot program, the rationality of the experimental design, satisfaction with the use of the VR robot program, the difficulty of using the two different types of robots, etc.

Score Description

0 Less than 60% of the area of the block is within the placement area.

1 60% to 90% of the area of the block is within the placement area.

2 More than 90% of the area of the block is within the placement area.

Table 2: The scoring rules used in this research.
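For concreteness, the criteria above can be expressed in code. The following is a small, self-contained C# sketch applying the recommended-time rule and the thresholds of Table 2; all data values, names, and the exact threshold boundaries are illustrative assumptions.

```csharp
using System;
using System.Linq;

// Illustrative sketch of the judging criteria: the pass rate from the
// recommended-time rule and the 0-2 placement score from Table 2.
class JudgingCriteria
{
    // A task counts as passed when its completion time does not
    // exceed the recommended time.
    static double PassRate(double[] completionTimes, double[] recommendedTimes)
    {
        int passed = completionTimes
            .Where((t, i) => t <= recommendedTimes[i])
            .Count();
        return (double)passed / completionTimes.Length;
    }

    // Score one placement by the fraction of the block inside the
    // placement area (thresholds from Table 2).
    static int PlacementScore(double fractionInside)
    {
        if (fractionInside > 0.90) return 2;
        if (fractionInside >= 0.60) return 1;
        return 0;
    }

    // The scene score is the average over all placements in the scene.
    static double SceneScore(double[] fractions) =>
        fractions.Select(PlacementScore).Average();

    static void Main()
    {
        double[] times = { 95, 130, 80 };        // seconds, made-up data
        double[] recommended = { 100, 120, 90 }; // seconds, made-up data
        Console.WriteLine($"Pass rate:   {PassRate(times, recommended):P1}");
        Console.WriteLine($"Scene score: {SceneScore(new[] { 0.95, 0.7, 0.5 }):F2}");
    }
}
```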

5 RESULT

Twenty-four participants were recruited for the user study, seventeen of whom had no experience using VR products, and two of whom had experience controlling industrial robots. They were randomly assigned to the two types of experiments mentioned, following the rules illustrated in Fig. 12.

To facilitate the comparison of graphs, the participants were rearranged. Table 3 lists the new relationship.


5.1 Drawing Task

The results of participants operating both virtual and physical robots to complete task 1 and task 1.1 are shown in Fig. 19 and Table 4. The results of participants operating both VR and PR robots to complete task 1 and task 1.2 are shown in Fig. 20 and Table 5.

By comparing the data in the following charts, we can see that which method is learned first has some impact on the identical experiment with the other method (Fig. 19 and 20).

Besides, a recommended time and a task pass rate are introduced to measure the final learning efficiency of the participants. Table 4 presents the task pass rates for drawing tasks 1 and 1.1, and Table 5 presents the task pass rates for tasks 1 and 1.2. The recommended time for each task and more detailed comparison information can be found in Appendix A and B. As the analysis of the data in the two tables reveals, participants who learn the virtual training method first master the robot faster than those who learn the conventional training method first: they take less time to complete all tasks, the time spent on each task is closer to the recommended time, and they have a higher task pass rate. Synthesizing these two observations, we can conclude that participants who learn the virtual training method first have higher learning efficiency.

However, there are exceptions (e.g., participant 10) in Table 5. His combined results are the highest in the group because he already had a wealth of experience with robot control and VR products before participating in the experiment, which explains why he could master the operating skills faster and achieve excellent results.

Figure 19: Time spent by participants to complete Task 1 and Task 1.1. P1-P3 participants learn the virtual training method first, and P4-P6 participants learn the conventional training method first.

5.2 Pick and Place Task

The results of participants operating both VR and PR robots to complete task 2.1 are shown in Fig. 21 and Table 6. The results for task 2.2 are shown in Fig. 22 and Table 7. More detailed information can be found in Appendix C and D.

If the conclusions drawn from the drawing task might be accidental, the analysis of the data from the pick and place task proves again that participants who use the virtual robot first obtain better learning results.

Participants        P1    P2    P3    P4    P5    P6
Condition           AB    AB    AB    BA    BA    BA
VR Task 1           ×     ×     ×     o     o     ×
VR Task 1.1         o     o     ×     o     o     o
PR Task 1           o     ×     o     ×     ×     ×
PR Task 1.1         o     o     o     ×     ×     ×
Number of tasks     4     4     4     4     4     4
Number of passes    3     2     2     1     2     1
Passing rate        75%   50%   50%   50%   50%   25%

Table 4: Task pass rates for participants completing task 1 and task 1.1. A means participants learn the virtual training method, B means participants learn the conventional training method; the order of A and B indicates which method is learned first. VR: the virtual robot in the virtual training method. PR: the physical robot in the conventional training method.

Figure 20: Time spent by participants to complete Task 1 and Task 1.2. P7-P9 participants learn the virtual training method first, and P10-P12 participants learn the conventional training method first.


5.3 Learning efficiency

This section focuses on visualizing and organizing the data related to the learning efficiency of the four groups of participants in their corresponding experiments. The data on completion time, difference value, passing rate, and total score from section 4.4 act as the parameters determining learning efficiency. Besides, the arrangement principles provided by Fig. 12 and Table 4 are combined to compare between groups.


Participants        P7    P8    P9    P10   P11   P12
Condition           AB    AB    AB    BA    BA    BA
VR Task 1           ×     o     ×     o     o     o
VR Task 1.2         o     o     o     o     o     o
PR Task 1           o     o     o     ×     o     ×
PR Task 1.2         o     o     o     o     o     ×
Number of tasks     4     4     4     4     4     4
Number of passes    3     4     3     4     4     2
Passing rate        75%   100%  75%   100%  100%  50%

Table 5: Task pass rates for participants completing task 1 and task 1.2. A means participants learn the virtual training method, B means participants learn the conventional training method; the order of A and B indicates which method is learned first. VR: the virtual robot in the virtual training method. PR: the physical robot in the conventional training method.

Figure 21: Time spent by participants to complete Task 2.1. P13-P15 participants learn the virtual training method first, and P16-P18 participants learn the conventional training method first. This task contains four scenes, denoted as S1-S4. Each scene includes two parts, i.e., VR and PR.

In the four charts in Fig. 23, the first three participants first use the VR robot to train themselves in the skills and knowledge of manipulating robots; after completing the tasks, they adopt the PR robot to repeat those tasks. The last three participants do the reverse. The results reveal that the participants who first learned to use the VR robot have a significantly lower overall time to complete all tasks than the latter. Compared with the recommended time, they are more likely to set a new completion-time record. Though each group is composed of six different participants, the participants who took part in the advanced-difficulty task used less total time than those who took part in the simple task.

The difference value refers to the difference between the time taken by each participant to complete each task and the recommended time, which may be positive or negative. By visualizing the change of the difference value, we study what effect the use of different robots has on the participants' learning efficiency; the four charts in Fig. 24 show the specific information.

Participants        P13   P14   P15   P16   P17   P18
Condition           AB    AB    AB    BA    BA    BA
VR Scene 1          ×     ×     ×     ×     o     o
VR Scene 2          o     o     ×     o     o     o
VR Scene 3          o     o     o     o     o     o
VR Scene 4          o     o     ×     o     o     o
PR Scene 1          o     o     o     ×     ×     ×
PR Scene 2          ×     o     o     ×     ×     o
PR Scene 3          o     o     o     ×     o     o
PR Scene 4          o     o     o     o     ×     ×
Number of tasks     8     8     8     8     8     8
Number of passes    6     7     5     4     5     6
Passing rate        75%   87.5% 62.5% 50%   62.5% 75%

Table 6: Task pass rates for participants completing task 2.1. A means participants learn the virtual training method, B means participants learn the conventional training method; the order of A and B indicates which method is learned first. VR: the virtual robot in the virtual training method. PR: the physical robot in the conventional training method.

Figure 22: Time spent by participants to complete Task 2.2. P19-P21 participants learn the virtual training method first, and P22-P24 participants learn the conventional training method first. This task contains four scenes, denoted as S1-S4. Each scene includes two parts, i.e., VR and PR.



Figure 23: Relationship between total cost time and recommended time for participants to complete all tasks involved in the experiment.


Participants        P19   P20   P21   P22   P23   P24
Condition           AB    AB    AB    BA    BA    BA
VR Scene 1          ×     ×     o     o     o     o
VR Scene 2          o     o     o     o     o     o
VR Scene 3          o     o     o     o     o     ×
VR Scene 4          ×     ×     o     o     o     ×
PR Scene 1          o     o     o     ×     ×     o
PR Scene 2          o     o     o     o     o     ×
PR Scene 3          o     o     ×     o     o     o
PR Scene 4          o     ×     o     ×     ×     ×
Number of tasks     8     8     8     8     8     8
Number of passes    6     5     7     6     6     4
Passing rate        75%   62.5% 87.5% 75%   75%   50%

Table 7: Task pass rates for participants completing task 2.2. A means participants learn the virtual training method, B means participants learn the conventional training method; the order of A and B indicates which method is learned first. VR: the virtual robot in the virtual training method. PR: the physical robot in the conventional training method.

In the two charts (a) and (b), the values of the VTF participants (participants who learn the virtual training method first) are primarily concentrated in the negative range. The difference value increases with the difficulty of the task. Besides, the length of the time bars in the PR part shows a further significant increase compared to the recommended time. The values of the CTF participants (participants who learn the conventional training method first) are primarily concentrated in the positive range, and the length of the time bars in the VR part does not change significantly. Thus, it can be inferred that the overall speed of VTF participants completing tasks is faster than that of CTF participants, and that learning to use the other type of robot to complete the identical task takes them less time. The conclusions are even more evident in the remaining two charts (c) and (d).

According to the data provided in Tables 4 to 7, the average pass rates of VTF participants completing task 1.1, task 1.2, task 2.1, and task 2.2 are 58.3%, 83.3%, 75%, and 75%, respectively. The average pass rates of CTF participants are 41.7%, 83.3%, 62.5%, and 66.7%. This suggests that VTF participants have an overall higher task pass rate, demonstrating that they have a better grasp of robot manipulation and a better understanding of the tasks. It also indicates that VTF participants using the virtual training program benefit more than CTF participants learning with the conventional method.

The total score of the task is primarily used to determine the learning efficiency of participants in the pick and place experiment. A scoring chart can be drawn according to the scoring rules described in section 4.4; Fig. 25 shows the scoring charts for task 2.1 and task 2.2. As the observations suggest, the overall scores of VTF participants are higher than those of CTF participants. Furthermore, their scores in the PR part are usually high and tend to be stable, whereas the scores of CTF participants in the VR part still fluctuate considerably and are unstable.

Figure 25: The scoring results of the experiment of pick and place.


Figure 26: The left side picture is the result without using the function, and the right-side picture is the result of using this function.

Figure 27: The assessments of the functional design, UI design, and the guiding system.

5.4 User Feedback (Satisfaction level)

The assessments of the functional design, user interface design, and the guiding system are shown in Fig. 27. The data in the figure show that more than half of the participants agreed with and affirmed the three designs, but nearly 30% of the participants thought the designs were mediocre or even poor.

Figure 28: Participants' evaluation of the two types of newbie tutorial.

The guiding system is a newbie tutorial for novices. It guides users through the various functions and operating procedures of the program, so they quickly know what they can do and what they should do. Accordingly, we should know to what extent users understand the operation and task requirements of the program after using the tutorial. Participants' evaluations of the two types of robots' newbie tutorials are shown in Fig. 28.

According to the results presented in the two pie charts in Fig. 28, the tutorial for the virtual robot is more acceptable to the participants than the one for the physical robot. There are two main reasons for this result. First, the virtual robot's guiding system interacts with players in real time via video display, audio guidance, and visual effects, and thanks to the advantages of VR technology, the entire interaction process is more intuitive. Though the guidance process for the physical robot includes verbal explanations and actual machine operation demonstrations, it cannot give participants the same subjective initiative as the former. Second, the operation steps for the virtual robot were simplified, whereas controlling the physical robot requires setting parameters through the teach pendant, and the control steps are relatively complicated. Thus, within a limited time, participants had to remember more content.


Figure 29: The evaluation from the participants of the challenge of the two tasks.

Manipulating the physical robot is much more complicated than manipulating a virtual robot. Most participants considered that they could not regulate the movement speed of the physical robot well by pushing the joystick: the robot sometimes moved too much or too little. Besides, they were afraid that the robot would collide with the desktop and be damaged. With the VR robot there were no such concerns, and the collision warning system promptly reminded them when the robot was about to collide.

Ten of the twelve participants who used the virtual robot first agreed that it was an enormous help when they did the identical tasks with the other type of robot. However, only five of the twelve participants who used the physical robot first considered it helpful for the next step; five participants did not see any significant connection between the two types of robot. The result is shown in Fig. 30.

Eighteen of the twenty-four participants agreed that it would be more comfortable and practical to train novices with the virtual robot. Five participants were neutral, and one participant disagreed (Fig. 31). This is because the functionality of the current program is not perfect, which affected the participants' experience and final evaluation. The mentioned defects are discussed further in the Discussion section.


Figure 31: Distribution of participants with different opinions.

6 CONCLUSIONS

This study aims to explore the differences between training novice operators with a virtual training program and with conventional training methods, in terms of learning efficiency and satisfaction level. It is hypothesized that participants using the virtual training program will gain insights into how to manipulate robots more efficiently than participants trained with conventional methods, and thus achieve higher learning efficiency, and that they will find the virtual training program easy and comfortable to use.

This study introduces a VR robot training program based on Unity and the HTC Vive to test the assumption mentioned above. The program exploits VR technology to simulate a model of the industrial production robot ABB IRB 120. Through programming, the VR robot can move, rotate, pick and place, and record and repeat actions controlled by the HTC Vive hand controllers. The program also has a virtual guidance system with multiple functions; this system guides users to understand and learn the knowledge and skills of manipulating robots via video, audio, and interactive visual effects. Moreover, a physical robot of the same model as the VR robot acts as a comparison object; users learn how to use the teaching pendant to control the physical robot to complete instructions.

The user study design contains two experiments, i.e., drawing and pick and place. Each experiment includes several small tasks of increasing difficulty, and participants were assigned to different groups according to the distribution rules. The user study design is classified into two types of comparison. One examines how the robot used first affects learning to use the other robot: the tasks that participants complete are the same, whereas the order of using the two types of robots differs. In the other comparison, participants have the same order of using the two types of robots, whereas the difficulty of the tasks varies. The learning efficiency of participants was assessed by collecting data such as the time to complete the task, the difference from the recommended time, the task pass rate, and the task score. In addition, participants were invited to fill out a questionnaire investigating their satisfaction with the two types of robots and the task design. The results of the experiment justify the hypothesis. Overall, participants are satisfied with the design of the program and feel comfortable with the experience.

7 DISCUSSION

According to the data provided in the results section, though both VTF and CTF affect participants' learning efficiency with the other robot, the effect of VTF is more significant. VTF participants mastered the knowledge and skills of manipulating robots better than CTF participants and completed the experiment more efficiently, with a higher task pass rate. On that basis, it can be concluded that in this study, the VR training tool helps novices master the use of robots more efficiently and effectively. As revealed by comparing the results of Group 1 with Group 3 and Group 2 with Group 4, though the tasks of Group 3 and Group 4 are more complicated than those of Group 1 and Group 2, the participants of Group 3 and Group 4 achieved better results than the previous two groups.

Furthermore, VTF members achieved better results than CTF members, which can be explained by three reasons. First, when participants are told that the experiment is complicated, they concentrate more on learning the skills. Second, the guidance for the difficult group's tasks is adequately designed, so participants can better understand the overall situation. Third, the difficulty level set for the difficult tasks may not have achieved the intended effect.

Besides, VTF participants achieved a result, not mentioned above, that was stronger than that of CTF participants: the position in which they stood while manipulating the physical robot, and the number of times the robot collided with the task item or table. These participants stood outside the danger zone throughout the process, and their overall number of collisions plummeted. It is therefore suggested that the virtual training program ensures that novices always keep safety in mind and allows them to make timely judgments, stopping the machine before it collides and thereby preventing losses.

As revealed by the results of the questionnaire, participants' satisfaction levels with the virtual training program's functions, UI, and guidance system design were 54%, 71%, and 79%, respectively. More than 79% of the participants considered that the guidance system of the virtual training program allowed them to learn how to control the robot and to know what to do next. Over 83% of VRF participants said that using the virtual training program first significantly helped them learn to use the PR robot, and 75% of VRF participants rated its usefulness higher than four out of a maximum of five. Among all 24 participants, 75% said that using the virtual training tool made the task easier and more efficient. These data demonstrate that participants were highly satisfied with the experience of using the virtual training program.


Participants were primarily dissatisfied with the design of the drawing function of the VR robot, in which the program has some defects. The first is that when the robot is moving, it sometimes turns randomly, due to the programming limitations of the assets used in this program. In this program, the movement of the robot is determined by the rotation of its joints, and all joints rotate according to the position of the control point, which the user moves with the HTC Vive controllers. However, the code of this asset does not specify how to move a joint once it exceeds its rotation range. Accordingly, when the control point exceeds the robot's movement range, all joints are continuously adjusted and even make impossible movements to achieve balance. Though some rules and restrictions (e.g., the free rotation mode) were added to alleviate this problem, the robot still exhibited poor performance. There are two ways to solve this problem. One is to switch to smarter assets, whose price is high. The other is to continue to modify the code; given the limitations of the author's programming ability, an effective solution was hard to find.
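As a hedged illustration of the missing joint-limit handling, the sketch below clamps each commanded joint angle to a configured range before applying it, so a solver can no longer drive a joint past its limit when the control point leaves the reachable workspace. The structure and names are hypothetical; the asset's actual code is not reproduced here, and the limit values would come from the robot's published specifications.

```csharp
using UnityEngine;

// Illustrative joint-limit guard: each joint stores its link, local rotation
// axis, and angle limits; SetAngle refuses commands outside those limits.
public class JointLimiter : MonoBehaviour
{
    [System.Serializable]
    public struct Joint
    {
        public Transform link;  // the link rotated by this joint
        public Vector3 axis;    // local rotation axis of the joint
        public float minDeg;    // lower angle limit in degrees
        public float maxDeg;    // upper angle limit in degrees
    }

    public Joint[] joints;

    // Apply a desired angle to joint i, clamped to the joint's legal range,
    // so an out-of-reach control point cannot force impossible poses.
    public void SetAngle(int i, float desiredDeg)
    {
        float clamped = Mathf.Clamp(desiredDeg, joints[i].minDeg, joints[i].maxDeg);
        joints[i].link.localRotation = Quaternion.AngleAxis(clamped, joints[i].axis);
    }
}
```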

The second deficiency is the painting method used in the painting function, which relies on collision detection: when the pen tip collides with the specified material, it draws the color. However, collision detection causes the pen tip to make recurrent errors, which sometimes makes it suddenly stop drawing during the painting process, or keep painting after the pen tip has been raised to a certain height. The most effective way to solve this problem is to use a spray-painting method instead of collision detection. In the author's idea, spray painting does not rely on object collisions for detection and rendering. Instead, the pen emits a ray onto the paper; once the ray touches the specified material, the collision information and the corresponding UV coordinate are recorded, and the mentioned points are then rendered with scripts.
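A minimal Unity C# sketch of this proposed ray-based approach follows. It assumes the paper object uses a MeshCollider (required for RaycastHit.textureCoord to return valid UVs) and a readable Texture2D; names such as SprayPainter and penTip are illustrative, not from the thesis program.

```csharp
using UnityEngine;

// Sketch of ray-based painting: cast a short ray from the pen tip; if it
// hits the paper, convert the hit's UV coordinate to pixel coordinates and
// color that pixel in the paper texture.
public class SprayPainter : MonoBehaviour
{
    public Transform penTip;        // tip of the virtual pen
    public Texture2D paperTexture;  // readable texture assigned to the paper
    public float maxDistance = 0.05f;
    public Color inkColor = Color.black;

    void Update()
    {
        // In practice this would be gated by the controller trigger.
        RaycastHit hit;
        if (Physics.Raycast(penTip.position, penTip.forward, out hit, maxDistance))
        {
            Vector2 uv = hit.textureCoord;  // valid only with a MeshCollider
            int x = (int)(uv.x * paperTexture.width);
            int y = (int)(uv.y * paperTexture.height);
            paperTexture.SetPixel(x, y, inkColor);
            paperTexture.Apply();  // upload the changed pixel to the GPU
        }
    }
}
```

Calling Apply() every frame is costly; batching several painted pixels before a single Apply() would be the obvious optimization in a real implementation.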

The third defect is the lack of instructions comparing the two different modes of operation. As indicated in the results section, there is a clear gap between the times participants spent manipulating the VR and PR robots to complete the tasks. This is attributed to the different complexity of the two robots' operation methods. The operation steps of the VR robot have been significantly simplified: participants can directly control the robot without having to preset values for it. When manipulating the PR robot, some preparation must be made in advance; for instance, in the drawing task, participants must first locate three points to determine a working plane for the robot before continuing with the following steps. Moreover, using the hand controller is more convenient and flexible than using the teaching pendant with its joystick, which explains why the time difference between operating the two types of robot is so large. Participants may therefore not gain insight into the differences between the two operating methods while learning to use them. This situation was evident in the questionnaire survey: when participants were asked whether prior knowledge of one robot's operation helped them learn the other, over 58% of PRF participants stated they were unsure whether the two methods of operation exhibited a clear connection. This result is undesirable, though the knowledge and manipulation skills involved in this study are relatively basic and thus do not cause experimental deviation. It nevertheless reveals that the lack of such design may mislead users.
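For readers unfamiliar with the three-point calibration mentioned above, the working plane can be recovered from the taught points with a single cross product, as in this illustrative snippet (the names are hypothetical and not taken from the ABB teaching pendant's interface):

```csharp
using UnityEngine;

// Three taught points define a work plane: the first point serves as the
// origin, and the normal is the cross product of two in-plane vectors.
public static class WorkPlane
{
    public static void FromThreePoints(Vector3 p1, Vector3 p2, Vector3 p3,
                                       out Vector3 origin, out Vector3 normal)
    {
        origin = p1;
        normal = Vector3.Cross(p2 - p1, p3 - p1).normalized;
    }
}
```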

This problem can be solved in two ways. The first is to add a demonstration animation to the virtual training system that tells users in detail how this operation differs from that of a PR robot: the animation shows how the task would be done with a PR robot and which steps the VR robot simplifies. The second is an entirely realistic simulation, i.e., the user must complete with the VR robot the identical complicated operation steps required to manipulate the PR robot. Considering the current development level of science and technology, the second method is easier to realize. However, the development, integration, and application of technology aim to optimize or enhance the convenience and efficiency of existing technologies, thereby gaining higher returns. From this perspective, the first method is considered more reasonable.

8 FUTURE RESEARCH

This research sufficiently proves the convenience and practicality of this program in helping to train novice operators, but there is still much to improve. The program currently supports only one type of robot; in future versions, other robot models should be added so that users can freely switch between robot types through the interactive interface. The accuracy of the robot also needs to be improved: the robot should move to every point in space more precisely and perform the right action in the right place. The painting method used in this research is collision painting, which constantly judges whether the pen collides with the paper surface to decide whether to color; this is why it sometimes fails to paint, and it can be solved by replacing the coloring method, for example with spray painting. The program also needs more intuitive and effective interactive options to help users understand and gain relevant knowledge more efficiently. The most worthwhile research direction is to connect the VR robot with a real robot through data transmission: users complete tasks with the VR robot in the virtual space, and the motions are sent to the PR robot once the task passes. This approach can realize remote management of industrial production. The author hopes that the findings in this paper can help the future process of combining VR technology with industrial production.
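As a purely illustrative sketch of that future direction, the snippet below streams validated joint targets from Unity to a hypothetical controller-side TCP server. The address, port, and line-based message format are all assumptions made for the example; a production link to an ABB controller would instead use the vendor's own communication interfaces.

```csharp
using System.Net.Sockets;
using System.Text;
using UnityEngine;

// Hypothetical VR-to-physical-robot link: once a task is validated in the
// virtual space, recorded joint configurations are streamed over TCP to a
// server assumed to run on the robot controller's side.
public class RobotLinkClient : MonoBehaviour
{
    public string host = "192.168.0.10";  // assumed controller address
    public int port = 5000;               // assumed listening port

    private TcpClient client;
    private NetworkStream stream;

    void Start()
    {
        client = new TcpClient(host, port);
        stream = client.GetStream();
    }

    // Send one validated joint configuration (six angles in degrees) as a
    // comma-separated line; the receiving format is an assumption.
    public void SendJointTarget(float[] anglesDeg)
    {
        string line = string.Join(",", anglesDeg) + "\n";
        byte[] bytes = Encoding.ASCII.GetBytes(line);
        stream.Write(bytes, 0, bytes.Length);
    }

    void OnDestroy()
    {
        stream?.Close();
        client?.Close();
    }
}
```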

9 ACKNOWLEDGMENTS



A APPENDIX A

Time spent and pass rate for participants to complete task 1 and task 1.1. A means the participant learned the virtual training method first; B means the participant learned the conventional training method first. The order of A and B indicates which method was learned first.


B APPENDIX B

Time spent and pass rate for participants to complete task 1 and task 1.2. A means the participant learned the virtual training method first; B means the participant learned the conventional training method first. The order of A and B indicates which method was learned first.


C APPENDIX C

Time spent and pass rate for participants to complete task 2.1. A means the participant learned the virtual training method first; B means the participant learned the conventional training method first. The order of A and B indicates which method was learned first.


D APPENDIX D

Time spent and pass rate for participants to complete task 2.2. A means the participant learned the virtual training method first; B means the participant learned the conventional training method first. The order of A and B indicates which method was learned first.

