
AUTONOMOUS NAVIGATION IN GPS DENIED ENVIRONMENTS USING MPC AND LQR WITH POTENTIAL FIELD BASED OBSTACLE AVOIDANCE

by

Rahul Dev Appapogu

© Copyright by Rahul Dev Appapogu, 2019

All Rights Reserved


A thesis submitted to the Faculty and the Board of Trustees of the Colorado School of Mines in partial fulfillment of the requirements for the degree of Master of Science (Mechanical Engineering).

Golden, Colorado
Date

Signed: Rahul Dev Appapogu

Signed: Dr. Andrew Petruska, Thesis Advisor
Signed: Dr. John Steele, Thesis Advisor

Golden, Colorado
Date

Signed: Dr. John Berger, Head of the Department, Department of Mechanical Engineering


ABSTRACT

Workers in the mining industry are exposed to hazardous environments due to the nature of the work inside a mine. Gas leaks, explosions, rock falls, entrapment, and long exposure to dust are all potentially fatal conditions for workers. Although solutions to these problems are being implemented, they are not sufficient and are mostly very expensive.

Autonomous robots can reduce the risk for miners by taking over potentially dangerous tasks. For instance, an autonomous robot can carry out operations like air quality assessment and inspection of dangerous mine conditions, and can even perform search and rescue tasks in disaster situations.

This thesis presents robots that can traverse a mine environment with onboard sensors collecting data without any human intervention. Control and obstacle avoidance algorithms are designed and presented in this thesis for the robots, a ground and an aerial platform. A Model Predictive Control (MPC) approach is presented which includes pre-packing of necessary terms to help decrease computation costs. A Linear Quadratic Regulator (LQR) approach is also presented, and its performance is compared against the Model Predictive Control approach in the presence and absence of obstacles. A potential field based obstacle avoidance approach is presented which makes use of octomaps.

Experimental results are promising, as both aerial and ground platforms perform navigation without any GPS and avoid obstacles, if any, in a simulation. Fast solve times on the order of hundreds of microseconds are obtained, and the results are compared with other existing techniques and presented. A real-time implementation of the ground robot has been made in various GPS denied environments and the results are presented. In real-time as


TABLE OF CONTENTS

ABSTRACT . . . iii

LIST OF FIGURES AND TABLES . . . viii

ACKNOWLEDGMENTS . . . 1

CHAPTER 1 INTRODUCTION . . . 2

1.1 Problem Statement . . . 3

1.2 Motivation . . . 3

1.3 Test Environment and Conditions . . . 4

1.4 Thesis Contribution . . . 4

1.5 Thesis Outline . . . 5

CHAPTER 2 RELATED WORK . . . 7

2.1 Autonomous Robots in Underground/GPS Denied Environments . . . 7

2.2 Controllers . . . 8

2.3 Obstacle Avoidance . . . 10

CHAPTER 3 SYSTEMS DESIGN . . . 12

3.1 Clearpath Husky . . . 12

3.2 3DR X8 . . . 14

3.3 Sensor Systems and Data . . . 16

3.3.1 Velodyne VLP-16 . . . 17

3.3.2 Hokuyo URG-04LX-UG01 . . . 17


3.3.4 DJI A3 IMU . . . 19

3.3.5 US-Digital E2 . . . 19

3.4 Robot Operating System [15] . . . 20

3.5 Sensor and Data Fusion . . . 22

3.5.1 EKF . . . 22

3.5.2 Odometry in Ground Robot vs Aerial Platform . . . 22

3.5.3 Data Fusion . . . 23

3.6 Mission Planner and Safety Critical System . . . 24

CHAPTER 4 CONTROLLER DESIGN . . . 28

4.1 Robot Model . . . 28

4.1.1 Husky Model . . . 28

4.1.2 Drone Model . . . 31

4.1.3 Special Note on Angular Velocity . . . 44

4.2 Cost Function . . . 46

4.2.1 Model Predictive Control . . . 46

4.2.2 Linear Quadratic Regulator . . . 53

4.3 Obstacle Avoidance . . . 54

4.3.1 Obstacle Detection . . . 54

4.3.2 Obstacle Avoidance Algorithm . . . 57

4.4 Controller and Obstacle Avoidance application . . . 60

CHAPTER 5 EXPERIMENTS AND RESULTS . . . 64

5.1 Parameters and Tuning . . . 64


5.1.2 Solver Settings . . . 65

5.1.3 Obstacle Avoidance Parameters . . . 65

5.2 Implementation on Ground Platform . . . 66

5.2.1 Simulation Settings . . . 66

5.2.2 Simulation test conditions and goals . . . 66

5.2.3 Simulation test results . . . 68

5.3 Implementation on Aerial Platform . . . 75

5.3.1 Simulation Settings . . . 75

5.3.2 Test Goals . . . 76

CHAPTER 6 CONCLUSIONS AND FUTURE WORK . . . 80


LIST OF FIGURES AND TABLES

Figure 1.1 Map of the Edgar Experimental Mine . . . 5

Figure 3.1 Clearpath Husky Platform . . . 13

Figure 3.2 Clearpath Husky Simulation in Gazebo (on the right) and visualization of the data and odometry in Rviz (on the left) . . . 13

Figure 3.3 Husky Platform block diagram . . . 15

Figure 3.4 3DR X8 Platform with sensors . . . 15

Figure 3.5 Drone Platform block diagram . . . 17

Figure 3.6 Velodyne VLP-16 Puck . . . 18

Figure 3.7 Hokuyo URG-04LX-UG01 laser scanner . . . 18

Figure 3.8 UM6 Inertial Measurement Unit . . . 19

Figure 3.9 Mission Planner - State Machine . . . 25

Figure 3.10 Overall system software block diagram . . . 27

Figure 4.1 Generic Wheel Platform diagram . . . 28

Figure 4.2 3D Mapping using Octomaps in the Edgar Experimental Mine . . . 56

Figure 4.3 Superellipsoid Collection with change in parameters . . . 58

Figure 5.1 Robot with an obstacle in an empty Gazebo environment . . . 67

Figure 5.2 Robot with obstacles in a corridor-like Gazebo environment . . . 67

Figure 5.3 Path taken by the robot to avoid obstacles while using the Laser Based Gap Finding approach. . . 68

Figure 5.4 Path taken by the robot to avoid obstacles while using the MPC approach. . . 69


Figure 5.5 Path taken by the robot to avoid obstacles while using the LQR approach. . . . 69

Figure 5.6 Plots showing output controls, linear velocity and angular velocity vs distance traveled in the heading direction. Since obstacles are placed at 3m and 5m from the robot's initial position, changes in velocities can be seen before the obstacle locations, which appear because the robot avoids the obstacles. After 6m, the velocities settle to the nominal values of 0.5m/s and 0rad/s. . . . 70

Figure 5.7 Plots showing output controls, linear velocity and angular velocity vs distance traveled in the heading direction. An obstacle is placed at a distance of 6m from the robot's initial position. The robot stops at a distance of approximately 4.5m when run using the Laser Based Gap Finding approach, while it avoids obstacles and reaches the target distance when the other approaches are used. . . . 71

Figure 5.8 Plots showing output controls, linear velocity and angular velocity vs distance traveled in the heading direction. Obstacles are placed at distances of 3m and 5m from the robot's initial position. The robot stops at a distance of approximately 2.75m when run using the Laser Based Gap Finding approach, while it avoids obstacles and reaches the target distance when the other approaches are used. . . . 72

Figure 5.9 Husky Platform implementing the LQR controller and performing obstacle avoidance in the presence of human obstacles. . . . 74

Figure 5.10 Husky Platform implementing the LQR controller and performing obstacle avoidance in the presence of an unsuspecting audience. . . . 74

Figure 5.11 Aerial Platform simulation implementing the LQR controller. The red line formed by arrows is the odometry tracked by the EKF. . . . 77

Figure 5.12 Aerial Platform simulation implementing the MPC controller. The red line formed by arrows is the odometry tracked by the EKF. . . . 78

Figure 5.13 Aerial Platform odometry tracked by the EKF. Plots show MPC output against time vs LQR output against time. . . . 79

Figure 5.14 Aerial Platform simulation implementing the LQR controller in the presence of obstacles. The red line formed by arrows is the odometry tracked by the EKF.


Table 5.1 Variance in control output by the different control and obstacle avoidance approaches discussed. Variance is represented as a vector with two elements, where the first element is the variance in linear velocity and the


ACKNOWLEDGMENTS

I would firstly like to appreciate and thank my mother Dr. Vijaya Kumari, my father Ravi Kishore, my sibling Divya Spoorthy, my grandma Santhamma and family for providing me relentless motivation, support and love.

I am immensely indebted to my advisors Dr. Andrew Petruska and Dr. John P. Steele for giving me the opportunity to work with them and for providing the necessary funding to carry out research at the Robotics Lab, Colorado School of Mines. Whatever I am now is because of you two professors, and the knowledge and wisdom you passed on to the team and me is priceless. Thank you for being patient with me throughout my research and providing me the necessary guidance whenever required.

A heartfelt shoutout to the Flying Underground team, Mark C. Lesak, Arunabh Mishra and Rupak Dasgupta, for your contribution, guidance and support. You people sure were the best team I could wish for.

My sincere gratitude to my roommates for helping me have a great stay in the United States and supporting me in my research. Special thanks to Arshiah Yusuf Mirza, for guiding and supporting me throughout.

I am greatly indebted and grateful to Sushma Siddamsetty for being patient, providing me support and love in all of my journey contributing to this thesis.

I finally would like to thank my friends in India: Vamsi, Nikhila, Nikitha, and the TLTP, for giving me support and motivation.


CHAPTER 1 INTRODUCTION

In recent years, autonomous navigation has gained popularity in the robotics community. While the potential and existing applications for autonomous navigation are plenty, solving the problem is not an easy task. In order to navigate from one place to another, a robot has to know where it is in a given surrounding with respect to some frame of reference (localization), plan a path or track a provided path, and execute locomotion along the desired trajectory without any collision with the environment.

Localization directly influences the state estimation and control of the robot, and thus accurate measurements from sensors are required to ensure the navigation is smooth and as desired. Most robots use GPS for localization, which provides very good estimates of position, orientation and velocity. However, it is a challenge to perform navigation in a GPS denied environment, since the localization is prone to drift and inconsistencies. Achieving robust control and obstacle avoidance in such environments is a challenging task, and this thesis aims to provide a solution to one such task: autonomous navigation with obstacle avoidance in an underground mine environment.

The project discussed in this thesis is based on autonomous navigation implemented on two robot platforms.

Clearpath Husky Ground Vehicle: The Clearpath Husky is a four wheeled differential drive robot developed by Clearpath, Canada [10]. It offers API support for ROS and supports C++ development. The robot is discussed further in the methods section.

3DR X8-M Octocopter: The aerial platform used is the 3DR X8-M octocopter, built by 3DR Robotics [18]. We used the octocopter framework with a DJI A3 flight controller, which offers APIs for ROS [3] and C++. It is discussed further in the methods.


Although the aerial platform used was the 3DR X8-M, the development for autonomous flight is also largely applicable to the DJI S900 framework. Since the flight controller used is the DJI A3, there are no compatibility issues with changing the flight framework; the only major change would be the sensor locations, which in turn would change the transforms.

1.1 Problem Statement

• Design a controller that enables the Clearpath Husky as well as the X8 drone to autonomously navigate to a depth of 100m into the Army adit of the Edgar Experimental Mine [4] and get back to its starting location without any human intervention.

• Design control models for the ground vehicle and the aerial vehicle.

• Design a navigation plan for both of the robots to navigate into the Army adit.

• Design an obstacle avoidance paradigm that enables the robot to not hit any obstacles in the navigation process, including the ribs/walls/roof/floor (in the case of the aerial robot).

1.2 Motivation

The mining industry is well known for its potentially hazardous working environment and conditions. According to the International Labour Organization, the mining industry accounts for a total of 8% of fatal workplace accidents [11]. According to US News, in the first three months of 2016, at least 63 people died in Russia, China and Pakistan due to mining related accidents such as gas leaks and gas explosions. Apart from leaks and explosions, hazards for miners include rock falls, entrapment inside the mine, falls from height, exposure to mine dust, and other environmental conditions at the mine [26]. Although safety issues at mines are being addressed with the introduction of improved equipment and operating procedures, the methods to date are not sufficient and are expensive [1].

The M3Robotics Lab at the Colorado School of Mines takes a step toward resolving the safety issues in underground mines. We introduce an autonomous robot that can traverse the mine environment carrying sensors to measure critical data/information without any human intervention. Personnel's exposure to the potentially hazardous mine environment would thus be reduced, potentially reducing fatalities. When fully developed, these systems can carry out operations like underground mapping, inspection or quality assessment for different mine conditions, and search and rescue operations in disaster situations, where an aerial platform would assist in searching and a ground platform can help in rescue. This thesis presents the initial scope of the project: autonomous navigation in the mine environment along with obstacle avoidance.

1.3 Test Environment and Conditions

All navigation tests for both robots were carried out at the Edgar Experimental Mine. The Edgar Mine is the Colorado School of Mines' experimental mine, used primarily to carry out research for federal, state and industrial organizations along with giving educational training to future mining engineers [5]. The Edgar Mine has underground openings, and its horizontal openings sum up to 9250ft. The testing adit chosen for the tests, the Army adit, is about 400ft long with a cross section varying from 8ft x 6ft to 12ft x 10ft [6]. A map of the Edgar Mine and pictures are added for reference.

The temperature inside the Army adit is usually maintained constant at 13°C (±3°C), and the tunnel surfaces are usually watered to avoid mine dust. There is no lighting inside the Army Tunnel; the only lighting provided is by the headlamps on the mine safety outfit.

1.4 Thesis Contribution

• Control models for a differential drive robot and an aerial robot were developed.

• A robust control paradigm that allows both robots to autonomously navigate in the GPS denied mine environment was developed. Other control paradigms that can achieve the target are implemented and presented.


Figure 1.1: Map of the Edgar Experimental Mine

• An implementation using super-ellipsoids for obstacle avoidance was developed.

1.5 Thesis Outline

Chapter Two of the thesis discusses related previous work on robots in underground and GPS denied autonomous navigation, the use of Model Predictive Control and the Linear Quadratic Regulator to control robots in recent times, and the potential field and super-ellipsoid approaches to obstacle avoidance. Chapter Three discusses the system design for both ground and aerial platforms, the sensors and sensor data used for the implementation of autonomous navigation, and the related software architectures implemented. Chapter Four discusses the mathematical robot models for the ground and aerial platforms, the control models used, and the proposed approaches to implement both Model Predictive Control and the Linear Quadratic Regulator. The chapter also discusses the proposed obstacle avoidance approach to be implemented and the overall implementation methodology. Chapter Five discusses the experiment setup and the various experiments conducted in both simulation and real time, along with the experimental results. The final chapter, Chapter Six, discusses the conclusions of this thesis and potential future work.


CHAPTER 2 RELATED WORK

2.1 Autonomous Robots in Underground/GPS Denied Environments

One of the first uses of autonomous robots in abandoned mine environments was by Ferguson et al. in [27]. Their Groundhog robot explored abandoned mines in Pennsylvania with the objectives of exploration and mapping. The robot, equipped with laser scanners and a video recorder, created a 2D representation of the mine environment using the laser scanners, and a 3D map was created from post-processed data. However, being a huge robot, the Groundhog platform cannot be used in narrow spaces.

Another work, by Bennetts et al. [22], uses a Clearpath Husky platform to automate methane gas emission monitoring and generate a 3D map of gas concentration levels using a modelling algorithm. The team used GPS data for localization and mapping, which is not available for the purposes of this thesis; hence, different methods for localization and mapping had to be used.

Related work very close to Bennetts et al., but performed without using GPS, was done by Grehl et al. [29]. Their robot Alexander (a Clearpath Husky), equipped with SICK LMS laser scanners, IMUs, PTUs and environmental sensors, mapped the Reiche Zeche educational mine in Germany using the Hector SLAM [37] approach. Although they used only the onboard sensors, without GPS data for mapping, their robot was controlled remotely by team personnel and autonomous navigation was not performed. Also, their mapping algorithm was not accurate, and improving the mapping accuracy was discussed as a future step. The team mostly focused on collecting the environmental data and the gas detection data. Their robot setup and working environment are close to those of the ground platform for the project discussed in this thesis.


Work by Achtelik et al. [19] discusses autonomous navigation of aerial platforms in GPS denied environments. Although the environmental conditions do not match, the sensor data and the robot platform used are closer to the project discussed in this thesis. They designed a custom quadrotor platform with a Hokuyo laser range finder and a stereo camera mounted on it; localization was obtained using a scan matching algorithm on the Hokuyo data and visual odometry algorithms from the stereo camera. They used an LQR controller for controlling the robot pose and an Extended Kalman Filter to fuse the sensor data to estimate the robot's pose. Not much was discussed about the obstacle avoidance routine.

A work very close to this thesis was done by Kaul et al. [34] at CSIRO. Their Hovermap, with a custom sensor setup and a custom Simultaneous Localization and Mapping (SLAM) algorithm, is currently the state of the art in GPS denied underground mapping with collision avoidance. Their custom sensor system relies on rotating a laser scanner about the vertical axis, giving a 360 degree scan of the environment. A non-rigid registration of the point cloud data is performed by applying varying corrections along the sensor trajectory; this is used for online incremental motion estimation and overall global path optimization. However, their platform requires a pre-existing baseline PointCloud map for path planning and registration before autonomous operations. While the controller is not discussed, occupancy map based obstacle avoidance was used. Their flight, however, is very much disturbed by crosswinds/headwinds, and the sensor rotation is stalled significantly, producing an irrecoverable error in the mapping. Making their mapping robust and accurate was mentioned as future work.

2.2 Controllers

The use of Model Predictive Control has increased in popularity in recent times [28], [23], [20], [35]. Robots designed for multiple purposes, varying from autonomous navigation to robotic medical surgery, use Model Predictive Control. Implementations of Model Predictive Control are presented in Liniger et al. [41] and Houska et al. [31]. Work by Quirynen et al. [47] uses a Nonlinear Model Predictive Control approach to control the fast attitude dynamics


of a UAV. Most of these approaches try to solve a constrained cost function, which is nearly the same as the implementation in this thesis. Solving a constrained nonlinear MPC problem is computationally expensive because integrating and computing the higher order terms takes time and energy. Hence, a generic term pre-packing is proposed in this thesis to avoid integrating every time to solve for the Jacobians and Hessians.

There are other variants of MPC implemented on UAV platforms, such as Learning-Based MPC or LBMPC, presented in [23]. LBMPC is an approach to adaptive control where an oracle function is used to update the learned system model in successive iterations based on the error between the true model and the nominal model. [33] used the SLQ algorithm for optimization in the MPC problem, which is currently the fastest as far as the author knows. The paper implements a Sequential Linear Quadratic (SLQ) solver alongside an infinite horizon LQR controller, which helps the feedback gains converge to steady-state gains, thereby stabilizing the system even if the solver fails to compute the updated trajectory and gains in time. However, the paper solves a nonlinear unconstrained equation as opposed to a constrained equation.

LQR has also been studied extensively in the control literature and applied to control systems to give stable, high performance results. The main advantages of using an LQR control design are that the system is stable [39], that it can achieve infinite gain margin, and that one can perform desired output tracking, as opposed to state tracking, by applying a selection matrix to the LQR weighting matrix Q.

In work done by Taimoor et al. [51], control of an unmanned aerial vehicle model using LQR is discussed. Considering the theoretical flight model from [25], the paper presented simulated results of the flight model using Matlab. [53] presented controlling a quadrotor using LQR control and used a novel method to construct obstacles called LQG Obstacles. In [40], LQR is applied to a mobile platform, a car-like robot: the nonlinear robot model is linearized about its current state and LQR control is applied. The problem is harder to solve when using a system with more degrees of freedom, for example a drone platform. Controlling the attitude quaternion is a challenge, since the quaternion is constrained to have unit magnitude.

All related work on LQR discusses similar implementations of LQR with different platforms or models, paired with different obstacle avoidance techniques.

2.3 Obstacle Avoidance

Obstacle avoidance is crucial for mobile robots and is studied widely in the literature. There are many approaches, such as the Bug algorithms [42], [43], which rely on the principle of contouring obstacles until a path to the destination is found. However, Bug algorithms are not optimal, and there are many situations where the robot can get stuck at a position. The algorithms also do not take the robot's kinematics into account.

There are potential field approaches, in which the robot is pulled towards the goal by an attractive force while a repulsive force from the obstacles prevents the robot from colliding with them. This approach also has shortcomings; for example, a local minimum could arise, depending on the obstacle configuration, where the robot could get stuck. Many modifications and changes to the approach have been made since its first publication in the 1990s, and many aerial systems or drones use potential field approaches for obstacle avoidance [38], [50], [48], [45]. Methods like Time To Collision (TTC) using a SLAM based approach have been used in [24], [44], where the time to collision is computed and, based on the threat level, action is taken, like slowing down, coming to a halt, or maneuvering around the obstacle.

A similar approach to the one in this thesis can be found in [32], where the authors use the robot model to predict the robot's future state in 2D space and take action in the present to avoid any potential collisions in the future. Unlike [24], [44], this method relies on the robot model and the SLAM algorithm to work and hence is computationally expensive.

An approach closer to the one in this thesis is found in the work done by Qasim et al. [46], which uses the method of superellipsoids for potential field force calculations and to perform reactive collision avoidance. This method requires modelling a pair of superellipsoids, one being the robot structure and the other enclosing the robot structure within itself such that there is space between the robot structure and the outer structure. Repulsive potential fields are created in the space between the two structures, and any obstacle in that space causes the robot to be repelled from the obstacle. However, this approach was only implemented in simulation, and testing on a real quadrotor was mentioned as future work. Implementation in simulation gives the user access to the ground truth of the robot's position and the obstacle positions in the absolute or world frame, which is an ideal case compared to implementation on a real robot.

Although not an approach similar to potential fields, a laser-based gap finding obstacle avoidance approach was developed by Ayoade et al. [21] at the Colorado School of Mines. In the presence of an obstacle, the algorithm computes a trajectory towards the widest gap available for the robot to pass through that is closest to the trajectory pre-planned by a global planner. Before potential fields were used, this algorithm was implemented on the ground platform with a PID controller for autonomous navigation in the mine, and one successful run was accomplished. The potential fields technique presented in this thesis, paired with MPC and LQR, is compared against the laser based approach with PID, and the results are presented.

(23)

CHAPTER 3 SYSTEMS DESIGN

All development work was done on two robots, a Clearpath Husky and a 3DR X8 octocopter. The husky_gazebo [9] package was used to simulate the Husky in the Gazebo simulation platform. Since a model of the 3DR X8 is not available in Gazebo, simulations are performed using a DJI Matrice 100 model available as a package for Gazebo [2]. To make it easier to extend the development to the 3DR X8 platform, the obstacle avoidance algorithm is parameterized; more about the parameters is explained in later sections. The two systems have different on-board computers and are entirely different in their mechanical structure and locomotion constraints. An overall description of both robot platforms, sensor systems, sensor data, communication, the robot software environment and the interface is provided in this chapter.

3.1 Clearpath Husky

The Husky A200™, manufactured by Clearpath Inc., is a four wheeled differential drive mobile robot platform. It was factory designed to be tele-operated or driven in an autonomous mode. Its underlying software also enables the robot to take input control commands from the user via a remote or onboard computer and execute the provided command while the user monitors the robot's performance on the computer in real time. Figure 3.1 shows the Husky platform with sensors in a real environment.

Figure 3.2 shows the Husky in the ROS simulation environments Gazebo and Rviz.

• Robot Platform: The Clearpath Husky is a 50kg mobile robot platform, see Figure 3.1. The robot platform is approximately 1m long, 0.67m wide and 0.4m high, and has a maximum payload capacity of 75kg. It is powered by a 24V lead acid battery with a total available power of 192W. The platform contains a voltage converter which converts and provides power at 5V, 12V and 24V, each fused at 5A, which can be used to power different sensor systems along with the on-board computer. The battery provides about 2 hours of drive time and 8 hours of standby time after being integrated with the different sensors and other systems. The robot platform has a maximum speed of 3.6km/hr. It can be controlled using a Logitech controller and accepts linear and angular velocity commands. Software internal to the robot allows the user to monitor different states like X, Y, Z position, orientation in a quaternion representation, linear and angular velocities in the X, Y and Z directions, and battery status.

Figure 3.1: Clearpath Husky Platform

Figure 3.2: Clearpath Husky Simulation in Gazebo (on the right) and visualization of the data and odometry in Rviz (on the left)


• Onboard Computer: The Husky has an onboard computer, an ASUS ROG Strix Mini-ITX board with a 7th-generation Intel i5 processor. It has 8GB of RAM, and the motherboard supports the USB 3.1, 802.11ac Wi-Fi and Gigabit LAN communication protocols. It has 512GB of external storage and runs Ubuntu 14.04, a Linux operating system, with the ROS Indigo middleware installed. More about ROS, its components and functionalities will be explained in later sections. The onboard computer communicates with the motor controllers using an RS232 protocol at a baud rate of 115200.

• Sensor Systems: For the purpose of this project, the Husky platform uses the following sensor systems:

• Velodyne VLP-16: 3D LiDAR which provides PointCloud data at 10Hz

• UM6 orientation sensor: provides an orientation quaternion, angular velocities in the X, Y and Z directions, and linear acceleration in the X, Y and Z directions at 10Hz

• Wheel encoders: indirectly provide wheel motion, position, orientation and velocity data of the robot.

More about the sensor systems will be discussed in later sections.

The overall system block diagram for the Husky platform is shown in Fig. 3.3.

3.2 3DR X8

The 3DR X8-M, manufactured by 3DR, is an X-configured octocopter platform. It was factory designed to be tele-operable and semi-autonomous with a Pixhawk autopilot and ArduCopter firmware. For the purpose of the project, the autopilot was replaced with a DJI A3 flight controller, and a Lightbridge 2 remote flight controller was added. This allows the user to tele-operate the drone with the Lightbridge controller and give thrust, angular rate and linear velocity commands through it.


Figure 3.3: Husky Platform block diagram

Figure 3.4: 3DR X8 Platform with sensors

• Robot Platform: The 3DR X8-M is a 3.2kg octocopter platform, see Figure 3.4. The aircraft is about 0.5m long, 0.5m wide and 0.35m high, and has a maximum payload capacity of 5.5kg. It is powered by a 15.2V lithium high-voltage battery with a capacity of 5.2Ah. The battery can be used to power different sensor systems along with the on-board computer. It provides about 10 minutes of flight time and 2 hours of standby time after integrating with the different sensors and other systems. The robot platform can travel at 25km/hr with the existing propulsion systems. It can be controlled using a Lightbridge 2 controller and accepts thrust, linear and angular velocity commands from the controller. Software internal to the robot allows the user to monitor different states like X, Y, Z position, orientation in a quaternion representation, linear and angular velocities in the X, Y and Z directions, and battery status.

• Onboard Computer: The 3DR has an onboard computer, a Jetson TX2 board with HMP dual Denver and quad ARM A57 processors. It has 8GB of RAM, and the board supports the USB 3.1 and 2.0, 802.11ac Wi-Fi, Gigabit LAN, CAN, UART, I2C and SPI communication protocols. It has 512GB of external storage and runs Ubuntu 16.04, a Linux operating system, with the ROS Kinetic middleware installed. More about ROS, its components and functionalities will be explained in further sections. The onboard computer communicates with the flight controller over a USB 3.1 connection.

• Sensor Systems: For the purpose of this project, the 3DR platform has the following sensor systems:

• Velodyne VLP-16: 3D LiDAR which provides PointCloud data at 10Hz

• Hokuyo laser sensor: provides LaserScan data at 10Hz

• DJI A3 IMU: provides an orientation quaternion, angular velocities in the X, Y and Z directions, and linear acceleration in the X, Y and Z directions at 400Hz.

Note: Although GPS and barometer sensors are on the aircraft, their data is not used for navigation or localization.

The overall system block diagram for the drone platform is shown in Fig. 3.5.

3.3 Sensor Systems and Data

Both robots were equipped with several sensors for localization and obstacle avoidance. The Velodyne VLP-16 is common to both platforms, while the rest of the sensors differ, mostly because they serve different purposes; sensors like the IMUs differ because of different manufacturers. The different sensors are discussed below.


Figure 3.5: Drone Platform block diagram

3.3.1 Velodyne VLP-16

The Velodyne VLP-16 is a 3D LiDAR sensor providing PointCloud data of its surroundings. The sensor operates at 12V and consumes a typical 8W of power. It has 16 (15 operational) channels, a horizontal FOV of 360° and a vertical FOV of 30°, with angular resolutions of 0.1° and 2° respectively. Its operating range is 0.9m - 100m.

The PointCloud2 sensor data encodes the x, y, z position, intensity and ring/channel number of each point and is of the sensor_msgs/PointCloud2 format [17]. The Velodyne is used on the Husky to perform obstacle avoidance, while it serves the dual purpose of localization and obstacle avoidance on the drone.
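A minimal sketch of receiving and parsing this message in C++ with the standard pcl_conversions helper is shown below; the topic name /velodyne_points is the common default of the Velodyne driver and is an assumption here, not something stated in the text.

```cpp
// Sketch: subscribing to the Velodyne's PointCloud2 topic and converting
// it into a PCL cloud whose fields (x, y, z, intensity) can be iterated.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZI> cloud;
  pcl::fromROSMsg(*msg, cloud);  // parses the point fields into x, y, z, intensity
  ROS_INFO("Received %zu points", cloud.points.size());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_listener");
  ros::NodeHandle nh;
  // "/velodyne_points" is an assumed (driver-default) topic name.
  ros::Subscriber sub = nh.subscribe("/velodyne_points", 1, cloudCallback);
  ros::spin();
}
```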

3.3.2 Hokuyo URG-04LX-UG01

The Hokuyo URG-04LX-UG01 is a 2D LiDAR sensor providing LaserScan data of its surroundings. The sensor operates at 12V. It has a horizontal FOV of 240° and an angular resolution of 0.352°. Its operating range is 0.02m - 4m. The LaserScan sensor data encodes the x, y, z positions and intensity of each point and is of the sensor_msgs/LaserScan format.

The Hokuyo is used on the drone platform primarily to estimate the drone's height and to create a list of obstacles above and below the drone. It is mounted perpendicular to the Velodyne's mounting position, providing laser scan points above and below the drone.


Figure 3.6: Velodyne VLP-16 Puck

Figure 3.7: Hokuyo URG-04LX-UG01 laser scanner

3.3.3 CHR-UM6

The CHR-UM6 is an orientation sensor on the Clearpath Husky, providing magnetometer data, angular velocity in the body frame, and linear acceleration data at a rate of up to 300Hz. Software internal to the onboard computer converts the magnetometer data to orientation data for convenience. Measurable rotation rates for the sensor are in the range of ±2000°/s, with a stated accuracy for the roll and pitch angles and an accuracy of 5° for the yaw angle. To access the data, software internal to the system publishes the data in the sensor_msgs/IMU format, which is a ROS standard for publishing IMU data. It encodes orientation, angular velocity and linear acceleration.

Figure 3.8: UM6 Inertial Measurement Unit

3.3.4 DJI A3 IMU

The DJI A3 IMU is an orientation sensor on the 3DR X8, providing magnetometer data, angular velocity in the body frame, and linear acceleration data at a rate of up to 400Hz. Software internal to the onboard computer converts the magnetometer data to orientation data for convenience. The IMU is internal to the DJI A3 flight controller, and no specifications for the sensor are available or published at the moment. To access the data, software internal to the system publishes the data in the sensor_msgs/IMU format, which is a ROS standard for publishing IMU data. It encodes orientation, angular velocity and linear acceleration.

3.3.5 US-Digital E2

The E2 is the optical encoder on the Clearpath Husky, providing about 20,000 pulses for one complete rotation of a wheel. Since the Clearpath Husky is a skid-steered vehicle, the position and orientation of the robot can be estimated from the number of revolutions per side and their direction or sign. To access the data given by the sensor, internal software publishes the data in the form of nav_msgs/Odometry, which is a ROS standard for publishing odometry data. It encodes position, orientation represented as a quaternion, and linear and angular velocities.
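A small sketch of the underlying differential-drive relation between per-side wheel rates and body velocities is shown below; the structure and parameter names are illustrative, not the Husky's internal firmware.

```cpp
// Sketch: recovering forward velocity v and yaw rate omega from
// left/right encoder tick rates on a skid-steer/differential platform.
#include <cmath>

struct DiffDriveOdom {
  double wheel_radius;   // m
  double track_width;    // m, lateral distance between wheel centers
  double ticks_per_rev;  // ~20,000 for the US Digital E2 per the text

  void update(double left_ticks_per_s, double right_ticks_per_s,
              double& v, double& omega) const {
    const double wl = 2.0 * M_PI * left_ticks_per_s  / ticks_per_rev; // rad/s
    const double wr = 2.0 * M_PI * right_ticks_per_s / ticks_per_rev; // rad/s
    const double vl = wl * wheel_radius;  // m/s, left side
    const double vr = wr * wheel_radius;  // m/s, right side
    v     = 0.5 * (vr + vl);              // forward velocity
    omega = (vr - vl) / track_width;      // yaw rate
  }
};
```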


3.4 Robot Operating System [15]

Robot Operating System, or ROS, is an open-source collection of tools, libraries and conventions that help write robot-specific software in an organized fashion. It is a framework that simplifies the user's task by providing software packages that help the user interface with several sensor modules and robot platforms. For this thesis, ROS is a key component in the software integration of the systems.

Software components integrated using ROS include

• Transforms: All the transformations between different frames are defined using ROS. For instance, the transformation between the base frame of the robot and the frame of a sensor, say the Velodyne, is encoded using ROS. The tf package from ROS includes functionality to publish static transforms between frames and also has functions to compute transformations, that is, to translate data in one frame into data in another frame. These functions were used on both robot platforms.

• Sensor Data: Interfacing with or accessing the sensor data in a complex robot system is often a challenge, given the different types of sensors used and the different forms of data representation for each sensor. ROS provides a standard data representation format for each sensor, which makes it easy for the user to parse and analyze the data across different sensor platforms. For most standard sensors, ROS also provides software packages to interface with the sensor directly and fetch the data, which reduces the user's burden to communicate with and read data from a sensor. For all the sensors used for the purpose of this thesis, their respective standard ROS packages were used to interface with the sensors. Most message types in ROS include a header, which allows the user to encode the time stamp of the message, the number of the message in a sequence, and the frame id of the message. Following are some important sensor message types used.


• sensor_msgs/IMU: ROS message type used to represent data from an IMU sensor. It has orientation, angular velocity and linear acceleration fields.

• sensor_msgs/PointCloud2: ROS message type used to represent data from a LiDAR sensor. Since most LiDARs have different attributes and different outputs, this data type is designed to be flexible for most LiDARs. It has a point fields section, which encodes the necessary sequence for different outputs, like the [x, y, z] location of different points, the intensity returns, etc., and a data vector in which all the necessary data is stored. The data vector is parsed according to the point fields section.

• sensor_msgs/LaserScan: ROS message type used to represent data from a LiDAR sensor. This message type differs from PointCloud2 in that it is primarily used to represent data from 2D laser scanners. It has a ranges field, which represents the range of each point scanned from the center of the laser sensor, and all other fields are limits regarding the FOV and ranges.

• Communication between software packages: An executable from a software package is known as a node in ROS. There is a special node called the master which takes care of communication and data transfer between different nodes. This functionality in ROS is very useful while integrating different sensors with other software packages, since without ROS one would need to develop a framework for data transfer and communication between different executables.

• Issuing control commands to robot platforms: Another major challenge while developing mobile robots is to issue commands from the software to the hardware motors. It involves several layers of interfaces and can prove harder than expected in several cases. Thanks to ROS, most mobile platforms now have software packages to issue control commands. Clearpath and DJI have open-source packages for ROS that are used to control the respective robots for the purpose of this thesis; a minimal sketch follows this list.
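The sketch below shows the idea of commanding a ground platform like the Husky through this ROS interface by publishing geometry_msgs/Twist messages; the /cmd_vel topic name is a common default and an assumption here, not a detail given in the text.

```cpp
// Sketch: publishing linear/angular velocity commands over ROS.
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "simple_commander");
  ros::NodeHandle nh;
  // Assumed topic name; the Husky driver remaps its command topic in
  // some configurations.
  ros::Publisher cmd_pub = nh.advertise<geometry_msgs::Twist>("/cmd_vel", 1);

  ros::Rate rate(10);  // command at 10 Hz
  while (ros::ok()) {
    geometry_msgs::Twist cmd;
    cmd.linear.x  = 0.5;  // m/s, the nominal forward speed used in the tests
    cmd.angular.z = 0.0;  // rad/s
    cmd_pub.publish(cmd);
    rate.sleep();
  }
}
```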


3.5 Sensor and Data Fusion

3.5.1 EKF

With the different sensors and outputs defined, this section discusses fusing the different sensor data to provide accurate localization and obstacle information. For localization, we use a technique named Extended Kalman Filtering, or EKF, which is one of the most popular robot localization algorithms in use. It relies on the principle of predicting the state of the system given the previous state and the applied input, and then updating the prediction using sensor measurements. It is widely used and computationally inexpensive [52].
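For reference, the standard textbook form of the EKF predict/update cycle is sketched below; the symbols are generic ($f$ is the motion model, $h$ the measurement model, $F_k$ and $H_k$ their Jacobians, $Q_k$ and $R_k$ the process and measurement noise covariances) and do not describe the thesis' specific state vector or tuning.

$$\hat{x}_{k|k-1} = f\left(\hat{x}_{k-1|k-1}, u_k\right), \qquad P_{k|k-1} = F_k P_{k-1|k-1} F_k^\top + Q_k$$

$$K_k = P_{k|k-1} H_k^\top \left(H_k P_{k|k-1} H_k^\top + R_k\right)^{-1}$$

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - h\left(\hat{x}_{k|k-1}\right)\right), \qquad P_{k|k} = \left(I - K_k H_k\right) P_{k|k-1}$$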

For the purpose of this thesis, we use the robot_localization package developed by Clearpath Robotics, which implements an EKF. The package has the flexibility to accept data from various sensors such as IMUs, GPS and wheel encoders, and can accept different data types such as sensor_msgs/IMU, nav_msgs/Odometry, geometry_msgs/Pose and geometry_msgs/Twist with their respective covariances. This package proved reliable, and the same package is used on both the ground and aerial platforms for localization, although with different parameters.

3.5.2 Odometry in Ground Robot vs Aerial Platform

As discussed in the earlier sections, the objective is to achieve autonomous navigation in an underground environment, which implies a lack of GPS signal. Hence, localization becomes a challenge with the sensors onboard both platforms. On the Clearpath Husky, the challenge is relatively easy due to the presence of wheel encoders, which provide a good odometry estimate for the robot even without GPS. The odometry data from the encoders, along with the IMU data, is fed to the EKF to provide good odometry estimates.

On the other hand, the drone system does not have any measurements of motor shaft revolutions, speed or direction, and thus needs to rely on visual odometry for localization. For this thesis, LOAM [12] was used for the drone platform. Laser Odometry And Mapping, or LOAM, is a LiDAR vision based odometry algorithm used for mapping and real-time state estimation. It runs an odometry thread and a mapping thread in parallel: the odometry thread calculates odometry by computing the motion of the lidar and the distortion in the PointClouds, and the mapping thread takes the undistorted PointClouds and builds a map while computing the sensor frame at a lower rate. The ROS package for LOAM, known as loam_velodyne, was used on the drone platform [12].

3.5.3 Data Fusion

Using the robot_localization and loam_velodyne packages on the drone system proves to be a challenge, as there are many sensors and different frames of reference involved. The table below shows the transforms from the base frame of the robot, base_link, to the other frames. Each transform is given as a vector whose first three elements are position and whose remaining four elements are orientation represented as a quaternion.

Table 3.1: Frames and their transformations from the base_link frame

Initial Frame | Final Frame   | Transform
base_link     | velodyne      | [0 0 0.0844 0 0 0 1]
base_link     | imu           | [0 0 0 0 0 0 1]
base_link     | hokuyo        | [0.173 0 -0.03 0.5 0.5 0.5 0.5]
base_link     | loam_odometry | [0 0 0 0.5 0.5 0.5 0.5]

In order to maintain consistency while using the ROS packages, a data fusion technique is implemented in the form of a data conversion package. The package consists of three major components (a conversion sketch follows this list):

• data conversion: transforms the odometry and IMU data from the DJI into the base_link frame to feed into the EKF.

• PointCloud fusion: transforms and fuses the PointCloud data from the Velodyne and the Hokuyo into a single PointCloud in the base_link frame, called PointCloud_fused. Data from the Hokuyo is converted from sensor_msgs/LaserScan to sensor_msgs/PointCloud2 before transformation and fusion.

• PointCloud consolidation: transforms and fuses the obstacle points close to the robot (a point within 1m of the robot's centroid is considered close) into PointCloud_fused. Further details are discussed in the obstacle avoidance section.
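A minimal sketch of the LaserScan-to-PointCloud2 conversion and re-expression in base_link, using the standard laser_geometry and pcl_ros helpers, is shown below; it illustrates the idea rather than reproducing the thesis' actual fusion package.

```cpp
// Sketch: project a 2D LaserScan into a PointCloud2 and transform it
// into the base_link frame before fusing it with the Velodyne cloud.
#include <laser_geometry/laser_geometry.h>
#include <pcl_ros/transforms.h>
#include <sensor_msgs/LaserScan.h>
#include <sensor_msgs/PointCloud2.h>
#include <tf/transform_listener.h>

sensor_msgs::PointCloud2 scanToBaseLinkCloud(
    const sensor_msgs::LaserScan& scan,
    laser_geometry::LaserProjection& projector,
    const tf::TransformListener& listener)
{
  sensor_msgs::PointCloud2 cloud, cloud_base;
  projector.projectLaser(scan, cloud);              // LaserScan -> PointCloud2
  pcl_ros::transformPointCloud("base_link", cloud,  // re-express in base_link
                               cloud_base, listener);
  return cloud_base;
}
```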

3.6 Mission Planner and Safety Critical System

For this thesis, a Mission Planner was designed to carry out the state transitions in a smooth and error-free fashion. The Mission Planner also publishes the targets, or target states, for the controller to accomplish. The targets are loaded into a queue with the help of a YAML file; the user can edit the YAML file to provide multiple targets in a single mission. Once a target is reached, the Mission Planner pops the finished target and publishes a new target. Once all targets are achieved, the Mission Planner takes control away from the control package so that it is safe for the user to approach the robot.

The state machine consists of different states, whose functionalities are defined as follows:

• START_UP: the initial state for the robot. In this state, the Mission Planner checks for any possible errors or failures. If they exist, the Mission Planner waits for a specified time to check whether the inconsistencies are resolved. If they are not resolved in that time, the state machine switches to the ERROR state. If everything is acceptable, the state machine transitions to the IDLE state.

• IDLE: control is obtained in this state and the Mission Planner waits for the user to give a 'GO' command. Once received, the state machine transitions to MISSION_INITIATE.

• MISSION_INITIATE: the motors are turned on and the robot gets ready to start the given mission; everything required for startup happens in this state. This state is important on the drone platform, since a drone needs to start its motors, fly up and hover at a target height before starting a mission. Once the robot is ready, the state machine switches to MISSION_EXECUTE.

• MISSION_EXECUTE: the user-defined mission is executed in this state. Once all missions are executed, control is transferred to MISSION_TERMINATE.

• MISSION_TERMINATE: this state ensures the robot's stopping procedure is smooth, the motors are turned off, and the robot is safe for the user to handle. This state is important for the drone system, as the platform needs to land smoothly and the motors must be turned off before a user approaches the robot.

• ERROR: this state is reached if there is any failure before or after a mission is initiated or terminated, respectively. Control is taken away from the controller and it is ensured that the robot is safe to handle.

Figure 3.9 illustrates the flow of the state machine for the drone system; a minimal sketch of the transition logic follows.
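The sketch below expresses the Mission Planner states as a switch over an enum; the predicate names (healthy, go_received, ready, targets_remaining) paraphrase the conditions described above and are illustrative, not taken from the actual package.

```cpp
// Sketch: Mission Planner transitions as a simple state machine.
enum class MissionState {
  START_UP, IDLE, MISSION_INITIATE, MISSION_EXECUTE,
  MISSION_TERMINATE, ERROR
};

MissionState step(MissionState s, bool healthy, bool go_received,
                  bool ready, bool targets_remaining)
{
  if (!healthy) return MissionState::ERROR;  // SEVERE failures preempt everything
  switch (s) {
    case MissionState::START_UP:
      return MissionState::IDLE;             // startup checks passed
    case MissionState::IDLE:
      return go_received ? MissionState::MISSION_INITIATE : s;
    case MissionState::MISSION_INITIATE:
      return ready ? MissionState::MISSION_EXECUTE : s;  // e.g. drone at hover height
    case MissionState::MISSION_EXECUTE:
      return targets_remaining ? s : MissionState::MISSION_TERMINATE;
    case MissionState::MISSION_TERMINATE:
    case MissionState::ERROR:
      return s;                              // terminal until operator resets
  }
  return s;
}
```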


A System Health Monitor node, a safety-critical system, was also developed to monitor all systems, sensors and software nodes to check for failures. Any failure is categorized into one of three subdivisions (a small sketch follows this list):

• INFORMATIONAL: a failure of this category requires the user to look at the information published upon failure and keep a note of it. No action from the user or the robot is required.

• WARNING: a failure of this category requires the user to take action on the robot regarding the information published upon failure and keep a note of it. No action by the robot is required.

• SEVERE: a failure of this category immediately sends a request to stop the robot and transfers control away from the controller so that it is safe for the user to handle the robot and diagnose the error/failure.
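A minimal sketch of these three categories as an enum with a dispatch function is shown below; the names and logging are illustrative and not the actual node's API.

```cpp
// Sketch: failure-severity handling in the spirit of the Health Monitor.
#include <iostream>

enum class FailureLevel { INFORMATIONAL, WARNING, SEVERE };

void handleFailure(FailureLevel level, const char* what)
{
  switch (level) {
    case FailureLevel::INFORMATIONAL:
      std::cerr << "[INFO] "   << what << '\n';  // log only
      break;
    case FailureLevel::WARNING:
      std::cerr << "[WARN] "   << what << '\n';  // user action required
      break;
    case FailureLevel::SEVERE:
      std::cerr << "[SEVERE] " << what << '\n';
      // Here the real system would request an immediate stop and release
      // control so the robot is safe to approach.
      break;
  }
}
```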

The overall system block diagram for the robots, showing the software overview, is shown in Fig. 3.10.


CHAPTER 4 CONTROLLER DESIGN

4.1 Robot Model

This section discusses the robot models designed and used for controlling the ground and aerial platforms. Separate models were designed for the two platforms, as their dynamics are vastly different. Simple kinematic models were designed and used for both platforms.

4.1.1 Husky Model

Consider a skid-steered platform and an inertial frame X, Y where X points forward, as shown in Figure 4.1. The origin of the inertial frame is considered to be at the centroid of the robot, and the body frame of the robot is considered to be the same as the inertial frame as an initial condition.


Define:

$[x, y]$: location of the robot in the inertial frame
$\theta$: heading/yaw angle of the robot in the inertial frame
$v$: velocity of the robot in the inertial frame
$\omega$: angular velocity of the robot in the body frame
$\|v\|$: vector two-norm of a given vector $v$
$t_0$: initial trajectory solve time
$t_f$: final trajectory solve time
$t$: time elapsed between the initial and final trajectory solve times, $t_f - t_0$

With those defined, we define our system states $X$ and controls $U$ as

$$X = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} \quad (4.1) \qquad U = \begin{bmatrix} \|v\| \\ \|\omega\| \end{bmatrix} \quad (4.2)$$

The kinematic control equation formed from the given states and controls is then

$$\dot{X} = \begin{bmatrix} \|v\|\cos\theta \\ \|v\|\sin\theta \\ \|\omega\| \end{bmatrix} \quad (4.3)$$

A closed-form solution of the differential equation is computed and used as the state update:

$$X = X_0 + \begin{bmatrix} \frac{\|v\|}{\|\omega\|}\left(\sin(\theta + \|\omega\| t) - \sin\theta\right) \\ -\frac{\|v\|}{\|\omega\|}\left(\cos(\theta + \|\omega\| t) - \cos\theta\right) \\ \|\omega\| t \end{bmatrix} \quad (4.4)$$
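A numeric sketch of the state update (4.4) is shown below; the small-$\|\omega\|$ guard that falls back to the straight-line limit is an implementation detail assumed here, not something stated in the text.

```cpp
// Sketch: propagate the planar state (4.4) over a horizon step t.
#include <cmath>

struct State { double x, y, theta; };

State propagate(const State& s, double v, double w, double t)
{
  State out = s;
  if (std::fabs(w) < 1e-6) {            // assumed straight-line fallback
    out.x += v * t * std::cos(s.theta);
    out.y += v * t * std::sin(s.theta);
  } else {
    out.x += (v / w) * (std::sin(s.theta + w * t) - std::sin(s.theta));
    out.y -= (v / w) * (std::cos(s.theta + w * t) - std::cos(s.theta));
  }
  out.theta += w * t;
  return out;
}
```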


From the computed closed-form solution, we calculate Jacobians for the state, which are the sensitivity of the state $X$ with respect to $X$, represented by $J_x$,

$$J_x = \begin{bmatrix} 1 & 0 & \frac{\|v\|}{\|\omega\|}\left(\cos(\theta + \|\omega\| t) - \cos\theta\right) \\ 0 & 1 & \frac{\|v\|}{\|\omega\|}\left(\sin(\theta + \|\omega\| t) - \sin\theta\right) \\ 0 & 0 & 1 \end{bmatrix} \quad (4.5)$$

and the sensitivity of the state $X$ with respect to $U$, represented by $J_u$,

$$J_u = \begin{bmatrix} \frac{1}{\|\omega\|}\left(\sin(\theta + \|\omega\| t) - \sin\theta\right) & \|v\|\left(\frac{t\cos(\theta + \|\omega\| t)}{\|\omega\|} - \frac{\sin(\theta + \|\omega\| t) - \sin\theta}{\|\omega\|^2}\right) \\ -\frac{1}{\|\omega\|}\left(\cos(\theta + \|\omega\| t) - \cos\theta\right) & \|v\|\left(\frac{t\sin(\theta + \|\omega\| t)}{\|\omega\|} + \frac{\cos(\theta + \|\omega\| t) - \cos\theta}{\|\omega\|^2}\right) \\ 0 & t \end{bmatrix} \quad (4.6)$$

We also calculate Hessians, which are:

the sensitivity of the Jacobian $J_x$ with respect to $X$, represented by $J_{xx}$;
the sensitivity of $J_x$ with respect to $U$, represented by $J_{xu}$;
the sensitivity of $J_u$ with respect to $U$, represented by $J_{uu}$.

Since the Hessians are three-dimensional tensors, we express each row-column slice horizontally in the order of their depth, that is, the left-most row-column slice being depth 1 and the right-most slice being the maximum depth. $J_{uu}$, however, is expressed vertically for convenience, with the top-most slice as depth 1 and the depth progressing downward.

$$J_{xx} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \;\; \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \;\; \begin{bmatrix} 0 & 0 & -\frac{\|v\|}{\|\omega\|}\left(\sin(\theta + \|\omega\| t) - \sin\theta\right) \\ 0 & 0 & \frac{\|v\|}{\|\omega\|}\left(\cos(\theta + \|\omega\| t) - \cos\theta\right) \\ 0 & 0 & 0 \end{bmatrix} \quad (4.7)$$


$$J_{xu} = \begin{bmatrix} 0 & 0 & \frac{\cos(\theta + \|\omega\| t) - \cos\theta}{\|\omega\|} \\ 0 & 0 & \frac{\sin(\theta + \|\omega\| t) - \sin\theta}{\|\omega\|} \\ 0 & 0 & 0 \end{bmatrix} \;\; \begin{bmatrix} 0 & 0 & -\|v\|\left(\frac{t\sin(\theta + \|\omega\| t)}{\|\omega\|} + \frac{\cos(\theta + \|\omega\| t) - \cos\theta}{\|\omega\|^2}\right) \\ 0 & 0 & \|v\|\left(\frac{t\cos(\theta + \|\omega\| t)}{\|\omega\|} - \frac{\sin(\theta + \|\omega\| t) - \sin\theta}{\|\omega\|^2}\right) \\ 0 & 0 & 0 \end{bmatrix} \quad (4.8)$$

$$J_{uu} = \begin{bmatrix} 0 & \frac{t\cos(\theta + \|\omega\| t)}{\|\omega\|} - \frac{\sin(\theta + \|\omega\| t) - \sin\theta}{\|\omega\|^2} \\ 0 & \frac{t\sin(\theta + \|\omega\| t)}{\|\omega\|} + \frac{\cos(\theta + \|\omega\| t) - \cos\theta}{\|\omega\|^2} \\ 0 & 0 \end{bmatrix} \quad (4.9)$$

4.1.2 Drone Model

The drone model is relatively complex compared to the ground robot model, given the dynamics of the system. This model was developed such that it can be applied to any generic aerial platform that accepts thrust/linear acceleration in the Z direction and angular velocity commands, irrespective of the drone configuration, make, and model.

Consider an inertial frame X, Y, Z where X points forward, Y points to the left and Z points upward. Origin of the body frame is considered to be at the center of mass of the robot, and the body frame is considered the same as the inertial frame as an initial condition.

Define:

$[p_x, p_y, p_z]^\top = p$: position of the robot in the inertial frame
$[q_0, q_1, q_2, q_3]^\top = [q_0, \mathbf{q}] = Q$: orientation of the robot in the body frame, expressed as a quaternion
$[v_x, v_y, v_z]^\top = v$: velocity of the robot in the inertial frame
$[\omega_x, \omega_y, \omega_z]^\top = \omega$: angular velocity of the robot in the body frame
$a_z$: linear acceleration of the robot in the Z direction, in the inertial frame
$t_0$: initial trajectory solve time


$t_f$: final trajectory solve time
$\delta_t$: time elapsed between the initial and final trajectory solve times, $t_f - t_0$
$R$: initial orientation of the robot expressed as a rotation matrix
$e_3 = [0; 0; 1]$: unit Z vector
$I$: identity matrix; its size is 3x3 unless mentioned in the subscript
$O$: zero matrix; its size is 3x3 unless mentioned in the subscript
$[v]_\times$: cross-product matrix of a given vector $v$
$\|v\|$: vector two-norm of a given vector $v$
$\hat{v}$: unit vector in the direction of a vector $v$
$v^\top$: transpose of a given vector $v$
$g$: acceleration due to gravity, taken as 9.8 m/s² for all purposes in this thesis
$I_{RC}$: an N x N x 1 3D tensor which is identity in its row-column slice, represented by $I_{N \times N \times 1}$
$I_{RD}$: an N x 1 x N 3D tensor which is identity in its row-depth slice, represented by $I_{N \times 1 \times N}$
$I_{CD}$: a 1 x N x N 3D tensor which is identity in its column-depth slice, represented by $I_{1 \times N \times N}$
$\eta(Q)$: null space of a given quaternion $Q$
$\frac{d}{dQ}\eta(Q)$: the differential of the null space of $Q$ with respect to $Q$


$$X = \begin{bmatrix} q_0 \\ q_x \\ q_y \\ q_z \\ v_x \\ v_y \\ v_z \\ p_x \\ p_y \\ p_z \end{bmatrix} = \begin{bmatrix} Q \\ v \\ p \end{bmatrix} \quad (4.10) \qquad U = \begin{bmatrix} a_z \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix} = \begin{bmatrix} a_z \\ \omega \end{bmatrix} \quad (4.11)$$

Provided the states, the kinematic control equations thus formed turn out to be

$$\dot{X} = \begin{bmatrix} \dot{Q} \\ \dot{v} \\ \dot{p} \end{bmatrix} \quad (4.12)$$

where

$$\dot{Q} = \frac{1}{2}\begin{bmatrix} -\mathbf{q}^\top \\ q_0 I - [\mathbf{q}]_\times \end{bmatrix}\omega \quad (4.13)$$

$$\dot{v} = R e_3 a_z + g \quad (4.14)$$

$$\dot{p} = v \quad (4.15)$$

A closed-form solution of the differential equation is computed and used as the state update:

$$X(\delta_t) = \begin{bmatrix} Q(\delta_t) \\ v(\delta_t) \\ p(\delta_t) \end{bmatrix} \quad (4.16)$$

where

$$Q(\delta_t) = \begin{bmatrix} q_0 & -\mathbf{q}^\top \\ \mathbf{q} & q_0 I - [\mathbf{q}]_\times \end{bmatrix}\begin{bmatrix} \cos\left(\frac{\|\omega\|}{2}\delta_t\right) \\ \sin\left(\frac{\|\omega\|}{2}\delta_t\right) R\hat{\omega} \end{bmatrix} \quad (4.17)$$


$$v(\delta_t) = v(0) + R\left[ I\delta_t + \left(\frac{1 - \cos(\|\omega\|\delta_t)}{\|\omega\|}\right)[\hat{\omega}]_\times + \left(\delta_t - \frac{\sin(\|\omega\|\delta_t)}{\|\omega\|}\right)[\hat{\omega}]^2_\times \right] e_3 a_z \quad (4.18)$$

$$p(\delta_t) = p(0) + v(0)\delta_t + R\left[ I\frac{\delta_t^2}{2} + \left(\frac{\delta_t - \frac{1}{\|\omega\|}\sin(\|\omega\|\delta_t)}{\|\omega\|}\right)[\hat{\omega}]_\times + \left(\frac{\delta_t^2}{2} + \frac{\cos(\|\omega\|\delta_t) - 1}{\|\omega\|^2}\right)[\hat{\omega}]^2_\times \right] e_3 a_z \quad (4.19)$$

where $Q_0 = [q_0, \mathbf{q}]$, $v(0)$ and $p(0)$ are the initial states.
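A numeric sketch of the attitude update (4.17) using Eigen is shown below; the zero-rotation guard is an assumption added for numerical safety, and the composition order follows the right-multiplication convention implied by the matrix in (4.17).

```cpp
// Sketch: propagate the quaternion attitude per (4.17), building the
// increment quaternion from the body rate rotated into the inertial frame.
#include <cmath>
#include <Eigen/Geometry>

Eigen::Quaterniond propagateAttitude(const Eigen::Quaterniond& q0,
                                     const Eigen::Vector3d& omega,
                                     double dt)
{
  const double n = omega.norm();
  if (n < 1e-9) return q0;                       // assumed no-rotation guard
  const Eigen::Vector3d axis_world =
      q0.toRotationMatrix() * (omega / n);       // R * omega_hat
  Eigen::Quaterniond dq;
  dq.w()   = std::cos(0.5 * n * dt);
  dq.vec() = std::sin(0.5 * n * dt) * axis_world;
  return (dq * q0).normalized();                 // compose and renormalize
}
```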

From the computed closed-form solution, we calculate Jacobians for the state, which are the sensitivity of the state $X$ with respect to $X$, represented by $J_x$, and the sensitivity of the state $X$ with respect to $U$, represented by $J_u$, just as for the Husky system.

The Jacobian $J_x$ is represented as

$$J_x = \begin{bmatrix} \frac{\partial Q}{\partial Q_0}\eta(Q_0) & \frac{\partial Q}{\partial v_0} & \frac{\partial Q}{\partial p_0} \\ \frac{\partial v}{\partial Q_0}\eta(Q_0) & \frac{\partial v}{\partial v_0} & \frac{\partial v}{\partial p_0} \\ \frac{\partial p}{\partial Q_0}\eta(Q_0) & \frac{\partial p}{\partial v_0} & \frac{\partial p}{\partial p_0} \end{bmatrix}_{10 \times 10} \quad (4.20)$$

where

$$\frac{\partial v}{\partial v_0} = \frac{\partial p}{\partial p_0} = I \quad (4.21)$$

$$\frac{\partial p}{\partial v_0} = I\delta_t \quad (4.22)$$

$$\frac{\partial v}{\partial p_0} = O \quad (4.23)$$

$$\frac{\partial Q}{\partial v_0} = \frac{\partial Q}{\partial p_0} = O_{4 \times 3} \quad (4.24)$$

$$\frac{\partial v}{\partial Q_0} = 2\begin{bmatrix} \left(q_0 I + [\mathbf{q}]_\times\right)k_1 & \mathbf{q}^\top k_1\, I - k_1\mathbf{q}^\top + \mathbf{q}k_1^\top - q_0[k_1]_\times \end{bmatrix} \quad (4.25)$$


$$\frac{\partial p}{\partial Q_0} = 2\begin{bmatrix} \left(q_0 I + [\mathbf{q}]_\times\right)k_2 & \mathbf{q}^\top k_2\, I - k_2\mathbf{q}^\top + \mathbf{q}k_2^\top - q_0[k_2]_\times \end{bmatrix} \quad (4.26)$$

$$\frac{\partial Q}{\partial Q_0} = \begin{bmatrix} p_0 & -p^\top \\ p & p_0 I + [p]_\times \end{bmatrix} + \begin{bmatrix} -\mathbf{q}^\top \\ q_0 I - [\mathbf{q}]_\times \end{bmatrix}\sin\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\partial}{\partial Q_0}\left(R\hat{\omega}\right) \quad (4.27)$$

in which

$$k_1 = \left[ I\delta_t + \left(\frac{1 - \cos(\|\omega\|\delta_t)}{\|\omega\|}\right)[\hat{\omega}]_\times + \left(\delta_t - \frac{\sin(\|\omega\|\delta_t)}{\|\omega\|}\right)[\hat{\omega}]^2_\times \right] e_3 a_z \quad (4.28)$$

$$k_2 = \left\{ I\frac{\delta_t^2}{2} + \left[\frac{\delta_t - \frac{1}{\|\omega\|}\sin(\|\omega\|\delta_t)}{\|\omega\|}\right][\hat{\omega}]_\times + \left(\frac{\delta_t^2}{2} + \frac{\cos(\|\omega\|\delta_t) - 1}{\|\omega\|^2}\right)[\hat{\omega}]^2_\times \right\} e_3 a_z \quad (4.29)$$

$$p_0 = \cos\left(\frac{\|\omega\|}{2}\delta_t\right) \quad (4.30)$$

$$p = \sin\left(\frac{\|\omega\|}{2}\delta_t\right) R\hat{\omega} \quad (4.31)$$

$$\frac{\partial}{\partial Q_0}\left(R\hat{\omega}\right) = 2\begin{bmatrix} \left(q_0 I + [\mathbf{q}]_\times\right)\hat{\omega} & \mathbf{q}^\top\hat{\omega}\, I - \hat{\omega}\mathbf{q}^\top + \mathbf{q}\hat{\omega}^\top - q_0[\hat{\omega}]_\times \end{bmatrix} \quad (4.32)$$

$$\eta(Q_0) = I - \frac{\mathbf{q}\mathbf{q}^\top}{\|\mathbf{q}\|^2} \quad (4.33)$$

The Jacobian $J_u$ is represented as

$$J_u = \begin{bmatrix} \frac{\partial Q}{\partial a_z} & \frac{\partial Q}{\partial \omega} \\ \frac{\partial v}{\partial a_z} & \frac{\partial v}{\partial \omega} \\ \frac{\partial p}{\partial a_z} & \frac{\partial p}{\partial \omega} \end{bmatrix}_{10 \times 4} \quad (4.34)$$

where

$$\frac{\partial Q}{\partial a_z} = O_{4 \times 1} \quad (4.35)$$

$$\frac{\partial Q}{\partial \omega} = \begin{bmatrix} q_0 & -\mathbf{q}^\top \\ \mathbf{q} & q_0 I - [\mathbf{q}]_\times \end{bmatrix}\begin{bmatrix} -\frac{\delta_t}{2}\frac{\omega^\top}{\|\omega\|}\sin\left(\frac{\|\omega\|}{2}\delta_t\right) \\ R\left[\frac{\delta_t}{2}\cos\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\omega\omega^\top}{\|\omega\|^2} + \sin\left(\frac{\|\omega\|}{2}\delta_t\right)\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\right] \end{bmatrix} \quad (4.36)$$

$$\frac{\partial v}{\partial a_z} = R\left\{ I\delta_t + \left(\frac{1 - \cos(\|\omega\|\delta_t)}{\|\omega\|}\right)\frac{[\omega]_\times}{\|\omega\|} + \left(\delta_t - \frac{\sin(\|\omega\|\delta_t)}{\|\omega\|}\right)\frac{[\omega]^2_\times}{\|\omega\|^2} \right\} e_3 \quad (4.37)$$

$$\frac{\partial p}{\partial a_z} = R\left\{ I\frac{\delta_t^2}{2} + \left(\frac{\delta_t - \frac{1}{\|\omega\|}\sin(\|\omega\|\delta_t)}{\|\omega\|}\right)\frac{[\omega]_\times}{\|\omega\|} + \left[\frac{\delta_t^2}{2} + \frac{\cos(\|\omega\|\delta_t) - 1}{\|\omega\|^2}\right]\frac{[\omega]^2_\times}{\|\omega\|^2} \right\} e_3 \quad (4.38)$$

$$\frac{\partial v}{\partial \omega} = R\left\{ T_1 + T_2 + T_3 - T_4 \right\} a_z \quad (4.39)$$

$$\frac{\partial p}{\partial \omega} = R\left\{ T_5 + T_6 + T_7 + T_8 \right\} a_z \quad (4.40)$$

in which

$$T_1 = [\hat{\omega}]_\times e_3\left(\cos(\|\omega\|\delta_t) + \|\omega\|\delta_t\sin(\|\omega\|\delta_t) - 1\right)\frac{\omega^\top}{\|\omega\|^3} \quad (4.41)$$

$$T_2 = -[e_3]_\times\left(\frac{1}{\|\omega\|}I - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\left(\frac{1 - \cos(\|\omega\|\delta_t)}{\|\omega\|}\right) \quad (4.42)$$

$$T_3 = \frac{1}{\|\omega\|}\left(\hat{\omega}^\top e_3\, I - 2\left(\hat{\omega}^\top e_3\right)\hat{\omega}\hat{\omega}^\top + \hat{\omega}e_3^\top\right)\left(\delta_t - \frac{\sin(\|\omega\|\delta_t)}{\|\omega\|}\right) \quad (4.43)$$

$$T_4 = [\hat{\omega}]^2_\times e_3\left(-\sin(\|\omega\|\delta_t) + \|\omega\|\delta_t\cos(\|\omega\|\delta_t)\right)\frac{\omega^\top}{\|\omega\|^3} \quad (4.44)$$

$$T_5 = [\hat{\omega}]_\times e_3\left(-\left(\|\omega\|\delta_t\cos(\|\omega\|\delta_t) - 2\sin(\|\omega\|\delta_t)\right) - \delta_t\|\omega\|\right)\frac{\omega^\top}{\|\omega\|^4} \quad (4.45)$$

$$T_6 = -[e_3]_\times\left(\frac{1}{\|\omega\|}I - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\left[\frac{\delta_t - \frac{1}{\|\omega\|}\sin(\|\omega\|\delta_t)}{\|\omega\|}\right] \quad (4.46)$$


T7 = [ˆω]2×e3(−(k ω k δtsin(k ω k δt) + 2 cos(k ω k δt)) + 2) ω⊤ k ω k4 (4.47) T8 = 1 kωk ωˆ ⊤e 3I − 2 ˆω⊤e3 ˆω ˆω⊤+ ˆωe⊤3  δ 2 t 2 + (cos(k ω k δt) − 1) k ω k2  (4.48) Similar to the Husky Model, we compute Hessians which are Jacobians’ sensitivities with respect to states and controls. On similar lines for Husky model, they are

Sensitivity of the Jacobian Jx with respect to X, represented by Jxx

Sensitivity of the state Jx with respect to U, represented by Jxu

Sensitivity of the state Ju with respect to U, represented by Juu

Since the Hessians are three dimensional tensors, we express each Row-Column slice vertically in the order of their depth ie., top most Row-Column slice being depth 1 and the bottom most slice being the maximum depth.

In the Hessian $J_{xx}$, the term $\frac{\partial}{\partial Q_0}\left(\frac{\partial Q}{\partial Q_0}\right)$ is computed numerically, since its closed-form solution is difficult to compute, and it is represented in the same fashion.
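A central-difference sketch of one way such a numerical evaluation can be performed is shown below; slice d holds the partial with respect to the d-th quaternion component. Here jacQ is a hypothetical callback returning the analytic $4\times 4$ block $\frac{\partial Q}{\partial Q_0}$, and the step size h is an assumption.

#include <array>
#include <functional>
#include <Eigen/Dense>

std::array<Eigen::Matrix4d, 4> numericHessianQQ(
    const Eigen::Vector4d& Q0,
    const std::function<Eigen::Matrix4d(const Eigen::Vector4d&)>& jacQ,
    double h = 1e-6) {
  std::array<Eigen::Matrix4d, 4> H;
  for (int d = 0; d < 4; ++d) {
    Eigen::Vector4d dq = Eigen::Vector4d::Zero();
    dq(d) = h;  // perturb one quaternion component at a time
    H[d] = (jacQ(Q0 + dq) - jacQ(Q0 - dq)) / (2.0 * h);
  }
  return H;
}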

J_{xx} = \frac{\partial}{\partial x}(J_x) = \frac{\partial}{\partial x}\begin{bmatrix} \frac{\partial Q}{\partial Q_0}\eta(Q_0) & \frac{\partial Q}{\partial v_0} & \frac{\partial Q}{\partial p_0} \\ \frac{\partial v}{\partial Q_0}\eta(Q_0) & \frac{\partial v}{\partial v_0} & \frac{\partial v}{\partial p_0} \\ \frac{\partial p}{\partial Q_0}\eta(Q_0) & \frac{\partial p}{\partial v_0} & \frac{\partial p}{\partial p_0} \end{bmatrix}  (4.49)

where

\frac{\partial}{\partial v_0}\left(\frac{\partial Q}{\partial v_0}\right) = \frac{\partial}{\partial p_0}\left(\frac{\partial Q}{\partial v_0}\right) = \frac{\partial}{\partial v_0}\left(\frac{\partial Q}{\partial p_0}\right) = \frac{\partial}{\partial p_0}\left(\frac{\partial Q}{\partial p_0}\right) = O_{4\times 3\times 3}  (4.50)

\frac{\partial}{\partial Q_0}\left(\frac{\partial Q}{\partial v_0}\right) = \frac{\partial}{\partial Q_0}\left(\frac{\partial Q}{\partial p_0}\right) = O_{4\times 3\times 4}  (4.51)

\frac{\partial}{\partial v_0}\left(\frac{\partial Q}{\partial Q_0}\right) = \frac{\partial}{\partial p_0}\left(\frac{\partial Q}{\partial Q_0}\right) = O_{4\times 4\times 3}  (4.52)

\frac{\partial}{\partial p_0}\left(\frac{\partial v}{\partial Q_0}\right) = \frac{\partial}{\partial v_0}\left(\frac{\partial v}{\partial Q_0}\right) = \frac{\partial}{\partial p_0}\left(\frac{\partial p}{\partial Q_0}\right) = \frac{\partial}{\partial v_0}\left(\frac{\partial p}{\partial Q_0}\right) = O_{3\times 4\times 3}  (4.53)

\frac{\partial}{\partial Q_0}\left(\frac{\partial v}{\partial v_0}\right) = \frac{\partial}{\partial Q_0}\left(\frac{\partial v}{\partial p_0}\right) = \frac{\partial}{\partial Q_0}\left(\frac{\partial p}{\partial v_0}\right) = \frac{\partial}{\partial Q_0}\left(\frac{\partial p}{\partial p_0}\right) = O_{3\times 3\times 4}  (4.54)


\frac{\partial}{\partial v_0}\left(\frac{\partial v}{\partial v_0}\right) = \frac{\partial}{\partial p_0}\left(\frac{\partial v}{\partial v_0}\right) = \frac{\partial}{\partial v_0}\left(\frac{\partial v}{\partial p_0}\right) = \frac{\partial}{\partial p_0}\left(\frac{\partial v}{\partial p_0}\right) = O_{3\times 3\times 3}  (4.55)

\frac{\partial}{\partial v_0}\left(\frac{\partial p}{\partial v_0}\right) = \frac{\partial}{\partial p_0}\left(\frac{\partial p}{\partial v_0}\right) = \frac{\partial}{\partial v_0}\left(\frac{\partial p}{\partial p_0}\right) = \frac{\partial}{\partial p_0}\left(\frac{\partial p}{\partial p_0}\right) = O_{3\times 3\times 3}  (4.56)

\frac{\partial}{\partial Q_0}\left(\frac{\partial Q}{\partial Q_0}\eta(Q_0)\right) = \left[\frac{\partial}{\partial Q_0}\left(\frac{\partial Q}{\partial Q_0}\right)\eta(Q_0) + \frac{\partial Q}{\partial Q_0}\frac{\partial \eta(Q_0)}{\partial Q_0}\right]\eta(Q_0)  (4.57)

\frac{\partial}{\partial Q_0}\left(\frac{\partial v}{\partial Q_0}\eta(Q_0)\right) = \left[\frac{\partial}{\partial Q_0}\left(\frac{\partial v}{\partial Q_0}\right)\eta(Q_0) + \frac{\partial v}{\partial Q_0}\frac{\partial \eta(Q_0)}{\partial Q_0}\right]\eta(Q_0)  (4.58)

\frac{\partial}{\partial Q_0}\left(\frac{\partial p}{\partial Q_0}\eta(Q_0)\right) = \left[\frac{\partial}{\partial Q_0}\left(\frac{\partial p}{\partial Q_0}\right)\eta(Q_0) + \frac{\partial p}{\partial Q_0}\frac{\partial \eta(Q_0)}{\partial Q_0}\right]\eta(Q_0)  (4.59)

where $\frac{\partial}{\partial Q_0}\left(\frac{\partial Q}{\partial Q_0}\right)$ is the $4\times 4\times 4$ tensor whose Row-Column slices, stacked by depth, are

\frac{\partial}{\partial Q_0}\left(\frac{\partial Q}{\partial Q_0}\right) = \frac{2}{\|\omega\|}\sin\left(\frac{\|\omega\|}{2}\delta_t\right)\times

\begin{bmatrix}
-q_x\omega_x - q_y\omega_y - q_z\omega_z & -q_0\omega_x & -q_0\omega_y & -q_0\omega_z \\
3q_0\omega_x - q_z\omega_y + q_y\omega_z & q_x\omega_x & q_y\omega_x + q_0\omega_z & q_z\omega_x - q_0\omega_y \\
q_z\omega_x + 3q_0\omega_y - q_x\omega_z & q_x\omega_y - q_0\omega_z & q_y\omega_y & q_0\omega_x + q_z\omega_y \\
q_x\omega_y - q_y\omega_x + 3q_0\omega_z & q_0\omega_y + q_x\omega_z & q_y\omega_z - q_0\omega_x & q_z\omega_z
\end{bmatrix}

\begin{bmatrix}
-q_0\omega_x & 3q_x\omega_x - q_y\omega_y - q_z\omega_z & -q_y\omega_x - q_x\omega_y & -q_z\omega_x - q_x\omega_z \\
q_x\omega_x & q_0\omega_x - q_z\omega_y + q_y\omega_z & q_x\omega_z & -q_x\omega_y \\
q_x\omega_y - q_0\omega_z & q_z\omega_x + q_0\omega_y - 3q_x\omega_z & -q_y\omega_z & q_x\omega_x - q_z\omega_z \\
q_0\omega_y + q_x\omega_z & 3q_x\omega_y - q_y\omega_x + q_0\omega_z & q_y\omega_y - q_x\omega_x & q_z\omega_y
\end{bmatrix}

\begin{bmatrix}
-q_0\omega_y & -q_y\omega_x - q_x\omega_y & -q_x\omega_x - 3q_y\omega_y - q_z\omega_z & -q_z\omega_y - q_y\omega_z \\
q_y\omega_x + q_0\omega_z & q_x\omega_z & q_0\omega_x - q_z\omega_y + 3q_y\omega_z & q_z\omega_z - q_y\omega_y \\
q_y\omega_y & -q_y\omega_z & q_z\omega_x + q_0\omega_y - q_x\omega_z & q_y\omega_x \\
q_y\omega_z - q_0\omega_x & q_y\omega_y - q_x\omega_x & q_x\omega_y - 3q_y\omega_x + q_0\omega_z & -q_z\omega_x
\end{bmatrix}

\begin{bmatrix}
-q_0\omega_z & -q_z\omega_x - q_x\omega_z & -q_z\omega_y - q_y\omega_z & -q_x\omega_x - q_y\omega_y - 3q_z\omega_z \\
q_z\omega_x - q_0\omega_y & -q_x\omega_y & q_z\omega_z - q_y\omega_y & q_0\omega_x - 3q_z\omega_y + q_y\omega_z \\
q_0\omega_x + q_z\omega_y & q_x\omega_x - q_z\omega_z & q_y\omega_x & 3q_z\omega_x + q_0\omega_y - q_x\omega_z \\
q_z\omega_z & q_z\omega_y & -q_z\omega_x & q_x\omega_y - q_y\omega_x + q_0\omega_z
\end{bmatrix}  (4.60)

\frac{\partial}{\partial Q_0}\left(\frac{\partial v}{\partial Q_0}\right) = \frac{\partial^2 (Rk_1)}{\partial Q_0^2}  (4.61)


\frac{\partial}{\partial Q_0}\left(\frac{\partial p}{\partial Q_0}\right) = \frac{\partial^2 (Rk_2)}{\partial Q_0^2}  (4.62)

where

\frac{d^2(Rk)}{dQ^2} = 2\left[\begin{bmatrix} k & -[k]_\times \end{bmatrix},\; I_{RC}\circ(I_{CD}\circ k) + I_{RD}\,k^\top - k\,I_{CD}\right]  (4.63)

Note: in Equation 4.63, the quantity on the right-hand side is a 3D tensor. The left-most matrix is the first Row-Column slice, and the latter term is again a tensor lying behind that first Row-Column slice.

\frac{\partial \eta(Q_0)}{\partial Q} = -\frac{1}{\|q\|^2}\left(I_{414}\,q^\top + q\,I_{144}\right) + \frac{2}{\|q\|^4}\left(qq^\top \circ q^\top\right)  (4.64)

The Hessian $J_{uu}$ is represented as

J_{uu} = \frac{\partial}{\partial u}(J_u) = \frac{\partial}{\partial u}\begin{bmatrix} \frac{\partial Q}{\partial a_z} & \frac{\partial Q}{\partial \omega} \\ \frac{\partial v}{\partial a_z} & \frac{\partial v}{\partial \omega} \\ \frac{\partial p}{\partial a_z} & \frac{\partial p}{\partial \omega} \end{bmatrix}  (4.65)

Examining each element:

\frac{\partial^2 Q}{\partial a_z^2} = O_{4\times 1\times 1}  (4.66)

\frac{\partial}{\partial \omega}\left(\frac{\partial Q}{\partial a_z}\right) = O_{4\times 1\times 3}  (4.67)

\frac{\partial}{\partial a_z}\left(\frac{\partial Q}{\partial \omega}\right) = O_{4\times 3\times 1}  (4.68)

\frac{\partial^2 v}{\partial a_z^2} = \frac{\partial^2 p}{\partial a_z^2} = O_{3\times 1\times 1}  (4.69)

For $\frac{\partial^2 Q}{\partial \omega^2}$, $\frac{\partial^2 v}{\partial \omega^2}$, and $\frac{\partial^2 p}{\partial \omega^2}$ we consider the following equations.

\frac{\partial}{\partial \omega}\left(\frac{\omega^\top}{\|\omega\|^n}\right) = \frac{I}{\|\omega\|^n} - \frac{n\,\omega\omega^\top}{\|\omega\|^{n+2}}  (4.70)

\frac{\partial}{\partial \omega}\left(\frac{\omega\omega^\top}{\|\omega\|^2}\right) = \left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\frac{\omega^\top}{\|\omega\|} + \frac{\omega}{\|\omega\|}\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)^\top  (4.71)


\frac{\partial}{\partial \omega}\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right) = -\frac{1}{\|\omega\|}\frac{\partial}{\partial \omega}\left(\frac{\omega\omega^\top}{\|\omega\|^2}\right) - \left(I - \frac{\omega\omega^\top}{\|\omega\|^2}\right)\circ\frac{\omega^\top}{\|\omega\|^3}  (4.72)

\frac{\partial}{\partial \omega}\left(\frac{\omega^\top}{\|\omega\|}\sin\left(\frac{\|\omega\|}{2}\delta_t\right)\right) = \left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)^\top\sin\left(\frac{\|\omega\|}{2}\delta_t\right) + \left[\frac{\omega^\top}{\|\omega\|}\left(\frac{\delta_t}{2}\cos\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\omega^\top}{\|\omega\|}\right)\right]  (4.73)

\frac{\partial}{\partial \omega}\left(\frac{\delta_t}{2}\cos\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\omega\omega^\top}{\|\omega\|^2}\right) = \frac{\delta_t}{2}\left\{\left[\frac{\omega\omega^\top}{\|\omega\|^2}\left(-\frac{\delta_t}{2}\sin\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\omega^\top}{\|\omega\|}\right)\right] + \left[\cos\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\partial}{\partial \omega}\left(\frac{\omega\omega^\top}{\|\omega\|^2}\right)\right]\right\}  (4.74)

\frac{\partial}{\partial \omega}\left[\sin\left(\frac{\|\omega\|}{2}\delta_t\right)\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\right] = \left[\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\left(\frac{\delta_t}{2}\cos\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\omega^\top}{\|\omega\|}\right)\right] + \left[\sin\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\partial}{\partial \omega}\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\right]  (4.75)

\frac{\partial}{\partial \omega}\left(\cos(\|\omega\|\delta_t) + \|\omega\|\delta_t\sin(\|\omega\|\delta_t) - 1\right) = -\delta_t\sin(\|\omega\|\delta_t)\frac{\omega^\top}{\|\omega\|} + \delta_t\left(\|\omega\|\delta_t\cos(\|\omega\|\delta_t) + \sin(\|\omega\|\delta_t)\right)\frac{\omega^\top}{\|\omega\|}  (4.76)

\frac{\partial}{\partial \omega}\left(\frac{1-\cos(\|\omega\|\delta_t)}{\|\omega\|}\right) = \left(\cos(\|\omega\|\delta_t) + \|\omega\|\delta_t\sin(\|\omega\|\delta_t) - 1\right)\frac{\omega^\top}{\|\omega\|^3}  (4.77)

\frac{\partial}{\partial \omega}\left(\delta_t - \frac{\sin(\|\omega\|\delta_t)}{\|\omega\|}\right) = \left(-\sin(\|\omega\|\delta_t) + \|\omega\|\delta_t\cos(\|\omega\|\delta_t)\right)\frac{\omega^\top}{\|\omega\|^3}  (4.78)

\frac{\partial}{\partial \omega}\left(-\sin(\|\omega\|\delta_t) + \|\omega\|\delta_t\cos(\|\omega\|\delta_t)\right) = \left(-\delta_t\cos(\|\omega\|\delta_t) + \delta_t\left(-\|\omega\|\delta_t\sin(\|\omega\|\delta_t) + \cos(\|\omega\|\delta_t)\right)\right)\frac{\omega^\top}{\|\omega\|}  (4.79)

\frac{\partial}{\partial \omega}\left(-(\|\omega\|\delta_t\cos(\|\omega\|\delta_t) - 2\sin(\|\omega\|\delta_t)) - \delta_t\|\omega\|\right) = \left(-\delta_t\left(-\|\omega\|\delta_t\sin(\|\omega\|\delta_t) + \cos(\|\omega\|\delta_t)\right) + 2\delta_t\cos(\|\omega\|\delta_t) - \delta_t\right)\frac{\omega^\top}{\|\omega\|}  (4.80)

\frac{\partial}{\partial \omega}\left[\frac{\delta_t - \frac{1}{\|\omega\|}\sin(\|\omega\|\delta_t)}{\|\omega\|}\right] = \left(-(\|\omega\|\delta_t\cos(\|\omega\|\delta_t) - 2\sin(\|\omega\|\delta_t)) - \delta_t\|\omega\|\right)\frac{\omega^\top}{\|\omega\|^4}  (4.81)

\frac{\partial}{\partial \omega}\left[\frac{\delta_t^2}{2} + \frac{\cos(\|\omega\|\delta_t) - 1}{\|\omega\|^2}\right] = \left(-(\|\omega\|\delta_t\sin(\|\omega\|\delta_t) + 2\cos(\|\omega\|\delta_t)) + 2\right)\frac{\omega^\top}{\|\omega\|^4}  (4.82)

With the above equations in hand, the following are defined.

\frac{\partial^2 Q}{\partial \omega^2} = \begin{bmatrix} q_0 & -q^\top \\ q & q_0 I - [q]_\times \end{bmatrix} \begin{bmatrix} \frac{\partial}{\partial \omega}\left(\frac{\omega^\top}{\|\omega\|}\sin\left(\frac{\|\omega\|}{2}\delta_t\right)\right) \\ \frac{\partial}{\partial \omega}\left(\frac{\delta_t}{2}\cos\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\omega\omega^\top}{\|\omega\|^2} + \sin\left(\frac{\|\omega\|}{2}\delta_t\right)\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\right) \end{bmatrix}  (4.83)

\frac{\partial^2 v}{\partial \omega^2} = \left(R\,\frac{\partial}{\partial \omega}\left[T_1 + T_2 + T_3 - T_4\right]\right)a_z  (4.84)

\frac{\partial^2 p}{\partial \omega^2} = \left(R\,\frac{\partial}{\partial \omega}\left[T_5 + T_6 + T_7 + T_8\right]\right)a_z  (4.85)

where the terms $T_1$ to $T_8$ are given in Equations 4.41 to 4.48. Expanding the terms above,

\frac{\partial T_1}{\partial \omega} = \left[\frac{\partial\, [\hat{\omega}]_\times e_3}{\partial \omega}\left(\cos(\|\omega\|\delta_t) + \|\omega\|\delta_t\sin(\|\omega\|\delta_t) - 1\right)\frac{\omega^\top}{\|\omega\|^3}\right] + \left[[\hat{\omega}]_\times e_3\,\frac{\omega^\top}{\|\omega\|^3}\,\frac{\partial}{\partial \omega}\left(\cos(\|\omega\|\delta_t) + \|\omega\|\delta_t\sin(\|\omega\|\delta_t) - 1\right)\right] + \left[[\hat{\omega}]_\times e_3\left(\cos(\|\omega\|\delta_t) + \|\omega\|\delta_t\sin(\|\omega\|\delta_t) - 1\right)\frac{\partial}{\partial \omega}\frac{\omega^\top}{\|\omega\|^3}\right]  (4.86)

\frac{\partial T_2}{\partial \omega} = -[e_3]_\times\left[\frac{\partial}{\partial \omega}\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\left(\frac{1-\cos(\|\omega\|\delta_t)}{\|\omega\|}\right)\right] - [e_3]_\times\left\{\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\frac{\partial}{\partial \omega}\left[\frac{1-\cos(\|\omega\|\delta_t)}{\|\omega\|}\right]\right\}  (4.87)

\frac{\partial T_3}{\partial \omega} = \left(I\frac{\partial}{\partial \omega}\hat{\omega}^\top e_3 + \frac{\partial}{\partial \omega}\hat{\omega}e_3^\top - 2e_3\frac{\partial}{\partial \omega}\hat{\omega}^\top\right)\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\left(\delta_t - \frac{\sin(\|\omega\|\delta_t)}{\|\omega\|}\right) + \left(I\hat{\omega}^\top e_3 + \hat{\omega}e_3^\top - 2e_3\hat{\omega}^\top\right)\frac{\partial}{\partial \omega}\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\left(\delta_t - \frac{\sin(\|\omega\|\delta_t)}{\|\omega\|}\right) + \left(I\hat{\omega}^\top e_3 + \hat{\omega}e_3^\top - 2e_3\hat{\omega}^\top\right)\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\frac{\partial}{\partial \omega}\left[\delta_t - \frac{\sin(\|\omega\|\delta_t)}{\|\omega\|}\right]  (4.88, 4.89)

\frac{\partial T_4}{\partial \omega} = \left[\frac{\partial\, [\hat{\omega}]_\times^2 e_3}{\partial \omega}\left(-\sin(\|\omega\|\delta_t) + \|\omega\|\delta_t\cos(\|\omega\|\delta_t)\right)\frac{\omega^\top}{\|\omega\|^3}\right] + \left[[\hat{\omega}]_\times^2 e_3\,\frac{\omega^\top}{\|\omega\|^3}\,\frac{\partial}{\partial \omega}\left(-\sin(\|\omega\|\delta_t) + \|\omega\|\delta_t\cos(\|\omega\|\delta_t)\right)\right] + \left[[\hat{\omega}]_\times^2 e_3\left(-\sin(\|\omega\|\delta_t) + \|\omega\|\delta_t\cos(\|\omega\|\delta_t)\right)\frac{\partial}{\partial \omega}\frac{\omega^\top}{\|\omega\|^3}\right]  (4.90)

\frac{\partial T_5}{\partial \omega} = \left[\frac{\partial\, [\hat{\omega}]_\times e_3}{\partial \omega}\left(-(\|\omega\|\delta_t\cos(\|\omega\|\delta_t) - 2\sin(\|\omega\|\delta_t)) - \delta_t\|\omega\|\right)\frac{\omega^\top}{\|\omega\|^4}\right] + \left[[\hat{\omega}]_\times e_3\,\frac{\omega^\top}{\|\omega\|^4}\,\frac{\partial}{\partial \omega}\left(-(\|\omega\|\delta_t\cos(\|\omega\|\delta_t) - 2\sin(\|\omega\|\delta_t)) - \delta_t\|\omega\|\right)\right] + \left[[\hat{\omega}]_\times e_3\left(-(\|\omega\|\delta_t\cos(\|\omega\|\delta_t) - 2\sin(\|\omega\|\delta_t)) - \delta_t\|\omega\|\right)\frac{\partial}{\partial \omega}\frac{\omega^\top}{\|\omega\|^4}\right]  (4.91)

\frac{\partial T_6}{\partial \omega} = -[e_3]_\times\left\{\frac{\partial}{\partial \omega}\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\left[\frac{\delta_t - \frac{1}{\|\omega\|}\sin(\|\omega\|\delta_t)}{\|\omega\|}\right]\right\} - [e_3]_\times\left\{\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\frac{\partial}{\partial \omega}\left[\frac{\delta_t - \frac{1}{\|\omega\|}\sin(\|\omega\|\delta_t)}{\|\omega\|}\right]\right\}  (4.92)

\frac{\partial T_7}{\partial \omega} = \left[\frac{\partial\, [\hat{\omega}]_\times^2 e_3}{\partial \omega}\left(-(\|\omega\|\delta_t\sin(\|\omega\|\delta_t) + 2\cos(\|\omega\|\delta_t)) + 2\right)\frac{\omega^\top}{\|\omega\|^4}\right] + \left[[\hat{\omega}]_\times^2 e_3\,\frac{\omega^\top}{\|\omega\|^4}\,\frac{\partial}{\partial \omega}\left(-(\|\omega\|\delta_t\sin(\|\omega\|\delta_t) + 2\cos(\|\omega\|\delta_t)) + 2\right)\right] + \left[[\hat{\omega}]_\times^2 e_3\left(-(\|\omega\|\delta_t\sin(\|\omega\|\delta_t) + 2\cos(\|\omega\|\delta_t)) + 2\right)\frac{\partial}{\partial \omega}\frac{\omega^\top}{\|\omega\|^4}\right]  (4.93)

\frac{\partial T_8}{\partial \omega} = \left(I\frac{\partial}{\partial \omega}\hat{\omega}^\top e_3 + \frac{\partial}{\partial \omega}\hat{\omega}e_3^\top - 2e_3\frac{\partial}{\partial \omega}\hat{\omega}^\top\right)\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\left[\frac{\delta_t^2}{2} + \frac{\cos(\|\omega\|\delta_t)-1}{\|\omega\|^2}\right] + \left(I\hat{\omega}^\top e_3 + \hat{\omega}e_3^\top - 2e_3\hat{\omega}^\top\right)\frac{\partial}{\partial \omega}\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\left[\frac{\delta_t^2}{2} + \frac{\cos(\|\omega\|\delta_t)-1}{\|\omega\|^2}\right] + \left(I\hat{\omega}^\top e_3 + \hat{\omega}e_3^\top - 2e_3\hat{\omega}^\top\right)\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\frac{\partial}{\partial \omega}\left[\frac{\delta_t^2}{2} + \frac{\cos(\|\omega\|\delta_t)-1}{\|\omega\|^2}\right]  (4.94)

The Hessian $J_{xu}$ is represented as

J_{xu} = \frac{\partial}{\partial u}(J_x) = \frac{\partial}{\partial u}\begin{bmatrix} \frac{\partial Q}{\partial Q_0}\eta(Q_0) & \frac{\partial Q}{\partial v_0} & \frac{\partial Q}{\partial p_0} \\ \frac{\partial v}{\partial Q_0}\eta(Q_0) & \frac{\partial v}{\partial v_0} & \frac{\partial v}{\partial p_0} \\ \frac{\partial p}{\partial Q_0}\eta(Q_0) & \frac{\partial p}{\partial v_0} & \frac{\partial p}{\partial p_0} \end{bmatrix}  (4.95)

where

\frac{\partial}{\partial a_z}\frac{\partial v}{\partial v_0} = \frac{\partial}{\partial a_z}\frac{\partial v}{\partial p_0} = \frac{\partial}{\partial a_z}\frac{\partial p}{\partial v_0} = \frac{\partial}{\partial a_z}\frac{\partial p}{\partial p_0} = O_{3\times 3\times 1}  (4.96)

\frac{\partial}{\partial \omega}\frac{\partial v}{\partial v_0} = \frac{\partial}{\partial \omega}\frac{\partial v}{\partial p_0} = \frac{\partial}{\partial \omega}\frac{\partial p}{\partial v_0} = \frac{\partial}{\partial \omega}\frac{\partial p}{\partial p_0} = O_{3\times 3\times 3}  (4.97)

\frac{\partial}{\partial a_z}\frac{\partial Q}{\partial v_0} = \frac{\partial}{\partial a_z}\frac{\partial Q}{\partial p_0} = O_{4\times 3\times 1}  (4.98)

\frac{\partial}{\partial a_z}\frac{\partial Q}{\partial Q_0} = O_{4\times 4\times 1}  (4.99)

\frac{\partial}{\partial \omega}\frac{\partial Q}{\partial v_0} = \frac{\partial}{\partial \omega}\frac{\partial Q}{\partial p_0} = O_{4\times 3\times 3}  (4.100)

\frac{\partial}{\partial a_z}\frac{\partial v}{\partial Q_0} = \frac{\partial v}{\partial Q_0}\Big/ a_z  (4.101)

\frac{\partial}{\partial a_z}\frac{\partial p}{\partial Q_0} = \frac{\partial p}{\partial Q_0}\Big/ a_z  (4.102)

\frac{\partial}{\partial \omega}\frac{\partial Q}{\partial Q_0} = \begin{bmatrix} \frac{\partial p_0}{\partial \omega} & -\frac{\partial p^\top}{\partial \omega} \\ \frac{\partial p}{\partial \omega} & \frac{\partial}{\partial \omega}\left(p_0 I - [p]_\times\right) \end{bmatrix} + \begin{bmatrix} -q^\top \\ [q]_\times + q_0 I \end{bmatrix}\left[\frac{\partial}{\partial Q_0}(R\hat{\omega})\left(\frac{\delta_t}{2}\cos\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\omega^\top}{\|\omega\|}\right) + \sin\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\partial}{\partial \omega}\frac{\partial}{\partial Q_0}(R\hat{\omega})\right]  (4.103)

where

\frac{\partial p_0}{\partial \omega} = -\frac{\delta_t}{2}\sin\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\omega^\top}{\|\omega\|}  (4.104)


\frac{\partial p}{\partial \omega} = R\left[\frac{\omega}{\|\omega\|}\left(\frac{\delta_t}{2}\cos\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\omega^\top}{\|\omega\|}\right) + \sin\left(\frac{\|\omega\|}{2}\delta_t\right)\left(\frac{I}{\|\omega\|} - \frac{\omega\omega^\top}{\|\omega\|^3}\right)\right]  (4.105)

\frac{\partial}{\partial \omega}\left(p_0 I - [p]_\times\right) = -I\frac{\delta_t}{2}\sin\left(\frac{\|\omega\|}{2}\delta_t\right)\frac{\omega^\top}{\|\omega\|} - \frac{\partial}{\partial \omega}[p]_\times  (4.106)

and

\frac{\partial}{\partial \omega}\frac{\partial v}{\partial Q_0} = \frac{d^2(Rk_1)}{d\omega\, dQ}  (4.107)

\frac{\partial}{\partial \omega}\frac{\partial p}{\partial Q_0} = \frac{d^2(Rk_2)}{d\omega\, dQ}  (4.108)

where

\frac{d^2(Rk)}{dv\, dQ} = \left[\left(q_0 I + [q]_\times\right)\frac{dk}{dv},\;\; I_{RC}\circledast\left(q^\top\frac{dk}{dv}\right) - \frac{dk}{dv}\circledast q^\top + q\left(\frac{dk}{dv}\right)^\top - q_0\left[\frac{dk}{dv}\right]_\times\right]  (4.109)

$k_1$ and $k_2$ can be found using Equations 4.28 and 4.29.

Note: in Equation 4.107, the quantity on the right-hand side is a 3D tensor. As laid out in Equation 4.109, the left-most matrix in the tensor is the first Row-Column slice, and the latter matrix is again a tensor lying behind that first Row-Column slice.

Note: for computing $J_{xu}$, $\eta(Q_0)$ has to be post-multiplied onto all of the terms related to $\frac{\partial Q}{\partial Q_0}$, $\frac{\partial v}{\partial Q_0}$, and $\frac{\partial p}{\partial Q_0}$.
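For completeness, the projector of Equation 4.33 reduces to a one-line helper; this sketch assumes the quaternion is stored as a 4-vector $q = [q_0, q_x, q_y, q_z]^\top$.

#include <Eigen/Dense>

// Tangent-space projector eta(Q0) = I - q q^T / ||q||^2 (Eq. 4.33).
Eigen::Matrix4d eta(const Eigen::Vector4d& q) {
  return Eigen::Matrix4d::Identity() - (q * q.transpose()) / q.squaredNorm();
}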

4.1.3 Special Note on Angular Velocity

The angular velocity term ω can make both system models numerically unstable when its magnitude is zero or close to zero. One solution that keeps the designed systems stable is to use approximations wherever necessary to eliminate the ‖ω‖ term when it appears in a denominator. As ‖ω‖ increases, however, the approximation no longer holds. Hence there is a trade-off as to where one should use the approximation model versus the closed-form solution model. For the purpose of this thesis, the trade-off margin was calculated numerically using Matlab, and it is applied as follows.

1: procedure UseModels
2:     SystemStates ← CurrentX
3:     SystemControls ← CurrentU
4:     if ‖ω‖ > Margin then use the closed-form solution model
5:     if ‖ω‖ ≤ Margin then use the approximation model

For the purpose of this thesis, the calculated margin values for the Husky model and the drone model are 0.08 and 0.015, respectively. The following Taylor series approximations are used wherever necessary.

\sin(\|\omega\| t) = \|\omega\| t - \frac{\|\omega\|^3 t^3}{6} + \frac{\|\omega\|^5 t^5}{120}  (4.110)

\cos(\|\omega\| t) = 1 - \frac{\|\omega\|^2 t^2}{2} + \frac{\|\omega\|^4 t^4}{24} - \frac{\|\omega\|^6 t^6}{720}  (4.111)

\sin\left(\frac{\|\omega\|}{2} t\right) = \frac{\|\omega\| t}{2} - \frac{\|\omega\|^3 t^3}{48} + \frac{\|\omega\|^5 t^5}{3840}  (4.112)

\cos\left(\frac{\|\omega\|}{2} t\right) = 1 - \frac{\|\omega\|^2 t^2}{8} + \frac{\|\omega\|^4 t^4}{384} - \frac{\|\omega\|^6 t^6}{46080}  (4.113)
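As an illustration of the switching rule, the sketch below guards one representative scalar that recurs in the closed-form terms, $\sin(\|\omega\| t)/\|\omega\|$; above the margin the closed form is used, and below it the series of Equation 4.110 (divided through by ‖ω‖) takes over. The function name and the margin plumbing are assumptions for illustration.

#include <cmath>

// Guarded evaluation of sin(||w|| t) / ||w||. For the Husky model the margin
// would be 0.08 and for the drone model 0.015, per the text above.
double sinOverNorm(double wn, double t, double margin) {
  if (wn > margin) return std::sin(wn * t) / wn;   // closed-form branch
  const double w2 = wn * wn, t2 = t * t;
  // Series branch: t - ||w||^2 t^3 / 6 + ||w||^4 t^5 / 120 (from Eq. 4.110).
  return t - w2 * t * t2 / 6.0 + w2 * w2 * t * t2 * t2 / 120.0;
}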

Having defined the robot models and computed the necessary terms, we define the control paradigm in the next section.
