
Automated Recognition of Human Activity

A Practical Perspective of the State of Research

Hampus Hansson and Martin Gyllström

Computer Science with specialization in Computer Application Development
Bachelor level (BSc)
15 ECTS, Spring 2021

Supervisor: Carl Magnus Olsson
Examiner: Sergei Dytckov

Date of final seminar: 2020-05-31

Abstract

The rapid development of sensor technology in smartphones and wearable devices has drawn research attention to the area of human activity recognition (HAR). As a phase of HAR, applying classification models to collected sensor data is well researched, and many different models can recognize activities successfully. Furthermore, some methods give successful results using only one or two sensors. The use of HAR within pain management is also an existing research field, but applying HAR to the pain treatment strategy of acceptance and commitment therapy (ACT) is not well documented. The relevance of HAR in this context is that ACT's core ideas are based on the perspective that daily life activities are connected to pain. In this thesis, state-of-the-art examples of sensor-based HAR applicable to ACT are provided through a literature review. Based on these findings, the practical use is assessed in order to provide a perspective on the current state of research.


Table of Contents

1. Introduction
2. Background
   2.1 Research domain and problem description
   2.2 Research setting
3. Method
   3.1 Research objectives
   3.2 Research approach
4. Literature review
   4.1 Literature review conclusions
5. Workshop results
   5.1 General opinions
   5.2 Opinions on smartphone sensor articles
   5.3 Opinions on wrist-worn wearable articles
6. Discussion
   6.1 Sensors
   6.2 Third-party data and SDKs
   6.3 Intelligence
7. Conclusions and future work
References


1. Introduction

Multitudes of digital health applications are released every day [1], aiming to support healthcare with technology [2]. With an estimated 20 percent of the world's adult population suffering from chronic pain [3], the high prevalence is indisputable.

Chronic pain is commonly defined as recurrent or long-lasting pain [2], and research shows that a fifth of all people with chronic pain have suffered for 20 years or more [5]. Common outcomes include unemployment or inability to work outside the home, sleeping difficulties, and depression.

Several digital health applications offer self-management of pain through self-reporting [2]. This self-managing approach has been described as an opportunity to improve the general field of pain management through its accessibility and efficiency. However, the method has its drawbacks. One challenge is finding ways to keep users engaged and motivated, as well as ensuring the accuracy of self-reported data. Issom et al. [1] proposed using automatic data collection to enhance the quality and value of self-management applications. Suitably, the research setting of this thesis is a self-management application based on a treatment strategy connecting pain to daily activities. In this setting, utilizing sensor data from smartphones and wearable devices as an input source for human activity recognition (HAR) is highly relevant.

The use of smartphone and wearable technology in the context of pain management has been explored consistently in existing research. Chan et al. [28] presented a way to assess pain based on gait patterns recognized by a smartphone accelerometer. Zhang et al. [27] used accelerometer, gyroscope, and magnetometer data to analyze postures for pain prevention among construction workers. Within HAR, studies have successfully classified daily activities. Lu et al. [16] were able to classify activities such as vacuuming and sweeping based on accelerometer and gyroscope data. Kwon et al. [24] used accelerometer and position data to classify 11 activities, for example office work and washing dishes. However, applying these classification techniques within the specific context of activity-based pain management remains unexplored.

The purpose of this study is to identify state-of-the-art examples of sensor-based HAR. In order to provide a practical perspective on the current state of research, the identified examples and findings are then discussed with the creators and developers of PainDrainer (a novel chronic pain management application) to understand differences in perspective. With a growing body of research associated with sensor-based technology, identifying practical relevance and needs in relation to the current state of research is the main contribution of this work.

To reach this, a study with three phases is conducted. First, relevant research is identified through a literature review, and the results are synthesized to describe the sensor technology and device types, as well as system intelligence and main results. Second, a company-wide workshop is conducted based on these results. Finally, further practical design implication needs are identified based on the workshop outcomes, and these are contrasted with current research.


2. Background

This section explains the research domain and problem description, as well as the research setting.

2.1 Research domain and problem description

Self-reporting is a common method for gathering data from users within contexts such as user studies and applications. It is a subjective measure in which participants are asked to report data manually. Within healthcare, self-reporting is often used to recognize patients' needs and understand how to support their conditions more efficiently [9] [10]. In the context of pain, using a scale to subjectively quantify and report pain intensity during the day is one practical example [8]. Digital health applications based on self-reporting present an opportunity to support health professionals in disease assessment and to enhance doctor-patient interactions.

There are challenges that come with self-reporting within digital health applications. One challenge is ensuring the accuracy of the self-reported data, since manually entered information may be imprecise; misunderstandings and unclear questions may also reduce the quality and accuracy of the data. Another challenge is engagement and motivation, i.e. users losing interest in digital health applications over time. Safi et al. [15] explore this issue, with results indicating that only 24% of users used their application for more than a year. As a consequence, the chances of health improvement might decrease. In order to increase motivation and minimize inaccuracies within self-reporting, Adams et al. [2] suggest four cognitive components to consider when designing a self-reporting application or survey for chronic pain patients:

• The patient’s ability to understand the focus and meaning of the questions (comprehension)

• The patient’s ability to recall generic and specific memories for the information related to the question, and decrease the risk of missing significant details (retrieval)

• The patient’s ability to summarize and integrate the retrieved information, draw a conclusion, and assess their situation (judgment)

• The patient’s ability to map their judgment onto a response category or value, with a possibility to edit the response if necessary (response)

Using a self-reporting method is common within self-management applications. In a health context, self-management is described as taking control of one's personal situation in order to improve quality of life [8]. Within pain management, the hope is to reduce the disease's impact on the overall life situation, including physical health status. Applications offering self-management of pain vary from focusing on specific pain types [1] [11] to taking a wider approach that handles many kinds of pain-related problems [12]. The quality and accuracy of these applications are often hard to measure, and some studies have questioned their practical worth. Concerns include the lack of professional research behind them, and the failure to consult health service personnel while developing these applications [2] [11] [12] [13]. In conclusion, the challenges of self-management and self-reporting range from data accuracy assurance and motivation to trustworthiness and reliability.

2.2 Research setting

This study is conducted within the context of the digital health application PainDrainer. It is a self-management application for people with chronic pain and is created around a treatment strategy called acceptance and commitment therapy (ACT) [35]. ACT is a type of cognitive behavior therapy involving mindfulness and acceptance, with the six core processes defined as the following [36]:

• Accepting events and consequent feelings without struggling to change them

• Stepping back and observing one’s thoughts

• Experiencing psychological events in a non-judgmental manner by bringing enhanced awareness

• Becoming aware of one’s personal experiences

• Understanding what is really important and identifying important directions for life

• Taking concrete actions that fulfill personal values

Figure 1. Screenshots depicting PainDrainer’s self-reporting form in overview: the left picture shows an empty form, and the right picture shows the form with a couple of parameters filled in.

The last two processes are particularly in line with the core ideas of PainDrainer. On a daily basis, the user is meant to fill in data on six kinds of activities performed during the day: sleep, work, physical activity, housework, leisure, and rest. The user fills in the time duration and satisfaction (on a scale from 1 to 5) for all six activities. For work, physical activity, housework, and leisure, there is also a parameter for the intensity of the activity (scale 1-5). Lastly, the user enters data about their pain: lowest pain (scale 0-10), highest pain (scale 0-10), time duration of the highest pain, and average pain (scale 0-10). This procedure allows the artificial intelligence (AI) within the application to recommend a plan based on the six activities mentioned above, in order to reach a selected average pain goal (scale 0-10). The end goal is helping people with chronic pain find a balance between pain and activity, as well as having the energy to do what they value the most [37].
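As an illustration, the daily report described above could be modeled as a small data structure. The following Python sketch is our own: field names, types, and validation rules are assumptions made for illustration and do not reflect PainDrainer's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of one daily PainDrainer self-report entry.
ACTIVITIES = ("sleep", "work", "physical activity", "housework", "leisure", "rest")
WITH_INTENSITY = {"work", "physical activity", "housework", "leisure"}

@dataclass
class ActivityReport:
    activity: str                    # one of ACTIVITIES
    duration_hours: float
    satisfaction: int                # scale 1-5
    intensity: Optional[int] = None  # scale 1-5, only for WITH_INTENSITY activities

    def __post_init__(self):
        if self.activity not in ACTIVITIES:
            raise ValueError(f"unknown activity: {self.activity}")
        if not 1 <= self.satisfaction <= 5:
            raise ValueError("satisfaction must be on a 1-5 scale")
        if self.activity in WITH_INTENSITY:
            if self.intensity is None or not 1 <= self.intensity <= 5:
                raise ValueError(f"{self.activity} requires an intensity rating (1-5)")

@dataclass
class PainReport:
    lowest: int                    # scale 0-10
    highest: int                   # scale 0-10
    highest_duration_hours: float  # duration of the highest pain
    average: int                   # scale 0-10

    def __post_init__(self):
        if not all(0 <= v <= 10 for v in (self.lowest, self.highest, self.average)):
            raise ValueError("pain values use a 0-10 scale")
```

A model like this makes the scales explicit and rejects out-of-range input at entry time, which relates directly to the data accuracy challenge discussed earlier.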

Figure 2. Screenshots of PainDrainer’s self-reporting model: the left picture is an example of an activity input form, and the right picture shows the form that provides AI recommendations for planning a day.

In its current state, PainDrainer uses a self-reporting model. Therefore, the application faces the same challenges described in the previous section, i.e. user motivation and data accuracy issues. The company is discussing automatic data collection as a step towards solving these issues, in line with the proposal of Issom et al. [1].

With PainDrainer's ambition of expanding its services to smartphone and wearable devices, countless possibilities featuring a variety of different sensors open up. Since the application is centered around human activity, the relevance of sensor-based human activity recognition is strong.


3. Method

In this section, the research objectives are identified, as well as the research approach.

3.1 Research objectives

For this study, two research objectives are defined:

• Identifying state-of-the-art examples for sensor-based human activity recognition (HAR)

• Providing a perspective to the current state of research by assessing and evaluating the practical use of the identified HAR examples

3.2 Research approach

Overall, this work follows design-science research [26] using a descriptive and observational approach for design evaluation. Design-science research has its background in engineering and artifact creation [6], and is characterized by the focus on problem solving. It also recognizes contributions to practice as a commonly emphasized element. Peffers et al. [38] outline six main phases in what they describe as the design science research methodology (DSRM). This study involves three of the phases: designing and developing an artifact, demonstrating the use of the artifact to solve one or more instances of a problem, and evaluating how well the artifact solves the problem. The research entry is in the phase of design and development, also known as design and development-centered initiation.

Furthermore, iterations between the phases of DSRM are suggested, but this study covers only one iteration. Further iterations can be carried out by the company going forward, though this is not within the scope of this work; instead, this thesis covers the initial iteration of a company interested in working with human activity recognition technology. In conclusion, the company is in a DSRM cycle, and this work focuses on a few parts within the company's greater process.

In this study, the design artifact is a theoretical model consisting of summarizing tables from a literature review. The literature review focuses on articles within sensor-based HAR applicable to the research setting of this study. For the demonstration and evaluation phases, the design artifact along with an analysis is presented to a collaborating industrial partner interested in exploring options for product development using sensor-based technology as a complement to the current user input based on self-reporting.


4. Literature review

Concerning the gathering of data, an alternative to the self-reporting approach is to use automated methods. The combination of digital health applications and the ever-growing possibilities of sensor technology presents an opportunity to automatically recognize activities by analyzing sensor data from the user. Gupta et al. [14] identify three popular approaches for human activity recognition (HAR): video-based pose estimation, wearable-based, and smartphone-based. However, as the video-based approach only suits applications within controlled environments [19], it is not as relevant as the other two.

In order to identify the state of the art of HAR within smartphones and wearable devices, three tables are created containing relevant examples. These tables represent the main findings of the literature review conducted in this study. Table 1 shows articles using smartphones, Table 2 addresses studies working with wrist-worn wearables, and Table 3 presents studies combining the use of both. The key aspects taken into consideration in the literature review are the following:

• Which sensors are used?

• How is the data collected and where is the device placed?

• What intelligence is applied to the data?

• What result is achieved and/or what activities are classified?

The data extracted based on these questions are placed in the tables, each containing five columns: "Study", "Data type", "Data source", "Intelligence", and "Result". The "Study" column identifies the article and its authors. The "Data type" column contains information on the sensor types and additional data used in the HAR process of the article. "Data source" is dedicated to data collection methods and device placement; this column also specifies whether the data was collected as part of the study or acquired from a public dataset. "Intelligence" presents the classification models used for HAR in the study. Finally, "Result" describes the achievement of the article, most commonly the activities that were classified.

Regarding the HAR process, there are several phases to consider [4]. The first phase is collecting data from sensors, which is represented in the tables. The following phases are signal processing, as well as feature extraction and selection. Signal processing is the segmentation and cleaning of the collected data, including steps such as noise filtering. Feature extraction is reducing the collected data to informative values, and feature selection is deciding which of these values to use. The final phase is data classification, i.e. categorizing the collected data, for which many models are available. Classification models are represented in the tables, as opposed to signal processing, feature extraction, and feature selection: the aim of the literature review is to provide a simple and introductory overview of the HAR process, and as these phases hold technical complexity, they are excluded from the tables. Classification is also a complex process, but due to the comparative nature of many reviewed studies, as well as their differing results, it may be of interest to introduce some familiarity with the options available. Furthermore, since classification is the final phase of HAR and delivers the final result, it might be considered essential.
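The phases described above can be sketched end to end. The following Python example is a minimal, self-contained illustration using synthetic accelerometer magnitudes; the simple nearest-centroid classifier merely stands in for the models discussed in the reviewed articles (SVM, Random Forest, and so on), and all function names, thresholds, and data here are our own assumptions.

```python
import numpy as np

def sliding_windows(signal, size, step):
    """Segment a 1-D signal into fixed-length windows (signal processing)."""
    return np.array([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

def smooth(window, k=3):
    """Moving-average noise filter applied to one window."""
    return np.convolve(window, np.ones(k) / k, mode="same")

def extract_features(window):
    """Reduce a window to a few informative values (feature extraction)."""
    w = smooth(window)
    return np.array([w.mean(), w.std(), np.abs(np.diff(w)).mean()])

def fit_centroids(X, y):
    """'Train': one mean feature vector per activity class."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def classify(features, centroids):
    """Assign a window's features to the nearest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))

# Synthetic accelerometer magnitudes: 'still' is low-variance noise,
# 'walking' oscillates with a clear periodic component.
rng = np.random.default_rng(0)
t = np.arange(400)
still = 1.0 + 0.02 * rng.standard_normal(400)
walking = 1.0 + 0.5 * np.sin(t / 3) + 0.05 * rng.standard_normal(400)

windows = np.vstack([sliding_windows(still, 50, 50),
                     sliding_windows(walking, 50, 50)])
labels = np.array(["still"] * 8 + ["walking"] * 8)
X = np.array([extract_features(w) for w in windows])

centroids = fit_centroids(X, labels)
prediction = classify(extract_features(walking[:50]), centroids)
```

The real pipelines in the reviewed articles differ in every phase (richer filters, larger feature sets, stronger classifiers), but the overall structure of window, filter, extract, and classify is the same.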


Table 1. Smartphones

Tran and Phan (2016) [18]
  Data type:    Accelerometer and gyroscope
  Data source:  Android application, with the device placed in an unspecified pocket of the subjects
  Intelligence: Support-Vector Machine (SVM)
  Result:       Classifying 6 activities: walking, standing, sitting, lying down, ascending and descending stairs

Hussein et al. (2017) [32]
  Data type:    Accelerometer and gyroscope
  Data source:  One dataset of raw sensor data, with the smartphone worn on the waist of the subjects
  Intelligence: Random Forest (RF)
  Result:       Classifying 6 activities: walking, standing, sitting, lying down, ascending and descending stairs

San Buenaventura and Tiglao (2017) [29]
  Data type:    Accelerometer, gyroscope, and magnetometer
  Data source:  One dataset of raw sensor data, with the Android smartphone placed in four different locations measured separately: right trouser pocket, belt, right arm, and right wrist
  Intelligence: Nearest Neighbor and Decision Tree
  Result:       Classifying 6 activities: walking, running, sitting, standing, ascending and descending stairs

Jain and Kanhangad (2018) [19]
  Data type:    Accelerometer and gyroscope
  Data source:  Two datasets of raw sensor data: one with an Android smartphone attached to a waist belt on the subjects, and one with an iPod Touch placed in the front trouser pocket of the subjects
  Intelligence: SVM and K-Nearest Neighbor (k-NN)
  Result:       Classifying 6 activities and various intensities of these: walking, standing, sitting, lying down, ascending and descending stairs

RoyChowdhury et al. (2018) [31]
  Data type:    Accelerometer
  Data source:  Android application, with no information on device placement
  Intelligence: SVM, k-NN, Decision Tree, Bagged Trees, and Boosted Trees
  Result:       Classifying 12 activities: sitting on a chair, sitting on the floor, lying on different sides, jogging, running, walking at three intensities, etc.

Asim et al. (2020) [33]
  Data type:    Accelerometer
  Data source:  One dataset of raw sensor data, with the device used naturally
  Intelligence: RF, Bagging, Decision Tree, k-NN, SVM, and Naïve Bayes
  Result:       Classifying 15 activities: lying down and sleeping, lying down and watching TV, standing outdoors, standing and talking, sitting in a car, sitting and surfing the Internet, etc.

Chan et al. (2013) [28]
  Data type:    Accelerometer
  Data source:  Self-made iPhone application, with the device placed on the lower back of the subjects
  Intelligence: SVM, k-NN, Multilayer Perceptron (MLP), Decision Tree
  Result:       Classifying people as pain-free or suffering from lower back pain based on gait patterns

Zhang et al. (2017) [27]
  Data type:    Accelerometer, gyroscope, and magnetometer
  Data source:  One dataset of raw sensor data, with the device placed on the upper back of the subject
  Intelligence: Mathematical calculations converting raw sensor data into angular units (degrees), assessed against defined at-risk angles
  Result:       Classifying safe and at-risk ergonomic situations by assessing posture angles


Table 2. Wrist-worn wearables

Shahmohammadi et al. (2017) [20]
  Data type:    Accelerometer and gyroscope
  Data source:  Self-made Android smartwatch application, with no information on device placement
  Intelligence: Random Forest (RF), Extra Trees, Naïve Bayes, Logistic Regression (LR), and Support-Vector Machine (SVM)
  Result:       Classifying 5 activities based on the subjects' own tagging: standing, sitting, lying down, walking, and "none"

Kongsil et al. (2019) [21]
  Data type:    Accelerometer and gyroscope
  Data source:  Two datasets of raw sensor data: one with an Android smartphone attached to the right wrist of the subjects, and one with an Android smartwatch with no information on device placement
  Intelligence: SVM, K-Nearest Neighbor (k-NN), Decision Tree, RF, Linear Discriminant Analysis, and Naïve Bayes
  Result:       Classifying 7 activities: standing, sitting, lying, walking, running, ascending and descending stairs

Kwon et al. (2018) [24]
  Data type:    Accelerometer and activity information, such as position
  Data source:  Self-made Apple Watch application, with the device placed on the dominant wrist of the subjects
  Intelligence: Convolutional Neural Network
  Result:       Classifying 11 activities of daily living: resting, washing dishes, office work, playing a computer game, eating, writing, etc.

Lu et al. (2019) [16]
  Data type:    Accelerometer and gyroscope
  Data source:  Two datasets of raw sensor data, both with the wearable device attached to the right wrist of the subjects
  Intelligence: Gradient Boosting Decision Tree, RF, SVM, k-NN, and LR
  Result:       Classifying 13 activities in the first dataset and 12 in the second, all daily life activities: vacuuming, lying, doing forward waist bends, rope jumping, sweeping, treadmill running, etc.

Akbari et al. (2021) [17]
  Data type:    Accelerometer, gyroscope, and Bluetooth Low Energy (BLE) data
  Data source:  Self-made Android smartwatch application, with no information on device placement
  Intelligence: SVM, RF, and Naïve Bayes
  Result:       Detecting change in daily life activities based on context, and presenting a list for users to label activities upon change detection

4.1 Literature review conclusions

In the matter of sensor usage, the accelerometer is by far the most used sensor in the reviewed literature, with all articles using accelerometry data. The gyroscope comes in second, with 12 out of 17 articles using data from that sensor. Additionally, 11 articles restrict sensor usage to only the accelerometer and gyroscope: eight articles use both, and three use only accelerometer data. Together with magnetometers, these sensors are commonly referred to as inertial sensors.

Table 3. Smartphones and wrist-worn wearables

Nandy et al. (2019) [25]
  Data type:    Smartphone accelerometer and wearable heart rate sensor
  Data source:  One dataset of raw sensor data, with the smartphone placed in a trouser pocket and the wearable device attached to the chest of the subjects
  Intelligence: Support-Vector Machine (SVM), Linear Regression, Decision Tree, Bagged Trees, K-Nearest Neighbor (k-NN), Multilayer Perceptron, Naïve Bayes, and an ensemble of all of the above
  Result:       Classifying 6 activities: walking, standing, climbing stairs, and the intensity of these

Shoaib et al. (2015) [22]
  Data type:    Accelerometer and gyroscope
  Data source:  Self-made Android application, with the smartphone placed in the right trouser pocket and the wearable device on the dominant wrist of the subjects
  Intelligence: SVM, k-NN, and Decision Tree
  Result:       Classifying 13 activities for bad habit detection: smoking, drinking coffee, eating, etc.

Mekruksavanich and Jitpattanakul (2020) [23]
  Data type:    Accelerometer and gyroscope
  Data source:  One dataset of raw sensor data, with the smartwatch placed on the dominant wrist of the subjects and no information on smartphone placement
  Intelligence: Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and a CNN-LSTM hybrid
  Result:       Classifying 18 activities of daily living: jogging, typing, eating various foods, brushing teeth, doing various sports, etc.

Vaizman et al. (2017) [34]
  Data type:    Accelerometer, gyroscope, magnetometer, GPS location, microphone, and phone-state indicators
  Data source:  Self-made Android and iPhone applications with the device used naturally, paired with a Pebble smartwatch with no information on its placement
  Intelligence: Linear Regression
  Result:       Providing a dataset of raw sensor data collected in-the-wild, self-classified by users into detailed classes, with an additional probability feature of what activity is performed

San Buenaventura and Tiglao [29] compare the classification accuracies of inertial sensors, reaching several interesting conclusions. Firstly, they conclude that the combination of accelerometer and gyroscope data reaches the highest accuracy of all sensor combinations, including the combination of all three sensors; thus, data from more sensors does not necessarily mean higher classification accuracy. Finally, they conclude that using accelerometer data by itself results in higher classification accuracy than solely using gyroscope or magnetometer data [29]. Other examples in the tables [33] have also reached high classification accuracies using only accelerometer data. RoyChowdhury et al. [31] further underline the availability of accelerometer sensors in smartphones, and that using only one sensor "makes the system cost-effective and ubiquitous".


Evidently, much is possible using only a few sensors when gathering data. In the articles presented in Table 1, a range of static and dynamic activities have been classified using smartphone accelerometer data alone. RoyChowdhury et al. [31] classify 12 activities, including three intensities of walking, and Asim et al. [33] classify 15 context-aware activities. Furthermore, using few sensors decreases the risk of collecting redundant data from different sensors, which might lead to more successful classification results [29]. Other advantages include saving battery and memory, an important aspect in long-term activity tracking, since smartphones usually have multiple power-consuming services running simultaneously.

The results presented in the "Intelligence" column show a diversity of classification models across the articles, as many different methods have reached high accuracies. Several articles compare the accuracies of different classification models [20] [25] [32] [33] in order to draw conclusions from the results. These results and conclusions tend to vary distinctly between the studies, and thus a process of trial and error is to be expected when building a successful HAR system.
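The trial-and-error process just described can be illustrated with a small model comparison. The sketch below uses scikit-learn on synthetic data; the feature set, candidate models, and parameters are illustrative assumptions and do not reproduce any reviewed study's setup.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic data standing in for extracted window features
# (3 activity classes, 6 features per window).
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)

# Candidate classifiers, as in the comparative studies reviewed above.
candidates = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

# Compare models by mean cross-validated accuracy before committing to one.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
```

Which model wins depends on the data, which is precisely why the reviewed studies reach differing conclusions.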

Concerning the results of classification, Hussein et al. [32] conclude that dynamic activities have better potential of reaching high accuracies than static activities, explaining that differences between actions are exaggerated by body movement. Furthermore, the use of wearable devices presents a greater opportunity to classify complex activities [16] [24]. Combining smartphone and wearable devices further enhances this possibility [23], e.g. for activities involving hand movements [22]. However, regarding the activities mentioned in the research setting section, it might be a challenge to classify some of them, such as leisure, due to its vague definition and the difficulty of introducing a general one.

Decidedly, wrist-worn wearables offer a multitude of opportunities. Akbari et al. [17] present a change detection model based on accelerometer, gyroscope, and Bluetooth Low Energy (BLE) data. The BLE device collects contextual data in order to assess the user's environment, which helps detect changes in activity. Upon detecting a change, the user is presented with an activity list and is expected to select the activity performed at the moment. The article seems to focus on the change detection mechanism, and thus the use of the self-reported data is unknown. The concept of presenting an activity list in the wearable device as a form of self-reporting is brought up in another article: Vaizman et al. [34] provide a self-reporting application, mainly suited for data labeling, with the feature of sorting the activity list by probability rate, i.e. the likelihood of the user performing a certain activity. This mechanism is made possible by a linear regression classification model.

In the context of pain management, two articles in Table 1 show interesting results. As opposed to the majority of the studies, the aim of these articles is not to classify activities. Chan et al. [28] describe the use of a smartphone accelerometer to collect data for assessing gait patterns; when analyzing these patterns, the classification model is able to detect whether a user suffers from lower back pain. Zhang et al. [27] successfully separate safe and at-risk ergonomic postures using accelerometer, gyroscope, and magnetometer sensors, in order to prevent pain for construction workers. However, due to the artificial placements of the devices, and the results being somewhat unrelated to the research setting, these articles mainly function as a basis for discussion and inspiration.


Finally, the "Data source" column shows a fairly equal distribution between studies that include data collection and studies that use an already available dataset. Some articles describe using datasets available to the public [16] [22], and this availability presents accessible opportunities for developers to test HAR systems during the implementation process. Vaizman et al. [34] describe and execute a data collection process "in-the-wild", resulting in a large dataset of manually labeled user data from 60 subjects. The study collects data from many different sensors: accelerometer, gyroscope, magnetometer, GPS, and microphone, as well as phone-state indicators. This dataset (also known as the ExtraSensory dataset) is publicly available, representing a strong motivation for testing HAR systems. During this "in-the-wild" process, data was collected continuously in real-life situations with the devices placed or worn naturally, i.e. in the preferred way of the user. However, apart from some studies [33] [34], many of the data collection processes described in the articles appear to have been conducted within controlled environments, resulting in highly concentrated data.


5. Workshop results

The workshop with PainDrainer provided qualitative data on the organization's thoughts in relation to the research addressed in the literature review of this study. During the workshop, discovered opportunities involving sensors in smartphone and wrist-worn wearable devices were presented. These opportunities were introduced to PainDrainer in three separate segments: one on smartphone sensors, one on wrist-worn wearable sensors, and one on the combination of both. After each segment, members of the PainDrainer organization expressed their thoughts on the presented topic. In this section, a summary of the workshop results is presented.

5.1 General opinions

A wish to combine and analyze data from multiple sensors to recognize complex activities was expressed, in alignment with some examples from the literature review. Consequently, the observation that state-of-the-art research predominantly uses only accelerometer and gyroscope data was met with critical reactions. Furthermore, using only one measuring point (e.g. the accelerometer) was not expected to be enough to draw a conclusion. Enthusiasm was articulated regarding the abilities of these sensors as well, but in the context of grouping their data with other parameters (such as GPS) rather than utilizing only these two sensors. However, PainDrainer does not want to do this kind of grouping themselves, as it would require too much effort. Instead, they wish for devices that come pre-programmed with classification technology, and would like to identify the existence and prevalence of such devices.

The grouping of data was commonly referred to as aggregated data during the workshop, and it was speculatively exemplified in the context of sleeping. In this example, the accelerometer and gyroscope would recognize the activity of lying down; in combination with the GPS registering no motion, this would result in the assumption that the user is sleeping or resting. After classification, the duration of the activity would be measured. Apart from the activity of sleeping, two other activities from PainDrainer's model were presumed to have already been successfully classified by commercial devices: physical activity and resting. Identifying and using the aggregated data and fitting it into the PainDrainer model was expressed as an ambition.
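The aggregated-data example from the workshop can be sketched as a simple rule-based fusion step. This is a purely hypothetical illustration of the idea, not PainDrainer's implementation; the posture labels and the motion flag are assumed inputs from an inertial classifier and from location updates, respectively.

```python
def classify_state(posture: str, gps_moving: bool) -> str:
    """Toy rule-based fusion of an inertial posture estimate and GPS motion.

    `posture` would come from an accelerometer/gyroscope classifier
    (hypothetical labels), `gps_moving` from location updates.
    """
    if posture == "lying" and not gps_moving:
        # Lying down with no displacement -> assume sleeping or resting.
        return "sleeping_or_resting"
    if gps_moving:
        return "moving"
    return "stationary"

print(classify_state("lying", False))
```

In a real system each branch would carry a confidence rather than a hard label, and the duration of the detected state would be accumulated afterwards, as described in the workshop.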

As the tables were introduced to the participants, a possible reason for the focus on accelerometer and gyroscope data in state-of-the-art research was presented: using as few sensors as possible saves battery life. This point was met with disagreement, with GPS cited as an example of a technology that went from extremely energy-consuming to energy-efficient within a few years. In the scenario of GPS being too battery-draining, alternative solutions for determining a user's location were suggested, for instance Wi-Fi or 4G. Based on the constant improvement of electronic devices, the battery life problem was written off as a transitional problem.

Concerns were raised about the nature of many studies being conducted in controlled, artificial settings. When the in-the-wild ExtraSensory dataset [34] was presented as an opportunity for testing, an initial critique of the data collection method was that it had too many data points and too few subjects. The desire to have it the other way around, with more subjects and fewer data points, was stated, with the argument that correlations can always be found within large datasets. The importance of validating classification performance with the leave-one-out experiments described in the article was underlined. Despite the initial critique, having separate testing and training data was described as a desirable approach, and a slight acknowledgment of the dataset's possible usability ultimately emerged.

When the idea of classifying the six activities of the PainDrainer model was mentioned, an organization member remarked that it "would be ideal if we could do it" while also recognizing the difficulty of the task. Another organization member specified that classifying all variables within PainDrainer is not the aim, and that having a few variables entered automatically would be a great accomplishment. Aside from the classification of these six activities, recognizing pain based on sensor data was declared feasible because of its connection to heart rate and sweat, both of which can be measured by sensors: it is "essentially a measurement of stress" and "pain and stress are very close to each other". Personalization issues were also addressed, recognizing that people perform activities differently; the assumption was therefore made that every user's classification procedure has to be personalized based on individual movement patterns.

5.2 Opinions on smartphone sensor articles

The possibility of "measuring different things" with smartphones was acknowledged, but the relevance of the presented articles for solving problems in general was questioned. This critique took issue with the articles focusing on what can technically be done instead of providing concrete purposes. As mentioned above, the trend of only using accelerometer and gyroscope data was heavily criticized, specifically in the context of the publication year of one article [28]. One organization member wrote off the tendency of using these specific sensors as a dated practice, commenting that "smartphones in 2013 were not very smart".

In contrast, another organization member brought up the use of sensors within games, with a concrete example of dance move recognition in the mobile version of Just Dance. He mentioned "accelerometer, and gyroscope, and alike" when raising the topic, although the sensors actually used in the dance move recognition example were not specified. On this basis, he stated that he was not surprised that advanced things can be done using only this type of data.

Although none of the articles in Table 1 claims to have classified bicycling, an organization member brought up a personally experienced issue within activity recognition: the difficulty of distinguishing between sitting and bicycling. She explained that a heart rate monitor was needed in this particular situation. This led to a question about how one of the studies [31] was able to classify different intensities of walking, and whether this is really possible without heart rate data. Another organization member emphasized that sitting still while bicycling at a constant speed and sitting still on a chair cannot be distinguished using only an accelerometer, and that this would require additional information such as GPS data. A third organization member added that calculating speed based on acceleration data and its time duration is possible, but presumed that GPS data is more commonly applied since it is easier to implement.
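The speed-from-acceleration idea mentioned by the third member amounts to numerically integrating acceleration samples over time. The sketch below illustrates this under strong simplifying assumptions (gravity-compensated samples aligned with the direction of travel); in practice, sensor noise makes such integration drift quickly, which is part of why GPS is easier to use.

```python
def estimate_speed(accel_samples, dt):
    """Estimate speed (m/s) by integrating acceleration (m/s^2) over time.

    Assumes the samples are gravity-compensated and aligned with the
    direction of travel; real IMU data drifts without correction.
    """
    v = 0.0
    for a in accel_samples:
        v += a * dt  # v(t + dt) = v(t) + a * dt
    return v

# Constant 1 m/s^2 for one second (10 samples at 100 ms) -> roughly 1 m/s
print(estimate_speed([1.0] * 10, 0.1))
```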

Regarding the concerns about the artificial nature of the studies, the examples of using sensors to classify people as pain-free or suffering from pain based on gait patterns [28], and to classify safe or at-risk ergonomics [27], had their practical use questioned because the smartphone placement was too artificial. However, the relevance of analyzing gait patterns in the context of pain was corroborated. The gaming context described earlier was problematized in relation to the PainDrainer context: "...in the context of a game, specific movements are searched for during a specific time, which simply reduces the search area. This [example] deals with recognition only during a short time span."

5.3 Opinions on wrist-worn wearable articles

The article featuring an activity change detection model [17] sparked a discussion, with opinions generally in favor of the concept. One organization member found it interesting and pointed out a similarity to Apple Watch functionality: his Apple Watch would notice brisk walks and show a dialogue along the lines of "it seems as if you are briskly walking" with an option to confirm some form of activation. After describing this scenario, he questioned the need for a confirmation dialogue, since there had never been any other activities that could have been mixed up with the brisk walk. He then expressed a wish to learn what happens in detail upon confirmation, and guessed that it might add responsiveness such as monitoring heart rate more persistently. Further curiosity was voiced on whether the device learned the user's behavior in this scenario, and an interest in behavioral learning combined with the self-reporting model of PainDrainer was expressed.

The concept of presenting an activity list in the wearable device as a form of self-reporting, either when change is detected [17] or at constant intervals [34], was initially disregarded, as it was reckoned that users prefer and request data collection that is as automated as possible; it was argued that this merely moves the self-reporting into the wearable device. In this context, however, the earlier dismissed concept of confirmation dialogues was reconsidered and perceived as an interesting idea: even though the data collection is not fully automatic, this process of inserting data into the application is simpler than the current one. Since automatically inserting all variables of PainDrainer was written off as unrealistic, the aim was described as finding the right balance between self-reporting and automatic data collection in order to keep the application simple and smooth.

When putting forth the observation that wrist-worn wearables allow for complex activity recognition, a present-day example involving another feature in Apple’s watchOS was pointed out by an organization member. Implemented after the outbreak of the COVID-19 pandemic, the “Handwashing” feature keeps track of when users wash their hands and commands them to continue for 20 seconds.

The placement of the wearable device (i.e. which wrist) was briefly touched upon after the remark that most studies placed the device on the right or dominant wrist, together with the seemingly paradoxical observation that most people wear their device on the non-dominant wrist. Although one of the participants in the room wore his device on his dominant wrist, this observation was acknowledged but not further discussed.

Discussing the next steps for activity recognition within PainDrainer, one suggestion was to download software development kits (SDKs) and APIs from various manufacturers and identify which parameters can be extracted, and at what level the data is delivered. Within the SDKs, PainDrainer hopes there are existing ways to detect certain activities, with an example phrased as "if [an activity] is performed, I want something specific to happen on the screen." This example was elaborated by suggesting that a confirmation dialogue could appear on the screen upon detecting an activity, for instance physical work, and subsequently start measuring time and heart rate. Wrapping up the example, a change detection method could notice when the activity stops, and the wearable device would then show queries matching the PainDrainer model: confirmation of the measured time, and filling out experience and intensity. From the company's perspective, the wish for a compiled overview of commercial wearables and their features and possibilities was expressed.
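The envisioned detect-confirm-measure-query flow can be sketched as a minimal state machine. All class and method names below are hypothetical illustrations of the described idea, not an existing SDK API:

```python
class ActivitySession:
    """Toy state machine for the detect -> confirm -> measure -> query flow."""

    def __init__(self):
        self.state = "idle"
        self.elapsed = 0  # measured activity duration, seconds

    def on_activity_detected(self):
        # A detected activity triggers the confirmation dialogue.
        if self.state == "idle":
            self.state = "awaiting_confirmation"

    def on_confirm(self):
        # User confirms; start timing (and, hypothetically, heart-rate sampling).
        if self.state == "awaiting_confirmation":
            self.state = "measuring"

    def on_tick(self, seconds):
        if self.state == "measuring":
            self.elapsed += seconds

    def on_change_detected(self):
        # Change detection notices the activity stopped; show the
        # PainDrainer queries (duration, experience, intensity).
        if self.state == "measuring":
            self.state = "querying"

session = ActivitySession()
session.on_activity_detected()
session.on_confirm()
session.on_tick(60)
session.on_change_detected()
print(session.state, session.elapsed)
```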

In summary, PainDrainer does not wish to devise the mathematics for classification itself and would prefer that this technology come pre-programmed by the manufacturers of wearable devices. This would allow PainDrainer to extract data on what activity is performed directly from the devices, bypassing the classification process.


6. Discussion

This research set out to identify examples of sensor-based human activity recognition (HAR) through a literature review, and to provide a practical perspective through a workshop with a company seeking opportunities within HAR. The literature review findings and the workshop results show an observable difference in perspectives between the researchers and the practitioners within this study. The following subsections discuss these differences and help consolidate the contribution of this study: a practical perspective.

6.1 Sensors

In the presented research, the evident practice is to use sensors sparingly when recognizing human activity. This pattern is remarkably apparent, with a majority of the reviewed literature restricting sensor usage to the accelerometer and occasionally the gyroscope. However, this does not align with the expectations of PainDrainer, as the workshop results indicate. Their desire to use more sensors to classify complex activities is a logical standpoint, given the extra information that can be extracted from additional sources such as GPS and heart rate sensors.

Questions might arise as to why researchers persistently adhere to the practice of using only accelerometer and gyroscope data. The literature review concluded that using few sensors has a lesser impact on battery life and memory; other benefits include non-intrusiveness [29] and availability [31]. Furthermore, several studies use only the accelerometer. Reasons for this include its superior individual performance compared to the other inertial sensors (gyroscope and magnetometer), and the computational complexity that results from fusing multiple sensors [33]. Taking all of these factors into account might explain the dominant approach of using only the accelerometer and occasionally the gyroscope in human activity recognition (HAR). While the additional information from other sensors might be useful in some cases, the practice of basing entire activity recognition systems on data from a high number of sensors appears to be non-existent within the research field.
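To make the accelerometer-only approach concrete: typical pipelines in the reviewed literature segment the signal into fixed windows and extract simple statistics per window as classifier features. The following is a minimal sketch of that idea, with an arbitrary window size and a deliberately small feature set (mean and standard deviation of the acceleration magnitude):

```python
import statistics

def window_features(magnitudes, window=5):
    """Split an accelerometer-magnitude stream into fixed-size windows and
    extract mean and standard deviation per window -- a common minimal
    feature set in accelerometer-based HAR."""
    feats = []
    for i in range(0, len(magnitudes) - window + 1, window):
        w = magnitudes[i:i + window]
        feats.append((statistics.mean(w), statistics.pstdev(w)))
    return feats

signal = [1.0, 1.1, 0.9, 1.0, 1.0,   # near-still window (low variance)
          1.0, 2.5, 0.2, 2.8, 0.5]   # active window (high variance)
for mean, std in window_features(signal):
    print(round(mean, 2), round(std, 2))
```

Even this tiny feature set separates still from active windows by variance, which is why a single accelerometer carries a classifier surprisingly far.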

Some studies in the literature review use location as a parameter in the activity recognition process, collected by sensors such as GPS. At present, GPS receivers are widely known to be power-hungry, particularly in small electronic devices [39]. As described in the workshop results, this problem was dismissed as transitional due to the constant improvement of electronic components. Concerning PainDrainer's description of the transitional problem, it is unclear whether the issue is perceived as resolved or ongoing. If it is perceived as resolved, this perception differs from the one held within research. If it is perceived as ongoing, monitoring the effect of GPS usage on battery life is important, especially if PainDrainer plans to implement GPS-based functionality in the near future. As PainDrainer suggested, the future will probably bring both better batteries in electronic devices and more energy-efficient GPS sensors. In conclusion, awareness of the power consumption consequences of using GPS remains strongly emphasized.


6.2 Third-party data and SDKs

For PainDrainer, the ambition is seemingly to explore what is possible with already classified data from third parties. The process of implementing automatic data collection into the application seems highly dependent on what is found during this exploration. As mentioned in the workshop results, the ultimate scenario would be to automatically classify all parameters within the PainDrainer model; however, the view that this is unlikely is shared by the company and indicated by the research. Taking an active role in developing novel activity recognition technology is not an aspiration of PainDrainer. Keeping up with the available sensor technology on the market appears to be the essential factor in the company's satisfaction regarding activity recognition. The PainDrainer representatives seem to hold this position firmly, although it would be interesting to see the company experiment with functionality presented in the papers from the literature review. Since innovating within HAR is not a focus area of PainDrainer, their perspective is understandable.

On the topic of extracting data from existing technology, three activities are put forward in the workshop results as realistic candidates for automatic data collection: sleeping, resting, and physical activity. Looking at SDK documentation from three major players on the smartphone and smartwatch markets, this assessment seems fairly sensible. The Apple HealthKit [40], Google Fit [41], and FitBit [7][30] documentations all provide a range of sleep and activity data types that appear readable or extractable on devices using their software. However, it should not be expected that these devices can provide all types of data inherently, as different devices come with different features.

Based solely on the specified SDK documentations, collecting sleep and physical activity data seems viable. Classifying the activity of resting is more challenging, as more issues emerge. The initial issue is the definition of resting: deciding what qualifies as resting based on measurable data. One approach that might come to mind is using heart rate sensors to detect when a user is at their resting heart rate. Yet, heart rate data alone is likely not enough to classify resting. This may require the company to fuse heart rate data with additional sensor data (for instance accelerometer and gyroscope data) and devise a classification mechanism. Hypothetically, this would also be the case for the remaining activities of the PainDrainer model. As described earlier, this is not a process PainDrainer wishes to go through.
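One way to operationalize such a fused definition of resting is a rule combining heart rate and movement variability. The thresholds below are purely illustrative placeholders, not validated values:

```python
def is_resting(heart_rate, resting_hr, accel_std,
               hr_margin=10, motion_threshold=0.05):
    """Classify 'resting' when the heart rate is near the user's resting
    rate AND accelerometer variability indicates little movement.

    All thresholds are illustrative assumptions, not validated values.
    """
    near_resting_hr = heart_rate <= resting_hr + hr_margin
    still = accel_std < motion_threshold
    return near_resting_hr and still

print(is_resting(62, 58, 0.01))   # low HR and still
print(is_resting(110, 58, 0.01))  # elevated HR while still
```

Note that either signal alone misclassifies: a low heart rate during light movement, or stillness during stress, would each break a single-sensor rule, which is the fusion argument made above.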

One aspect of activity recognition in the PainDrainer context is ensuring accurate and trustworthy data. The companies developing the aforementioned SDKs are doubtlessly working with advanced classification technology, as they are billion-dollar corporations with presumably massive resources; completely dismissing their technology as useless would be ill-considered. However, there is a risk that the data sources do not deliver according to PainDrainer's expectations. Should this occur, the possibility of taking full control of the data collection process might be restricted, since part of the data gathering is operated by third parties.


6.3 Intelligence

Building intelligence for HAR is not an aim for PainDrainer as repeatedly stated in this thesis, but since data collection methods and classification models are identified in the literature review, they are necessary discussion points for this study. As explained in the literature review, the tables have one column dedicated to intelligence, and one dedicated to the data source. This information is displayed to present a simple overview of the HAR process and introduce familiarity to some of the classification models available. This section elaborates on the information discovered within this study.

In a HAR process, testing is a desirable step to ensure the quality of the system. Presumably, this requires labeled data to validate the test results. As stated in the literature review, many articles use data collected in controlled settings, which might not be suitable for applications running services continuously. In this regard, the position of valuing the ExtraSensory dataset [34] is defensible. This large dataset provides an opportunity to test a HAR system thoroughly when resources for collecting testing data are lacking. The dataset is labeled in detail to a certain extent, with two descriptors of activity: a basic activity such as sitting or lying, and secondary activities such as eating or doing computer work. Finally, the dataset provides data from many sensors, contributing to a wide range of testing possibilities.
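The leave-one-out validation valued in the workshop is usually applied per subject in HAR: each subject's data is held out in turn, so the classifier is always tested on a person it has not seen. A standard-library sketch of generating such splits (the sample layout is a hypothetical simplification):

```python
def leave_one_subject_out(samples):
    """Yield (held_out_subject, train, test) splits where each subject
    in turn is held out.

    `samples` is a list of (subject_id, feature_vector, label) tuples;
    testing on an unseen subject approximates real-world generalization.
    """
    subjects = sorted({s for s, _, _ in samples})
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        yield held_out, train, test

data = [("A", [1.0], "walk"), ("A", [0.1], "sit"),
        ("B", [1.1], "walk"), ("C", [0.2], "sit")]
for subject, train, test in leave_one_subject_out(data):
    print(subject, len(train), len(test))
```

This per-subject split is stricter than a random split, since personal movement patterns cannot leak from training into testing.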

Concerning intelligence, this study does not focus on comparing classification models; however, some assessments can be made from the literature review. The most commonly used classification model is the support-vector machine (SVM), used in 11 of the 17 articles, followed by nearest-neighbor (NN/k-NN) models in nine, decision trees in eight, and random forest (RF) in six. SVM appears to be the popular, well-tried method in this research landscape. However, RF consistently reaches comparatively good results in the reviewed studies, making it a model worth considering when building a HAR system. As concluded in the literature review, trial and error seems to be the favorable approach to finding what works best for the purpose.


7. Conclusions and future work

In this thesis, state-of-the-art examples of human activity recognition (HAR) with smartphone and wearable technology are identified through a literature review. The examples are summarized in three tables displaying data type and data source, as well as applied intelligence and results. The review indicates minimal sensor usage, possibilities for classifying complex activities, and varying classification methods. Secondly, a practical perspective on the current state of research is provided by presenting the state-of-the-art examples to a collaborating industrial partner; the results of the subsequent workshop represent the company's perspective. Finally, the differences and similarities in perspectives are discussed. The main conclusions of this thesis include the company's lack of interest in building a HAR system itself, the differing perspectives on sensor usage, and the wish to use already classified data. Future work might examine the different phases of implementing a HAR system for the company; more specifically, analyzing further steps towards implementation might be of interest, as the functionality is still desired.


References

1. D. Issom, A. Henriksen, A. Z. Woldaregay, J. Rochat, C. Lovis and G. Hartvigsen, “Factors Influencing Motivation and Engagement in Mobile Health Among Patients With Sickle Cell Disease in Low-Prevalence, High-Income Countries: Qualitative Exploration of Patient Requirements,” in JMIR Human Factors, vol. 7, no. 1, pp. 16-34, Mar. 2020, doi: 10.2196/14599.

2. P. Adams, E. L. Murnane, M. Elfenbein, E. Wethington and G. Gay, “Supporting the self-management of chronic pain conditions with tailored momentary self-assessments,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, May 2017, pp. 1065-1077, doi: 10.1145/3025453.3025832.

3. D. S. Goldberg and S. J. McGee, “Pain as a public health priority,” in BMC Public Health, vol. 11, no. 1, pp. 770:1-770:5, Oct. 2011, doi: 10.1186/1471-2458-11-770.

4. W. Sousa, E. Souto, J. Rodrigues, P. Sadarc, R. Jalali and K. El-Khatib, “A Comparative Analysis of the Impact of Features on Human Activity Recognition with Smartphone Sensors,” in Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web (WebMedia '17), Gramado, Brazil, Oct. 2017, pp. 397-404, doi: 10.1145/3126858.3126859.

5. H. Breivik, B. Collett, V. Ventafridda, R. Cohen and D. Gallacher, “Survey of chronic pain in Europe: prevalence, impact on daily life, and treatment,” in European Journal of Pain, vol. 10, no. 4, pp. 287-333, May 2006, doi: 10.1016/j.ejpain.2005.06.009.

6. H. A. Simon, The Sciences of the Artificial (Third Edition). Cambridge, MA, USA: The MIT Press, 1996.

7. FitBit SDK, “Activity & Exercise Logs | Web API”, 2020. [Online]. Available: https://dev.fitbit.com/build/reference/web-api/activity/. [Accessed: 19-May-2021].

8. J. Barlow, C. Wright, J. Sheasby, A. Turner and J. Hainsworth, “Self-management approaches for people with chronic conditions: a review,” in Patient Education and Counseling, vol. 48, no. 2, pp. 177-187, Oct. 2002, doi: 10.1016/S0738-3991(02)00032-0.

9. K. D. Keele and M. D. Lond, “The pain chart,” in The Lancet, vol. 252, no. 6514, pp. 6-8, Jul. 1948, doi: 10.1016/S0140-6736(48)91787-5.

10. C. Fyhn and J. Buur, “Chronic Pain Scales in Tangible Materials,” in TEI ‘20: Proceedings of the 14th International Conference on Tangible, Embedded, and Embodied Interaction, Sydney, Australia, Feb. 2020, pp. 811-822, doi: 10.1145/3374920.3375003.

11. A. Najm, E. Nikiphorou, M. Kostine, C. Richez, J. D. Pauling, A. Finckh, V. Ritschl, Y. Prior, P. Balážová, S. Stones, A. Szekanecz, A. Iagnocco, S. Ramiro, F. Sivera, M. Dougados, L. Carmona, G. Burmester, D. Wiek, L. Gossec and F. Berenbaum, “EULAR points to consider for the development, evaluation and implementation of mobile health applications aiding self-management in people living with rheumatic and musculoskeletal diseases,” in RMD Open, vol. 5, no. 2, e001014, Aug. 2019, doi: 10.1136/rmdopen-2019-001014.

12. C. Reynoldson, C. Stones, M. Allsop, P. Gardner, M. I. Bennett, J. Closs, R. Jones and P. Knapp, “Assessing the Quality and Usability of Smartphone Apps for Pain Self-Management,” in Pain Medicine, vol. 15, no. 6, pp. 898-909, Jun. 2014, doi: 10.1111/pme.12327.

13. K. Lancaster, A. Abuzour, M. Khaira, A. Mathers, A. Chan, V. Bui, A. Lok, L. Thabane and L. Dolovich, “The Use and Effects of Electronic Health Tools for Patient Self-Monitoring and Reporting of Outcomes Following Medication Use: Systematic Review,” in Journal of Medical Internet Research (JMIR), vol. 20, no. 12, pp. 19-35, Dec. 2018, doi: 10.2196/jmir.9284.

14. A. Gupta, K. Gupta, K. Gupta and K. Gupta, “A Survey on Human Activity Recognition and Classification,” in 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, Jul. 2020, pp. 915-919, doi: 10.1109/ICCSP48568.2020.9182416.

15. S. Safi, G. Danzer and K. J. Schmailzl, “Empirical Research on Acceptance of Digital Technologies in Medicine Among Patients and Healthy Users: Questionnaire Study,” in JMIR Human Factors, vol. 6, no. 4, pp. 30-36, Nov. 2019, doi: 10.2196/13472.

16. J. Lu, X. Zheng, Q. Z. Sheng, Z. Hussain, J. Wang and W. Zhou, “MFE-HAR: multiscale feature engineering for human activity recognition using wearable sensors,” in Proceedings of the 16th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous '19), Houston, TX, USA, Nov. 2019, pp. 180-189, doi: 10.1145/3360774.3360787.

17. A. Akbari, J. Martinez and R. Jafari, “Facilitating Human Activity Data Annotation via Context-Aware Change Detection on Smartwatches,” in ACM Transactions on Embedded Computing Systems, vol. 20, no. 2, pp. 15:1-15:20, Mar. 2021, doi: 10.1145/3431503.

18. D. N. Tran and D. D. Phan, “Human Activities Recognition in Android Smartphone Using Support Vector Machine,” in 2016 7th International Conference on Intelligent Systems, Modelling and Simulation (ISMS), Bangkok, Thailand, Jan. 2016, pp. 64-68, doi: 10.1109/ISMS.2016.51.

19. A. Jain and V. Kanhangad, “Human Activity Classification in Smartphones Using Accelerometer and Gyroscope Sensors,” in IEEE Sensors Journal, vol. 18, no. 3, pp. 1169-1177, Feb. 2018, doi: 10.1109/JSEN.2017.2782492.

20. F. Shahmohammadi, A. Hosseini, C. E. King and M. Sarrafzadeh, “Smartwatch Based Activity Recognition Using Active Learning,” in 2017 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), Philadelphia, PA, USA, Jul. 2017, pp. 321-329, doi: 10.1109/CHASE.2017.115.

21. K. Kongsil, J. Suksawatchon and U. Suksawatchon, “Physical Activity Recognition Using Streaming Data from Wrist-worn Sensors,” in 2019 4th International Conference on Information Technology (InCIT), Bangkok, Thailand, Oct. 2019, pp. 274-279, doi: 10.1109/INCIT.2019.8912130.

22. M. Shoaib, S. Bosch, H. Scholten, P. J. M. Havinga and O. D. Incel, “Towards detection of bad habits by fusing smartphone and smartwatch sensors,” in 2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops), St. Louis, MO, USA, Mar. 2015, pp. 591-596, doi: 10.1109/PERCOMW.2015.7134104.

23. S. Mekruksavanich and A. Jitpattanakul, “Smartwatch-based Human Activity Recognition Using Hybrid LSTM Network,” in 2020 IEEE SENSORS, Rotterdam, Netherlands, Oct. 2020, pp. 1-4, doi: 10.1109/SENSORS47125.2020.9278630.

24. M. Kwon, H. You, J. Kim and S. Choi, “Classification of Various Daily Activities using Convolution Neural Network and Smartwatch,” in 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, Dec. 2018, pp. 4948-4951, doi: 10.1109/BigData.2018.8621893.

25. A. Nandy, J. Saha, C. Chowdhury and K. P. D. Singh, “Detailed Human Activity Recognition using Wearable Sensor and Smartphones,” in 2019 International Conference on Opto-Electronics and Applied Optics (Optronix), Kolkata, India, Mar. 2019, pp. 1-6, doi: 10.1109/OPTRONIX.2019.8862427.

26. A. R. Hevner, S. T. March, J. Park and S. Ram, “Design science in information systems research,” in MIS Quarterly, vol. 28, no. 1, pp. 75-105, Mar. 2004, doi: 10.2307/25148625.

27. M. Zhang, Z. Bai and X. Zhao, “Real-time risk assessment for construction workers' trunk posture using mobile sensor,” in 2017 International Conference on Robotics and Automation Sciences (ICRAS), Hong Kong, Aug. 2017, pp. 153-157, doi: 10.1109/ICRAS.2017.8071935.

28. H. Chan, H. Zheng, H. Wang, R. Sterritt and D. Newell, “Smart mobile phone based gait assessment of patients with low back pain,” in 2013 9th International Conference on Natural Computation (ICNC), Shenyang, China, Jul. 2013, pp. 1062-1066, doi: 10.1109/ICNC.2013.6818134.

29. C. V. San Buenaventura and N. M. C. Tiglao, “Basic Human Activity Recognition based on sensor fusion in smartphones,” in 2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM), Lisbon, Portugal, May 2017, pp. 1182-1185, doi: 10.23919/INM.2017.7987459.

30. FitBit SDK, “Sleep Logs | Web API”, 2020. [Online]. Available: https://dev.fitbit.com/build/reference/web-api/sleep/. [Accessed: 19-May-2021].

31. I. RoyChowdhury, J. Saha and C. Chowdhury, “Detailed Activity Recognition with Smartphones,” in 2018 Fifth International Conference on Emerging Applications of Information Technology (EAIT), Kolkata, India, Jan. 2018, pp. 1-4, doi: 10.1109/EAIT.2018.8470425.

32. R. Hussein, J. Lin, K. Madden and Z. J. Wang, “Robust recognition of human activities using smartphone sensor data,” in 2017 International Conference on the Frontiers and Advances in Data Science (FADS), Xi'an, China, Oct. 2017, pp. 92-96, doi: 10.1109/FADS.2017.8253203.

33. Y. Asim, M. A. Azam, M. Ehatisham-ul-Haq, U. Naeem and A. Khalid, “Context-Aware Human Activity Recognition (CAHAR) in-the-Wild Using Smartphone Accelerometer,” in IEEE Sensors Journal, vol. 20, no. 8, pp. 4361-4371, Apr. 2020, doi: 10.1109/JSEN.2020.2964278.

34. Y. Vaizman, K. Ellis and G. Lanckriet, “Recognizing Detailed Human Context in the Wild from Smartphones and Smartwatches,” in IEEE Pervasive Computing, vol. 16, no. 4, pp. 62-74, Oct. 2017, doi: 10.1109/MPRV.2017.3971131.

35. PainDrainer.com, “Forskning”, 2021. [Online]. Available: https://paindrainer.com/forskning. [Accessed: 21-Apr-2021].

36. S. Langrial, H. Oinas-Kukkonen, P. Lappalainen and R. Lappalainen, “Rehearsing to control depressive symptoms through a behavior change support system,” in CHI EA '13: CHI '13 Extended Abstracts on Human Factors in Computing Systems, Paris, France, Apr. 2013, pp. 385-390, doi: 10.1145/2468356.2468425.

37. PainDrainer.com, “Home - PainDrainer”, 2021. [Online]. Available: https://paindrainer.com. [Accessed: 21-Apr-2021].

38. K. Peffers, T. Tuunanen, M. A. Rothenberger and S. Chatterjee, “A Design Science Research Methodology for Information Systems Research,” in Journal of Management Information Systems, vol. 24, no. 3, pp. 45-77, 2007, doi: 10.2753/MIS0742-1222240302.

39. C. Mandrioli, A. Leva, B. Bernhardsson and M. Maggio, “Modeling of energy consumption in GPS receivers for power aware localization systems,” in Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS '19), Montréal, Canada, Apr. 2019, pp. 217-226, doi: 10.1145/3302509.3311043.

40. Apple Developer Documentation, “Data Types”, 2021. [Online]. Available: https://developer.apple.com/documentation/healthkit/data_types. [Accessed: 19-May-2021].

41. Google Developers, “DataType | Google Play services”, 2020. [Online]. Available: https://developers.google.com/android/reference/com/google/android/gms/fitness/data/DataType. [Accessed: 19-May-2021].
