Positioning with Map Matching using Deep Neural Networks


http://www.diva-portal.org

Postprint

This is the accepted version of a paper presented at the 17th EAI International Conference on Mobile and Ubiquitous Systems (MobiQuitous 2020), 2020.

Citation for the original published paper:

Bergkvist, H., Davidsson, P., Exner, P. (2020)
Positioning with Map Matching using Deep Neural Networks
In: MobiQuitous '20: Proceedings of the 17th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services. Association for Computing Machinery (ACM)
https://doi.org/10.1145/3448891.3448946

N.B. When citing this work, cite the original published paper.

Permanent link to this version:


Positioning with Map Matching using Deep Neural Networks

Hannes Bergkvist
Sony, R&D Center Europe, Lund, Sweden
hannes.bergkvist@sony.com

Paul Davidsson
Malmö University, Malmö, Sweden
paul.davidsson@mau.se

Peter Exner
Sony, R&D Center Europe, Lund, Sweden
peter.exner@sony.com

ABSTRACT

Deep neural networks for positioning can improve accuracy by adapting to inhomogeneous environments. However, they are still susceptible to noisy data, often resulting in invalid positions. A related task, map matching, can be used for reducing geographically invalid positions by aligning observations to a model of the real world. In this paper, we propose an approach for positioning, enhanced with map matching, within a single deep neural network model. We introduce a novel way of reducing the number of invalid position estimates by adding map information to the input of the model and using a map-based loss function. Evaluating on real-world Received Signal Strength Indicator data from an asset tracking application, we show that our approach gives both increased position accuracy and a decrease of one order of magnitude in the number of invalid positions.

CCS CONCEPTS

• Computing methodologies → Neural networks.

KEYWORDS

Deep neural networks, Localization, Positioning, Map matching, Loss function, Adaptation.

ACM Reference Format:

Hannes Bergkvist, Paul Davidsson, and Peter Exner. 2020. Positioning with Map Matching using Deep Neural Networks. In MobiQuitous 2020 - 17th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous '20), December 7–9, 2020, Darmstadt, Germany. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3448891.3448946

1

INTRODUCTION

Positioning (or localization) can be done in various environments, indoors or outdoors, using many kinds of measurements from different sources. In this work, we focus on radio-based positioning, but the methods can be used with other technologies, for example camera-based or sound-based positioning. Outdoor positioning is usually based on the Global Positioning System (GPS) or Long-Term Evolution (LTE). For indoor positioning, Wi-Fi or Bluetooth Low Energy (BLE) Received Signal Strength Indicator (RSSI) are among the most used sources, since the hardware is widely available, cheap, and easy to use.

Also with Malmö University.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

MobiQuitous '20, December 7–9, 2020, Darmstadt, Germany
© 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-8840-5/20/11...$15.00
https://doi.org/10.1145/3448891.3448946

There are a wide range of methods for positioning. This work focuses on methods for static positioning, where the objective is to compute a single location estimate from several simultaneously taken measurements, independent of previous or future measurements or estimates [1]. Popular approaches for static indoor positioning based on radio RSSI are lateration methods, such as iterative least squares (ILS), or machine learning methods, such as support vector machines (SVM) and deep neural networks (DNN). Lateration methods usually make use of the radio in-air propagation function to convert RSSI measurements to distances, and known access point locations to estimate a position. These methods work without training data, but a major drawback is that, although there are ways to tune the propagation function and input weights to the environment, they are limited in their ability to adapt to a non-homogeneous environment. Machine learning can be used to generate a mapping between measurements and positions. These methods need training data in order to learn the mapping, and a major drawback is that performance is highly dependent on the sample density of this training data. In many cases, no training data is available before system deployment. A possible solution can then be to use synthetic training data generated through simulation to build an initial model, and then further adapt the model as real-world sensor data becomes available. Although DNNs can improve accuracy by adapting to inhomogeneous environments, they are still susceptible to the noisy characteristics of radio RSSI, often resulting in invalid positions.

In many scenarios, position estimates can be further improved by using domain information and applying map matching. Map matching is the task of matching geographical observations to a model of the real world. Common applications include navigation and route guidance systems where GPS-based position estimates are matched to street maps. Map matching can be done for a single point or a sequence of points, in real-time for every observation, or offline after all observations for a trajectory are collected [15].

We propose an approach for positioning enhanced with map matching within a single DNN model. Our goal is both to increase positioning accuracy and to reduce the number of invalid position estimates. As a first step, map information is used as an additional input to the model. We then build on the idea that map matching can be viewed as a step for correcting misaligned position estimates. In a DNN setting, one way of constraining positions to geographical segments is by introducing an additional loss function. An additional loss function from a second domain can also act as a regularizer and improve the generalization of the first domain. We introduce a map matching loss function (MM loss). It is based on the constraining loss function introduced in [3], but here we apply it to a real-world asset tracking problem using actual facility maps and sensor data.

The main contributions of our work are:

1) A method for using map information as an additional input to a DNN model.

2) A map matching loss function with map lookup and gradient interpolation.

3) A demonstration that it is possible to train a model on synthetic position data and still achieve good performance when applying the model to a real-world positioning problem.

2

RELATED WORK

Existing work on indoor positioning with DNN includes several approaches. Xiao et al. [16] achieve better results with a DNN than with a support vector machine (SVM). They also show that, if one building floor has less training data, it is possible to reduce positioning error by applying transfer learning from other floors.

Félix et al. [7] investigate DNNs for positioning with supervised and unsupervised training. The results indicate good performance for the unsupervised approach, although not as good as for the supervised approach. They also show that increasing the number of layers improves accuracy, but beyond 5 layers there were no significant improvements.

While previous work has shown good results when trained and tested exclusively on either synthetic data or measured sensor data, we explore the important aspect of training a model on synthetic data while testing on measured sensor data from a real world setting. Training on synthetic data is one key aspect in order to deploy a functioning system without extensive prior data collection.

Previous work on map matching has mainly focused on non-DNN methods. Examples of applied approaches are geometrical analysis [4], Hidden Markov Models [10], and Kalman filters [8], [13]. Recent work introduced DNN approaches to map matching with sequence-to-sequence deep learning models [18].

Combined positioning and map matching can be done by separate methods in sequence, or simultaneously with one method. For example, particle filters for position estimation can also perform map matching by avoiding particles in invalid areas. In scenarios where a DNN positioning method is used, map matching can be done in sequence by another DNN or a non-DNN method.

3

POSITIONING WITH MAP MATCHING

We will now describe our novel approach in which map matching is integrated within a DNN model for positioning.

As baseline we use the DNN for positioning proposed by Félix et al. [7], with seven layers, of which five are hidden layers of size (n_h^1, n_h^2, n_h^3, n_h^4, n_h^5) = (1000, 1000, 500, 100, 10). The input layer size n_x is set according to the number of features in the training data. The output layer size is n_y = 2, according to the estimated position coordinates. We use Rectified Linear Unit (ReLU) activation functions [9] and the MSE loss function to calculate the regression loss of the position estimates during training.

3.1

Binary map input

In order to use a map as additional input to the model, we expand the input layer according to the number of pixels in the map. The map is converted to a binary matrix with pixel value zero for valid areas and one for invalid areas, Z ∈ {0, 1}^{r×c}. The matrix is then vectorized, z = vec(Z), where z ∈ {0, 1}^{rc×1}, and concatenated to the list of features. The conversion from map to z is only done once, before training, since the same map applies for the entire area.
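The map-to-feature concatenation above can be sketched in a few lines. This is only an illustration under our own naming (`vectorize_map`, `build_input` are hypothetical, not from the paper), and we assume a row-major flattening for vec(Z):

```python
# Sketch of the binary map input step; names are our own, not the paper's.

def vectorize_map(Z):
    """Row-major vectorization z = vec(Z) of a binary map matrix
    (0 = valid area, 1 = invalid area)."""
    return [pixel for row in Z for pixel in row]

def build_input(features, z):
    """Concatenate the per-sample features with the (fixed) map vector."""
    return list(features) + z

# Toy 2x3 map whose right column is an invalid area.
Z = [[0, 0, 1],
     [0, 0, 1]]
z = vectorize_map(Z)                  # computed once, before training
x = build_input([3.2, 10.0, 5.0], z)  # (distance, AP x, AP y) + map vector
```

Since z is identical for every sample, it is computed once and reused, exactly as the text describes.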

3.2

Map matching loss function

The MM loss uses the output of the model, ŷ, and the facility map in the form of the binary matrix Z. The loss is based on where the output of the model, the predicted position estimate, is located on the facility map.

As a first step, Z is used to generate a topographic matrix Z_top where invalid-area pixel values are increased as a function of the distance to the closest allowed area. As with the binary map input, this conversion is only done once, before training, since the same map applies for the entire area. We describe the process of creating the topographic matrix in Algorithm 1.

Algorithm 1 Topographic map
Input: Z
   Z_0 = {(i, j) | Z_ij = 0, i = 0, ..., r−1, j = 0, ..., c−1}
   Z_1 = {(i, j) | Z_ij = 1, i = 0, ..., r−1, j = 0, ..., c−1}
   d_max = max ||(i, j) − (k, l)|| where (i, j) ∈ Z_1, (k, l) ∈ Z_0
   Z_top = [0]^{r×c}
   for (i, j) in Z_1 do
       Z_top[i, j] = 1 + min ||(i, j) − (k, l)|| / d_max where (k, l) ∈ Z_0
   end for
Output: Z_top
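Algorithm 1 can be sketched as a brute-force computation; this is an illustration under our own naming, not the paper's implementation (in practice a distance transform such as `scipy.ndimage.distance_transform_edt` would be the efficient route):

```python
import math

def topographic_map(Z):
    """Brute-force sketch of Algorithm 1: each invalid pixel gets the
    value 1 plus its distance to the nearest valid pixel, normalized by
    the largest such distance d_max. Z is a list of rows with
    0 = valid, 1 = invalid."""
    r, c = len(Z), len(Z[0])
    valid = [(i, j) for i in range(r) for j in range(c) if Z[i][j] == 0]
    invalid = [(i, j) for i in range(r) for j in range(c) if Z[i][j] == 1]
    d_max = max(math.dist(p, q) for p in invalid for q in valid)
    Z_top = [[0.0] * c for _ in range(r)]
    for (i, j) in invalid:
        d_min = min(math.dist((i, j), q) for q in valid)
        Z_top[i][j] = 1.0 + d_min / d_max
    return Z_top

# Toy 1x3 map: one valid pixel, two invalid pixels at distances 1 and 2.
Z_top = topographic_map([[0, 1, 1]])
```

Valid pixels stay at 0, invalid pixels start at 1 and grow with distance from the valid area, which is what gives the MM loss a slope pointing back toward valid positions.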

The MM loss function should return a loss from the topographic map corresponding to the position output estimate. Further, we need to consider that the resolution of the topographic map is limited to the number of pixels, which might be less than the resolution of the position estimate. Additionally, the loss needs to have derivatives with respect to the position estimate. We achieve all these aspects by applying bilinear interpolation for the estimated position on the topographic map.

Bilinear interpolation uses the four positions with known values (1) that are closest to the position with unknown value (x, y). We start by interpolating in the x-direction as in (2), then use this result to interpolate in the y-direction, giving an approximate topographic value at the position estimate (3), with partial derivatives at the position estimate coordinates (4).

\[ Q_{11} = (x_1, y_1) \quad Q_{12} = (x_1, y_2) \quad Q_{21} = (x_2, y_1) \quad Q_{22} = (x_2, y_2) \tag{1} \]

\[ f(x, y_1) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21}), \qquad f(x, y_2) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{12}) + \frac{x - x_1}{x_2 - x_1} f(Q_{22}) \tag{2} \]

(4)

\[ f(x, y) \approx \frac{y_2 - y}{y_2 - y_1} f(x, y_1) + \frac{y - y_1}{y_2 - y_1} f(x, y_2) = \frac{y_2 - y}{y_2 - y_1}\left(\frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21})\right) + \frac{y - y_1}{y_2 - y_1}\left(\frac{x_2 - x}{x_2 - x_1} f(Q_{12}) + \frac{x - x_1}{x_2 - x_1} f(Q_{22})\right) \tag{3} \]

\[ \frac{\partial f}{\partial x} = \frac{1}{(y_2 - y_1)(x_2 - x_1)}\big((y - y_2) f(Q_{11}) + (y_2 - y) f(Q_{21}) + (y_1 - y) f(Q_{12}) + (y - y_1) f(Q_{22})\big) \]
\[ \frac{\partial f}{\partial y} = \frac{1}{(y_2 - y_1)(x_2 - x_1)}\big((x - x_2) f(Q_{11}) + (x_1 - x) f(Q_{21}) + (x_2 - x) f(Q_{12}) + (x - x_1) f(Q_{22})\big) \tag{4} \]

For bilinear interpolation with a topographic matrix Z_top ∈ R^{r×c}, the coordinates x and y need to be normalized according to the size of the matrix (5). The points for interpolation are then (6), resulting in a loss function (7) that outputs a low or zero loss for valid positions and a higher loss for invalid positions, with derivatives that decrease towards valid positions.

\[ x' = \frac{x}{x_{max}}(c - 1), \qquad y' = \frac{y}{y_{max}}(r - 1) \tag{5} \]

\[ [r_1, c_1] = [\lfloor y' \rfloor, \lfloor x' \rfloor], \quad [r_1, c_2] = [\lfloor y' \rfloor, \lfloor x' \rfloor + 1], \quad [r_2, c_1] = [\lfloor y' \rfloor + 1, \lfloor x' \rfloor], \quad [r_2, c_2] = [\lfloor y' \rfloor + 1, \lfloor x' \rfloor + 1] \tag{6} \]

\[ L_{mm}((x, y), Z_{top}) = \frac{r_2 - y'}{r_2 - r_1}\left(\frac{c_2 - x'}{c_2 - c_1} Z_{top}[r_1, c_1] + \frac{x' - c_1}{c_2 - c_1} Z_{top}[r_1, c_2]\right) + \frac{y' - r_1}{r_2 - r_1}\left(\frac{c_2 - x'}{c_2 - c_1} Z_{top}[r_2, c_1] + \frac{x' - c_1}{c_2 - c_1} Z_{top}[r_2, c_2]\right) \tag{7} \]
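A minimal forward-pass sketch of the MM loss follows. It is our own illustration, not the paper's PyTorch implementation: it only computes the interpolated value (an autograd framework would supply the derivatives of (4)), and it assumes the scaled estimate falls inside a map with at least 2 rows and 2 columns:

```python
def mm_loss(x, y, Z_top, x_max, y_max):
    """Bilinear interpolation of the topographic map at a position
    estimate (x, y), as in the MM loss sketch above. Since the corner
    rows/columns differ by exactly 1, the weights (r2 - y'), (y' - r1),
    (c2 - x'), (x' - c1) reduce to fractional offsets inside the cell."""
    r, c = len(Z_top), len(Z_top[0])
    xp = x / x_max * (c - 1)            # x' — scale to column coordinates
    yp = y / y_max * (r - 1)            # y' — scale to row coordinates
    c1 = min(int(xp), c - 2); c2 = c1 + 1
    r1 = min(int(yp), r - 2); r2 = r1 + 1
    wx, wy = xp - c1, yp - r1           # fractional offsets in the cell
    return ((1 - wy) * ((1 - wx) * Z_top[r1][c1] + wx * Z_top[r1][c2])
            + wy * ((1 - wx) * Z_top[r2][c1] + wx * Z_top[r2][c2]))

# Toy map: left column valid (0), right column invalid (value 2).
Z_top = [[0.0, 2.0],
         [0.0, 2.0]]
loss_center = mm_loss(0.5, 0.5, Z_top, 1.0, 1.0)  # halfway across the cell
```

A position on the valid side yields zero loss, and the loss grows smoothly toward the invalid side, which is what makes the gradient useful for training.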

3.3

MSE and MM loss balance

In order to simultaneously train a model for both positioning and map matching with backpropagation, the MSE and MM loss functions are combined to form a total loss (8). Here p represents the weighting between the losses. The most straightforward approach is to use a static weighting between the losses. A problem is then to find the optimal value for p that avoids overfitting one of the tasks.

\[ L_{tot}(y, \hat{y}) = L_{mse}(y, \hat{y})\, p + L_{mm}(\hat{y}, Z_{top})(1 - p) \tag{8} \]

In this work, we apply an adaptive weighting, where L_mse acts as the primary loss while L_mm is introduced over time. Initially p = 1, resulting in L_tot = L_mse. For every epoch, p is decreased or increased with step size s, dependent on L_mse and a threshold t, as described in Algorithm 2. By introducing the L_mm loss over time we ensure learning a representation with low positioning error. With t it is possible to control how much the model is allowed to optimize for each loss function.

Algorithm 2 Dynamic p
Constants: s, t
Input: p, L_mse
   if L_mse ≤ t and p ≥ s then
       p = p − s
   else if L_mse ≥ t and p ≤ (1 − s) then
       p = p + s
   end if
Output: p
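Algorithm 2 and the combined loss (8) can be sketched as plain functions; the names are our own, and the defaults follow the values stated for the experiments (s = 0.001, t = 5):

```python
def update_p(p, l_mse, s=0.001, t=5.0):
    """Sketch of Algorithm 2: shift weight toward the MM loss (p shrinks)
    while the positioning loss is at or below the threshold t, and back
    toward the MSE loss otherwise, keeping p within [0, 1]."""
    if l_mse <= t and p >= s:
        return p - s
    elif l_mse >= t and p <= 1 - s:
        return p + s
    return p

def total_loss(l_mse, l_mm, p):
    """Combined loss (8): L_tot = L_mse * p + L_mm * (1 - p)."""
    return l_mse * p + l_mm * (1 - p)
```

Calling `update_p` once per epoch reproduces the schedule: with p = 1 initially, the MM loss only starts to contribute once the positioning loss has dropped below t.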

4

DATA

The system used for collecting data for the experiments includes Access Points (APs), Beacons, and a cloud service. The APs are based on Nordic Semiconductor's nRF52832 [11] System on a Chip (SoC), which includes an ARM Cortex-M4 CPU [2] and a Bluetooth 5 2.4 GHz transceiver. The nRF52832 is interfaced with an ESP32 [6] for Wi-Fi capabilities. The mechanical design allows for easy mounting in power outlets. The Beacons are also based on an nRF52832, but without an ESP32, and are powered from a 210 mAh coin battery. The Beacons broadcast a unique identifier using the iBeacon protocol [5]. The broadcasts are received by the APs, and the RSSI of the broadcast, together with the time and the Beacon identity, is registered. The APs are connected to a local Wi-Fi network and communicate with the cloud service using the MQ Telemetry Transport (MQTT) protocol [12]. The registered data are published as an MQTT message once every 10 seconds. The web service subscribes to the messages posted by the APs and stores the data in a database. The installation consists of 122 Beacons and 42 APs mounted in a 750 m² office environment, as described in Fig. 1.

4.1

Measured sensor data

The data used for test and validation was collected in an office environment with the system and setup described above.

The RSSI sensor measurements are converted to distances using (10). We use the common path-loss shadowing model (9) [17]:

\[ \bar{P}_{RX}(d) = PL_0 - 10\eta \log_{10}\!\left(\frac{d}{d_0}\right) \tag{9} \]

where \bar{P}_{RX}(d) is the received power in dB at distance d, PL_0 is the path loss at the reference distance (1 m), and η is a constant depending on the environment.

Solving for d, and assuming d_0 = 1.0 m:

\[ d = 10^{(PL_0 - \bar{P}_{RX}(d)) / (10\eta)} \tag{10} \]
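The conversion amounts to solving the path-loss model (9) for d with d_0 = 1 m. A one-line sketch (our own naming; the default constants are the ones estimated for the test environment, PL_0 = −65.1 and η = 2.3):

```python
def rssi_to_distance(p_rx, pl0=-65.1, eta=2.3):
    """Invert the path-loss model with d0 = 1 m: a received power p_rx
    in dB maps to an estimated distance d = 10**((pl0 - p_rx) / (10*eta))."""
    return 10 ** ((pl0 - p_rx) / (10 * eta))

d = rssi_to_distance(-88.1)  # a weaker signal maps to a larger distance
```

At p_rx = PL_0 the estimate is exactly the 1 m reference distance; every additional 10·η dB of attenuation multiplies the distance estimate by 10.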

Figure 1: System installation setup (Beacons and Access Points in the office environment)

4.2

Data format

The dimensionality of the data D is decided by the number of APs in the environment, n_ap, with coordinates (p_1, p_2) and a distance measurement d for each AP. The AP positions {(p_1^i, p_2^i); i = 1, ..., n_ap} and the distances {d_1, ..., d_{n_ap}} from the target to the AP positions are combined to form the features (d_1, p_1^1, p_2^1, ..., d_{n_ap}, p_1^{n_ap}, p_2^{n_ap}), thereby n_x = 3 · n_ap. The labels (y_1, y_2) represent the target's position in Cartesian coordinates in two dimensions.

In experiments with a map as additional input to the model, the binary map vector z ∈ {0, 1}^{rc×1} is concatenated to the list of features, such as (d_1, p_1^1, p_2^1, ..., d_{n_ap}, p_1^{n_ap}, p_2^{n_ap}, z_1, ..., z_{rc}), thereby n_x = 3 · n_ap + r · c.

4.3

Generated training data

The training data {(x_i, y_i); i = 1, ..., m} is generated with a simple procedure: A training example (x_i, y_i) is created by first generating a label position y_i = (y_1^i, y_2^i) by drawing two samples from a uniform distribution, y_1, y_2 ∼ U(a, b), where a and b are the maximum coordinates of the area. We then calculate the distances {d_1, ..., d_{n_ap}} to all AP positions {(p_1^j, p_2^j); j = 1, ..., n_ap} and add Gaussian noise, d_g^j = d_j + Z_j, Z ∼ N(µ, σ²). This is combined to form the features x_i = (d_g^1, p_1^1, p_2^1, ..., d_g^{n_ap}, p_1^{n_ap}, p_2^{n_ap}). As a last step, the n_drop largest distances are removed, n_drop ∼ U(1, n_ap). This is done to better generalize to real scenarios where APs at large distances often are out of reach.

Two sets of training data are generated. The first covers all valid and invalid positions, such that A = {(Y_1, Y_2) | Y_1 ∈ R[0, a], Y_2 ∈ R[0, b]}. The second contains positions only in valid areas, excluding positions outside the building and inaccessible areas, such that B = {(Y_1, Y_2) | Z_{(Y_1, Y_2)} = 0}. The second data set is generated with the same procedure as the first, with the addition of using map information to only keep valid positions in the first step.
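The generation procedure can be sketched as follows. The names are our own, the sampling bounds are taken as the [0, a] × [0, b] area of data set A, and the n_drop removal step is omitted for brevity (how dropped APs are encoded in the fixed-size feature vector is not specified above):

```python
import math
import random

def generate_sample(aps, a, b, mu=0.0, sigma=5.0, rng=random):
    """Sketch of one synthetic training example: a label position drawn
    uniformly from the a x b area, and a noisy distance d_g = d + N(mu,
    sigma^2) to each AP, interleaved with the AP coordinates.
    `aps` is a list of (p1, p2) access-point positions."""
    label = (rng.uniform(0, a), rng.uniform(0, b))
    features = []
    for (p1, p2) in aps:
        d = math.dist(label, (p1, p2)) + rng.gauss(mu, sigma)
        features.extend([d, p1, p2])          # (d_g, p1, p2) per AP
    return features, label

rng = random.Random(0)                        # seeded for reproducibility
features, label = generate_sample([(0.0, 0.0), (10.0, 0.0)],
                                  a=20.0, b=30.0, rng=rng)
```

Generating data set B would simply reject label positions falling on invalid map pixels before computing the distances.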

5

EXPERIMENTS

To validate our method for positioning with map matching, we perform five experiments. In the first, we train the Baseline model on data covering all positions. This is the baseline, where no map information is used. In the second experiment, we train the Baseline model on data with only valid positions. This represents the improvement possible by simulating training data positions based on map information. In the third experiment, we introduce the MM loss, again trained on data covering only valid positions. This demonstrates the further improvements possible with a map-based loss function. In the fourth experiment, we extend the input layer and add the map to the input data. This investigates the approach of utilizing map information as input data. Lastly, we run the training on data with valid positions, with map input, and with MM loss, thereby evaluating the use of map information in data generation, input data, and loss function combined.

5.1

Experimental setup

The code for all experiments is implemented in Python, with models, loss functions, and training using PyTorch [14].

The data for training consists of 100k samples of generated data as described above, with µ = 0 and σ = 5. The validation and test data consist of 100k samples of measured sensor data each, collected during 12 hours. The data is converted from RSSI to distance as described above, with PL_0 = −65.1 and η = 2.3. The constants PL_0 and η are estimated using RSSI measurements from broadcasts between one AP and one Beacon at five different distances. The same training parameters are used for all experiments. Training is done for 1000 epochs with batch size 1024. When training with MM loss, the loss balance weighting has step size s = 0.001 and threshold t = 5. Training and evaluation are carried out on an Nvidia GeForce RTX 2080 Ti.

5.2

Evaluation

We evaluate the models by running inference on the test data set. The positioning error is calculated as the L2 norm between the model inference output and the label of the data (11). The invalid position ratio is calculated as the percentage of the model inference outputs that are at invalid positions.

\[ \| y - \hat{y} \|_2 = \sqrt{\sum_{i=1}^{n_y} (y_i - \hat{y}_i)^2} \tag{11} \]
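The two metrics can be sketched as follows (our own names; the nearest-pixel lookup used to decide whether an estimate is invalid is our assumption, since the exact check is not specified above):

```python
import math

def positioning_error(y, y_hat):
    """L2 norm (11) between a label and a position estimate."""
    return math.dist(y, y_hat)

def invalid_ratio(estimates, Z, x_max, y_max):
    """Percentage of estimates landing on an invalid map pixel
    (Z: 0 = valid, 1 = invalid), using nearest-pixel rounding."""
    r, c = len(Z), len(Z[0])
    invalid = 0
    for (x, y) in estimates:
        j = min(int(round(x / x_max * (c - 1))), c - 1)
        i = min(int(round(y / y_max * (r - 1))), r - 1)
        invalid += Z[i][j]
    return 100.0 * invalid / len(estimates)
```

Averaging `positioning_error` over the test set and computing `invalid_ratio` on the same outputs gives the two columns reported in Tables 1 and 2.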

(6)

Table 1: Positioning error and invalid positions - best training epoch.

Experiment Method Training Data Positioning error m (std) Invalid positions % (std)

1 Baseline A 7.59 (0.49) 12.82 (0.86)

2 Baseline B 7.22 (0.13) 10.35 (1.56)

3 MM loss B 6.12 (0.05) 2.36 (0.70)

4 Map input B 5.89 (0.14) 6.12 (0.24)

5 MM loss & Map input B 6.05 (0.08) 0.60 (0.34)

Table 2: Positioning error and invalid positions - best validation epoch.

Experiment Method Training Data Positioning error m (std) Invalid positions % (std)

1 Baseline A 6.91 (0.03) 21.19 (1.05)

2 Baseline B 6.18 (0.05) 9.13 (0.77)

3 MM loss B 5.67 (0.17) 3.19 (0.13)

4 Map input B 5.80 (0.11) 6.16 (1.25)

5 MM loss & Map input B 5.67 (0.07) 1.37 (0.25)

6

RESULTS

Table 1 and Table 2 represent different scenarios and therefore show divergent results. In Table 1, parameters from the epoch with the lowest positioning error on a subset of the training data are used. This represents the performance in scenarios where no measured sensor data are available. In Table 2, parameters from the epoch with the lowest positioning error on a subset of the validation data are used. This represents the potential performance if some measured sensor data are available to validate the model, while it is still trained on synthetic data only. The results for experiments one and five in Table 1 and Table 2 are visualized in Figure 2.

The positioning error is significantly reduced from Table 1 to Table 2. This indicates the difference between synthetic training data and measured test data: as training proceeds, the model overfits to the training data. In Table 2 this is managed by choosing an epoch based on measured data. The percentage of invalid positions does not see an overall reduction between Table 1 and Table 2. This is because the distribution of invalid and valid positions is more similar between the training and test data. For experiments with MM loss, invalid positions increase in Table 2. This is because the weight of the MM loss is increased during training, and therefore map matching improves over time. This correlates more with the positioning error on the training set, while the positioning error on the validation data usually has its best epoch at an earlier stage.

Experiment one has both the largest positioning error and the largest percentage of invalid positions in both tables. It also has the largest increase in invalid positions from Table 1 to Table 2, while a decrease in positioning error is evident. This indicates the tendency of the DNN model to overfit to the training data. In experiment two, the map information in the training data reduces both positioning error and invalid positions. In Table 2, the percentage of invalid positions is reduced by more than half. A large improvement is expected, since no invalid positions are present when training the model. The remaining invalid positions show the model's ability to generalize when unseen data are presented. The third experiment shows that it is possible to further improve on both positioning error and invalid positions by using the MM loss, compared to only introducing map information during data generation. In experiment four, the map as input to the model improves on positioning error and invalid positions compared to experiment two. This demonstrates the model's ability to learn from multi-modal input, even though the additional map input is identical for all samples. The positioning error is comparable with experiment three, although invalid positions remain at 6%. In experiment five, our MM loss function reduces the invalid positions to the best result in this work. As in experiment three, this shows how introducing map matching with a dedicated loss function during training gives significant improvements compared to only having map information as part of the input data. The result is an order of magnitude improvement compared to experiment one in both Table 1 and Table 2, as well as a >1 meter improvement in positioning error. This improvement is clearly visible in Figure 2, where experiment five has no invalid position estimates outside the building and only a few invalid estimates inside sealed-off areas (grey colored areas).

7

CONCLUSION AND FUTURE WORK

This paper investigated how to improve positioning with map matching within a single DNN model. We showed how map information can be used as an additional input to a model to increase positioning accuracy and reduce the number of invalid positions. Further, we introduced a map matching loss function with map lookup and gradient interpolation.

We presented several experiments validating our methods on real-world RSSI data. The experiments demonstrate that these methods can be used, separately or combined, to significantly improve the performance of a DNN positioning model. The results show about a 20% decrease in positioning error and a decrease of one order of magnitude in the number of invalid positions.

Future work includes evaluating the MM loss function on other DNN positioning models, as well as other approaches for MSE and MM loss weighting.

Figure 2: Result plots, panels A–D. A: Table 1, experiment 1. B: Table 1, experiment 5. C: Table 2, experiment 1.

REFERENCES

[1] Simo Ali-Löytty and Jussi Collin. 2008. MAT-45806 Mathematics for Positioning, TKT-2546 Methods for Positioning. Technical Report.

[2] ARM. 2015. ARM Cortex-M4 Technical Reference Manual. (2015), 1–577.

[3] Hannes Bergkvist, Peter Exner, and Paul Davidsson. 2020. Constraining neural networks output by an interpolating loss function with region priors. In NeurIPS Workshop on Interpretable Inductive Biases and Physically Structured Learning.

[4] David Bernstein and Alain Kornhauser. 1996. An introduction to map matching for personal navigation assistants. Technology 24064, August (1996), 587–602.

[5] Andy Cavallini. 2013. iBeacons Bible 1.0. (2013), 1–15. http://meetingofideas.files.wordpress.com/2013/12/ibeacons-bible-1-0.pdf

[6] Espressif. 2019. ESP32 Series Datasheet. Espressif Systems (2019), 1–61. https://www.espressif.com/sites/default/files/documentation/esp32_datasheet_en.pdf

[7] Gibrán Félix, Mario Siller, and Ernesto Navarro Álvarez. 2016. A fingerprinting indoor localization algorithm based deep learning. In International Conference on Ubiquitous and Future Networks (ICUFN). IEEE Computer Society, 1006–1011. https://doi.org/10.1109/ICUFN.2016.7536949

[8] Edward J. Krakiwsky, Clyde B. Harris, and Richard V.C. Wong. [n.d.]. A Kalman filter for integrating DR, MM and GPS positioning.

[9] Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve Restricted Boltzmann machines. In ICML 2010 - Proceedings, 27th International Conference on Machine Learning. 807–814.

[10] Paul Newson and John Krumm. [n.d.]. Hidden Markov map matching through noise and sparseness. In 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS 2009), 336–343.

[11] Nordic Semiconductor. [n.d.]. Multi-protocol Bluetooth Low Energy and 2.4 GHz proprietary system-on-chip. https://www.nordicsemi.com/-/media/Software-and-other-downloads/Product-Briefs/nRF52832-product-brief.pdf?la=en&hash=2F9D995F754BA2F2EA944A2C4351E682AB7CB0B9

[12] OASIS. 2019. MQTT Version 5.0 OASIS Standard. March (2019). https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html

[13] Dragan Obradovic, Henning Lenz, and Markus Schupfner. 2006. Fusion of map and sensor data in a modern car navigation system. Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, Vol. 45, 111–122. https://doi.org/10.1007/s11265-006-9775-4

[14] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems. 8024–8035.

[15] Francisco Câmara Pereira, Hugo Costa, and Nuno Martinho Pereira. 2009. An off-line map-matching algorithm for incomplete map databases. European Transport Research Review 1, 3 (2009), 107–124. https://doi.org/10.1007/s12544-009-0013-6

[16] Linchen Xiao, Arash Behboodi, and Rudolf Mathar. 2018. Learning the Localization Function: Machine Learning Approach to Fingerprinting Localization. (2018). http://arxiv.org/abs/1803.08153

[17] Andrea Zanella. 2016. Best practice in RSS measurements and ranging. IEEE Communications Surveys and Tutorials 18, 4 (2016), 2662–2686. https://doi.org/10.1109/COMST.2016.2553452

[18] Kai Zhao, Jie Feng, Zhao Xu, Tong Xia, Lin Chen, Funing Sun, Diansheng Guo, Depeng Jin, and Yong Li. 2019. DeepMM: Deep learning based map matching with data augmentation. In Proceedings of the ACM International Symposium on Advances in Geographic Information Systems (GIS). ACM, 452–455. https://doi.org/10.1145/3347146.3359090

