Institutionen för systemteknik

Department of Electrical Engineering

Examensarbete

High Precision Frequency Synchronization via IP Networks

Examensarbete utfört i elektroniksystem vid Tekniska högskolan i Linköping

av

Andreas Gustafsson, Danijel Hir

LiTH-ISY-EX--10/4394--SE

Linköping 2010

Department of Electrical Engineering, Linköpings tekniska högskola, Linköpings universitet, SE-581 83 Linköping, Sweden


High Precision Frequency Synchronization via IP Networks

Examensarbete utfört i elektroniksystem

vid Tekniska högskolan i Linköping

av

Andreas Gustafsson, Danijel Hir

LiTH-ISY-EX--10/4394--SE

Handledare: Tomas Bornefall

Ericsson

Examinator: Oscar Gustafsson

isy, Linköpings universitet


Avdelning, Institution Division, Department

Division of Electronics System Department of Electrical Engineering Linköpings universitet

SE-581 83 Linköping, Sweden

Datum / Date: 2010-02-16. Språk / Language: Engelska / English. Rapporttyp / Report category: Examensarbete.

URL för elektronisk version: http://www.es.isy.liu.se, http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54015

ISRN: LiTH-ISY-EX--10/4394--SE

Titel / Title: Högprecisions frekvens-synkronisering via IP / High Precision Frequency Synchronization via IP Networks

Författare / Author: Andreas Gustafsson, Danijel Hir

Sammanfattning Abstract

This report is part of a master's thesis project done at Ericsson Linköping in cooperation with Linköpings Tekniska Högskola (LiTH). The project is divided into two parts. The first part is to create a measurement node that collects and processes data from network time protocol servers. It is used to determine the quality of the IP network at the node and to detect potential defects in the timeservers used or in nodes on the network.

The second assignment is to analyze the collected data and further improve the existing synchronization algorithm. IP communication is not designed to be time critical, and therefore the NTP protocol needs to be complemented with additional signal processing to achieve the required accuracy. Real-time requirements limit the computational complexity of the signal processing algorithm.

Nyckelord


Abstract

This report is part of a master's thesis project done at Ericsson Linköping in cooperation with Linköpings Tekniska Högskola (LiTH). The project is divided into two parts. The first part is to create a measurement node that collects and processes data from network time protocol servers. It is used to determine the quality of the IP network at the node and to detect potential defects in the timeservers used or in nodes on the network.

The second assignment is to analyze the collected data and further improve the existing synchronization algorithm. IP communication is not designed to be time critical, and therefore the NTP protocol needs to be complemented with additional signal processing to achieve the required accuracy. Real-time requirements limit the computational complexity of the signal processing algorithm.

Sammanfattning

This report is part of a master's thesis carried out for Ericsson AB in cooperation with Linköpings Tekniska Högskola. The report is divided into two parts. The first part is to create a measurement node that is used to measure and compute the signal quality over IP at a radio base station by collecting NTP packets. The second task is to analyze the measured data and further improve the synchronization algorithm.


Acknowledgments

We would like to thank our supervisor Tomas Bornefall for all the help and guidance during this thesis and Oscar Gustafsson for being an excellent examiner. We would also like to thank Martin Enqvist and Fredrik Gustafsson at Automatic Control, Linköpings Tekniska Högskola, for assisting us with Kalman filter theory, and also Mikael Johansson, Richard Jönsson and Mikael Olofsson at Ericsson, Älvsjö, for supporting us in our work. Finally we would like to thank everyone else who has otherwise contributed to this thesis.


Contents

1 Introduction
  1.1 Background
  1.2 Problem definition
    1.2.1 Quality measurement
    1.2.2 Frequency synchronization
  1.3 Purpose of thesis
    1.3.1 Goal 1
    1.3.2 Goal 2

2 Concepts, clarifications and abbreviations
  2.1 Concepts
  2.2 Clarification
  2.3 Abbreviations

3 STN
  3.1 Introduction
  3.2 SIU
  3.3 Pico-station
  3.4 Design of the STN
    3.4.1 Local terminal
    3.4.2 Time server
    3.4.3 NTP handler
    3.4.4 Remote calibration

4 NTP
  4.1 Introduction
  4.2 Precision
  4.3 Description
  4.4 PTP

5 Existing Algorithm

6 Measurement Instrument
  6.1 Introduction
  6.2 Implementation
    6.2.1 Functionality
    6.2.2 Important changes
    6.2.3 Prerequisites
    6.2.4 Activation
  6.3 Functional description for measurement mode in synchronization
  6.4 Measurement mode SW
    6.4.1 Description
  6.5 Data analysis

7 Configuration Management
  7.1 SYN_1.16 Administration of Measurement Mode
    7.1.1 General
    7.1.2 Pre-conditions
    7.1.3 Initiation
    7.1.4 Description
    7.1.5 Termination
    7.1.6 Fault Handling
    7.1.7 Capacity
    7.1.8 Performance
    7.1.9 Administration
  7.2 Function: OAM_3.8 Send Measurement data
    7.2.1 General
    7.2.2 Actors
    7.2.3 Pre-condition
    7.2.4 Post-condition
    7.2.5 Main Flow
    7.2.6 Alternative Flows
    7.2.7 Performance and Characteristics
  7.3 Function: OAM_3.9 Clear Measurement data
    7.3.1 General
    7.3.2 Actors
    7.3.3 Pre-condition
    7.3.4 Post-condition
    7.3.5 Main Flow
    7.3.6 Alternative Flows
    7.3.7 Performance and Characteristics

8 Kalman filter
  8.1 Introduction
  8.2 General Kalman filter
  8.3 The discrete steady-state Kalman filter
  8.4 Variable sample time adaptive (VSTA) Kalman filter implementation
  8.5 Investigation
    8.5.1 Investigation of the implemented Kalman filter
    8.5.2 Investigation of the adaptive Kalman filter
    8.5.3 Investigation of the VSTA Kalman filter

9 Data Selection
  9.1 Selecting good measurements
  9.2 Comparative Data Selection Algorithm (CDSA)
  9.3 Investigation of CDSA
  9.4 CDSA with adaptive vector length
  9.5 Separate minimum selection SMS (minUD and minDD)
  9.6 Comparison of SMS relative the minRTD selection
  9.7 SMS error approximation
    9.7.1 Tilted value selection
  9.8 Investigation of using SMS with the implemented algorithm
    9.8.1 Tilting effect in fixed window size
  9.9 CDSA with SMS and lookback
    9.9.1 Look-back
    9.9.2 Investigation of reusing samples
  9.10 Comparison
  9.11 Computational effort
  9.12 Simulation data
  9.13 Outlier detection
  9.14 Rapid synchronization

10 Conclusions and Discussion
  10.1 Conclusions
    10.1.1 Estimation
    10.1.2 Data selection
  10.2 Future Studies


Chapter 1

Introduction

1.1 Background

Today’s mobile communication has very strict requirements on maintaining a stable and controlled radio frequency. Depending on the type of base station (BTS), the accuracy requirements are between 50 and 100 parts per billion (ppb). These strict requirements demand extreme precision and stability of the clock generator. The radio frequency in today’s BTS is controlled by a crystal oscillator that is kept in an oven (OCXO) to keep the right temperature and thereby minimize deviation. Even though the crystal is of high quality and at constant temperature, it ages and thus drifts over time. This means that it occasionally has to be synchronized to an external time source to keep the correct frequency. There are primarily two ways of connecting to an external time source. The first is via GPS, which can deliver time stamps with an accuracy of ±340 ns [1], and the second is to connect the BTS to a frequency reference using a wired connection.

The GPS alternative is already fast and accurate enough to synchronize base stations, but there are some problems with GPS that make it inadequate as the only synchronization source. One problem is that it requires a clear line of sight in order to function, which is not always possible. Another is that the GPS signals can be jammed. This is why synchronization via IP is used instead.

1.2 Problem definition

Old and large BTS have a separate time division multiplex (TDM) connection to base station controllers that provide the accurate time. Because of this separate and synchronous direct connection the BTS can synchronize in a few hours.

In modern and smaller BTS the synchronization data is instead sent via IP over an ethernet connection, mostly due to cost optimization. This creates some problems for the frequency synchronization because IP is not designed for time critical communication. This means that it may take a long time to synchronize.


Another limitation is that in some cases it is not possible to synchronize at all, due to poor network quality, high traffic load or other factors. This creates a need to measure whether or not it is possible to synchronize at certain locations. This request has also been issued by potential customers that want to decide if certain locations are suitable for IP BTS placement.

1.2.1 Quality measurement

Due to unsteadiness and irregularities in IP networks there is a need to determine the quality of the connection at different places. The definition of quality in the case of synchronization with NTP is the stability of the transport delay. This means that a large transport delay is not a problem for synchronization over IP, as long as it is constant and there are no large variations between the uplink and downlink delays, i.e. the time it takes for a packet to go from A to B is the same as the time for a packet from B to A. In order to determine the quality of the signal it is important to know what kind of variations can be expected and how these variations affect the ability to calculate correct values. With that information it is possible to calculate the time it takes to synchronize, or whether it is possible at all.

1.2.2 Frequency synchronization

The crystal oscillator that controls the frequency drifts over time and has to be synchronized with the assistance of an external time source. This is called frequency synchronization.

The device described in this document is currently able to synchronize via the IP network. The main issue with it today is that it is very sensitive to specific irregularities, one example being a temporary high load on a server during a granularity period. This issue can contaminate all the collected data and thereby force the device to discard the data and change timeserver. Another issue is that the algorithm is rigid and does not consider the quality of the signal in the calculations. In some cases this results in an unnecessarily long synchronization period.

1.3 Purpose of thesis

1.3.1 Goal 1

The purpose of this part is to create a measurement instrument. The measurement instrument (MI) will be used to measure the signal quality at potential future BTS locations and thereby decide whether it is possible to place an IP-connected BTS at that location. The plan is that the MI will be placed at the desired location and left there to collect measurement data. When enough data has been collected to decide the quality, the data will be analyzed and/or sent to an external filestore for further analysis.


1.3.2 Goal 2

The purpose of this assignment is to improve the existing synchronization algorithm. This requires investigating and analyzing the existing algorithm in order to identify areas that can be further improved. The most important improvement requested is the ability to synchronize in networks with poor signal quality and/or high load.


Chapter 2

Concepts, clarifications and abbreviations

2.1 Concepts

ABIS: The interface between the Base Transceiver Station (BTS) and the Base Station Controller (BSC) in a GSM system is called ABIS.

Granularity Period: One measurement period when data is being collected.

Stratum: The quality of a clock used for synchronization; generally, the lower the number, the better the precision.

TOWA: Time Offset Wander Amplitude is a measurement of the maximal dispersion of values. It is calculated by using least squares fitting to estimate a straight line from the available data. Then both the positive and the negative maximum deviations from the line are extracted and summed. This gives the absolute maximal possible difference between two values.

MAFE: Mean Absolute Forecast Error is a measurement of the mean dispersion of the values. As in the TOWA case, least squares fitting is used to approximate a straight line from the data. The difference is that MAFE calculates the mean absolute deviation from the line.
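To make the two measures concrete, the sketch below computes TOWA and MAFE for a vector of samples indexed 0..n-1 using an ordinary least squares line fit. It is an illustration only, not code from the STN; the function names, the use of double precision and the assumption of at least two samples are all assumptions.

    #include <stddef.h>
    #include <math.h>

    /* Least squares fit of a line y = a + b*x to the samples, with x taken
     * as the sample index 0..n-1. Assumes n >= 2. */
    static void ls_fit(const double *y, size_t n, double *a, double *b)
    {
        double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
        for (size_t i = 0; i < n; i++) {
            sx  += (double)i;
            sy  += y[i];
            sxx += (double)i * (double)i;
            sxy += (double)i * y[i];
        }
        double denom = (double)n * sxx - sx * sx;
        *b = ((double)n * sxy - sx * sy) / denom;   /* slope  */
        *a = (sy - *b * sx) / (double)n;            /* offset */
    }

    /* TOWA: sum of the maximum positive and maximum negative deviation
     * from the fitted line, i.e. the maximal possible spread. */
    double towa(const double *y, size_t n)
    {
        double a, b, max_pos = 0.0, max_neg = 0.0;
        ls_fit(y, n, &a, &b);
        for (size_t i = 0; i < n; i++) {
            double dev = y[i] - (a + b * (double)i);
            if (dev > max_pos) max_pos = dev;
            if (dev < max_neg) max_neg = dev;
        }
        return max_pos - max_neg;   /* = max_pos + |max_neg| */
    }

    /* MAFE: mean absolute deviation from the fitted line. */
    double mafe(const double *y, size_t n)
    {
        double a, b, sum = 0.0;
        ls_fit(y, n, &a, &b);
        for (size_t i = 0; i < n; i++)
            sum += fabs(y[i] - (a + b * (double)i));
        return sum / (double)n;
    }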

2.2 Clarification

The sample time mentioned in the text refers to the distance between two selected measurements and not the sample time of incoming NTP packages.

Chapter 7 is a standard Ericsson configuration management form and will not be useful for anyone outside Ericsson.


2.3 Abbreviations

API        Application Programming Interface
BSC        Base Station Controller
BSU        Base Station Unit
BTS        Base Transceiver Station
CDSA       Comparative Data Selection Algorithm
CDSAWSMS   CDSA with Windowed SMS
CDSAWSMSL  CDSAWSMS with Look-Back
CLI        Command Line Interface
DSA        Data Selection Algorithm
GPS        Global Positioning System
HW         Hardware
IP         Internet Protocol
LT         Local Terminal
MI         Measurement Instrument
MSQF       Mean Square Fit
NA         Not Applicable
NTP        Network Time Protocol
OaM        Operation and Maintenance Software Package
OCXO       Oven Controlled Crystal Oscillator
ODC        Old Data Compensation
OS         Operating System
OSP        On-Site Personnel
OSS        Operation and Support System
PM         Performance Management
ppb        Parts per Billion
RF         Radio Frequency
RS         Requirement Specification
RTD        Round Trip Delay
SIU        Site Integration Unit
SMS        Separate Minimum Selection
SSH        Secure Shell
STN        Site Transport Node for BTS
SW         Software
TDM        Time Division Multiplexer
TO         Time Offset
TS         Time Server
UDP        User Datagram Protocol


Chapter 3

STN

3.1 Introduction

The Site Transport Node (STN) is a node at a BTS site that handles communication over IP between a BTS and a Base Station Controller (BSC). The STN can be implemented on various platforms, e.g. a Site Integration Unit (SIU), or it can be an integrated part of a Base Station Unit (BSU) such as a Pico-station. Because the measurement part of this thesis has been implemented and tested on the STN in the SIU, each time the STN is mentioned the reference is to the STN in the SIU.

3.2 SIU

Figure 3.1: SIU

The SIU, seen in Fig. 3.1, is a physical unit that adds transport sharing functionality to a radio base station (BTS) site, for example ABIS over IP and site-LAN functionality. The SIU contains an STN that maintains the synchronization and current time of BTSs over IP by the Network Time Protocol (NTP) or IEEE 1588-2008 (PTP).


3.3 Pico-station

The pico-station is a small BTS designed to be used inside facilities, for example large supermarkets, office buildings, tunnels etc. The pico-station has an internal STN that is connected to a BSC through IP and is synchronized with NTP packets. This is mainly done because a connection to the macro net or GPS cannot be guaranteed inside buildings and basements, and it is very expensive to install a separate E1/T1 line to various small office complexes.

3.4 Design of the STN

The STN can handle a lot of interfaces, as can be seen in Fig. 3.2, although the only system dealt with in this thesis is the synchronization subsystem.

Figure 3.2: Design of the STN.

Expanding the box containing Synchronization and timeserver from Fig. 3.2 reveals Fig. 3.3, which describes how the synchronization subsystem is designed. The parts used in this thesis are marked in a box and described below.

3.4.1 Local terminal

The local terminal is used to configure the STN. It is accessible through the console port or through SSH. All the necessary commands described in chapter 7 can be executed from the local terminal.


Figure 3.3: Synchronisation subsystem of the STN.

3.4.2 Time server

A time server is a server that holds the correct time. The STN needs a timeserver with an accuracy of stratum 1. NTP requests are sent from the NTP handler to a timeserver, and the timeserver sends a response containing timestamps.

3.4.3 NTP handler

The NTP handler sends NTP requests to a timeserver and processes the response. More about NTP can be found in chapter 4.

3.4.4 Remote calibration

The filtering software calculates the round trip delay and the time offset, and controls the frequency of the VCXO given these variables. The basic frequency is given by the OCXO and deviates over time. The OCXO is internally connected to a VCXO that delivers a frequency depending on the voltage input. When adjusting the frequency, only the voltage of the VCXO is changed to compensate for the frequency error of the OCXO.


The Remote synch calibration algorithm calibrates the frequency of the SIU with the help of an OCXO. The interconnection between the OCXO and the VCXO can be seen in Fig. 3.4. It is also possible to see the connection between incoming NTP-packages and the corresponding frequency.


Chapter 4

NTP

4.1 Introduction

NTP stands for Network Time Protocol and is a standard protocol for synchronizing computers over the internet. NTP time synchronization servers are available on the internet. NTP exists in different levels where each level is represented by a stratum number. Stratum-0 is a national time source. Stratum-1 is directly synchronized to a national time source, stratum-2 is synchronized to stratum-1 and so on. Extended information about the network time protocol can be found in [7].

4.2 Precision

There exist many public timeservers on the internet, but they cannot deliver extremely precise time stamps; the expected accuracy is between 50 µs and 1.4 ms [2]. This precision is not adequate for synchronizing BTSs.

To get accurate time stamps it is essential to have accurate timeservers. The required accuracy for BTSs is hard to achieve with public timeservers, mainly because there are occasionally high loads on the timeservers and on the net in general that cause delays and thereby decrease the accuracy. For this reason Ericsson has deployed private timeservers where they are needed.

It is not enough to only eliminate delays on the timeservers; the NTP packages also have to be sent and received on the STN without software delay. This is solved by introducing hardware time stamping on the SIU. This means that the NTP packages that are sent and received get their timestamp directly in the hardware, e.g. in an FPGA, before being sent, and will not be affected by IP package queues. By taking these precautions, possible error sources on the receiving and transmitting ends can be minimized. But there is still one source of error left. Although there exist some possible ways of prioritizing certain packages in the IP network, the sent NTP packages will be delayed when passing through different nodes, like routers and switches, on the way. This will cause the NTP packages' transport delay to vary depending on the load of the nodes.

4.3 Description

The information sent in an NTP package can be seen in Fig. 4.1. It starts with the STN (client) sending one time stamp (T1) corresponding to the local time on the STN. The timeserver receives the message and marks when it arrived (T2). Then the timeserver sends a response containing T1, T2, the time when the response was sent (T3) and a reference timestamp. When the STN receives the message it marks the time when it was received (T4) and saves T1-T4 to memory. Using these, it is possible to calculate the round trip delay (RTD) and the time offset (TO).

Figure 4.1: NTP message timestamps.

T1 can be written as the correct time t1 plus a time error ε1, where ε1 is the difference between the STN clock and the timeserver clock. T2 can be written as t1 plus the transport delay δ1 from the STN to the timeserver. T3 is the correct time t2 when the message was sent. T4 is t2 plus the time error ε1 plus the transport delay δ2 from the timeserver to the STN.

T1 = t1 + ε1
T2 = t1 + δ1
T3 = t2
T4 = t2 + ε1 + δ2


RTD = (T4 - T1) - (T3 - T2) = (t2 + ε1 + δ2 - t1 - ε1) - (t2 - t1 - δ1) = δ1 + δ2

RTD can be divided in three different delay categories as can be seen in Fig. 4.2.

Constant delay: This is the minimum amount of time spent by the package traveling to and from the timeserver. If the connection were ideal, this would be the total RTD.

Wander delay: This delay originates from different traffic loads on the network.

Random delay: This is the random variation and is produced by a variety of different reasons, some of them being temporary node delays and queues.

Figure 4.2: Round trip delay variations.

Time offset (TO) is the difference between the time on the STN and the time on the timeserver:

2 TO = (T2 - T1) - (T4 - T3) = (t1 + δ1 - t1 - ε1) - (t2 + ε1 + δ2 - t2) = -2ε1 + δ1 - δ2

As seen in the equation above, the difference in transport delay creates an error in the calculation of the time offset:

TO_error = (δ1 - δ2) / 2
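As a small illustration of the two formulas, the sketch below computes RTD and TO from the four timestamps, assuming they are available as signed 64-bit nanosecond values. It is not the STN's actual calcTOandRTD implementation; names and types are illustrative only.

    #include <stdint.h>

    /* Round trip delay: time spent on the network, with the server
     * processing time (T3 - T2) removed. Equals delta1 + delta2. */
    int64_t ntp_rtd(int64_t t1, int64_t t2, int64_t t3, int64_t t4)
    {
        return (t4 - t1) - (t3 - t2);
    }

    /* Time offset between the STN clock and the timeserver clock.
     * Exact only if the uplink and downlink delays are equal; otherwise
     * it carries an error of (delta1 - delta2) / 2. */
    int64_t ntp_to(int64_t t1, int64_t t2, int64_t t3, int64_t t4)
    {
        return ((t2 - t1) - (t4 - t3)) / 2;
    }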


4.4 PTP

The precision of NTP can be increased by the use of the Precision Time Protocol (PTP or IEEE 1588). PTP and NTP are very similar to each other, but the main difference is that PTP has the ability to log the delays in all nodes through the network. When a PTP packet arrives at a node that supports it, it receives one time stamp when it arrives at the node and another time stamp when it leaves the node. This feature makes it possible for the client to deduct the time spent in queues and thereby minimize the difference in transport delay. However, although queuing delays are minimized, as long as there is software timestamping in the nodes the scheduling delays still exist and can still cause some problems. Another drawback of this method is that it assumes that all IP packages travel the same way to and from the timeserver, which is not always the case. [3]

The reason why this is not being used yet is that the nodes of today do not have PTP support which is essential for this method to work. The good news is that PTP support is growing and in the future it might be possible to get better time precision through the IP network.


Chapter 5

Existing Algorithm

There is already an implemented synchronization algorithm on the STN that is good enough to synchronize the oven controlled crystal oscillator (OCXO) over NTP. The assignment was to investigate how this solution could be improved, or whether there are better ways of solving the problem. The assignment of improving something can be interpreted in many different ways. The algorithm can be improved in order to synchronize faster, be more stable, have the ability to synchronize with worse signal quality, or simply run faster or use less memory. The initial goal was to investigate in what way the existing algorithm could be improved considering the ability to synchronize in networks with poor quality and to synchronize faster in better quality networks without sacrificing stability. Furthermore, some different methods of synchronization should be investigated in order to map their different advantages and disadvantages. The description of the currently implemented solution is classified and will therefore not be described in this report.


Chapter 6

Measurement Instrument

6.1 Introduction

One assignment for this thesis was to create a measurement instrument (MI) that collects NTP measurements over IP in order to find out whether it is possible to synchronize an STN from a specific physical location. To avoid creating completely new hardware and software, which is a huge and expensive project for a small MI, existing technology is modified and reused.

6.2 Implementation

The implementation of this MI has been in C and testing of the code has been done on a SIU. The operating system of the SIU has been used in combination with additional functions and modifications. Detailed information about the different functions is described in this chapter.

6.2.1 Functionality

There are two primary ways to implement the MI. The first alternative is to do the signal processing directly on the hardware. The second alternative is to collect measurement data and send it somewhere it can be stored and processed. The second alternative is possible because the SIU is connected to an IP network and has plenty of available bandwidth to send the data. The advantage of this alternative is that more advanced data processing algorithms can be applied, and thereby anomalies or temporary variations can be detected. The disadvantage is that the data has to be analyzed by someone using Matlab or a similar application.

To summarize, the first alternative is simpler and the second is more flexible. There are pros and cons with both alternatives, but after careful consideration and discussions with the project manager it was decided to implement both, mostly in order not to sacrifice functionality that may be useful in certain situations. In some cases it is important to analyze all the data to get a more detailed analysis of the network, and in some cases only the information on whether it is possible to synchronize at a specific location is of interest.

Figure 6.1: Setup of the measurement instrument.

The requirement for the STN to be able to synchronize is that the difference between the maximum and the minimum deviation from the mean square fit calculation for each measurement is less than a predefined limit; otherwise the frequency error could be too large. This is also the reason why it is the only signal processing algorithm on the STN. If this requirement is satisfied, the timeserver is accurate enough for synchronization. There are only two different modes to classify the timeservers: good enough and not good enough. The reason for having this strict classification is that the STN only distinguishes between good and bad timeservers. If a timeserver fulfills the requirement it is used for synchronization; otherwise the STN switches to another, more appropriate timeserver.

In measurement mode, a too high time offset wander or too few valid measurements will also result in an invalid timeserver, but with one difference: the data in the granularity vector will be discarded and it will be noted that the SIU has failed the TOWA test. The collected measurement data, on the other hand, will not be discarded for that timeserver, and the measurement will continue as usual. When the information about the timeservers is collected it will be possible to see how many deviating measurements have been received from an invalid timeserver. This feature might be useful for future improvements of the synchronization algorithms that can filter contaminated data through e.g. outlier detection (see chapter 9.13).

6.2.2 Important changes

Some changes to the current code needed to be implemented, and some functions were created, to minimize the extra amount of data that needs to be processed to implement a measurement node.


Store measurement data in RAM

One major difficulty was how to store the collected measurement data. The first approach would be to store it in vectors on the STN, but this was not a great idea because it would have to be stored in a way that both the Operation and Maintenance system (OAM) and the STN could find it. The STN needs to collect measurement data and the OAM is required to send it to a filestore. Using this approach means that large signals would be sent between these processes, which takes a lot of unnecessary resources. One solution to this problem is that the STN stores the measurement data in temporary files in RAM. When the CLI command sendmeasdata <filestore> is later executed, the OAM can send these files to the filestore. The command clearmeasdata can later remove the files from RAM.

Changed vector sizes

Previously the TO_Cal_Vector, RTD_Cal_Vector, and all the other vectors needed for synchronization were as large as needed. They are now extended so that they can contain values from all six timeservers. Before measurement mode they were allocated to a size of 300 but contained at most 90-96 values, because only one timeserver was active at a time. After measurement mode has been implemented this vector is increased to a size of 600, with all the synchronization data inside. Data from the first timeserver is stored from index 0 to [size of stored data - 1], data from the second timeserver is stored from index [size of stored data] to index [(2 * size of stored data) - 1] and so on. In normal operation this will not cause a problem, because the amount of storage needed for calibration data has only been slightly increased. There is a possibility that the current test programs have to be altered a bit to support this increased vector length, but in general the remote synch calibration functions as usual.
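As an illustration of this layout, a hypothetical indexing helper could look as follows; the names and the assumed slice size of 100 entries per timeserver (600 entries divided by six servers) are not taken from the actual code.

    #define MAX_TIMESERVERS 6
    #define SLICE_SIZE      100   /* entries reserved per timeserver (assumed) */

    /* Index into the extended calibration vector for sample 'k' of
     * timeserver 'ts' (0-based). Data for timeserver 0 lives at
     * [0 .. SLICE_SIZE-1], timeserver 1 at [SLICE_SIZE .. 2*SLICE_SIZE-1], etc. */
    static unsigned cal_index(unsigned ts, unsigned k)
    {
        return ts * SLICE_SIZE + k;
    }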

6.2.3 Prerequisites

If the STN is to be used as an MI, there are a few prerequisites that need to be fulfilled for measmode to perform as requested.

• testmode in combination with measmode has not been thoroughly tested and they should therefore not be used together. If a faster measurement period is desired, it is possible to use the pboot parameter MeasModeAcceleration, which is set to 1 by default.

• Timeservers (if any) need to be configured in order, i.e. timeserver 0 needs to be configured if timeserver 1 is configured etc.

There are also some prerequisites that need to be fulfilled before measmode is activated. When these are fulfilled and measmode is enabled, measmode is activated.

• synchtype must be of type timeserver.


6.2.4 Activation

To make the MI easy to use for the on-site personnel (OSP) it is necessary to stay consistent when implementing new functionality on the STN. The STN has a mode called testmode that is used to test the STN's functionality. What it does is accelerate everything that is associated with synchronization over IP by 100 times. This functionality is very useful for quickly ensuring correct functionality. The testmode is activated on site through the local terminal or through remote login via SSH to the SIU. It is very intuitive and easy to use: by typing testmode enable/disable the state of testmode is changed. This is also how the MI functionality is implemented.

When typing measmode enable/disable the MI functionality is activated/deactivated. When measmode is enabled, all other remote synchronization functions are disabled. This means that the STN is completely dedicated to collecting measurement data. During normal operation the STN is connected to one of up to six timeservers. When measmode is enabled the STN receives data from all configured timeservers. For example, if two timeservers are configured, a measurement is requested twice as often as if only one was configured.

In measmode the timestamps T1-T4, TO, RTD, whether the measurement was valid and whether a warm restart has been issued are stored continuously in memory. If measmode is enabled it is suggested that no or few functions that require memory access are used at the same time.

6.3 Functional description for measurement mode in synchronization

Several flowcharts showing how the measurement mode is implemented can be seen in Fig. 6.2 to Fig. 6.5.

6.4 Measurement mode SW

The Measmode SW is an alternative program flow of the Process Measurement SW, which collects measurements for a configured timeserver and stores the values in a file in RAM. The main difference when Process Measurement is running in Measurement Mode is that Measurement Mode stores T1-T4, RTD, TO, validmeas and warmrestart in a file instead of saving RTD and TO in a vector (see chapter 4.3). Several different functions have been created or altered to be able to execute in Measurement Mode. Unused in table 6.1 means that Measurement Mode never uses that input/output.

6.4.1 Description


Figure 6.2: Flowchart for remoteSynchCalibrationHandler.

calcTOandRTD

This function calculates the time offset and the round trip delay from the timestamps. The only changes made in this function are that the current timeserver and the number of timeservers are passed as inputs to this function; otherwise the measurement mode will not function correctly.

getMeasModeTimeMeas

New function that calculates the timestamps received from the NTP-server. If no measurement was received, it stores the current timeserver instead.

initialization

Executes when the first measurement for each timeserver is received. Initiates all the necessary data and sets the first time offset to zero.


Figure 6.3: Flowchart for remoteSynchCalMeas.

processMeasurement

The only thing changed in this function is that if the TOWA fails, the measfailed data for that timeserver is increased and no reset is requested. It also has the current timeserver as an input so that measmode will function correctly. Executes the function timeOffsetWanderSupervision.

remoteSynchCalibration

Not executed in measmode; changed to take a timeserver ID as an input because processMeasurement needs that variable.


Figure 6.4: Flowchart for getMeasModeTimeMeas.

Figure 6.5: Flowchart for writemeasdata.

remoteSynchCalibrationHandler

This function is called by the OAM every measurement period if the current synchtype is timeserver. A measurement period in measmode is the standard measurement period divided by the number of configured timeservers. This is the main function for the remote synch calibration.

remoteSynchCalMeas

This function has two new inputs, the number of timeservers and a boolean measMode. It also executes initialization, calcTOandRTD, processMeasurement, remoteSynchCalibration and updateMeasData.

requestNTPMeas

Requests a NTP measurement for the current timeserver.

setMeasVar

Updates state variables to NVRAM.

SNC_getMeasmodeReq_handler

Handler for the CLI command measmode.

SNC_setMeasmodeReq_handler

Handler for the CLI command measmode enable/disable/status

timeOffsetWanderSupervision

Calculates the mean square fit and finds out if the time offset wander is too large for the current timeserver.

updateMeasData

If measmode is activated, this function updates the measurement data each measurement period. Measdata is stored in a file in the following format:

<T1>,<T2>,<T3>,<T4>,<TO>,<RTD>,<validmeas>,<reset>
<T1>,<T2>,<T3>,<T4>,<TO>,<RTD>,<validmeas>,<reset>
...
<T1>,<T2>,<T3>,<T4>,<TO>,<RTD>,<validmeas>,<reset>
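A minimal sketch of writing one such record line is shown below. The record struct, field types and function name are assumptions for illustration; the real writeMeasData functions are not reproduced here.

    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        uint64_t t1, t2, t3, t4;   /* NTP timestamps                   */
        int64_t  to, rtd;          /* time offset and round trip delay */
        int      validmeas;        /* 1 if the measurement was valid   */
        int      reset;            /* 1 if a warm restart was issued   */
    } meas_record_t;

    /* Append one comma separated record line to the measurement file. */
    static int append_record(FILE *f, const meas_record_t *r)
    {
        return fprintf(f, "%llu,%llu,%llu,%llu,%lld,%lld,%d,%d\n",
                       (unsigned long long)r->t1, (unsigned long long)r->t2,
                       (unsigned long long)r->t3, (unsigned long long)r->t4,
                       (long long)r->to, (long long)r->rtd,
                       r->validmeas, r->reset);
    }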

writeMeasData(1-6)


Table 6.1. Functions used in Measurement Mode (Name: Description. Input. Output.)

calcTOandRTD: Calculates the TO and the RTD. Input: bool validMeas, bool saveMeas, U8 TSid, U8 noTS. Output: NONE.

getMeasModeTimeMeas: Requests an NTP measurement from a timeserver and stores the response in a structure packet_t. Input: U8 noTS, packet_t *time, bool firstmeas. Output: bool validMeas.

initialization: Initiates all variables after the first valid measurement has been received. Input: U8 TSid. Output: NONE.

processMeasurement: Evaluates the TO and RTD vectors to find a value to save in the calibration vector. Input: U16 TSid. Output: NONE.

remoteSynchCalibration: Used when measmode is disabled; changed so that it now needs the timeserver id to regulate on the correct data. Input: U8 TSid. Output: NONE.

remoteSynchCalibrationHandler: This is the main program of the remote synchronization; when synchtype=timeserver and at least one timeserver is configured, this function is called periodically. Input: NONE. Output: NONE.

remoteSynchCalMeas: Calculates TO and RTD and stores the measurement data in measmode. Input: bool validMeas, bool measRestart, bool calRestart, unused FLregOK, unused *TSchanged, unused *switchToNext, unused testMode, bool measMode, U8 noTS, packet_t *time. Output: NONE.

requestNTPMeas: This function is called from the RemoteSynchCalibrationHandler to request a measurement from the timeserver. Previously this functionality was included in the function getTimeMeas, but due to timestamping in SW it was better to put it at the end of RemoteSynchCalibrationHandler. Input: NONE. Output: NONE.

setMeasVar: Changes the parameter values in the NVRAM. Input: U8 TSid. Output: NONE.

SNC_getMeasmodeReq_handler: Handler for the CLI command measmode. Input: *rec_p. Output: NONE.

SNC_setMeasmodeReq_handler: Handler for the CLI command measmode enable/disable/status. Input: *rec_p. Output: NONE.

timeOffsetWanderSupervision: Calculates the wander amplitude, which is the largest value that deviates from the mean square fit calculation. Input: U16 TSid. Output: NONE.

updateMeasData: Updates the measurement data in measmode; sets the T1-T4, RTD, TO and validmeas variables correctly in the current timeserver data. Input: bool resetMeasMode, U8 TSid, U8 noTS. Output: NONE.

writeMeasData(1-6): Depending on which timeserver is being measured, this function writes measdata to the RAM. Called from RemoteSynchCalibrationHandler when measmode is activated. Input: NONE. Output: NONE.


6.5 Data analysis

The data file sent from the SIU contains the RTD and the TO calculated by the STN. In addition, it also contains the separate timestamps (T1-T4) received by the STN, a notation of whether a measurement has been classified as valid and whether the SIU has performed a warm restart. The values are stored in column vectors, separated by commas and in descending time order.

The reason for storing the data in column vectors is that it is easy to extract the data in Matlab. By using the command data=load(filename) the file is loaded into the variable data, and separate columns can then be extracted by writing for example TO=data(:,5). To avoid doing this manually, a function that does everything automatically has been implemented. The plot_data function reads the data from a file, extracts the necessary information, reports the number of invalid measurements, calculates the separate time offset wander supervision (TOWA) and plots the data.

In order to plot the data received from the SIU while in measurement mode (measmode), type the following in the Matlab console window: plot_data('filename'). Example: plot_data('server1.txt') or plot_data('filepath\server1.txt').

The Matlab console will display how many invalid measurements there are and how many times the TOWA supervision has failed.

It is also possible to remove the highest RTD and TO values in order to get a more detailed plot. By typing plot_data('filename', percentage) the function will only plot the given percentage of the lowest RTD values and remove the highest values. This is necessary because some packages are extremely delayed, which reduces the clarity of the plots. As seen in Fig. 6.6, where the RTD is plotted, it is impossible to get any relevant information.

Removing the two percent highest RTD values gives a clearer plot, as can be seen in Fig. 6.7.

Example: plot_data(’server1.txt’, 80) or plot_data(’server1.txt’, 98). When not specified the default percentage value is 100.

When using the procedure described above, the plotted RTD and TO values are those calculated on the SIU. It is also possible to calculate the RTD, TO, uplink delay (UD) and downlink delay (DD) from the T1-T4 timestamps. By typing plot_data('filename', percentage, calc_RTD_TO), e.g. plot_data('server1.txt', 80, 1), three plots will be displayed: the usual RTD and TO, but also the UD and DD.

It is important to know that it is not possible to get the exact size of the UD and DD values. The difference relative to the first measurement will be exact, but the amplitude is not correct. In order to get more realistic plots, the UD and DD values are initialized to half of the first RTD value. It is possible to change this and initialize UD and DD to zero. An example of how the UD and DD plot can look is found in Fig. 6.8.
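The sketch below shows one way the relative UD and DD series could be derived from the timestamps, anchored at half of the first RTD as described above. It is written in C for illustration (the actual analysis is done in the Matlab function plot_data) and all names are hypothetical.

    #include <stdint.h>
    #include <stddef.h>

    /* Derive relative uplink (UD) and downlink (DD) delay series from the
     * timestamps. The absolute level is unknown (it contains the clock
     * offset), so only changes relative to the first sample are exact.
     * Both series are anchored at half of the first RTD. */
    void derive_ud_dd(const int64_t *t1, const int64_t *t2,
                      const int64_t *t3, const int64_t *t4,
                      size_t n, int64_t *ud, int64_t *dd)
    {
        if (n == 0)
            return;

        int64_t first_rtd = (t4[0] - t1[0]) - (t3[0] - t2[0]);
        int64_t ud0_raw = t2[0] - t1[0];    /* contains the clock offset */
        int64_t dd0_raw = t4[0] - t3[0];

        for (size_t k = 0; k < n; k++) {
            ud[k] = first_rtd / 2 + ((t2[k] - t1[k]) - ud0_raw);
            dd[k] = first_rtd / 2 + ((t4[k] - t3[k]) - dd0_raw);
        }
    }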


Figure 6.6: Every measurement collected in measurement mode.

Figure 6.7: The 98 lowest percent of the measurements collected in measurement mode.


Figure 6.8: UD and DD of the 98 lowest percent of the measurements collected in measmode.


Chapter 7

Configuration Management

7.1 SYN_1.16 Administration of Measurement Mode

7.1.1 General

This function applies to the SIU. This function is used from the Local Terminal or the SSH interface to activate, terminate and retrieve the Measurement Mode. It is also used to retrieve the status of the timeservers.

7.1.2 Pre-conditions

OSP or OSS has logged on in accordance with the function Local User Login or the function Remote User Login.

Testmode is disabled.

7.1.3 Initiation

By OSP or OSS command.

7.1.4 Description

Measurement Mode is used to collect data in the Remote Synch Calibration function for the purpose of examining the quality of the configured timeservers, or rather the quality of the IP network they are connected to. The Remote Synch Calibration is by default not in Measurement Mode, and the Measurement Mode parameter is used only by the Remote Synch Calibration function. When the Remote Synch Calibration function is not active, changing the Measurement Mode parameter has no influence. If the Remote Synch Calibration function is later activated, i.e. synchType changed to timeServer, the Measurement Mode parameter value entered earlier will be used.

When the Remote Synch Calibration function is already active (i.e. synchType == timeServer) and the Measurement Mode value is changed then the Remote Synch Calibration function is restarted using this new measurement Mode value.


Measurement Mode survives warm start, e.g. an STN node restart due to a configuration change. Measurement Mode does not survive cold start (power off, restart command etc.).

Command Syntax

To activate the Measurement Mode from the Local Terminal enter

measmode enable

To terminate the Measurement Mode enter

measmode disable

To get the status from the Measurement Mode enter

measmode status

The response is

Server1:<Number of times failed>
Server2:<Number of times failed>
Server3:<Number of times failed>
Server4:<Number of times failed>
Server5:<Number of times failed>
Server6:<Number of times failed>

To retrieve if measurement mode is enabled enter

measmode

The response is

measmode enabled or measmode disabled

7.1.5 Termination

The command is terminated when the OSmon command prompt is shown.

7.1.6 Fault Handling

NA

7.1.7 Capacity

NA

7.1.8 Performance

The measurement data is only stored in RAM, which means that it will not survive a cold restart.

7.1.9 Administration


7.2 Function: OAM_3.8 Send Measurement data

7.2.1 General

This function describes how the OSS or OSP requests sending of measurement data.

7.2.2 Actors

OSS, OSP

7.2.3 Pre-condition

OSS or OSP is logged in and has access to the command prompt.

7.2.4 Post-condition

The measurement data has been sent to a filestore.

7.2.5 Main Flow

1. OSS or OSP issues a sendmeasdata command with fileStore parameter.

2. OAM requests that the measurement data from all six timeservers should be sent to the fileStore location.

3. The security manager sends measurement data from all timeservers to the filestore.

4. STN returns OperationSucceeded.

7.2.6 Alternative Flows

1. Measdata does not exist for all timeservers:

(a) The security manager will fail to send data for these servers.
(b) The log will say that the security manager failed to send.

7.2.7 Performance and Characteristics

The fileStore given by the sendmeasdata command contains the user name and password used for sFTP. The fileStore value is not stored after this function is finished.


7.3 Function: OAM_3.9 Clear Measurement data

7.3.1 General

This function describes how the OSS or OSP requests clearing of the measurement data.

7.3.2 Actors

OSS, OSP

7.3.3 Pre-condition

OSS or OSP is logged in and has access to the command prompt.

7.3.4 Post-condition

No measurement data exists on the SIU, although if measurement mode still is active, new data will be stored as before.

7.3.5 Main Flow

1. OSS or OSP issues a clearmeasdata command without additional parameters.

2. OAM requests and deletes the measdata for all timeservers.

3. STN returns OperationSucceeded.

7.3.6 Alternative Flows

1. Measdata does not exist for all timeservers:

(a) The OAM will fail to delete measurement data for that timeserver.
(b) The log will say that the file did not exist.

7.3.7 Performance and Characteristics


Chapter 8

Kalman filter

8.1 Introduction

A Kalman filter is a recursive estimation filter that estimates a value from a series of noisy measurements. It is an important tool in control theory and signal processing because it is reliable and well tested. Because of the recursiveness, no large history of old measurements is needed. All that is required are the gain, the estimated value from the previous time step and the current measured value. These three are used to calculate the estimated value for the current time step. If the measurement noise error is estimated to be constant, the gain can be approximated with a constant that is optimal for exactly that error. However, if the noise is fluctuating, the gain will also fluctuate [9][10].

8.2 General Kalman filter

The Kalman filter addresses the problem of estimating the state x_k by solving the linear stochastic difference equation:

x_k = F_k x_{k-1} + B_k u_k + w_k

where
x_k is the state at the current time step.
x_{k-1} is the state at the previous time step.
F_k is the state transition model that relates the state at the previous time step k-1 to the state at the current time step k.
u_k is the control vector.
B_k is the control input model that relates the control vector to the current state x_k.
w_k is the random process noise, which is assumed to be white with a normal probability distribution with covariance Q:

w_k ~ N(0, Q)

The measurements are modeled as:

z_k = H_k x_k + v_k

where z_k is the actual measured state at time step k.


H_k is the observation model that maps the true state space to the observed state space.
v_k is the random measurement noise, which is assumed to be white with a normal probability distribution with covariance R_k:

v_k ~ N(0, R)

The Kalman filter is divided into two phases, predict and update. The predict phase uses the state estimate from the previous time step to estimate the current state. It is also called the a priori state estimate because it does not include the current measurement. The update phase combines the a priori estimate and the current measurement in order to refine the current estimate. This procedure is repeated continuously as long as the estimation proceeds.

Writing notations:

x̂_{n|m} is the estimate of x at time n using observations up to and including time m.
P_{n|m} is the error covariance matrix at time n using observations up to and including time m. It is a measure of the estimated accuracy of the state estimate.

K_k = P_{k|k-1} H_k^T S_k^{-1}

where S_k = H_k P_{k|k-1} H_k^T + R_k is the innovation covariance.

Figure 8.1: Prediction and Update of the K and P values.

This is the optimal Kalman gain and hence the value that returns the minimum mean square error.

P_{k|k} = (I - K_k H_k) P_{k|k-1}

is a simplification of the posterior error covariance formula and is only valid if the optimal Kalman gain is used. It is computationally cheaper and is thereby the dominating implementation in practice. If a non-optimal Kalman gain is used, the error covariance formula is the Joseph form

P_{k|k} = (I - K_k H_k) P_{k|k-1} (I - K_k H_k)^T + K_k R_k K_k^T
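To make the recursion concrete, the following sketch performs one predict/update step for the two-state model used later in this chapter (state = time offset and frequency offset, scalar time offset measurement). It is a generic textbook Kalman step under the assumption that process noise only acts on the frequency state; it is not the Ericsson implementation, and all names are illustrative.

    /* Two-state Kalman filter step for x = [time offset; frequency offset]
     * with F = [1 Ts; 0 1], H = [1 0], scalar process noise q on the
     * frequency state and scalar measurement noise r. */
    typedef struct {
        double x[2];     /* state estimate [eTO, eFO]  */
        double P[2][2];  /* estimate error covariance  */
    } kf_t;

    void kf_step(kf_t *kf, double z, double Ts, double q, double r)
    {
        /* Predict: x = F x, P = F P F' + Q (Q only on the frequency state). */
        double x0  = kf->x[0] + Ts * kf->x[1];
        double x1  = kf->x[1];
        double p00 = kf->P[0][0] + Ts * (kf->P[1][0] + kf->P[0][1])
                     + Ts * Ts * kf->P[1][1];
        double p01 = kf->P[0][1] + Ts * kf->P[1][1];
        double p10 = kf->P[1][0] + Ts * kf->P[1][1];
        double p11 = kf->P[1][1] + q;

        /* Update: S = H P H' + R, K = P H' / S, x += K (z - H x), P = (I - K H) P. */
        double S     = p00 + r;
        double k0    = p00 / S;
        double k1    = p10 / S;
        double innov = z - x0;

        kf->x[0] = x0 + k0 * innov;
        kf->x[1] = x1 + k1 * innov;
        kf->P[0][0] = (1.0 - k0) * p00;
        kf->P[0][1] = (1.0 - k0) * p01;
        kf->P[1][0] = p10 - k1 * p00;
        kf->P[1][1] = p11 - k1 * p01;
    }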


8.3 The discrete steady-state Kalman filter

By analyzing the general Kalman filter structure it is possible to identify the main parameters [8]. A general Kalman filter is constructed so that all the parameters can be variable, and the filter adapts itself depending on the variable values. If all parameters are constant it is unnecessary to treat them as variables. In this Kalman filter the matrices F, H and B are constant:

B = 0

F_k = F = [1  Ts; 0  1] = constant

This is the standard F matrix with regular sampling intervals. Ts = sample period.

H_k = H = [1  0] = constant

x_0 = [eTO; eFO] = [0; 0]

x_0 starts at time zero with the time and frequency offset set to zero.

The steady-state model is:

x_{k+1} = F x_k + K (z_k - H x_k)

Since the measurement errors at time T_k are uncorrelated, Q and R can be denoted as scalars. The only variable parameter is the error covariance R. This variable affects the covariance matrix P. The process noise variance Q is in this case limited by the minimum value that the current algorithm is able to manage; Q is set by the VCXO DAC limitations. The variance is the standard deviation squared [5].

This is the calculated size of the process noise standard deviation, but this parameter can also be used to size the gain. By setting this standard deviation smaller than the actual value, the gain will decrease and the filter will be less wavering. This is equivalent to increasing the magnitude of the observation noise standard deviation. The observation noise variance R can be extracted from the measured data. Due to limitations in calculating the correct noise it is recommended to set R a little higher than the estimated error. R_k is modeled as the time offset wander amplitude squared.

A larger R_k results in a lower gain, and a lower gain leads to slower estimation. In this case there are no problems with unnecessarily slow algorithms because the OCXO is quite stable. There are potentially more severe problems related to underestimating the error, but these can be solved by identifying the magnitude of the error and, if it turns out to be too large, either suspending the calibration until the error is within acceptable limits or switching synchronization source. By rewriting some equations it is possible to derive the discrete time Riccati equation.


The solution to the matrix equation is the covariance matrix for the optimal gain given a specific estimation error. A solution to the algebraic Riccati equation may not always exist, and even if it does, it is not guaranteed to result in a stable Kalman filter. The stability of discrete filters can be confirmed by checking that the poles are inside the unit circle and that the eigenvalues of (F - KH) have negative real parts. By using the calculated covariance matrix it is possible to calculate the optimal gain:

K = F P H^T (H P H^T + R)^{-1}

To decrease the computational effort it is possible to solve the Riccati equation in advance and only save the results. This is possible because there is only one variable. Solving the Riccati equation for different magnitudes of the standard deviation gives Fig. 8.2, which shows the gain for standard deviations between 1 ms and 100 ms. This implementation is from here on called the adaptive Kalman filter.

Figure 8.2: K values.
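A sketch of the precomputation idea: the Riccati equation is solved offline for a set of measurement noise levels and the resulting steady-state gains are stored in a table that is indexed by the current TOWA-based error estimate at run time. The noise levels and gain values below are placeholders, not values from Fig. 8.2.

    /* Steady-state Kalman gains precomputed offline by solving the discrete
     * Riccati equation for a set of measurement noise standard deviations.
     * The noise levels and gain values below are placeholders only. */
    typedef struct { double stddev_ms; double k_to; double k_fo; } gain_entry_t;

    static const gain_entry_t gain_table[] = {
        {   1.0, 0.20, 2.0e-3 },   /* placeholder values */
        {  10.0, 0.08, 4.0e-4 },
        {  50.0, 0.03, 8.0e-5 },
        { 100.0, 0.02, 3.0e-5 },
    };
    #define GAIN_TABLE_SIZE (sizeof gain_table / sizeof gain_table[0])

    /* Pick the gain for the current error estimate (e.g. the TOWA value).
     * The first entry with stddev >= the estimate is used, so the error
     * is never treated as smaller than it is. */
    static gain_entry_t lookup_gain(double stddev_ms)
    {
        for (unsigned i = 0; i < GAIN_TABLE_SIZE; i++)
            if (gain_table[i].stddev_ms >= stddev_ms)
                return gain_table[i];
        return gain_table[GAIN_TABLE_SIZE - 1];
    }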

8.4 Variable sample time adaptive (VSTA) Kalman filter implementation

This implementation considers the difference in sample time, the actual measurement noise, the control parameter and the delay between the control action and the measured result. The state model is the same as for the steady-state Kalman filter, except that the sample time is variable and the gain has to be calculated for every new measurement.

P_{0|0} is tricky to set because it is impossible to know the initial time and frequency offset. The safe way to choose P_{0|0} is to set its diagonal to a very large value, which represents that the initial state is unknown and means that the filter will be more receptive to measurements in the beginning, giving a faster first estimation of the initial values. The P_k matrix will change very fast to a more accurate value due to the update phase.

P_{0|0} = [∞  0; 0  ∞]

Control parameter and old data compensation (ODC)

The data is collected approximately 24 hours before it is used to make an estimation, and the state of the estimated values might have changed during that time. The reason for using 24 hour old data is that the data selection algorithm will have more time to select good measurements. One big problem that occurs when making an estimation based on old data is that it might not be accurate anymore. This is usually the case when a frequency adjustment has been made. Especially important is that after a frequency adjustment is made, the estimated frequency value is also adjusted. This causes some problems because the incoming data during the following 24 hours is inaccurate: data that was collected while there was a frequency offset will still indicate that a frequency offset is present.

This means that the Kalman filter will believe that there is still a frequency offset and will still try to adjust for it even though it has already been compensated for. This creates some unwanted characteristics. In the worst case it could result in an unstable system. In the normal case it will only result in an overshoot of the estimated value as seen in Fig. 8.3.

Figure 8.3: Regulating frequency before ODC.

The reason for the overshoot is that the estimated TO (eTO) is calculated from both the estimated FO and the TO error, as seen in the equation below. To get a better overview, the gain calculation and the predict/update phases are not considered.

[eTO_{i+1}; eFO_{i+1}] = [1  Ts; 0  1] × [eTO_i; eFO_i] + [k1; k2] × (eTO_i - TO)
                       = [eTO_i + Ts × eFO_i + k1 × (eTO_i - TO); eFO_i + k2 × (eTO_i - TO)]

Reducing the eFO implies that the eTO calculation will be reduced by Ts × (FO reduction). Furthermore, the difference between TO and eTO will increase and the eFO calculation will also start to increase. Because of that the eFO will be reduced further, which leads to a further decrease of the eTO, and so on. The solution to this problem lies in the control vector u_k, by modeling B and u_k as:

B = [Ts; 0],  u_k = [rFO_i; 0]

As seen in the equation below, the reduction of the eFO is matched by an equally large contribution from the reduced FO (rFO), and therefore there is no difference in the eTO calculation.

[eTO_{i+1}; eFO_{i+1}] = [1  Ts; 0  1] × [eTO_i; eFO_i] + [Ts; 0] × [rFO_i; 0] + [k1; k2] × (eTO_i - TO)
                       = [eTO_i + Ts × eFO_i + Ts × rFO_i + k1 × (eTO_i - TO); eFO_i + k2 × (eTO_i - TO)]

The problem in this case is that every reduction of the FO is of a different size and made at a different time, and it is important to keep track of when it is no longer necessary to compensate for a reduction. There are several different ways to solve this, but in order to reduce the computational effort the least computationally expensive solution is used. Creating a control vector u_k with the same size as the average number of samples during a 24 hour period, and placing it inside the automatic control function that is responsible for reducing the FO, makes it possible to add each reduction to all the elements in the vector.

When a new sample arrives and the estimation is completed, the first element in u_k is removed and a new element containing the value zero is inserted at the end of the vector. The value zero indicates that the data is correct and that no frequency adjustment has been made since the data was collected. Because the reduction is done in small steps, each step is added to all the elements in the vector, and the first element is therefore the sum of all the small reductions made during the last 24 hour period. The drawback of this simplified solution is that the average number of samples during a 24 hour period is used. Comparisons with exact and more computationally expensive solutions show that there are no noticeable differences between the results. The results when using ODC can be seen in Fig. 8.4.

Figure 8.4: Regulating frequency after ODC.
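The bookkeeping of u_k described above can be sketched as follows; this is a minimal Python sketch under stated assumptions: the vector length N_24H (the assumed average number of samples per 24 hours) and the function names are illustrative and not taken from the implementation.

```python
from collections import deque

N_24H = 288   # assumed average number of samples in 24 h (e.g. one every five minutes)

# One element per sample expected during the next 24 h. Element i holds the
# total frequency reduction made since the data behind sample i was collected.
u_k = deque([0.0] * N_24H, maxlen=N_24H)

def on_frequency_reduction(rfo):
    """Called by the automatic control function each time the FO is reduced:
    the small reduction is added to every element of the vector."""
    for i in range(len(u_k)):
        u_k[i] += rfo

def next_control_value():
    """Called when a new (24 h old) sample has been processed. Returns the
    accumulated reduction for that sample and shifts the vector: the first
    element is dropped and a zero is appended for the newest data."""
    compensation = u_k[0]          # sum of all reductions during the last 24 h
    u_k.append(0.0)                # maxlen makes this drop u_k[0] automatically
    return compensation
```

The value returned by next_control_value is the rFO that enters the control term of the prediction above.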

The improvement is significant in this case because the difference between the estimated TO and the actual measured TO is reduced. The filter continues to estimate a TO based on the actual measured data and the actual FO, and not on the new FO, which is inaccurate for the next 24 hours. It is important to observe that this does not allow the FO to be adjusted more often, because the actual frequency is not visible until 24 hours later; it only reduces the error that is introduced when the FO reduction is compensated for directly. Therefore ODC is only necessary when the gain is large. A Kalman filter with a very small gain will not be affected by this, because the estimation of the frequency offset takes longer than 24 hours.

8.5 Investigation

8.5.1 Investigation of the implemented Kalman filter

In order to investigate how to further improve the implemented Kalman filter it was necessary to do a detailed analysis of the implementation. The detailed analysis is not included in this report.

8.5.2 Investigation of the adaptive Kalman filter

A request was to create an adaptive filter that has the ability to consider the quality of the signal and thereby adapt itself. In order to be able to make a fair comparison, a steady-state Kalman filter was designed with the difference that the gain is continuously calculated from the error. In order to model the error, the TOWA is used as the standard deviation. The TOWA is not strictly correct to use as a measure of the signal's standard deviation, because it is a measure of the maximal difference. The reason for using it is to minimize the risk that the error is underestimated: by modeling the standard deviation as the maximal amplitude, a margin is introduced in the estimation. The initial difference between these two filters is that the adaptive filter is faster.

Figure 8.5: Currently implemented Kalman filter.

This was not the primary goal of this implementation, but it is nevertheless a positive side effect. The reason for the increased speed is that the rise time decreases with the decreased estimated error. When the error is increased above the maximum limit, a difference is noticed between the two filters: the gain of the adaptive filter is smaller than the gain of the implemented filter. This means that it is theoretically possible to use this filter in networks with worse signal quality. In Fig. 8.7 the adaptive Kalman filter is synchronizing with an extremely bad signal quality.

Figure 8.6: Adaptive Kalman filter algorithm with low TOWA.

Figure 8.7: TOWA of about 100 ms with a large frequency drift.

In the simulation shown in Fig. 8.7, and in other simulations where there is a constant frequency offset, the estimated frequency does not converge to zero. This is due to the large continuous frequency drift used in the simulations: a large continuous drift in combination with poor signal quality results in a continuous offset in the frequency estimation. The timescale in Fig. 8.7 is about three times longer than in Fig. 8.6 because of the slow rise time, which is a result of the signal quality. The downside of this algorithm is that it requires continuous data input and that the Riccati equation has to be solved for every new gain calculation. The Riccati equation is a computationally expensive nonlinear matrix equation that gives the optimal gain value. It is not reasonable to solve this equation during runtime, but there is a solution to that problem: the Riccati equation can be pre-solved for different error magnitudes and the calculated gain values saved. Later on it is simple to extract the corresponding gain value once the magnitude of the error is known.
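A minimal sketch of such a pre-solved gain table is shown below, assuming the two-state model [TO, FO] used above, the TOWA taken directly as the measurement standard deviation as described earlier, and purely illustrative process noise and grid values; it is a sketch of the idea, not the implementation used in the report.

```python
import numpy as np

Ts = 1.0                                   # assumed sample interval [s]
A = np.array([[1.0, Ts], [0.0, 1.0]])      # state transition for [TO, FO]
H = np.array([[1.0, 0.0]])                 # only the time offset is measured
Q = np.diag([1e-12, 1e-18])                # illustrative process noise

def steady_state_gain(sigma_meas, iterations=10_000):
    """Iterate the discrete Riccati recursion offline until the error
    covariance settles, and return the corresponding Kalman gain."""
    R = np.array([[sigma_meas ** 2]])
    P = np.eye(2)
    for _ in range(iterations):
        P = A @ P @ A.T + Q                # predict covariance
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # optimal gain
        P = (np.eye(2) - K @ H) @ P        # update covariance
    return K.ravel()                       # [k1, k2]

# Pre-solve for a grid of error magnitudes (here the TOWA, used as std deviation)
towa_grid = np.array([1e-4, 1e-3, 1e-2, 1e-1])     # seconds
gain_table = {towa: steady_state_gain(towa) for towa in towa_grid}

def lookup_gain(towa):
    """At runtime, pick the gain pre-computed for the closest error magnitude."""
    closest = towa_grid[np.argmin(np.abs(towa_grid - towa))]
    return gain_table[closest]
```

With the table in place, only the lookup remains at runtime, so the cost per sample no longer depends on the Riccati recursion.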


Fig. 8.8 shows the step responses of the filter for different magnitudes of the error. Because of the direct relation between the expected error, the gain and the rise time, it is easy to calculate how long it will take before the algorithm is ready for calibration.

Figure 8.8: Step responses for different standard deviations.
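How the rise time follows from the gain can be illustrated by simulating the step response of the filter for a given gain pair; the sketch below uses the conventional innovation sign (measured TO minus estimated TO) and illustrative gain values, so it shows the principle rather than the actual figures behind Fig. 8.8.

```python
def rise_time_samples(k1, k2, Ts=1.0, step=1.0, level=0.9, max_iter=100_000):
    """Count how many samples the two-state filter needs before its estimated
    offset has covered `level` of a constant step in the measured offset.
    This is a simple measure of when the filter is ready for calibration."""
    eTO, eFO = 0.0, 0.0
    for n in range(1, max_iter + 1):
        innovation = step - eTO
        eTO, eFO = eTO + Ts * eFO + k1 * innovation, eFO + k2 * innovation
        if eTO >= level * step:
            return n
    return max_iter

# Smaller gains (chosen for larger expected errors) give longer rise times.
for gains in [(0.2, 0.02), (0.05, 0.005), (0.01, 0.001)]:
    print(gains, rise_time_samples(*gains))
```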

8.5.3 Investigation of the VSTA Kalman filter

The reason for choosing a Kalman filter is its low computational cost, its stability and the fact that it is the best linear unbiased estimator. There are a few differences between this Kalman filter and the adaptive Kalman filter. The first difference is that this Kalman filter has the ability to start with a large gain and thereby make a faster estimation of the offset; the gain is then reduced during runtime in order to make a more accurate estimation, as seen in Fig. 8.9. Another difference is that this implementation does not require a constant input of data, which makes it possible to use more advanced selection algorithms. The final difference is that this estimator considers the fact that old data is used, which affects the control parameter.

The only drawback of this filter compared to the steady-state Kalman filter is the increased computational cost. The advantage is that it initially has a substantially shorter rise time, which makes it possible to use it as a fast calibrating filter; more about that in section 9.14. The main reason for developing this filter, and its most valuable feature, is that it is compatible with variable window length data selection algorithms.
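A minimal sketch of a filter with these properties is given below, assuming the two-state model [TO, FO], illustrative noise levels and a variable interval dt taken from the timestamps of the selected samples; the class name and structure are illustrative and do not claim to match the VSTA implementation.

```python
import numpy as np

H = np.array([[1.0, 0.0]])            # only the time offset is observed
Q_PER_SEC = np.diag([1e-12, 1e-18])   # illustrative process noise per second
R = np.array([[1e-8]])                # illustrative measurement (TO) variance

class TimeVaryingKalman:
    """Two-state [TO, FO] Kalman filter that accepts irregularly spaced samples.
    A large initial covariance gives a large initial gain (fast calibration),
    and the gain then shrinks as the covariance converges."""

    def __init__(self):
        self.x = np.zeros(2)                   # [eTO, eFO]
        self.P = np.diag([1e-6, 1e-12])        # large initial uncertainty

    def step(self, dt, measured_to):
        # Predict over the actual elapsed time since the previous accepted sample,
        # so a constant data input is not required.
        A = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = A @ self.x
        self.P = A @ self.P @ A.T + Q_PER_SEC * dt
        # Update: the gain is recomputed from the current covariance every step.
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K.ravel() * (measured_to - self.x[0])
        self.P = (np.eye(2) - K @ H) @ self.P
        return self.x, K.ravel()

# Example with irregular sample spacing:
f = TimeVaryingKalman()
for dt, to in [(64.0, 3.0e-6), (128.0, 2.8e-6), (64.0, 2.9e-6)]:
    state, gain = f.step(dt, to)
```

The control term from the ODC vector can be added in the predict step in the same way as before when old data is used.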


Chapter 9

Data Selection

In this chapter, two different selection criteria called minRTD (section 9.1) and SMS (section 9.5) will be compared. The comparative selection algorithm (section 9.2) will be compared with the currently implemented selection algorithm in combination with the different selection criteria.

9.1 Selecting good measurements

While trying to adapt the filter it became clear that there is not much to gain by only adjusting the filter; the results will not improve significantly unless the signal quality is improved. One way of improving the signal quality is to improve the data selection algorithm, and in order to do that it is necessary to have a way of determining which values are good and should be selected. It turned out that the strongest indicator of good values is also the most obvious one: measurements with minimum RTD have the least variation in the TO calculations, which is confirmed by Fig. 9.1. The reason for this is that NTP works under the assumption that the transport delay to and from the timeserver is equally long, so any difference in the transport delays will be visible as variation in the TO. The sample with the shortest RTD is the one most likely to have the shortest transport delays and thereby the smallest difference between them.
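A minimal sketch of the relation exploited here is shown below, assuming the standard NTP timestamps T1–T4 (client transmit, server receive, server transmit, client receive); the window handling and names are illustrative only.

```python
def offset_and_rtd(t1, t2, t3, t4):
    """Standard NTP calculations. The offset assumes that the transport delay
    is the same in both directions, which is why any asymmetry shows up as
    variation in the calculated TO."""
    to = ((t2 - t1) + (t3 - t4)) / 2.0    # time offset
    rtd = (t4 - t1) - (t3 - t2)           # round trip delay
    return to, rtd

def select_min_rtd(window):
    """Return the (TO, RTD) pair of the sample with the smallest RTD in a
    window of (t1, t2, t3, t4) tuples: the sample most likely to have had
    symmetric transport delays."""
    best = min(window, key=lambda s: offset_and_rtd(*s)[1])
    return offset_and_rtd(*best)
```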

9.2 Comparative Data Selection Algorithm (CDSA)

The main purpose of the CDSA is to select which values should be forwarded to the Kalman filter. To know which values are good it is necessary to classify them in some way. Due to the limited information about the measurements it is hard to classify the data in an ideal manner; in this case the data was classified by the most evident relation between good and bad data, the relation between minimum RTD and the accuracy of the measurements. This means that the measurements with the shortest RTD are classified as the ones with the best accuracy, as shown in section 9.1, and are thereby forwarded to the Kalman filter. It is not possible to use constant
