
UPTEC E18 019

Degree project 30 credits, June 2018

Development of a Digital Optimal Filter Platform

Joakim Brykt


Abstract

Development of a Digital Optimal Filter Platform

Joakim Brykt

This report is the result of a Master's Thesis project which is part of the Master's programme in Electrical Engineering at Uppsala University.

The purpose of this Master's Thesis project is to develop an embedded platform for the design and implementation of optimal digital filters, in particular the Kalman and the Wiener filter. In this project these filters are used for noise reduction of noisy signals.

The project is a further development of a previous Master's Thesis project in which a Universal Filter Bank was developed. The Filter Bank is used for designing and implementing various linear digital filters such as lowpass, highpass, bandpass and bandstop filters. The Filter Bank is a handheld box with two input and two output connections and a human-device interface (HDI) consisting of a Liquid Crystal Display (LCD) and a keypad. It contains anti-aliasing and reconstruction (analog) filters and an ARM 32-bit Microcontroller Unit (MCU) which is programmed in the C programming language. The HDI lets the user specify a desired digital filter.

In this project the Kalman and Wiener filtering algorithms were first developed in MATLAB and tested with simulated autoregressive moving-average (ARMA) processes (signals) in additive white noise. After they were shown to work, they were implemented on the ARM 32-bit MCU development kit, and finally ported to the Filter Bank. A user interface specifically for specifying these filters has been created.

The Kalman and Wiener filtering algorithms have been tested using the same noisy ARMA processes and assessed in terms of the Normalized Mean Square Error (NMSE). The results show that both the Wiener and Kalman filters running on the development kit and the Filter Bank are successful in reducing noise. The Kalman filter is shown to perform better than the Wiener filter, which may be due to the extra information about the signal used in the Kalman filter. The performance of both algorithms is heavily dependent on the prior knowledge about the desired signal.


Sammanfattning

This report is the result of a degree project that is part of the Master of Science programme in Electrical Engineering at Uppsala University.

Measured electrical signals are always covered in some form of noise. Sometimes this noise is so large relative to the size of the signal that the measured signal becomes unusable for further analysis. One way to preserve the signal of interest while filtering out the unwanted noise is to filter the signal with optimal filters.

This degree project is a further development of a previous degree project carried out at Uppsala University, in which a Universal Filter Bank was developed. The Filter Bank is a box small enough to be held in the hand, with two input and two output signal connections and a user interface. The user interface lets the user create various frequency-selective digital filters based on parameters chosen by the user. One or more filters can then be placed in series or in parallel to create an output signal with the desired frequency content.

The purpose of this degree project is to develop and implement two optimal filter algorithms for noise reduction in the Filter Bank: the Kalman filter and the Wiener filter. A user interface has also been programmed so that a user can select these filters and specify filter parameters.

The Filter Bank is equipped with a 32-bit microcontroller that is programmed in the C programming language.

The filters require that the user has information about the signal before they can be used. If this information is available, the filters perform noise reduction in a satisfactory way.


Contents

1 Introduction
  1.1 Background
  1.2 Purpose and goals
  1.3 Tasks and methodology
  1.4 Thesis outline

2 Theory
  2.1 Filtering problem
  2.2 Discrete-Time Random Processes
    2.2.1 ARMA processes
    2.2.2 AR processes
    2.2.3 Correlation
    2.2.4 Properties of the noise
  2.3 Kalman filter
    2.3.1 The Kalman gain K(k)
    2.3.2 Initial values
    2.3.3 Kalman filter for noise reduction
  2.4 Wiener filter
  2.5 Performance metrics
    2.5.1 Normalized Mean Square Error
    2.5.2 Signal to Noise Ratio

3 Hardware implementation
  3.1 Prototype
  3.2 Analog signal chain
  3.3 Microcontroller Unit
  3.4 User interface

4 Algorithm implementation
  4.1 Filter structure
  4.2 Output structure
  4.3 The input to output data transfer
  4.4 Optimal filters
    4.4.1 Kalman filter
    4.4.2 Wiener filter
  4.5 User interface

5 Results
  5.1 MATLAB simulations
    5.1.1 Kalman filter
    5.1.2 Wiener filter
  5.2 Real time results
    5.2.1 Kalman filter
    5.2.2 Wiener filter
  5.3 User interface

6 Conclusions and future work
  6.1 Kalman filter
  6.2 Wiener filter

Appendices
  A Conversion to state space form
  B Derivation of cross-correlation function

Abbreviations

ADC - Analog to Digital Converter.
AR - Autoregressive.
ARMA - Autoregressive Moving Average.
CPU - Central Processing Unit.
DAC - Digital to Analog Converter.
DMA - Direct Memory Access.
FIR - Finite Impulse Response.
FPU - Floating Point Unit.
GPIO - General Purpose Input/Output.
HDI - Human Device Interface.
IDE - Integrated Development Environment.
ISR - Interrupt Service Routine.
LCD - Liquid Crystal Display.
LSI - Linear Shift Invariant.
MCU - Microcontroller Unit.
NMSE - Normalized Mean Square Error.
SNR - Signal to Noise Ratio.


1 Introduction

1.1 Background

Oftentimes, signals are observed in white noise, which can make them unusable. Since white noise covers the entire frequency spectrum, ordinary frequency selective filters may not be sufficient to filter out the noise while keeping the desired signal in a satisfactory way. For these situations, optimal filtering can be applied.

In 2016, a Universal Filter Bank was developed as a Master Thesis project by Mats Larsson [3].

The Filter Bank is a handheld box with two inputs and two outputs and a user interface which lets a user configure frequency selective digital filters. The Filter Bank lets the user attenuate unwanted frequency components in a signal in an easy way without the need for an additional system, such as a PC.

The purpose of this thesis project is to develop and implement two optimal filter algorithms for noise reduction in the above mentioned Filter Bank. These are the Kalman filter and the Wiener filter.

The main focus of this report is the optimal filtering algorithms and how they are implemented in software. The system which they are implemented in is described briefly. For a detailed description of the hardware system design in the Filter Bank, the reader is referred to [3].


Figure 1.1: The Filter Bank prototype [3].

The Filter Bank prototype is shown in Figure 1.1. It has two BNC connectors for the inputs on the left and two for the outputs on the right, an LCD and a 4x4 keypad that together make up an HDI for the user to specify the desired digital filter.

1.2 Purpose and goals

The project work, in particular, involves the development of the algorithms for the Kalman and Wiener digital filters and their implementation on a 32-bit microcontroller in C. The goal of the project is that the user can select and specify the optimal filters on the Filter Bank prototype, and that the prototype then implements filters that work according to these specifications.

1.3 Tasks and methodology

The two main steps to achieve the goal of the project are MATLAB development and C code implementation.

The MATLAB development is an easy way to verify that the algorithms work as intended. In MATLAB, some parameters in the filters are varied to see how they affect the performance of the algorithms. Once the algorithms performed well enough in the MATLAB simulations, they were converted into C code so that they could be implemented in the existing Filter Bank code.

The C code was first implemented on the development kit presented in section 3.3. When the results from the development kit were similar to the MATLAB simulations, the code was programmed onto the Filter Bank. The user interface on the Filter Bank was also programmed.

The noisy signals that are treated in this report are AR and ARMA random processes observed in noise. The signals that are analyzed are created in MATLAB. To analyze these signals in real time, the MATLAB data is imported and sent through the Arbitrary Waveform Generator in the PicoScope 2207A [6]. The result is then viewed in the PicoScope oscilloscope.

1.4 Thesis outline

In section 2, theory regarding random processes and optimal filtering is covered. In section 3, the hardware parts are presented. In section 4, the code framework that is used to implement filters from user input is explained. The results from MATLAB simulations as well as real time results are shown in section 5. Some conclusions and suggested future work are discussed in section 6.


2 Theory

2.1 Filtering problem

A filter is a system that is designed to process an observed signal into a desired signal. Filters can be used to separate two signals mixed in one measurement, transform signals into different domains or, as in this case, recover a desired signal that is observed in noise. The filters considered in this project are so-called optimal linear discrete-time filters. These filters are optimal because they are designed based on a criterion called the Minimum Mean Square Error criterion, abbreviated MMSE. This criterion states that an optimal filter minimizes the mean square of the difference between the processed and the desired signal. Equivalently, they provide the best estimate of the desired signal from noisy observations.

The goal of the filter is to estimate a desired signal based on noisy observations of that signal such that the MMSE criterion is fulfilled. The observation of a signal x(n) can be expressed in the following way

y(n) = x(n) + v(n)     (2.1)

where y(n) is the observation, x(n) is the desired signal and v(n) is measurement noise. Here n is a non-negative integer indexing time instances in the digital domain. Stated mathematically, an optimal filter performs the task of determining the best estimate of x(n), denoted \hat{x}(n), from its observation y(n) in (2.1) by minimizing E[|x(n) - \hat{x}(n)|^2].

2.2 Discrete-Time Random Processes

The theory regarding discrete-time random processes is based on [1].

The signals dealt with in this project are random signals of real values. Random signals are also called random processes. A discrete-time random process x(n) is a collection of discrete-time signals xk(n), where k is an integer. Each discrete-time signal xk(n) is a single realization of that process.

At a certain time instance (n = n_0) the process x(n_0) becomes a random variable which depends only on the realizations x_k(n_0) and may take any of the possible values according to some probability distribution. So, a discrete-time random process is just an indexed sequence of random variables, and the knowledge on these can be used to treat random processes.

In this report, the random processes analyzed will be Wide Sense Stationary (WSS) processes.

A random process x(n) is WSS if it fulfills the following three conditions:

1. The mean of the process is independent of n, i.e. m_x(n) = m_x.

2. The autocorrelation r_x(k, l) = E[x(k) x^*(l)] depends only on the difference k - l.

3. The variance of the process is finite, c_x(0) < ∞.

If two random processes x and y are WSS and their cross-correlation depends only on the difference k - l, r_xy(k, l) = r_xy(k - l), then x and y are said to be jointly WSS.

Two special types of processes that can be generated by filtering white noise with a Linear Shift-Invariant (LSI) filter with a rational transfer function are considered. These two types are called autoregressive moving average (ARMA) processes and autoregressive (AR) processes.

2.2.1 ARMA processes

An ARMA process is generated by filtering white noise w(n) through a causal linear shift-invariant (LSI) filter with a rational system function of the form

$$H(z) = \frac{B_q(z)}{A_p(z)} = \frac{\sum_{k=0}^{q} b_q(k)\, z^{-k}}{1 + \sum_{k=1}^{p} a_p(k)\, z^{-k}} \qquad (2.2)$$

The characteristics of the filter are determined by the polynomials B_q(z) and A_p(z), which have coefficients b_q and a_p respectively.

[Block diagram: white noise w(n) with PSD P_w(z) = \sigma_w^2 is passed through H(z), producing x(n) with PSD P_x(z).]

Figure 2.2: A random process is generated by filtering white noise.

Generating the random process is illustrated in Figure 2.2, written in the Z-domain. Taking the inverse Z-transform gives Z^{-1}[P_w(z)] = w(n) and Z^{-1}[P_x(z)] = x(n). The white noise w(n) is called process noise and has variance \sigma_w^2.

A filter with this form will have p poles and q zeros and the output is referred to as an ARMA(p,q) process. If the filter H(z) is stable, the output x(n) is WSS.

Applying an input and output to the filter in equation (2.2) and taking the inverse Z-transform gives the following difference equation:

$$x(n) + \sum_{k=1}^{p} a_p(k)\, x(n-k) = \sum_{k=0}^{q} b_q(k)\, w(n-k) \qquad (2.3)$$

2.2.2 AR processes

An AR process is a special case of an ARMA process where q = 0. In this case the LSI filter is

$$H(z) = \frac{b(0)}{1 + \sum_{k=1}^{p} a_p(k)\, z^{-k}} \qquad (2.4)$$

This filter has no zeros and p poles, and the output is referred to as an AR(p) process. Input and output are related by the following difference equation:

$$x(n) + \sum_{k=1}^{p} a_p(k)\, x(n-k) = b(0)\, w(n) \qquad (2.5)$$
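As an illustration of how such processes can be simulated, the following C sketch generates an AR(3) realization according to the difference equation (2.5) and forms the noisy observation y(n) = x(n) + v(n) from (2.1). The coefficients, the variances and the Box-Muller noise generator are choices made here for illustration only; the thesis itself generates its test signals in MATLAB (section 5.1).

```c
/* Sketch: simulate an AR(3) realization x(n) according to (2.5) and form
 * the noisy observation y(n) = x(n) + v(n) from (2.1). The coefficients,
 * variances and Box-Muller noise generator are illustrative choices only. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Zero-mean Gaussian sample with the given variance (Box-Muller). */
static double gauss(double variance)
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(variance) * sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

int main(void)
{
    enum { P = 3, N = 1000 };
    const double a[P] = { -0.5, 0.2, -0.1 };   /* a_p(1..p), illustrative values */
    const double b0 = 1.0;
    const double sigma2_w = 1.0;               /* process noise variance         */
    const double sigma2_v = 0.5;               /* measurement noise variance     */
    double x[N] = { 0.0 };

    for (int n = 0; n < N; n++) {
        /* x(n) = -sum_k a_p(k) x(n-k) + b(0) w(n), cf. (2.5) */
        double xn = b0 * gauss(sigma2_w);
        for (int k = 1; k <= P && n - k >= 0; k++)
            xn -= a[k - 1] * x[n - k];
        x[n] = xn;

        double y = x[n] + gauss(sigma2_v);     /* noisy observation, cf. (2.1)   */
        printf("%f %f\n", x[n], y);
    }
    return 0;
}
```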

2.2.3 Correlation

The autocorrelation of a random process x, which is needed for obtaining the Wiener filter, is defined as

r_x(k, l) = E[x(k) x^*(l)]     (2.6)

where ^* denotes complex conjugate.

For a WSS random process the autocorrelation depends only on the difference k - l and the notation can be simplified to

r_x(k) = E[x(n) x^*(n - k)]     (2.7)

where n is an arbitrary time instance.

Also, for a WSS process the autocorrelation is conjugate symmetric,

r_x(k) = r_x^*(-k)     (2.8)

If the process is real, then

r_x(k) = r_x(-k)     (2.9)

An estimate of the auto-correlation sequence of length p is calculated from a real-valued data sequence, x, of N samples as

$$\hat{r}_x(k) = \frac{1}{N-k} \sum_{i=1}^{N-k} x(i)\, x(i+k), \qquad k = 0, 1, 2, \dots, p-1 \qquad (2.10)$$

where N > p. If N and k are close, the autocorrelation estimate is based on few samples and becomes inaccurate, so N ≫ k is preferable.

The auto-correlation matrix of a WSS random process x with an auto-correlation sequence of length p is defined as

$$R_x = \begin{bmatrix} r_x(0) & r_x^*(1) & \cdots & r_x^*(p) \\ r_x(1) & r_x(0) & \cdots & r_x^*(p-1) \\ r_x(2) & r_x(1) & \cdots & r_x^*(p-2) \\ \vdots & \vdots & \ddots & \vdots \\ r_x(p) & r_x(p-1) & \cdots & r_x(0) \end{bmatrix} \qquad (2.11)$$

Using the properties in Equations (2.8) and (2.9), the auto-correlation matrix of a real-valued WSS random process can be written as

$$R_x = \begin{bmatrix} r_x(0) & r_x(1) & \cdots & r_x(p) \\ r_x(1) & r_x(0) & \cdots & r_x(p-1) \\ r_x(2) & r_x(1) & \cdots & r_x(p-2) \\ \vdots & \vdots & \ddots & \vdots \\ r_x(p) & r_x(p-1) & \cdots & r_x(0) \end{bmatrix} \qquad (2.12)$$

A matrix with this form is called a symmetric Toeplitz matrix.
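A direct translation of the estimator (2.10) into C might look as follows. This is a sketch: the function name and the use of double precision are assumptions made here, while the Filter Bank implementation works with 32-bit floats (section 4.4).

```c
#include <stddef.h>

/* Estimate r_x(k) for k = 0..p-1 from N real-valued samples, as in (2.10).
 * x holds N samples, r must have room for p values, and N should be much
 * larger than p for the estimate to be reliable. */
void autocorr_estimate(const double *x, size_t N, double *r, size_t p)
{
    for (size_t k = 0; k < p; k++) {
        double sum = 0.0;
        for (size_t i = 0; i + k < N; i++)   /* N - k products               */
            sum += x[i] * x[i + k];
        r[k] = sum / (double)(N - k);        /* divide by N - k as in (2.10) */
    }
}
```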

2.2.4 Properties of the noise

All noise sources studied in this report are Gaussian white noise sources.

These are created using the MATLAB function randn.

The autocorrelation function, r_v(k), of a white noise process v(n) with variance \sigma_v^2 is defined as

$$r_v(k) = \begin{cases} \sigma_v^2, & \text{if } k = 0,\\ 0, & \text{if } k \neq 0. \end{cases} \qquad (2.13)$$

The Power Spectral Density (PSD), P, of a white noise process has a constant frequency spectrum equal to

$$P(e^{j\omega}) = \sigma_v^2 \qquad (2.14)$$

Figure 2.3: Autocorrelation function estimate r_v(k) (plotted against lag index k) of a zero mean white noise process, v(n), with unit variance. Calculated with the MATLAB function xcorr on 100,000 samples.

Figure 2.4: Power Spectral Density estimate (in dB/Hz, plotted against normalized frequency) of a zero mean white noise process, v(n), with unit variance. Calculated with the MATLAB function periodogram on 100,000 samples.

Figures 2.3 and 2.4 show estimates of (2.13) and (2.14) respectively.


2.3 Kalman filter

The theory on the Kalman filter is based on [2].

The Kalman filter presented in this section is the discrete-time Kalman filter.

All Kalman filtering problems are formulated in state-space form, consisting of the state and observation equations expressed as follows.

x(k + 1) = Ax(k) + Bu(k) + w(k) (2.15)

y(k) = Cx(k) + v(k) (2.16)

If the system is of order p, the number of measured output signals is n and the number of input signals is m, then A is p × p, B is p × m, C is n × p, x(k) is p × 1, u(k) is m × 1, w(k) is p × 1, y(k) is n × 1 and v(k) is n × 1.

Note that, in this project, the input vector u(k) is the zero vector. So, the B matrix and the u(k) vector are excluded from the Kalman filter algorithm.

v(k) is a zero mean white noise process with covariance matrix R_2, which is mathematically described as

E[v(k)] = 0,   E[v(k)v^T(k)] = R_2,   E[v(k)v^T(l)] = 0 for k ≠ l.

w(k) is a zero mean white noise process with covariance matrix R_1 and is uncorrelated with v(k):

E[w(k)] = 0,   E[w(k)w^T(k)] = R_1,   E[w(k)w^T(l)] = 0 for k ≠ l,   E[w(k)v^T(l)] = 0.

The Kalman algorithm is designed in such a way that the error covariance matrix P of the estimate \hat{x}(k|k-1) is minimized at each time step k,

$$P(k|k-1) = E\big[(\hat{x}(k|k-1) - x(k))(\hat{x}(k|k-1) - x(k))^T\big] \qquad (2.17)$$

The index (k|k-1) stands for the estimate at time k based on samples up to time k-1.

The Kalman filter algorithm consists of two steps: correction and prediction.

1. The predicted estimate \hat{x}(k|k-1) and its covariance matrix P(k|k-1) are known from the previous time step. When the measurement y(k) becomes available, a correction is made to the predicted estimate, giving \hat{x}(k|k) and P(k|k).

2. Predictions are made for the next time step: \hat{x}(k+1|k) and P(k+1|k).

The estimate equations that minimize (2.17) are shown below.

1. Correction:

$$K(k) = P(k|k-1)\,C^T\big(R_2 + C\,P(k|k-1)\,C^T\big)^{-1}$$
$$\hat{x}(k|k) = \hat{x}(k|k-1) + K(k)\big(y(k) - C\,\hat{x}(k|k-1)\big)$$
$$P(k|k) = P(k|k-1) - P(k|k-1)\,C^T\big(C\,P(k|k-1)\,C^T + R_2\big)^{-1}C\,P(k|k-1)$$

2. Prediction:

$$\hat{x}(k+1|k) = A\,\hat{x}(k|k)$$
$$P(k+1|k) = A\,P(k|k)\,A^T + R_1$$

Figure 2.5: The Kalman algorithm.

The Kalman algorithm is illustrated in Figure 2.5.
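To make the structure of the algorithm in Figure 2.5 concrete, the sketch below implements one correction/prediction cycle for the simplest possible case, a scalar state (for example an AR(1) signal observed in noise), so that all matrix operations reduce to ordinary arithmetic. The type and function names are chosen here for illustration; the Filter Bank implementation instead uses matrix operations from the CMSIS DSP library (section 4.4.1).

```c
/* One time step of a scalar Kalman filter (state dimension p = 1), following
 * the correction and prediction equations of Figure 2.5. For an AR(1) signal
 * observed in noise: A = -a(1), C = b(0), R1 = sigma_w^2, R2 = sigma_v^2.
 * Illustrative sketch only; all names are chosen here, not taken from the
 * Filter Bank code. */
typedef struct {
    double A, C, R1, R2;   /* model parameters                */
    double x_pred;         /* predicted estimate x_hat(k|k-1) */
    double P_pred;         /* predicted covariance P(k|k-1)   */
} kalman1;

/* Process one measurement y(k) and return the filtered estimate x_hat(k|k). */
double kalman1_step(kalman1 *kf, double y)
{
    /* Correction */
    double K     = kf->P_pred * kf->C / (kf->R2 + kf->C * kf->P_pred * kf->C);
    double x_cor = kf->x_pred + K * (y - kf->C * kf->x_pred);
    double P_cor = kf->P_pred - K * kf->C * kf->P_pred;

    /* Prediction for the next time step */
    kf->x_pred = kf->A * x_cor;
    kf->P_pred = kf->A * P_cor * kf->A + kf->R1;

    return x_cor;
}
```

With the initial values of section 2.3.2, x_pred is started at 0 and P_pred at 1.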

2.3.1 The Kalman gain K(k)

The Kalman gain K(k) is chosen in an optimal way to update the state vector based on the new measurement that has become available. It is seen to depend on the covariance matrices R_1 and R_2. These are generally not known and can be treated as design variables which can be adjusted until satisfactory performance is achieved. If R_1 is "large" compared to R_2 we have a "large" gain K(k). In this case, the process noise is "large" compared to the measurement noise and more trust is placed on the measurements than on the model. If R_1 is "small" compared to R_2, more trust is placed on the model than on the measurements and the state will follow the model more than the measurements.

2.3.2 Initial values

Before the algorithm starts, initial values of \hat{x} and P must be chosen, denoted \hat{x}_0 and P_0 respectively. If one is confident that the initial states are close to the true states of the system, a "small" P_0 can be chosen. If the initial guess has more uncertainty, a "bigger" P_0 should be chosen. If no knowledge of the initial states is available, a common choice is to choose P_0 as the identity matrix, I_{p×p}.


2.3.3 Kalman filter for noise reduction

In order to perform noise reduction with the Kalman filtering algorithm the signal observation equation (2.1) needs to be converted into state space form, (2.15) and (2.16).

For an ARMA model observed in noise, the matrices written on controllable canonical form are given below:

$$A = \begin{bmatrix} -a(1) & -a(2) & \cdots & -a(p-1) & -a(p) \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}, \qquad C = \begin{bmatrix} b(0) & b(1) & \cdots & b(p-1) \end{bmatrix}$$

$$R_1 = \begin{bmatrix} \sigma_w^2 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad R_2 = \sigma_v^2.$$

where the a's and b's are the AR and MA coefficients respectively, and \sigma_w^2 and \sigma_v^2 are the variances of the process and the measurement noise respectively. These parameters are generally not known and must be replaced with estimates: \hat{a}, \hat{b}, \hat{\sigma}_w^2 and \hat{\sigma}_v^2.

If instead an AR model is observed in noise, all the matrices are the same except for the C vector, which is now C = [ b(0)  0  ⋯  0 ].

A derivation of how to convert the signal observation equation into state space form and how the matrices are written on observable canonical form is found in appendix A.
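As a concrete illustration of how these matrices can be filled in for an AR model observed in noise, the sketch below builds A (row-major), C, R_1 and R_2 from estimated AR coefficients on controllable canonical form. The function and parameter names are assumptions made here for illustration; in the Filter Bank code the matrices live inside the Filter structure described in section 4.1.

```c
#include <string.h>

/* Sketch: fill the p x p matrix A (row-major), the 1 x p vector C, the p x p
 * matrix R1 and the scalar R2 for an AR(p) model observed in noise, written
 * on controllable canonical form as in section 2.3.3. All names are
 * illustrative. a holds the estimated AR coefficients a(1)..a(p). */
void ar_state_space(const double *a, double b0, double sigma2_w,
                    double sigma2_v, int p,
                    double *A, double *C, double *R1, double *R2)
{
    memset(A,  0, (size_t)p * p * sizeof *A);
    memset(C,  0, (size_t)p * sizeof *C);
    memset(R1, 0, (size_t)p * p * sizeof *R1);

    for (int k = 0; k < p; k++)
        A[k] = -a[k];              /* first row: -a(1) ... -a(p)           */
    for (int i = 1; i < p; i++)
        A[i * p + (i - 1)] = 1.0;  /* shifted identity below the first row */

    C[0]  = b0;                    /* C = [ b(0) 0 ... 0 ] for an AR model */
    R1[0] = sigma2_w;              /* sigma_w^2 in the top-left corner     */
    *R2   = sigma2_v;
}
```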


2.4 Wiener filter

The Wiener filter theory is based on [1].

The Wiener filter presented in this section is the Finite Impulse Response (FIR) causal Wiener filter.

A signal x(n) is observed in zero mean white noise v(n).

y(n) = x(n) + v(n) (2.18)

y and x are assumed jointly WSS. An estimate of x(n), denoted \hat{x}(n), is made by filtering the observed signal with the (p-1)th order FIR Wiener filter coefficients as

$$\hat{x}(n) = \sum_{k=0}^{p-1} w(k)\, y(n-k) \qquad (2.19)$$

The coefficients, w, are found by minimizing the mean square error, E[|e(n)|^2] = E[|x(n) - \hat{x}(n)|^2].

This is done by solving the following equations for w:

$$\sum_{l=0}^{p-1} w(l)\, r_y(k-l) = r_{xy}(k), \qquad k = 0, 1, \dots, p-1 \qquad (2.20)$$

where r_y is the auto-correlation of y and r_xy is the cross-correlation of x and y. These equations are known as the Wiener-Hopf equations.

These equations can be written more compactly in matrix form,

R_y w = r_xy     (2.21)

and the coefficient vector w is found by matrix inversion:

w = R_y^{-1} r_xy     (2.22)

For real-valued signals, R_y is the auto-correlation matrix in Equation (2.12). An estimate of R_y can be found by calculating the auto-correlations as in (2.10). r_xy can be found as follows:

$$r_{xy}(k) = \begin{cases} r_y(k) - \sigma_v^2, & \text{if } k = 0,\\ r_y(k), & \text{if } k \neq 0, \end{cases} \qquad (2.23)$$

where v and x are assumed uncorrelated and \sigma_v^2 is the variance of the noise v. So, r_xy contains the same auto-correlation sequence used in R_y except that the variance of the noise is subtracted from r_y(0). A derivation of Equation (2.23) can be found in appendix B.
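These steps translate into code in a straightforward way: build the symmetric Toeplitz matrix R_y from the estimated autocorrelation, form r_xy by subtracting the noise variance at lag zero as in (2.23), and solve (2.21). The sketch below does the solve with plain Gaussian elimination; this solver and all names are choices made here for illustration, while the Filter Bank implementation relies on the CMSIS DSP library for its matrix operations (section 4.4).

```c
#include <math.h>
#include <stdlib.h>

/* Solve the p x p system M w = rhs by Gaussian elimination with partial
 * pivoting. M (row-major) and rhs are destroyed. Returns 0 on success. */
static int solve_linear(double *M, double *rhs, double *w, int p)
{
    for (int col = 0; col < p; col++) {
        int piv = col;
        for (int r = col + 1; r < p; r++)
            if (fabs(M[r * p + col]) > fabs(M[piv * p + col])) piv = r;
        if (fabs(M[piv * p + col]) < 1e-12) return -1;   /* (nearly) singular */
        for (int c = 0; c < p; c++) {                    /* swap rows col/piv */
            double t = M[col * p + c];
            M[col * p + c] = M[piv * p + c];
            M[piv * p + c] = t;
        }
        double t = rhs[col]; rhs[col] = rhs[piv]; rhs[piv] = t;
        for (int r = col + 1; r < p; r++) {              /* eliminate below   */
            double f = M[r * p + col] / M[col * p + col];
            for (int c = col; c < p; c++) M[r * p + c] -= f * M[col * p + c];
            rhs[r] -= f * rhs[col];
        }
    }
    for (int r = p - 1; r >= 0; r--) {                   /* back substitution */
        double s = rhs[r];
        for (int c = r + 1; c < p; c++) s -= M[r * p + c] * w[c];
        w[r] = s / M[r * p + r];
    }
    return 0;
}

/* Compute the FIR Wiener coefficients w(0..p-1) from the estimated
 * autocorrelation r_y(0..p-1) of the noisy signal and the assumed known
 * noise variance sigma2_v, following (2.21)-(2.23). Ry (p*p) and rxy (p)
 * are caller-provided scratch buffers. */
int wiener_design(const double *ry, double sigma2_v, int p,
                  double *Ry, double *rxy, double *w)
{
    for (int i = 0; i < p; i++)
        for (int j = 0; j < p; j++)
            Ry[i * p + j] = ry[abs(i - j)];              /* Toeplitz, cf. (2.12) */
    for (int k = 0; k < p; k++)
        rxy[k] = (k == 0) ? ry[0] - sigma2_v : ry[k];    /* cf. (2.23)           */
    return solve_linear(Ry, rxy, w, p);
}
```

The estimate \hat{x}(n) is then obtained by running the resulting FIR filter (2.19) over the incoming samples.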

2.5 Performance metrics

2.5.1 Normalized Mean Square Error

The performance of the output from the filtering algorithms is measured by the Normalized Mean Square Error (NMSE),

$$NMSE = \frac{E[|x(n) - \hat{x}(n)|^2]}{E[|x(n)|^2]} \qquad (2.24)$$

which is the mean square error normalized by the mean square of the signal. Since in practice often only one realization of a random signal is available over a limited time interval, the means are approximated by time averages of the single realization over the given time interval.

2.5.2 Signal to Noise Ratio

The quality of the signal in a noisy measurement is measured with the Signal to Noise Ratio (SNR).

For a measurement of a digital signal x(i) in noise v(i) with N samples this is calculated as

$$SNR = 10 \log_{10}\!\left( \frac{\sum_{i=1}^{N} |x(i)|^2}{\sum_{i=1}^{N} |v(i)|^2} \right) \qquad (2.25)$$
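Both metrics are easy to compute from sample sums; the following C sketch shows one possible implementation (the function names are chosen here for illustration, and the NMSE is computed with time averages as described in section 2.5.1).

```c
#include <math.h>
#include <stddef.h>

/* NMSE of an estimate x_hat with respect to the true signal x over N samples:
 * mean square error normalized by the mean square of the signal (time
 * averages over a single realization, cf. section 2.5.1). */
double nmse(const double *x, const double *x_hat, size_t N)
{
    double err = 0.0, sig = 0.0;
    for (size_t i = 0; i < N; i++) {
        err += (x[i] - x_hat[i]) * (x[i] - x_hat[i]);
        sig += x[i] * x[i];
    }
    return err / sig;
}

/* SNR in dB of a signal x observed in noise v, as in (2.25). */
double snr_db(const double *x, const double *v, size_t N)
{
    double ps = 0.0, pn = 0.0;
    for (size_t i = 0; i < N; i++) {
        ps += x[i] * x[i];
        pn += v[i] * v[i];
    }
    return 10.0 * log10(ps / pn);
}
```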


3 Hardware implementation

3.1 Prototype

Figure 3.1: A system overview [3].

In Figure 3.1 all the hardware parts in the Filter Bank prototype are shown. The parts of the system which are dealt with in this project are the CPU and the HDI. The CPU is programmed with the algorithms and the HDI is programmed so that a user can choose the Wiener and Kalman filters and specify parameters in those filters.


3.2 Analog signal chain

The entire analog signal chain from input to output is shown in Figure 3.2. The different parts are described briefly below. How these are designed is explained in [3].

Figure 3.2: Analog signal chain [3].

• DC-blocker In order to keep the input voltage inside the ADC voltage range (0-3.3V) a high-pass Bessel filter with a cut-off frequency of 1Hz is used to filter out any DC component in the incoming signal.

• DC-adaption To be able to use the entire range of the ADC, a DC-offset of 3.3/2=1.65 V is added to the incoming signal. This is done using a differential amplifier circuit.

• AAF1-3 According to the Nyquist sampling theorem, frequency components at frequencies above half of the sampling frequency, fs/2, will cause aliasing distortion. So, any frequency components above fs/2 are filtered out using one of the three Anti-Aliasing Filters (AAF), depending on which pre-defined sampling frequency is chosen. If a custom sampling frequency is chosen, the signal passes by without filtering. To route the signal through the right filter, Multiplexers (MUX) and Demultiplexers (DMUX) are used. The AAF filters are Bessel filters.

• RECF1-3 The sampling creates frequency components in the signal at the sampling frequency and its harmonics. These frequency components are filtered out after the DAC using any of the Reconstruction Filters (RECF) depending on what pre-defined sampling frequency is chosen.

If a custom sampling frequency is chosen no Reconstruction Filter is used. These filters are identical to the Anti-Aliasing filters.

• Buffer The maximum output current is increased from 25mA, which is the maximum output current from the DAC, to 250mA using a voltage follower circuit.

• Current limiter To reduce the risk of damaging the circuit in case of a short-circuit at the output, a resettable fuse is used to prevent the output current to exceed 200mA.


3.3 Microcontroller Unit

The filtering algorithms are implemented on the STM32F407 microcontroller unit from ST Microelectronics. It has an ARM 32-bit Cortex-M4 CPU with a maximum clock frequency of 168 MHz and a Floating Point Unit (FPU). It is equipped with 3 12-bit Analog to Digital Converters (ADC) and 2 12-bit Digital to Analog Converters (DAC), all operating in the voltage range 0-3.3V. The ADCs have a maximum sampling frequency of 2.4 MSPS and the DACs have a maximum sampling frequency of 1 MSPS.

The development of the C code was done on the STM32F4-Discovery, which is shown in Figure 3.3. It is a development kit equipped with a STM32F407 microcontroller, a large number of General-Purpose Input/Output (GPIO) pins, LED lights etc. It was used as a test platform for the algorithms.

Figure 3.3: STM32F4-Discovery [5].

3.4 User interface

The options shown to the user are displayed on a Liquid Crystal Display (LCD) [8]. The display is shown in Figure 3.4. The LCD is programmed with the help of a library [10].

Figure 3.4: The LCD display [7].

In Figure 3.5 the 4x4 keypad used for user input is shown.


Figure 3.5: The keypad [9].


4 Algorithm implementation

The C programming language is used for implementing the noise reduction algorithms in the Filter Bank. The IDE used for development of the algorithms is the free-of-charge System Workbench toolchain called SW4STM32, which is based on Eclipse and built by AC6.

Two data types called structure and pointer are important for the filter implementation. A structure is a user-defined data type which can combine data items of different data types. As an example, a structure called "Book" can include a variable called "Author" of data type char and another variable called "Book ID" of data type int. A pointer is a variable whose value is the memory address of another variable. Pointers are needed when a function must change the value of a variable that lives outside the function; if pointers are not used, the change is not visible outside the scope of the function. Pointers are also needed when making dynamic memory allocations. A pointer variable called "var" is declared with an asterisk, *var.
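The "Book" example from the paragraph above can be written as the small C snippet below, which also shows how a pointer lets a function change a variable outside its own scope. The snippet is only an illustration and is not part of the Filter Bank code.

```c
#include <stdio.h>

/* A structure combines data items of different data types. */
struct Book {
    char Author[32];
    int  Book_ID;
};

/* The pointer parameter lets the function change the caller's variable. */
void set_id(struct Book *var, int new_id)
{
    var->Book_ID = new_id;      /* the change is visible outside this function */
}

int main(void)
{
    struct Book b = { "Hayes", 1 };
    set_id(&b, 42);
    printf("%s %d\n", b.Author, b.Book_ID);   /* prints: Hayes 42 */
    return 0;
}
```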

Two different structures are used to connect the input to the output through the filters: a Filter structure and an Output structure.

4.1 Filter structure

Each filter is represented by a Filter structure. Each Filter structure contains variables with properties specific to each filter. It also contains input and output variables.

The Filter structure contains an array of pointer variables in order to retrieve the input signal(s) to the filter. The array of pointers points to the output variables of other filters or to either of the two ADC data registers. A function sums up the values pointed to by the array to get the input to the filter. The output of the filter is stored in the output variable.

The Filter structure is illustrated in Figure 4.1. Here, only some of the relevant variables for the Kalman filter and the Wiener filter are included. For example, the A, P and x variables are matrices in the Kalman algorithm, measurement noise and process noise are the variances of the measurement and process noise respectively, and wiener coeff is a vector with Wiener coefficients.

Figure 4.1: Filter structure.


4.2 Output structure

Two Output structures are used, one for each DAC. An Output structure contains an array of pointers which point to the output variables of one or more filters. A function sums up all of the output variables in the array of pointers and writes the sum to the corresponding DAC data register.

The Output structure is illustrated in Figure 4.2.

Figure 4.2: Output structure.
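Based on the descriptions in sections 4.1 and 4.2 and on Figures 4.1 and 4.2, the two structures might be declared roughly as follows. All field names, types and the input limit are assumptions made here for illustration; the actual declarations in the Filter Bank code may differ.

```c
#include <stdint.h>

#define MAX_INPUTS 4   /* assumed limit on the number of summed sources */

/* Sketch of the Filter structure (cf. Figure 4.1). */
typedef struct {
    float  *inputs[MAX_INPUTS];  /* point to other filters' output variables or
                                    to variables holding the latest ADC samples */
    int     n_inputs;
    float   output;              /* this filter's output sample                 */

    /* Kalman-related fields (matrices stored as flat float arrays). */
    float  *A, *P, *x;
    float   measurement_noise;   /* estimate of sigma_v^2                       */
    float   process_noise;       /* estimate of sigma_w^2                       */

    /* Wiener-related fields. */
    float  *wiener_coeff;        /* FIR Wiener coefficients                     */
    int     order;
} Filter;

/* Sketch of the Output structure (cf. Figure 4.2). */
typedef struct {
    float             *sources[MAX_INPUTS];  /* point to filter output fields   */
    int                n_sources;
    volatile uint32_t *dac_data_register;    /* destination DAC data register   */
} Output;
```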

4.3 The input to output data transfer

The program uses a timer to trigger the ADC to start a conversion. Each time a conversion is done, the converted value is transferred to a location in memory using DMA.

DMA stands for Direct Memory Access and is a way of moving data between memory locations without using the CPU.

In order to make sure that the filters are not missing any input samples, the filters must have time to calculate the output before the next input sample is available. This is solved in the following way. Each time a converted value has been moved from the ADC to the location in memory, a flag is set and an Interrupt Service Routine (ISR) is entered. If the code in the ISR is not executed before a new flag is set, the outputs are set to zero and the error message "Overload" is displayed on the LCD screen. A high order filter will take a longer time to execute and may not be done calculating within the sampling period. So, there is a trade-off between filter complexity and sampling frequency.
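The overload check described above can be sketched as follows. The flag and the helper functions are hypothetical and only illustrate the idea: if a new sample arrives before the previous one has been processed, the outputs are set to zero and an error message is shown.

```c
#include <stdbool.h>

/* Hypothetical helpers, assumed to exist elsewhere in the firmware. */
void set_outputs_to_zero(void);
void lcd_print(const char *msg);
void process_filters(void);
void write_dac_outputs(void);

static volatile bool busy = false;  /* true while a sample is being processed */

/* Hypothetical ISR entered after DMA has moved a new ADC sample to memory. */
void adc_sample_isr(void)
{
    if (busy) {
        /* The previous sample has not been processed yet: the configured
         * filters are too slow for the chosen sampling frequency. */
        set_outputs_to_zero();
        lcd_print("Overload");
        return;
    }
    busy = true;
    process_filters();      /* run all configured Filter structures           */
    write_dac_outputs();    /* sum filter outputs into the DAC data registers */
    busy = false;
}
```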

Figure 4.3 shows pointers and data transfers in the Filter Bank.

[Figure 4.3 shows the data flow: the ADC1 and ADC2 data registers are moved by DMA into Filter structures, whose outputs are collected by the Output1 and Output2 structures and written to the DAC1 and DAC2 output registers; the figure distinguishes pointers from data transfers.]

Figure 4.3: The pointers and data transfers in the Filter Bank [3].

4.4 Optimal filters

Since both the Wiener and Kalman filters are built on matrix operations, a C library which handles this is needed. For this, a C library called "CMSIS DSP Software Library" by ARM Ltd. [4] is used.

This library contains a number of signal processing functions for use on Cortex-M processor based devices. These functions have been optimized for the data types that are supported by Cortex-M processor devices. In this project the 32-bit floating point data type is used.

4.4.1 Kalman filter

Before the first input sample is processed in the Kalman algorithm, all the matrices are initialized.

Matrices are dynamically allocated different amounts of memory space using the function calloc, depending on the order of the system specified by the user. After this the matrices are set to their initial values. When an AR model is chosen to be filtered, the system is written on controllable canonical form. If an ARMA model is chosen to be filtered, the system is written on observable canonical form. The initial values are always set to P_0 = I (the identity matrix) and \hat{x}_0 = [ 0  0  ⋯  0 ]^T.

4.4.2 Wiener filter

Before the Wiener filter algorithm starts filtering, it fills a buffer vector of 1000 input values.

This buffer is then used to calculate the auto-correlation function. The auto-correlation matrix is dynamically allocated memory space using calloc and then filled with the auto-correlation function as in equation (2.12). When the Wiener coefficients has been solved for, the filtering starts and output is added to the output variable. When filtering, the Wiener filter needs access to previous
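A minimal sketch of the allocation and initialization step for the Kalman matrices, using calloc and the initial values P_0 = I and x_0 = 0, with the size taken from the user-specified order p. The function name and the use of flat float arrays are assumptions made here; the Filter Bank code stores its matrices in the Filter structure and operates on them with CMSIS DSP routines (section 4.4).

```c
#include <stdlib.h>

/* Sketch: allocate and initialize the p x p error covariance P (identity)
 * and the p x 1 state estimate x (zeros) with calloc, as described in
 * section 4.4.1. Returns 0 on success, -1 if allocation fails. */
int kalman_alloc_init(int p, float **P_out, float **x_out)
{
    float *P = calloc((size_t)p * p, sizeof *P);   /* zero-initialized     */
    float *x = calloc((size_t)p,     sizeof *x);   /* x0 = [0 ... 0]^T     */
    if (P == NULL || x == NULL) {
        free(P);
        free(x);
        return -1;
    }
    for (int i = 0; i < p; i++)
        P[i * p + i] = 1.0f;                       /* P0 = identity matrix */

    *P_out = P;
    *x_out = x;
    return 0;
}
```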


4.5 User interface

In order for a user to be able to specify parameters in a filter, choose sampling frequency and make connections between filters, ADCs and DACs, a menu system is programmed. When a user starts the Filter Bank prototype a sampling frequency is chosen. Then the user enters the main menu.

In the main menu there are three additional menus to configure the filters and the connections and an option to run the configuration. The main menu is visualized in a flowchart in Figure 4.4.

Figure 4.4: The main menu [3].

In the add new filter menu, the properties of each filter are specified. This menu is illustrated in Figure 4.5, where only the Kalman filter and Wiener filter options are shown.


Figure 4.5: The add new filter menu with only the Kalman and Wiener filter type options shown.

In Figure 4.6 the add filter input menu is shown. First an input source type is chosen, which can be either an ADC or another filter. Then the filter to which the input is connected is chosen with a target ID. After this, the input source is specified with either a Filter ID or an ADC ID.


Figure 4.6: Add filter input menu [3].

In Figure 4.7 the add output menu is shown. First, the DAC to be used as output is chosen with a DAC ID. Then the filter from which the output comes is chosen with a Filter ID.


Figure 4.7: Add output menu [3].


5 Results

5.1 MATLAB simulations

In this section, the filters are tested in MATLAB. When some parameters in the filter are varied, 100 realizations of the random process are simulated and the mean and the variance of the NMSE are presented in a table. The mean of the NMSE is a measure of how well the filter performs on average. The variance of the NMSE is a measure of how robust the filter is.

Two WSS random processes are used for validation of the algorithms: one AR(3) process and one ARMA(4,4) process. Some decimals have been left out of the coefficients in equations (5.1) and (5.2).

The AR(3) process is described by the following difference equation:

x(n) - 1.99x(n-1) + x(n-2) - 0.001x(n-3) = w(n)     (5.1)

The ARMA(4,4) process is described by the following difference equation:

x(n) - 1.05x(n-1) - 0.45x(n-2) + 0.07x(n-3) + 0.43x(n-4) =
1.98w(n) + 1.47w(n-1) + 0.64w(n-2) + 0.28w(n-3)     (5.2)

The two processes are then observed in zero mean white noise, y(n) = x(n) + v(n). The two processes are simulated in MATLAB with process noise variance σ_w^2 and measurement noise variance σ_v^2.

5.1.1 Kalman filter

The coefficients of the AR and ARMA processes in Eqs. (5.1) and (5.2) are assumed to be known and are used in the Kalman filter.

The measurement and process noise variances used in the Kalman filter are denoted \hat{σ}_v^2 and \hat{σ}_w^2 respectively.

Using the correct variances in the Kalman filter gives the result shown in Figure 5.1.

Figure 5.1: Kalman estimate of the AR(3) process with \hat{σ}_v^2 = σ_v^2 and \hat{σ}_w^2 = σ_w^2.

Figure 5.2: Kalman estimate of the AR(3) process with \hat{σ}_w^2 / \hat{σ}_v^2 = (σ_w^2 × 100) / σ_v^2.

In Figure 5.2 the ratio between the process and measurement noise variance estimates is chosen as \hat{σ}_w^2 / \hat{σ}_v^2 = (σ_w^2 × 100) / σ_v^2.

Figure 5.3: Kalman estimate of the AR(3) process with \hat{σ}_w^2 / \hat{σ}_v^2 = σ_w^2 / (σ_v^2 × 100).

In Figure 5.3 the ratio between the process and measurement noise variance estimates is chosen as \hat{σ}_w^2 / \hat{σ}_v^2 = σ_w^2 / (σ_v^2 × 100).

The SNR of the noisy signal in Figures 5.1, 5.2 and 5.3 is 4.5 dB.

It is found that the scaling of the variances, \hat{σ}_w^2 and \hat{σ}_v^2, does not matter so much, but the ratio between them is important.

In Figures 5.2 and 5.3 this ratio is investigated. In Figure 5.2, the measurement noise variance estimate is "small" compared to the process noise variance estimate. Then the Kalman estimate is spiky and appears to follow the measurements closely. When the measurement noise variance estimate is "big" compared to the process noise variance estimate (Figure 5.3), the Kalman estimate follows the smooth shape of the true signal and thus follows the AR model more. This is in agreement with the discussion in section 2.3.1.

\hat{σ}_v^2        \hat{σ}_w^2        mean of NMSE    variance of NMSE
σ_v^2              σ_w^2              0.029           9.2 × 10^-5
σ_v^2 × 100        σ_w^2              0.23            0.0026
σ_v^2              σ_w^2 × 100        0.074           5.7 × 10^-4

Table 5.1: Mean and variance of the NMSE for 100 realizations of the AR(3) process.

In table 5.1 the mean and variance of NMSE for 100 realizations of the AR(3) process with different noise variances are shown.

[Figure: amplitude (×10^4) versus samples; curves: noisy signal, Kalman estimate, 4 × Kalman estimate, true signal.]

Figure 5.4: Kalman estimate of the ARMA(4,4) process with the state space equations written on controllable canonical form.

[Figure: amplitude (×10^4) versus samples; curves: noisy signal, Kalman estimate, true signal.]

Figure 5.5: Kalman estimate of the ARMA(4,4) process with the state space equations written on observable canonical form.

The SNR of the noisy signal in Figures 5.4 and 5.5 is 8.5 dB.

In Figures 5.4 and 5.5 the ARMA(4,4) process is Kalman filtered when the filter is written on controllable and observable canonical form respectively. If the filter is written on controllable canonical form, it is seen that some scaling of the estimate is needed in order for the estimate to follow the true signal well. If instead the filter is written on observable canonical form, no scaling is needed. So, if an ARMA model is chosen to be filtered, the filter is written on observable canonical form.


5.1.2 Wiener filter

In this section a Wiener filter of order 20 is evaluated.

Figure 5.6: Wiener estimate of the AR(3) process with \hat{σ}_v^2 = σ_v^2.

In Figure 5.6 the AR(3) process is Wiener filtered when the correct noise variance is used in (2.23). The SNR of the noisy signal in the figure above is 4.5 dB.

\hat{σ}_v^2        mean of NMSE    variance of NMSE
σ_v^2              0.069           5.9 × 10^-4
σ_v^2 × 1.5        0.29            0.012
σ_v^2 × 0.5        0.21            0.0067

Table 5.2: Mean and variance of the NMSE for 100 realizations.

In Table 5.2 the noise variance estimate \hat{σ}_v^2 used in (2.23) is varied to see how it affects the performance of the algorithm. It is seen that over- or underestimating the noise variance by 50% dramatically decreases the performance.


5.2 Real time results

Because of hardware problems, the results obtained with the Filter Bank are distorted. Since the main task of this project is the software implementation of the algorithms, the real time results presented in this section come from the development kit that was used as a test platform for the algorithms.

5.2.1 Kalman filter

[Figure: Volts versus samples; curves: noisy signal, Kalman estimate, true signal.]

Figure 5.7: Kalman estimate of the AR(3) process, with \hat{σ}_w^2 / \hat{σ}_v^2 = σ_w^2 / σ_v^2.

In Figure 5.7 the AR(3) process is Kalman filtered with the noise ratio \hat{σ}_w^2 / \hat{σ}_v^2 = σ_w^2 / σ_v^2.

Figure 5.8: Kalman estimate of the ARMA(4,4) process, with \hat{σ}_w^2 / \hat{σ}_v^2 = σ_w^2 / σ_v^2.

In Figure 5.8 the ARMA(4,4) process is Kalman filtered with the noise ratio \hat{σ}_w^2 / \hat{σ}_v^2 = σ_w^2 / σ_v^2.

[Figure: amplitude in dB versus normalized frequency; curves: spectrum of the Kalman estimate, spectrum of the noisy signal, true spectrum.]

Figure 5.9: Spectrum of the noisy signal and the Kalman estimate in Figure 5.7, calculated with the MATLAB function fft, as well as the true spectrum, calculated with the MATLAB function freqz.

In Figure 5.9 a comparison of spectra is made. The spectra of the noisy signal and the Kalman estimate of the AR(3) process (from Figure 5.7), as well as the true spectrum, are shown. The spectra are normalized for easier comparison. It is clear that the flat frequency spectrum corresponding to the white noise is lowered in amplitude and hence the noise is reduced.


5.2.2 Wiener Filter

Figure 5.10: Wiener estimate of the AR(3) process.

In Figure 5.10 real time results from the Wiener algorithm are shown. Here, the correct measurement noise variance is used (\hat{σ}_v^2 = σ_v^2) and the order of the filter is 20.

[Figure: amplitude in dB versus normalized frequency; curves: Wiener estimate, noisy signal, true signal.]

Figure 5.11: Comparison of spectra. The spectra of the Wiener estimate and the noisy signal are calculated with the fft function in MATLAB. The spectrum of the true signal is calculated using the function freqz.

In Figure 5.11 a comparison of spectra is made. The spectra of the noisy input signal and the Wiener estimate from Figure 5.10 are shown. The spectra are normalized for easier comparison. The flat frequency spectrum corresponding to the noise is lowered in amplitude.


5.3 User interface

Figure 5.12: Display for setting up a Kalman filter.

Figure 5.12 shows an example of how to set up a filter in the Filter Bank prototype from the user interface. In the example a Kalman filter of order 2 is chosen. The noise variances are chosen as \hat{σ}_v^2 = 10 and \hat{σ}_w^2 = 0.1. An AR(2) process with difference equation x(n) + x(n-1) - x(n-2) = w(n) is chosen to be filtered. The input to the Kalman filter is ADC channel 1 and the output of the filter is output channel 1.


6 Conclusions and future work

The Wiener and Kalman filters perform noise reduction successfully in real time. The Kalman filter performs better than the Wiener filter based on the NMSE. However, the Kalman filter requires more information about the observed signal than the Wiener filter, which means more work for the user prior to using the Filter Bank.

The Kalman filter can be placed in series or in parallel with other filters in any desired configuration.

The Wiener filter, however, can only be placed in series or in parallel with other filters if its input comes directly from one of the ADCs.

There is no specific maximum sampling frequency in the Filter Bank. The sampling frequency must be chosen such that the signal processing has time to execute within the sampling period. If only one Kalman filter of order three is configured in the Filter Bank, a sampling frequency of 10 kHz is appropriate. With this sampling frequency, the highest order of the filter that can be used is 6. If the same sampling frequency is used and only one Wiener filter is configured in the Filter Bank, the highest order of the filter that can be used is around 45.

6.1 Kalman filter

The need to fit a signal model to the signal before filtering means a lot of work for the user and might not be possible for some signals, which limits the usefulness of the filter.

The Kalman filter could be made more automatic by also including a system identification algorithm that identifies a signal model for the signal before filtering. However, identifying a model for a signal with the SNRs used in this report might be challenging. A noise variance estimation algorithm for the Kalman filter could also be implemented.

There could also be a function that alerts the user if the Kalman filter is unstable.

6.2 Wiener filter

The performance of the Wiener filter is highly dependent on an accurate noise variance estimate.

The algorithm could be further developed with an accurate noise variance estimation algorithm. Alternatively, the user could specify a range for the noise variance; the algorithm could then test a number of noise variances within this range and let the user see which one gives satisfactory performance.

The code should be modified such that the Wiener filter can be placed in any configuration, like the other filters in the Filter Bank.


References

[1] Monson H. Hayes, Statistical Digital Signal Processing and Modeling, John Wiley & Sons, Inc., 1994.

[2] Karl J. Åström and Björn Wittenmark, Computer Controlled Systems, Ch. 11.3, Prentice Hall, 3rd edition, 1997.

[3] Mats Larsson, Development of a Digital Universal Filter Bank, Uppsala Universitet, 2016.

[4] ”CMSIS DSP Software Library”

https://www.keil.com/pack/doc/CMSIS/DSP/html/index.html Visited: 2018-05-24.

[5] STMicroelectronics, UM1472 User manual, 2017, Rev. 6.

[6] ”PicoScope 2000 Series”,

https://www.picotech.com/oscilloscope/2000/picoscope-2000-overview Visited: 2018-05-25.

[7] ”HD44780 image”,

http://www.explorelabs.com/lcd-display-16x2-hd44780-blue-white Visited: 2018-06-05.

[8] Hitachi, HD44780U (LCD-II), 1998, Rev 0.0.

[9] "Keypad image",
http://nadieleczone.com.my/products/4x4-Matrix-16-Key-Membrane-Switch-Keypad-Keyboard-for-Arduino-PIC/243
Visited: 2018-06-06.

[10] LCD library,
http://stm32f4-discovery.net/2014/06/library-16-interfacing-hd44780-lcd-controller-with-stm32f4/
Visited: 2018-06-06.


Appendices

A Conversion to state space form

In this appendix, the signal model is converted into state space form.

First the case of an ARMA random process observed in noise is addressed, and this case is then adapted to an AR random process observed in noise.

An ARMA random process d(n) is observed in a noisy measurement y(n).

y(n) = d(n) + v(n) (A.1)

where v(n) is zero-mean white measurement noise.

The ARMA process d(n) is generated by filtering white noise w(n) through a causal LSI filter, as in section 2.2.1:

$$d(n) = \frac{\sum_{k=0}^{q} b_q(k)\, q^{-k}}{1 + \sum_{k=1}^{p} a_p(k)\, q^{-k}}\, w(n) \qquad (A.2)$$

To avoid direct feedthrough and have a causal filter, p ≥ q + 1. We set p = q + 1, which is the most general case. Inserting (A.2) in (A.1) gives

$$y(n) = \frac{\sum_{k=0}^{q} b_q(k)\, q^{-k}}{1 + \sum_{k=1}^{p} a_p(k)\, q^{-k}}\, w(n) + v(n) \qquad (A.3)$$

Re-arranging (A.3) gives

$$y(n) = \Big[\sum_{k=0}^{q} b_q(k)\, q^{-k}\Big] \frac{1}{1 + \sum_{k=1}^{p} a_p(k)\, q^{-k}}\, w(n) + v(n) \qquad (A.4)$$

Let

$$x(n) = \frac{1}{1 + \sum_{k=1}^{p} a_p(k)\, q^{-k}}\, w(n) \qquad (A.5)$$

or equivalently

$$x(n) = -\sum_{k=1}^{p} a(k)\, x(n-k) + w(n) \qquad (A.6)$$

which is an AR process. Then (A.4) turns out to be

$$y(n) = \Big[\sum_{k=0}^{q} b_q(k)\, q^{-k}\Big]\, x(n) + v(n) \qquad (A.7)$$

or

$$y(n) = \sum_{k=0}^{q} b(k)\, x(n-k) + v(n) \qquad (A.8)$$

which contains the moving average (MA) part of the process.

Expressing (A.6) and (A.8) in matrix form gives

$$\begin{bmatrix} x(n) \\ x(n-1) \\ \vdots \\ x(n-p+1) \end{bmatrix} = \begin{bmatrix} -a(1) & -a(2) & \cdots & -a(p-1) & -a(p) \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \begin{bmatrix} x(n-1) \\ x(n-2) \\ \vdots \\ x(n-p) \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} w(n) \qquad (A.9)$$

$$y(n) = \begin{bmatrix} b(0) & b(1) & \cdots & b(p-1) \end{bmatrix} \begin{bmatrix} x(n) \\ x(n-1) \\ \vdots \\ x(n-p+1) \end{bmatrix} + v(n) \qquad (A.10)$$

Equations (A.9) and (A.10) are the state and observation equations respectively, written on controllable canonical form. The matrices used in the algorithm can now be identified as

$$A = \begin{bmatrix} -a(1) & -a(2) & \cdots & -a(p-1) & -a(p) \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}, \qquad B = 0_{p \times m}, \qquad C = \begin{bmatrix} b(0) & b(1) & \cdots & b(p-1) \end{bmatrix}$$

$$R_1 = E\!\left[ \begin{bmatrix} w(n) \\ 0 \\ \vdots \\ 0 \end{bmatrix} \begin{bmatrix} w(n) & 0 & \cdots & 0 \end{bmatrix} \right] = \begin{bmatrix} E[w(n)^2] & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix} = \begin{bmatrix} \sigma_w^2 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}$$

$$R_2 = E[v(n)^2] = \sigma_v^2$$

where \sigma_w^2 and \sigma_v^2 are the variances of the process and the measurement noise respectively.

In the AR case everything is the same except for the C matrix, which is now C = [ b(0)  0  ⋯  0 ].

The state and observation equations can also be written on observable canonical form:

$$\begin{bmatrix} x(n) \\ x(n-1) \\ \vdots \\ x(n-p+1) \end{bmatrix} = \begin{bmatrix} -a(1) & 1 & 0 & \cdots & 0 \\ -a(2) & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -a(p-1) & 0 & 0 & \cdots & 1 \\ -a(p) & 0 & 0 & \cdots & 0 \end{bmatrix} \begin{bmatrix} x(n-1) \\ x(n-2) \\ \vdots \\ x(n-p) \end{bmatrix} + \begin{bmatrix} b(0) \\ b(1) \\ \vdots \\ b(p-1) \end{bmatrix} w(n) \qquad (A.11)$$

$$y(n) = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix} \begin{bmatrix} x(n) \\ x(n-1) \\ \vdots \\ x(n-p+1) \end{bmatrix} + v(n) \qquad (A.12)$$

The matrices used in the algorithm are identified as

$$A = \begin{bmatrix} -a(1) & 1 & 0 & \cdots & 0 \\ -a(2) & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -a(p-1) & 0 & 0 & \cdots & 1 \\ -a(p) & 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}$$

$$R_1 = E\!\left[ \begin{bmatrix} b(0) \\ b(1) \\ \vdots \\ b(p-1) \end{bmatrix} \begin{bmatrix} b(0) & b(1) & \cdots & b(p-1) \end{bmatrix} w(n)^2 \right] = \begin{bmatrix} b(0) \\ b(1) \\ \vdots \\ b(p-1) \end{bmatrix} \begin{bmatrix} b(0) & b(1) & \cdots & b(p-1) \end{bmatrix} \sigma_w^2$$

$$R_2 = E[v(n)^2] = \sigma_v^2$$

The matrices for an AR process observed in noise are the same except for R_1, which is now

$$R_1 = \begin{bmatrix} b(0) \\ 0 \\ \vdots \\ 0 \end{bmatrix} \begin{bmatrix} b(0) & 0 & \cdots & 0 \end{bmatrix} \sigma_w^2.$$


B Derivation of cross-correlation function

In this appendix, a derivation is given of how to find the cross-correlation between a desired signal and noisy observations of that signal, when only the noisy observations are available.

y(n) = x(n) + v(n) (B.1)

In equation (B.1) a desired signal, x, is observed in zero mean white noise, v, with variance σ_v^2. x and v are assumed uncorrelated.

$$\begin{aligned}
r_{xy}(k) &= E[x(i)\, y(i-k)] = E[(y(i) - v(i))\, y(i-k)] \\
&= E[y(i)\, y(i-k)] - E[v(i)\,(x(i-k) + v(i-k))] \\
&= r_y(k) - E[v(i)\, v(i-k)] - E[v(i)\, x(i-k)] \\
&= r_y(k) - r_v(k) = \begin{cases} r_y(k) - \sigma_v^2, & \text{if } k = 0,\\ r_y(k), & \text{if } k \neq 0. \end{cases}
\end{aligned}$$
