
Identification of Piecewise Affine Wiener Systems Using Data Partition

Rimantas Pupeikis

Division of Automatic Control

Department of Electrical Engineering

Linköping University, SE-581 83 Linköping, Sweden

WWW: http://www.control.isy.liu.se

E-mail: rimas@isy.liu.se

June 7, 2003

AUTOMATIC CONTROL COMMUNICATION SYSTEMS, LINKÖPING

Report no.: LiTH-ISY-R-2523

Technical reports from the Control & Communication group in Linköping are available at http://www.control.isy.liu.se/publications.


Abstract

In the previous paper [1] the problem of identification of Wiener systems with special types of hard and discontinuous nonlinearities in the presence of process and measurement noises in the observations to be processed was considered, when the threshold of the nonlinearity is known beforehand. The aim of the given paper is the development of an approach for the identification of affine Wiener systems with piecewise linear nonlinearities, i.e., when the linear part with unknown parameters is followed by a saturation-like function with unknown slopes. It is shown here that by a simple data rearrangement and a subsequent data partition the problem of identification of the nonlinear Wiener system could be reduced to a linear parametric estimation problem. Afterwards, estimates of the unknown parameters of the linear regression models are obtained by processing the respective sets of input-output data. A technique based on ordinary least squares, to be used in the case of missing data, and on the expectation-maximization algorithm is proposed here for the identification of the parameters of the linear and nonlinear parts of the Wiener system, including the unknown threshold of the piecewise nonlinearity, too. The results of numerical simulation and identification, obtained by computer processing of observations of input-output signals of distinct discrete-time Wiener systems with two types of piecewise nonlinearities, are given.

Keywords: System identification, parameter estimation, nonlinear systems, Wiener systems

Currently with the Automatic Control Division at ISY at Linköping University, on leave from the Radioelectronics Department, Electronics Faculty, Vilnius Gediminas Technical University, Naugarduko 41, 2600 Vilnius, Lithuania, and the Institute of Mathematics and Informatics, Akademijos 4, LT-2600, Vilnius.


1 Introduction

Many physical systems are described as Wiener systems with a piecewise linear nonlinearity, i.e., a linear system followed by a hard nonlinearity such as saturation or a dead-zone [2], [3], [4]. A special class of such systems are piecewise affine (PWA) systems, consisting of several subsystems between which occasional switchings happen at different time moments [5], [6]. Recently, extensive theoretical investigations in this direction have been carried out in economics, too [7]. In computer-controlled systems a simple example of a piecewise affine system is a linear system acting in a closed loop under linear feedback, when the control signal is bounded [5].

Assuming the nonlinearity to be piecewise linear, one could let the nonlinear part of the Wiener system be represented by different regression functions with some parameters that are unknown beforehand [8]. In such a case observations of the output of the Wiener system could be partitioned into distinct data sets according to the different descriptions. The boundaries of the sets of observations depend on the value of the unknown threshold a: observations are divided into "regimes" according to whether the observed threshold variable is smaller or larger than a [7], [9]. Thus, there arises a problem, first, to find a well-grounded way to partition the available data; second, to determine estimates of the parameters of the various regression functions by processing the particles of observations; and, third, to determine the unknown threshold of the nonlinearity.

The next section introduces the statement of the problem to be solved. In Section 3 we present definitions of the nonlinearities to be investigated; it is also shown there how the problem of identification of the PWA system can be reduced by interchanging equations in the system of linear algebraic equations. Theoretical investigations concerning the invariance of the least squares (LS) estimate to data rearrangement are given in Section 4. Section 5 presents the parametric estimation technique, which is based on the expectation-maximization (EM) algorithm and is used in the case of missing data. In Section 6 simulation results, obtained for the two types of nonlinearities, are presented; Monte Carlo simulation results for periodical and for random signals are given there, too. Section 7 contains conclusions.

2 Statement of the problem

The Wiener system consists of a linear part G(q, Θ) followed by a static nonlinearity f(·, η). The linear part of the PWA system is dynamic, time-invariant, causal, and stable. It can be represented by a linear time-invariant (LTI) dynamic system with the transfer function G(q, Θ) as a rational function of the form

G(q, Θ) = (b1 q⁻¹ + … + bm q⁻ᵐ) / (1 + a1 q⁻¹ + … + am q⁻ᵐ) = B(q, b) / (1 + A(q, a))  (1)

with a finite number of parameters

Θᵀ = (b1, …, bm, a1, …, am), bᵀ = (b1, …, bm), aᵀ = (a1, …, am),  (2)

that are determined from the set of permissible parameter values Ω. The set Ω is restricted by conditions of the stability of the respective difference equation


and normalized, requiring the static gain of the linear model to be 1. The unknown intermediate signal

x(k) = [B(q, b) / (1 + A(q, a))] u(k) + v(k),  (3)

generated by the linear part of the PWA system (1) as the response to an input u(k) and corrupted by an additive noise v(k), acts on the static nonlinear part f(·, η) (see Fig. 1), i.e.,

y(k) = f(x(k), η) + e(k).  (4)

Here the nonlinear part f(·, η) of the PWA system is a saturation-like function of the form [2], [5]:

y(k) = { c0 + c1 x(k)  if x(k) ≤ −a,
         x(k)          if −a < x(k) ≤ a,
         d0 + d1 x(k)  if x(k) > a.      (5)

The process noise

v(k) = ξ(k) (6)

and the measurement noise

e(k) = ζ(k) (7)

are added to the intermediate signal x(k) and the output y(k), respectively; ξ(k), ζ(k) are mutually uncorrelated sequences of independent Gaussian variables with E{ξ(k)} = 0, E{ζ(k)} = 0, E{ξ(k)ξ(k+τ)} = σξ² δ(τ), E{ζ(k)ζ(k+τ)} = σζ² δ(τ); E{·} is the mean value, σξ², σζ² are the variances of ξ and ζ, respectively, and δ(τ) is the Kronecker delta function.

Figure 1: The PWA system with process v(k) and measurement e(k) noises. The linear dynamic part G(q, Θ) of the PWA system is parameterised by Θ, while the static nonlinear part f(·, η) is parameterised by η. Signals: u(k) is the input, y(k) the output, and x(k) the unmeasurable intermediate signal.

The aim of the given paper is to estimate the parameters (2) of the linear part (1) of the PWA system, the parameters η = (c0, c1, d0, d1)ᵀ of the nonlinear part (5), and the threshold a of the nonlinearity f(·, η) by processing N pairs of observations of u(k) and the noisy y(k).
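To make the setup concrete, the system (1)-(5) is straightforward to simulate. The following Python sketch is illustrative only (the function name, the argument packaging, and the zero initial conditions are assumptions, not taken from the report): it runs the difference equation behind (3) and passes the result through the saturation-like nonlinearity (5).

```python
import numpy as np

def simulate_pwa_wiener(u, b, a, c, d, thr, sigma_v=0.0, sigma_e=0.0, rng=None):
    """Simulate the Wiener system (3)-(5): a linear block B(q,b)/(1+A(q,a))
    followed by the piecewise nonlinearity (5) with threshold `thr`."""
    rng = rng or np.random.default_rng(0)
    m, N = len(b), len(u)
    x = np.zeros(N)
    for k in range(N):
        # x(k) = sum_i b_i u(k-i) - sum_i a_i x(k-i) + v(k), zero initial conditions
        for i in range(1, m + 1):
            if k - i >= 0:
                x[k] += b[i - 1] * u[k - i] - a[i - 1] * x[k - i]
        x[k] += sigma_v * rng.standard_normal()
    c0, c1 = c
    d0, d1 = d
    # the saturation-like function (5)
    y = np.where(x <= -thr, c0 + c1 * x,
        np.where(x > thr, d0 + d1 * x, x))
    return x, y + sigma_e * rng.standard_normal(N)
```

With b = [0.3], a = [−0.5], threshold 1 and slopes 0.1 this reproduces the noise-free configuration used later in Section 6.1.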


3 Nonlinearities to be investigated

The initial data set of y(k) could be partitioned into three distinct data sets:

y(k) = c0 + c1 x(k), k = 1, …, N1,  (8)

the left side-set,

y(k) = x(k), k = 1, …, N2,  (9)

the middle set, and

y(k) = d0 + d1 x(k), k = 1, …, N3,  (10)

the right side-set of data, containing N1, N2 and N3 observations, respectively.

Here N1 + N2 + N3 = N, assuming that most observations (not less than 50%) are concentrated in the middle set and approximately 25% or less in either side-set. Examples of piecewise nonlinearities are given in Fig. 2.

Figure 2: Examples of hard nonlinearities: saturation (a), dead-zone (b) and two piecewise affine nonlinearities with positive and negative slopes (c), (d), respectively.

Actually, N1, N2 and N3 depend on the value of the threshold a, which is unknown beforehand. Thus, there arises a problem to determine the unknown a, too. In order to increase the efficiency of the parametric identification and of the threshold determination, one has to define the functions y(k) to be considered in this section (see Fig. 2 c, d). The first function f1pw(x(k), a) has positive slopes (see Fig. 2 c). It could be described by Definitions 1-3 below.


Definition 1. The function y(k) = f{x(k; Θ), c, a} = c0 + c1 x(k) has negative values, when x(k) ≤ −a and c0 = −a(1 − ε0), c1 = ε0. Here x(k; Θ) ≡ x(k), cᵀ = (c0, c1), 0 < ε0 << a.

Definition 2. The function y(k) = f{x(k; Θ), a} = x(k) has arbitrary positive, as well as negative, values, when −a < x(k) ≤ a.

Definition 3. The function y(k) = f{x(k; Θ), d, a} = d0 + d1 x(k) has positive values, when x(k) > a and d0 = a(1 − ε0), d1 = ε0. Here dᵀ = (d0, d1).

The second function f2pw(x(k), a) to be considered has negative slopes (see Fig. 2 d).

Definition 4. The function y(k) = f{x(k; Θ), c, a} = c0 + c1 x(k) has negative values, when x(k) ≤ −a and c0 = −a(1 + ε0), c1 = −ε0.

Definition 5. The function y(k) = f{x(k; Θ), a} = x(k) has arbitrary positive, as well as negative, values, when −a < x(k) ≤ a.

Definition 6. The function y(k) = f{x(k; Θ), d, a} = d0 + d1 x(k) has positive values, when x(k) > a and d0 = a(1 + ε0), d1 = −ε0.

Remark 1. The observations of the output y(k) of the Wiener system could be partitioned into three data sets: the left side-set with values less than or equal to −a, the middle set with values greater than −a but less than or equal to a, and the right side-set with values greater than a.

Remark 2. To separate the data sets one ought to rearrange the observations of y(k) in an ascending order according to their values. Afterwards, the observations with the largest values will be concentrated in the right side-set, while the observations with the smallest values in the left one.

Remark 3. The observations of the side-sets are mixed with the observations of the middle data set at the boundaries, when the nonlinearity with negative slopes is applied. In such a case the observations of the output y(k) could be partitioned only into intersecting data sets.

Conclusion 1. A rearrangement of the data in an ascending order according to their values allows us to determine the first and the last observations of the respective side-sets of data, when the nonlinearity with positive slopes is applied. On the other hand, this is not valid for the nonlinearity with negative slopes, because of the mixed observations. In such a case there exists an ambiguity as to whether these observations correspond to the middle set or to the respective side-set. Nevertheless, most observations of the middle set are equivalent to observations of the intermediate signal of the Wiener system. These observations are described by the equation (9), and could be displayed in an ascending order, too.

Thus, the problem of identification of PWA systems could be reduced to a linear one by a simple rearrangement of the observations to be processed. Afterwards, parametric estimation techniques based on the ordinary LS could be applied for the estimation of the parameters of the given regression models, provided the rearrangement of the observations does not influence the accuracy of the estimates to be calculated.
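The rearrangement and partition of Remarks 1-3 can be sketched in a few lines of Python. The helper below is hypothetical and assumes, for illustration only, that the threshold a is already known; in the identification procedure itself the set boundaries must be estimated.

```python
import numpy as np

def partition_by_rearrangement(y, thr):
    """Rearrange output observations in ascending order (Remark 2) and split
    them into left / middle / right sets (Remark 1); returns index arrays."""
    order = np.argsort(y)                   # rearrangement in ascending order
    y_sorted = y[order]
    left = order[y_sorted <= -thr]          # left side-set:  y <= -a
    right = order[y_sorted > thr]           # right side-set: y >  a
    middle = order[(y_sorted > -thr) & (y_sorted <= thr)]
    return left, middle, right
```

For the nonlinearity with positive slopes the three slices are contiguous in the sorted order, which is exactly the property Conclusion 1 relies on.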


4 Interchanging equations in the system

Assume that the output y(k) is described by the equation (9). Then, according to [1] the FIR filter of the form

y(k) = β0+ β1u(k) + β2u(k − 1) + . . . + βνu(k − ν + 1) (11)

as the initial model of the linear part (1) of the PWA system will be used, too. Here

βᵀ = (β0, β1, …, βν)  (12)

is a (ν + 1) × 1 vector of unknown parameters, ν is the order of the FIR filter that is assumed to be known beforehand.

In such a case the dependence of the regressors on the process output is avoided, and the assumption of the ordinary LS that the regressors depend only on the input signal is satisfied. The equation (11) is linear in the parameters (12). The set of linear algebraic equations could be written in matrix form as

Y = Λβ. (13)

Here

Y = (y(ν), y(ν+1), …, y(L))ᵀ  (14)

is the (L − ν) × 1 vector of observations of the output y(k);

Λ = [ 1  u(ν)    …  u(2)      u(1)
      1  u(ν+1)  …  u(3)      u(2)
      ⋮  ⋮          ⋮         ⋮
      1  u(L)    …  u(L−ν+2)  u(L−ν+1) ]   (15)

is the (L − ν) × (ν + 1) regression matrix, consisting only of observations of the input u(k) of the true Wiener system; besides, L ≤ N2.

For the estimation of the parameters β of the FIR filter (11) one can use the expression

β̂ = (ΛᵀΛ)⁻¹ ΛᵀY.  (16)

Here

β̂ᵀ = (β̂0, β̂1, …, β̂ν)  (17)

is the (ν + 1) × 1 vector of estimates of the parameters (12).
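A minimal sketch of the LS step (13)-(16) in Python (the helper name is illustrative; `numpy.linalg.lstsq` solves the same problem as (16) but in a numerically safer way than forming (ΛᵀΛ)⁻¹ explicitly):

```python
import numpy as np

def fir_ls_estimate(u, y, nu):
    """Ordinary LS estimate (16) of the FIR-model parameters (11).
    Builds the regression matrix Lambda of (15) from input data only."""
    L = len(u)
    rows = []
    for k in range(nu, L):                               # k = nu, ..., L-1
        rows.append([1.0] + [u[k - j] for j in range(nu)])  # 1, u(k), ..., u(k-nu+1)
    Lam = np.asarray(rows)
    Y = np.asarray(y[nu:L])
    beta_hat, *_ = np.linalg.lstsq(Lam, Y, rcond=None)
    return beta_hat
```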

It is known [10] that interchanging equations in the system (13) transforms it into an equivalent system.

Theorem 1. The covariance matrices

cov{β̂} = σ²(ΛᵀΛ)⁻¹  (18)

and

cov{β̂̃} = σ²(Λ̃ᵀΛ̃)⁻¹  (19)

of the parameter estimates β̂ and β̂̃, respectively, are equivalent, i.e.,

cov{β̂} ≡ cov{β̂̃},  (20)

if the matrix Λ̃ is obtained by interchanging rows of the matrix Λ.

Here σ² is the measurement noise variance,

Λ̃ = [ 1  ũ(ν)    …  ũ(2)      ũ(1)
      1  ũ(ν+1)  …  ũ(3)      ũ(2)
      ⋮  ⋮          ⋮         ⋮
      1  ũ(L)    …  ũ(L−ν+2)  ũ(L−ν+1) ]   (21)

is the (L − ν) × (ν + 1) data matrix, consisting of observations of the rearranged input ũ(k), and β̂̃ = (β̂̃0, β̂̃1, …, β̂̃ν)ᵀ is the (ν + 1) × 1 vector of estimates of the parameters (12).

Proof. Rewrite the matrix ΛᵀΛ as follows:

ΛᵀΛ = [ L − ν                   Σ_{i=ν..L} u(i)               …  Σ_{i=1..L−ν+1} u(i)
        Σ_{i=ν..L} u(i)         Σ_{i=ν..L} u²(i)              …  Σ_{i=ν..L} u(i)u(i−ν+1)
        ⋮                       ⋮                                ⋮
        Σ_{i=1..L−ν+1} u(i)     Σ_{i=1..L−ν+1} u(i)u(i+ν−1)   …  Σ_{i=1..L−ν+1} u²(i) ],  (22)

where

Λᵀ = [ 1     1       …  1
       u(ν)  u(ν+1)  …  u(L)
       ⋮     ⋮           ⋮
       u(1)  u(2)    …  u(L−ν+1) ].  (23)

Assume now that the matrix Λ̃ is obtained by interchanging the first row with the last one in the matrix Λ. Thus, in (21): ũ(ν) = u(L), …, ũ(2) = u(L−ν+2), ũ(1) = u(L−ν+1) and ũ(L) = u(ν), …, ũ(L−ν+2) = u(2), ũ(L−ν+1) = u(1). The order of the other rows in (21) is the same as in (15), i.e., ũ(ν+1) = u(ν+1), …, ũ(3) = u(3), ũ(2) = u(2), and so on. Thus,

Λ̃ = [ 1  u(L)    …  u(L−ν+2)  u(L−ν+1)
      1  u(ν+1)  …  u(3)      u(2)
      ⋮  ⋮          ⋮         ⋮
      1  u(ν)    …  u(2)      u(1) ]   (24)

and

Λ̃ᵀ = [ 1         1       …  1
       u(L)      u(ν+1)  …  u(ν)
       ⋮         ⋮           ⋮
       u(L−ν+1)  u(2)    …  u(1) ].  (25)

Then, after multiplying (25) with (24), we have Λ̃ᵀΛ̃ = ΛᵀΛ. It follows that (ΛᵀΛ)⁻¹ ≡ (Λ̃ᵀΛ̃)⁻¹ and, finally, cov{β̂} ≡ cov{β̂̃}.

Theorem 2. Let Theorem 1 hold. Then the estimates

β̂ = (ΛᵀΛ)⁻¹ ΛᵀY,  (26)

β̂̃ = (Λ̃ᵀΛ̃)⁻¹ Λ̃ᵀỸ  (27)

are equivalent if the rows in the vector Ỹ are interchanged in accordance with the matrix Λ̃.

Here

Ỹ = (ỹ(ν), ỹ(ν+1), …, ỹ(L))ᵀ  (28)

is the (L − ν) × 1 vector of output observations.

Proof. Rewrite ΛᵀY as follows:

ΛᵀY = ( Σ_{i=ν..L} y(i),  Σ_{i=ν..L} u(i)y(i),  …,  Σ_{i=1..L−ν+1} u(i)y(ν+i−1) )ᵀ  (29)

and

Ỹ = (y(L), y(ν+1), …, y(ν))ᵀ.  (30)

Thus, after multiplying (25) with (30), we have Λ̃ᵀỸ ≡ ΛᵀY and, finally, β̂ ≡ β̂̃.

Theorem 3. The covariance matrix (18) and the estimate (26) are invariant to interchanging rows in the initial matrix (15) and in the initial vector (14), respectively, if it is performed in the same way.

Proof. Theorem 3 follows from Theorem 1 and Theorem 2.

Remark 4. It could be shown that the observations in the vector (14) can be rearranged in an ascending order, in a descending order, or even in an arbitrary order according to their values.
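The invariance stated in Theorems 1-3 is easy to confirm numerically: permuting the rows of Λ and Y in the same way changes neither ΛᵀΛ nor the LS estimate. A small Python check on random data, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
Lam = np.column_stack([np.ones(30), rng.standard_normal((30, 3))])
Y = rng.standard_normal(30)

# LS estimate from the original system (13)
beta = np.linalg.solve(Lam.T @ Lam, Lam.T @ Y)

# interchange rows (equations) in an arbitrary order, same permutation for Lam and Y
perm = rng.permutation(30)
beta_perm = np.linalg.solve(Lam[perm].T @ Lam[perm], Lam[perm].T @ Y[perm])

# both the normal-equations matrix and the estimate are unchanged
assert np.allclose(Lam.T @ Lam, Lam[perm].T @ Lam[perm])
assert np.allclose(beta, beta_perm)
```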


Conclusion 2. Under the considered conditions (the middle data set contains not less than 50% of the observations) the problem of parametric identification of the linear part of the PWA system (1)-(5) could be solved applying the FIR model (11) and the input-output data {u(k), y(k)}. The FIR model parameters are calculated by processing the middle set of the rearranged data. Afterwards, the estimate of the intermediate signal x(k) could be obtained and used to generate estimates of the parameters of the linear difference equation (3), the linear regressions (8), (10), and the threshold a, too.
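The second stage of Conclusion 2, estimating the difference-equation parameters from the reconstructed intermediate signal, can be sketched as follows. This hypothetical helper regresses x̂(k) on past inputs and past x̂ values; note that it estimates the coefficients in the form x(k) = Σ bᵢ u(k−i) + Σ âᵢ x(k−i), so the sign convention of the a-coefficients is the one implied by the −x̂ columns used later in the paper's regression matrix, an assumption on my part.

```python
import numpy as np

def arx_from_xhat(u, xh, m):
    """LS estimate of (b, a-hat) in x(k) = b1 u(k-1)+...+bm u(k-m)
    + a1 x(k-1)+...+am x(k-m), using the reconstructed signal xh for x."""
    rows, tgt = [], []
    for k in range(m, len(u)):
        rows.append([u[k - i] for i in range(1, m + 1)] +
                    [xh[k - i] for i in range(1, m + 1)])
        tgt.append(xh[k])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(tgt), rcond=None)
    return theta[:m], theta[m:]
```

On noise-free data generated by x(k) = 0.3 u(k−1) + 0.5 x(k−1) this recovers b = 0.3 and the feedback coefficient 0.5 exactly.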

5 Expectation-maximization

One could carry out the parametric estimation of the PWA system by a technique based on the EM algorithm, consisting of the expectation step (E-step), in which the conditional expectation of the sufficient statistics (in our case the auxiliary signal x̂(k)) is calculated given the available data and the current estimates of the parameters, and the maximization step (M-step), in which the estimated sufficient statistics obtained in the E-step are applied to recalculate the estimates. The technique to be used could be explained by the following steps:

(a) the observations y(k) = {y(ν), y(ν+1), …, y(N−1), y(N)} are rearranged in an ascending order with respect to their values as follows: ỹ(k) = {ỹ(ν), ỹ(ν+1), …, ỹ(N−1), ỹ(N)}, where ỹ(ν) ≤ ỹ(ν+1) ≤ … ≤ ỹ(N−1) ≤ ỹ(N) and, in general, y(ν) ≠ ỹ(ν), y(ν+1) ≠ ỹ(ν+1), …, y(N−1) ≠ ỹ(N−1), y(N) ≠ ỹ(N);

(b) the LS problem (16), consisting of the identification of the parameters of the linear part of the PWA system (the FIR model (11)) using the rearranged data of the middle set, obtained on step (a), is solved;

(c) the auxiliary signal x̂(k) ∀ k = 1, …, L (the estimate of x(k), including its missing values for x(k) ≤ −a and x(k) > a) is calculated according to the expression

x̂(k) = β̂0 + β̂1 u(k) + … + β̂ν u(k−ν+1)  ∀ k = ν, …, N,  (31)

using the estimates of the parameters of the FIR filter determined on step (b);

(d) estimates of the parameters Θ of the transfer function G(q, Θ) of the form (1) are calculated according to

Θ̂ = (X̃ᵀX̃)⁻¹ X̃ᵀȲ,  (32)

using the observations of the auxiliary signal x̂(k) calculated on step (c). Here

Θ̂ᵀ = (b̂ᵀ, âᵀ), b̂ᵀ = (b̂1, …, b̂m), âᵀ = (â1, …, âm)  (33)

are the 2m × 1, m × 1, m × 1 vectors of estimates of the parameters, respectively,

X̃ = [ u(N̂1+m)    …  u(N̂1+1)  −x̂(N̂1+m)    …  −x̂(N̂1+1)
      u(N̂1+m+1)  …  u(N̂1+2)  −x̂(N̂1+m+1)  …  −x̂(N̂1+2)
      ⋮              ⋮         ⋮               ⋮
      u(K−1)      …  u(K−m)   −x̂(K−1)      …  −x̂(K−m) ]   (34)

is the (N̂2 − m) × 2m matrix, consisting of observations of the input u(k) and the auxiliary signal x̂(k),

Ȳ = (x̂(N̂1+m+1), x̂(N̂1+m+2), …, x̂(K))ᵀ  (35)

is the (N̂2 − m) × 1 vector, consisting of observations of the reconstructed x̂(k), and K = N̂1 + N̂2;

(e) the auxiliary signal x̂(k) ∀ k = 1, …, L (the estimate of x(k), including its missing values for x(k) ≤ −a and x(k) > a) could be recalculated according to the expression

x̂(k) = b̂1 u(k−1) + … + b̂m u(k−m) + â1 x̂(k−1) + … + âm x̂(k−m)  ∀ k = m+1, …, N,  (36)

using the estimates of the parameters b1, …, bm, a1, …, am calculated on step (d);

(f) estimates of the parameters c1, d1 and c0, d0 are calculated according to

ĉ1 = [ Σ_{i=1..N̂1} ỹ(i) x̂̃(i) ] / [ Σ_{i=1..N̂1} x̂̃²(i) ],  (37)

d̂1 = [ Σ_{j=1..N̂3} ỹ(N−j) x̂̃(N−j) ] / [ Σ_{j=1..N̂3} x̂̃²(N−j) ],  (38)

ĉ0 = [ Σ_{i=1..N̂1} ( ỹ(i) − ĉ1 x̂̃(i) ) ] / N̂1,  (39)

d̂0 = [ Σ_{j=1..N̂3} ( ỹ(N−j) − d̂1 x̂̃(N−j) ) ] / N̂3,  (40)

respectively, using the side-set data particles, which are rearranged in an ascending order, too. Here N̂1, N̂3 are estimates of the numbers N1, N3, and x̂̃(k) is the signal x̂(k) calculated on step (c), but rearranged in accordance with ỹ(k);

(g) estimates of the threshold a for the right-side and left-side sets are determined according to

â = d̂0 / (1 − d̂1),  (41)

â = −ĉ0 / (1 − ĉ1);  (42)

(h) return to step (b) and estimate the parameters with the more precisely defined N̂1, N̂2, N̂3.

The above-mentioned calculations are iterated until convergence is assured.
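Steps (f) and (g) can be sketched as follows. This is a hypothetical Python helper implementing the formulas (37)-(42) as printed, assuming the observations have already been rearranged in an ascending order and that the side-set sizes N̂1, N̂3 are given.

```python
import numpy as np

def estimate_affine_pieces(y_sorted, xhat_sorted, N1, N3):
    """Side-set slope/intercept estimates (37)-(40) from ascending-ordered
    data, plus the two threshold estimates (41)-(42)."""
    yl, xl = y_sorted[:N1], xhat_sorted[:N1]        # left side-set
    yr, xr = y_sorted[-N3:], xhat_sorted[-N3:]      # right side-set
    c1 = np.sum(yl * xl) / np.sum(xl**2)            # (37)
    d1 = np.sum(yr * xr) / np.sum(xr**2)            # (38)
    c0 = np.mean(yl - c1 * xl)                      # (39)
    d0 = np.mean(yr - d1 * xr)                      # (40)
    a_right = d0 / (1.0 - d1)                       # (41)
    a_left = -c0 / (1.0 - c1)                       # (42)
    return (c0, c1, d0, d1), (a_left, a_right)
```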

6 Simulation results

6.1 Nonlinearity—piecewise function with positive slopes

The true intermediate signal x(k), k = 1, …, N, of the PWA system (see Fig. 3) is given by (3). The true output signal is described by [2]

y(k) = { −0.9 + 0.1 x(k)  if x(k) ≤ −1,
         x(k)             if −1 < x(k) ≤ 1,
         0.9 + 0.1 x(k)   if x(k) > 1       (43)

with the sum of sinusoids (see Fig. 4a)

u(t) = (1/20) Σ_{i=1..20} sin(iπt/10 + φi)  (44)

and white Gaussian noise with variance 4.5 (see Fig. 7a) as inputs to the linear block

G(q, Θ) = b1 q⁻¹ / (1 + a1 q⁻¹).  (45)

Figure 3: The PWA system to be simulated. Signals: u(k)—unnoisy input, y(k)—noisy output, x(k)—noisy intermediate signal.

Here b1 = 0.3, a1 = −0.5; in (44) the stochastic variables φi with uniform distribution on [0, 2π] were chosen. First of all, N = 100 data points were generated, without additive process and measurement noises (see Fig. 4-11). Figures 4-6 and 10 correspond to the periodical signal (44), while Figures 7-9 and 11 to the white Gaussian noise acting on the input of the PWA system. The signals were rearranged in an ascending order (see Figures 6, 9). Afterwards, steps (b)-(g) of the EM algorithm were implemented in order to identify the PWA system shown in Fig. 3. Below, the performance of the EM for the PWA system acting in a noise-free environment is given.


Figure 4: The signals of the simulated PWA system with piecewise nonlinearity (43), when the periodical signal (44) is acting on the input.


Figure 5: The signal y(k) (a) and its particles: left (b), middle (c), right (d), when the periodical signal (44) is acting on the input.


Figure 6: The signal y(k) rearranged in an ascending order (a) and its rearranged particles (see Fig. 5): left (b), middle (c), right (d).

Figure 7: The signals of the simulated PWA system with piecewise nonlinearity (43), when the Gaussian white noise is acting on the input.



Figure 8: The signal y(k) (a) and its particles: left (b), middle (c), right (d), when the Gaussian white noise is acting on the input.


Figure 9: The rearranged in an ascending order signal y(k) (a) and its rearranged particles (see Fig. 8): left (b), middle (c), right (d).


On the EM step (b) the LS problem (16), consisting of the identification of the parameters of the FIR filter (11), was solved using 38 rearranged observations (N̂2 = 38) (see Fig. 6a, c), excluding zeros, when the periodical signal (44) was acting on the input. The same problem for the Gaussian noise on the input was solved using 68 rearranged observations (N̂2 = 68) (see Fig. 9a, c) of the middle particle of data. The order of the FIR filter ν = 14 was chosen. Afterwards, on step (c) the estimate x̂(k) of the intermediate signal x(k) was reconstructed according to (11) by substituting the estimates, calculated on (b), for the unknown true values of the parameters. The reconstructed versions of the intermediate signal x(k) for both inputs are shown in Figures 10a and 11a.

Figure 10: The intermediate signal x(k) (curve 1), the output signal y(k) (curve 3) and the reconstructed versions of x(k), calculated using the estimates of the parameters of the FIR model (31) (curve 2) and the estimates of the parameters of the difference equation (50) (curve 4). Input: the periodical signal (44).

Then, on step (d), the estimates of the parameters Θ of the transfer function G(q, Θ) were calculated by the formula

Θ̂ = (X̃ᵀX̃)⁻¹ X̃ᵀȲ,  (46)

using the observations of the auxiliary signal x̂(k) that were determined on step (c). Here

Θ̂ᵀ = (b̂1, â1)  (47)

is the 2 × 1 vector of estimates of the parameters of (45), respectively,

X̃ = [ u(N̂1+1)  −x̂(N̂1+1)
      u(N̂1+2)  −x̂(N̂1+2)
      ⋮          ⋮
      u(N̂2−1)  −x̂(N̂2−1) ]   (48)


Figure 11: The values and markings are the same as in Fig. 10. Input: the Gaussian white noise.

is the (N̂2 − N̂1 − 1) × 2 matrix, consisting of rearranged observations of the input u(k) and the auxiliary signal x̂(k), and

Ȳ = (x̂(N̂1+1), x̂(N̂1+2), …, x̂(N̂2))ᵀ  (49)

is the (N̂2 − N̂1 − 1) × 1 vector of x̂(k) observations.

On step (e) the estimate x̂1(k) of the intermediate signal x(k) was recalculated according to

x̂1(k) = b̂1 u(k−1) + â1 x̂1(k−1), ∀ k = 2, …, 100,  (50)

using b̂1, â1, obtained on step (d). In such a case the estimates b̂1, â1 were equal to the true parameters: b1 = 0.3, a1 = 0.5. Two reconstructed versions of the intermediate signal x(k), obtained by the formula (50) using the distinct inputs, are shown in Figures 10b and 11b. It could be noted that the accuracy of the estimates of the intermediate signal, calculated by both approaches, is more or less similar, with the exception of the first 15 observations for the FIR model. Once x̂(k) is obtained, it is simple to separate the different particles of observations that belong to the respective side-sets. On step (f) the estimates of the parameters c1, d1 and c0, d0 were calculated according to the formulas (37), (38) and (39), (40), respectively. In such a case the rearranged observations of x̂1(k) and y(k) were substituted in the formulas (37), (38), and the estimates of c1 and d1 were determined: ĉ1 = d̂1 = 0.1. Then, on the same step, the estimates ĉ0 and d̂0 were calculated according to (39), (40). Their values also coincide with the values of the true coefficients: ĉ0 = −0.9, while d̂0 = 0.9. It could be noted that N̂1 = 28, N̂3 = 25 for the periodical signal (44) and N̂1 = 15, N̂3 = 12 for the Gaussian white noise on the input were used to calculate the estimates ĉ0, ĉ1, d̂0, d̂1, respectively. At last, on step (g) the estimate of the threshold was determined according to the formulas (41), (42). The estimate â was equal to the true value a = 1. All estimates were obtained during one iteration of the EM procedure.
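As a quick arithmetic check of step (g), substituting the estimates reported above into the threshold formulas (41) and (42) indeed returns the true threshold a = 1:

```python
# Estimates from Section 6.1 (nonlinearity (43), noise-free case)
d0_hat, d1_hat = 0.9, 0.1
c0_hat, c1_hat = -0.9, 0.1

a_right = d0_hat / (1.0 - d1_hat)   # (41): 0.9 / 0.9 = 1.0
a_left = -c0_hat / (1.0 - c1_hat)   # (42): 0.9 / 0.9 = 1.0

assert abs(a_right - 1.0) < 1e-9 and abs(a_left - 1.0) < 1e-9
```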

6.2 Nonlinearity—piecewise function with negative slopes


Figure 12: The PWA system to be simulated. Signals: u(k)—unnoisy input, y(k)—noisy output, x(k)—noisy intermediate signal.

The true intermediate signal x(k), k = 1, …, N, of the PWA system (see Fig. 12) is given by (3). The true output signal is described by

y(k) = { −7.6 + 0.1 x(k)  if x(k) < −7.5,
         x(k)             if −7.5 ≤ x(k) < 7.5,
         7.6 − 0.1 x(k)   if x(k) ≥ 7.5     (51)

if the periodical signal (44), and

y(k) = { −1.1 − 0.1 x(k)  if x(k) < −1,
         x(k)             if −1 ≤ x(k) < 1,
         1.1 − 0.1 x(k)   if x(k) ≥ 1       (52)

if the Gaussian white noise with variance one [3], were used as inputs to the linear block (45), respectively. Here b1 = 1, a1 = −0.7. First of all, N = 100 data points were generated, without additive process and measurement noises (see Fig. 13-20).

Figures 13, 15, 16 and 19 correspond to the periodical signal (44), while Figures 14, 17, 18 and 20 to the white Gaussian noise acting on the input of the Wiener system. The signals were rearranged in an ascending order (see Figures 16, 18). Afterwards, steps (b)-(g) of the EM algorithm were implemented in order to identify the PWA system (Fig. 12). On the EM step (b) the LS problem (16), consisting of the identification of the parameters of the FIR filter (11), was solved using the middle particle of data, consisting of 38 rearranged observations (N̂2 = 38) (see Fig. 16a, c), excluding zeros. In such a case the periodical signal (44) was acting on the input. The same problem for the Gaussian noise on the input was solved using 36 rearranged observations (N̂2 = 36) (see Fig. 18a, c). Again the order of the FIR filter ν = 14 was chosen. Afterwards, on step (c) the estimate x̂(k) of the intermediate signal x(k) was reconstructed according to (11) by substituting the estimates, calculated on (b), for the unknown true values of the parameters. The reconstructed versions of the intermediate signal x(k) for both inputs are shown in Figures 19a and 20a. Then, on step (d) the estimates of the parameters Θ of the transfer function G(q, Θ) were calculated


according to (46) by processing the observations of the auxiliary signal x̂(k) that were determined on step (c). On step (e) the estimate x̂1(k) of the intermediate signal x(k) was recalculated according to (50) by substituting the estimates b̂1, â1, obtained on step (d), for the true values of the parameters. In such a case the estimates were equal to the true parameters: b1 = 1, a1 = 0.7 for both types of nonlinearities applied here. The reconstructed versions of the intermediate signal x(k) for both inputs are shown in Figures 19b and 20b. It could be noted that the estimates of the intermediate signal, calculated by both approaches, deviate from the true x(k) negligibly, with the exception of the first 15 observations for the FIR model. The reconstructed versions of x̂(k) were used to separate the respective particles of observations (Figures 19, 20) that correspond to the different equations of the nonlinearities (51) or (52). On step (f) the estimates of the parameters c1, d1 and c0, d0 were calculated according to the formulas (37), (38) and (39), (40), respectively. In such a case the rearranged observations of x̂1(k) and y(k) were substituted in the formulas (37), (38), and the estimates of c1 and d1 of the nonlinearities (51), (52) were determined: ĉ1 = 0.1; d̂1 = −0.1. Then, on the same step, the estimates ĉ0 and d̂0 were calculated according to (39), (40). Their values also coincide with the values of the true coefficients: ĉ0 = −7.6, d̂0 = 7.6 for the nonlinearity (51) and ĉ0 = −1.1, d̂0 = 1.1 for the nonlinearity (52), respectively. N̂1 = 14, N̂3 = 8 for the periodical signal (44), and N̂1 = 32, N̂3 = 28 for the Gaussian white noise on the input were used to calculate the estimates ĉ0, ĉ1, d̂0, d̂1, respectively. At last, on step (g) the estimates of the threshold were determined according to the formulas (41), (42). The estimates â were equal to their true values, too. All estimates were obtained during one iteration of the EM procedure.

Figure 13: The signals of the simulated PWA system with piecewise nonlinearity (51), when the periodical signal (44) is acting on the input.


Figure 14: The signals of the simulated PWA system with piecewise nonlinearity (52), when the Gaussian white noise is acting on the input.

Figure 15: The signal y(k) (a) and its particles: left (b), middle (c), right (d), when the periodical signal (44) is acting on the input.


Figure 16: The signal y(k) rearranged in an ascending order (a) and its rearranged particles (see Fig. 15): left (b), middle (c), right (d).

Figure 17: The signal y(k) (a) and its particles: left (b), middle (c), right (d), when the Gaussian white noise is acting on the input.


Figure 18: The signal y(k) rearranged in an ascending order (a) and its rearranged particles (see Fig. 17): left (b), middle (c), right (d).

Figure 19: The intermediate signal x(k) (curve 1), the output signal y(k) (curve 3), and the reconstructed versions of x(k), calculated using the estimates of the parameters of the FIR model (31) (curve 2) and the estimates of the parameters of the difference equation (50) (curve 4). Input—the periodical signal (44).


Figure 20: The values and markings are the same as in Figure 19. Input—the Gaussian white noise.

It follows that, regardless of the type of the nonlinearity, the accuracy of the estimates ˆb1, ˆa1, ˆc0, ˆc1, ˆd0, ˆd1 and ˆa is the same: in a noise-free environment they are all equal to the true values of the parameters b1, a1, c0, c1, d0, d1 and a, respectively.

6.3 A Monte Carlo simulation

6.3.1 Nonlinearity—piecewise function with positive slopes

In order to determine how different realizations of the process and measurement noises affect the accuracy of estimation of the unknown parameters, we have used a Monte Carlo simulation with 10 data samples, each containing 100 input-output observation pairs. 10 experiments with the same realization of the process noise v(k) and different realizations of the measurement noise e(k), with different levels of intensity, were carried out. The intensity of the noises was set by choosing the respective signal-to-noise ratios SNR (the square root of the ratio of the signal and noise variances):

$$\mathrm{SNR}_v = \sqrt{\sigma_x^2 / \sigma_v^2}, \qquad (53)$$

for the process noise and

$$\mathrm{SNR}_e = \sqrt{\sigma_y^2 / \sigma_e^2} \qquad (54)$$

for the measurement one. For the process noise SNRv was equal to 10 and to

100 and for the measurement noise SNRe was equal to 1, to 10, and to 100,

(25)

Table 1: The averaged estimates of parameters b1, a1, c0, c1, d0, d1, and

thresh-olds a, −a with their confidence intervals. Input—the periodical signal (44). SN Rv=100. Estimates SN Re= 1 SN Re= 10 SN Re= 100 ˆb1 0.28 ± 0.07 0.3 ± 0.00 0.3 ± 0.00 ˆ a1 −0.52 ± 0.06 −0.5 ± 0.00 −0.5 ± 0.00 ˆ c0 −0.85 ± 0.31 −0.89 ± 0.04 −0.9 ± 0.00 ˆ c1 0.16 ± 0.21 0.1 ± 0.02 0.1 ± 0.00 ˆ d0 1 ± 0.5 0.89 ± 0.04 0.9 ± 0.00 ˆ d1 0.04 ± 0.4 0.1 ± 0.02 0.1 ± 0.00 ˆ a 0.97 ± 0.25 1 ± 0.02 1 ± 0.00 −ˆa −1 ± 0.22 −1 ± 0.02 −1 ± 0.00

and white Gaussian noise with variance 1 were chosen. In each ith experiment the estimates of parameters were calculated, using the EM procedure, which is described in the Section 5. Figures 21–28 present input-output signals and ˆx, ˆ

x1(t) (see the same Section, subsection 6.1) obtained in one arbitrary

experi-ment, respectively. Here SNRv=SNRe=10 were chosen.
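In simulation, the noise intensities defined by (53), (54) can be realized by scaling unit white noise; a minimal sketch, in which the signal is a stand-in and the helper name is illustrative:

```python
import numpy as np

def scaled_noise(signal, snr, rng):
    """White Gaussian noise whose standard deviation std(signal)/snr gives
    sqrt(var(signal)/var(noise)) == snr, as in (53), (54)."""
    sigma = np.std(signal) / snr
    return rng.normal(0.0, sigma, size=len(signal))

rng = np.random.default_rng(1)
x = np.sin(0.2 * np.arange(1000))       # stand-in for the intermediate signal x(k)
v = scaled_noise(x, snr=10.0, rng=rng)  # process noise with SNR_v = 10
```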

During the Monte Carlo simulation the averaged values of the estimates of the parameters, and of the two values of the threshold (positive and negative), together with their confidence intervals, were obtained by the formulas

$$\Delta_1 = \pm t_\alpha \frac{\hat\sigma_{b_1}}{\sqrt{M}}, \quad \Delta_2 = \pm t_\alpha \frac{\hat\sigma_{a_1}}{\sqrt{M}}, \quad \forall k = \overline{1,100}, \qquad (55)$$

$$\Delta_3 = \pm t_\alpha \frac{\hat\sigma_{c_0}}{\sqrt{M}}, \quad \Delta_4 = \pm t_\alpha \frac{\hat\sigma_{c_1}}{\sqrt{M}}, \quad \forall k = \overline{1,100}, \qquad (56)$$

$$\Delta_5 = \pm t_\alpha \frac{\hat\sigma_{d_0}}{\sqrt{M}}, \quad \Delta_6 = \pm t_\alpha \frac{\hat\sigma_{d_1}}{\sqrt{M}}, \quad \forall k = \overline{1,100}, \qquad (57)$$

$$\Delta_7 = \pm t_\alpha \frac{\hat\sigma_{a}}{\sqrt{M}}, \quad \Delta_8 = \pm t_\alpha \frac{\hat\sigma_{a}}{\sqrt{M}}, \quad \forall k = \overline{1,100}. \qquad (58)$$

Here the ˆσ are estimates of the standard deviation σ of the respective estimates; α = 0.05 is the significance level; t_α = 2.26 is the 100(1 − α)% point of Student's distribution with M − 1 degrees of freedom; and M = 10 is the number of experiments.
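The averaging and the confidence half-widths of (55)-(58) over the M = 10 experiments amount to the following sketch (using the report's t_α = 2.26; the function name is illustrative):

```python
import numpy as np

def averaged_estimate(estimates, t_alpha=2.26):
    """Mean of the M Monte Carlo estimates and the confidence half-width
    Delta = t_alpha * sigma_hat / sqrt(M), cf. (55)-(58)."""
    est = np.asarray(estimates, dtype=float)
    M = len(est)
    sigma_hat = np.std(est, ddof=1)       # sample standard deviation
    delta = t_alpha * sigma_hat / np.sqrt(M)
    return est.mean(), delta
```

Applied once per parameter, this yields the "mean ± half-width" entries of the kind reported in Tables 1-8.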

In Tables 1–4, for each input, the averaged estimates of the parameters of the simulated PWA system (see Fig. 3) with the linear part (45) (b1 = 0.3; a1 = −0.5) and piecewise nonlinearity (43), together with the two values of the threshold and their confidence intervals (55)–(58), are presented. It could be noted that in each experiment the SNRv value was fixed and the same, while the values of SNRe were varied.


Figure 21: The signals of the simulated PWA system (see Fig. 3) with the linear part (45) (b1 = 0.3; a1 = −0.5) and piecewise nonlinearity (43) in a noisy environment, when the periodical signal (44) is acting on the input. SNRv = SNRe = 10.

Figure 22: The values and markings are the same as in Figure 21. Input—the Gaussian white noise.


Figure 23: The signal y(k) (a) and its particles: left (b), middle (c), right (d), when the periodical signal (44) is acting on the input.

Figure 24: The signal y(k) rearranged in an ascending order (a) and its rearranged particles (see Fig. 23): left (b), middle (c), right (d).


Figure 25: The signal y(k) (a) and its particles: left (b), middle (c), right (d), when the Gaussian white noise is acting on the input. SNRv = SNRe = 10.

Figure 26: The signal y(k) rearranged in an ascending order (a) and its rearranged particles (see Fig. 25): left (b), middle (c), right (d).


Figure 27: The intermediate signal x(k) (curve 1), the output signal y(k) (curve 3), and the reconstructed versions of x(k), calculated using the estimates of the parameters of the FIR model (31) (curve 2) and the estimates of the parameters of the difference equation (50) (curve 4). Input—the periodical signal (44).

Figure 28: The values and markings are the same as in Figure 27. Input—the Gaussian white noise.


Table 2: The values and markings are the same as in Table 1. Input—the periodical signal (44). SNRv = 10.

Estimates   SNRe = 1        SNRe = 10       SNRe = 100
ˆb1         0.29 ± 0.07     0.31 ± 0.01     0.32 ± 0.00
ˆa1         −0.5 ± 0.06     −0.5 ± 0.01     −0.49 ± 0.00
ˆc0         −0.84 ± 0.32    −0.89 ± 0.04    −0.9 ± 0.00
ˆc1         0.17 ± 0.22     0.1 ± 0.02      0.1 ± 0.00
ˆd0         0.92 ± 0.49     0.81 ± 0.04     0.81 ± 0.00
ˆd1         0.09 ± 0.38     0.14 ± 0.03     0.14 ± 0.00
ˆa          0.89 ± 0.31     0.95 ± 0.02     0.95 ± 0.00
−ˆa         −0.99 ± 0.22    −1 ± 0.02       −1 ± 0.00

Table 3: The values and markings are the same as in Table 1. Input—the Gaussian white noise. SNRv = 100.

Estimates   SNRe = 1        SNRe = 10       SNRe = 100
ˆb1         0.31 ± 0.04     0.3 ± 0.00      0.3 ± 0.00
ˆa1         −0.39 ± 0.08    −0.5 ± 0.01     −0.5 ± 0.00
ˆc0         −0.5 ± 0.86     −0.84 ± 0.06    −0.89 ± 0.00
ˆc1         0.42 ± 0.72     0.1 ± 0.05      0.1 ± 0.00
ˆd0         1.07 ± 0.51     0.91 ± 0.06     0.9 ± 0.00
ˆd1         0.04 ± 0.44     0.15 ± 0.05     0.1 ± 0.00
ˆa          1.03 ± 0.4      0.99 ± 0.02     1 ± 0.00
−ˆa         −1.21 ± 0.19    −1 ± 0.02       −1 ± 0.00

Table 4: The values and markings are the same as in Table 1. Input—the Gaussian white noise. SNRv = 10.

Estimates   SNRe = 1        SNRe = 10       SNRe = 100
ˆb1         0.31 ± 0.04     0.3 ± 0.00      0.3 ± 0.00
ˆa1         −0.39 ± 0.08    −0.5 ± 0.01     −0.5 ± 0.00
ˆc0         −0.51 ± 0.88    −0.85 ± 0.06    −0.9 ± 0.00
ˆc1         0.42 ± 0.72     0.09 ± 0.05     0.1 ± 0.00
ˆd0         1.08 ± 0.51     0.92 ± 0.06     0.9 ± 0.00
ˆd1         0.03 ± 0.45     0.14 ± 0.05     0.1 ± 0.00
ˆa          1.03 ± 0.36     0.99 ± 0.01     1 ± 0.00
−ˆa         −1.21 ± 0.19    −1 ± 0.02       −1 ± 0.00


Table 5: The values and markings are the same as in Table 1. Nonlinearity—piecewise function with negative slopes. Input—the periodical signal (44). SNRv = 100.

Estimates   SNRe = 1        SNRe = 10       SNRe = 100
ˆb1         1.31 ± 0.33     1.04 ± 0.03     1.02 ± 0.00
ˆa1         −0.68 ± 0.05    −0.7 ± 0.01     −0.7 ± 0.00
ˆc0         −8.19 ± 5.13    −7.74 ± 0.67    −7.62 ± 0.07
ˆc1         0.17 ± 0.76     0.08 ± 0.06     0.1 ± 0.01
ˆd0         4.59 ± 9.27     7.64 ± 1.18     7.62 ± 0.12
ˆd1         −0.02 ± 0.44    −0.11 ± 0.12    −0.1 ± 0.00
ˆa          13.67 ± 11.02   6.84 ± 0.32     6.92 ± 0.03
−ˆa         −1.4 ± 16.57    −7.27 ± 1.19    −6.94 ± 0.1

6.3.2 Nonlinearity—piecewise function with negative slopes

For the PWA system with the nonlinearities (51), (52), Figures 29–36 present the input-output signals and ˆx(k), ˆx1(k) (see subsection 6.1) determined in one arbitrary experiment. Here SNRv = SNRe = 10 were chosen. In Tables 5–8, for each input, the averaged estimates of the parameters of the simulated PWA system (see Fig. 12) with the linear part (45) (b1 = 1; a1 = −0.7) and nonlinearities (51), (52), including the two values of the threshold (positive and negative) with their confidence intervals (55)–(58), are presented.

The Monte Carlo simulation results (see Tables 1–8) imply that the accuracy of the parametric identification depends on many factors, such as the length of the data particle, the type of input signal, the intensity of the process and measurement noises, the type of nonlinearity, the threshold of the nonlinearity, etc.

The additive noise in the observations to be processed clearly influences the quality of the parameter estimates: if SNRe or SNRv decreases, the accuracy of the estimates decreases, too. This is obvious when SNRe = 1. The quality of the estimates also depends on the input signal exciting the Wiener system and on the type of nonlinearity used. The threshold strongly affects the accuracy of the estimates, because of the direct dependence between its value and the length of the respective data particle.

In general, the Monte Carlo simulation results confirm the efficiency of the proposed approach to the identification of the parameters of the Wiener system. On the other hand, some problems arise here, such as stopping the parametric identification calculations when the required accuracy of the estimates is achieved, or when the current observation to be processed belongs to a different set. These problems require further investigation. It could be noted that in such a case recursive identification techniques could be preferable [11], [12].


Figure 29: The signals of the simulated PWA system (see Fig. 12) with the linear part (45) (b1 = 1; a1 = −0.7) and piecewise nonlinearity (51) in a noisy environment, when the periodical signal (44) is acting on the input. SNRv = SNRe = 10.

Figure 30: The values and markings are the same as in Fig. 29. Input—the Gaussian white noise. SNRv = SNRe = 10.


Figure 31: The signal y(t) (a) and its particles: left (b), middle (c), right (d), when the periodical signal (44) is acting on the input.

Figure 32: The signal y(k) rearranged in an ascending order (a) and its rearranged particles (see Fig. 31): left (b), middle (c), right (d).


Figure 33: The signal y(k) (a) and its particles: left (b), middle (c), right (d), when the Gaussian white noise is acting on the input.

Figure 34: The signal y(k) rearranged in an ascending order (a) and its rearranged particles (see Fig. 33): left (b), middle (c), right (d).


Figure 35: The intermediate signal x(t) (curve 1), the output signal y(k) (curve 3), and the reconstructed versions of x(k), calculated using the estimates of the parameters of the FIR model (31) (curve 2) and the estimates of the parameters of the difference equation (50) (curve 4). Input—the periodical signal (44).

Figure 36: The values and markings are the same as in Figure 35. Input—the Gaussian white noise.


Table 6: The values and markings are the same as in Table 1. Nonlinearity—piecewise function with negative slopes. Input—the periodical signal (44). SNRv = 10.

Estimates   SNRe = 1        SNRe = 10       SNRe = 100
ˆb1         1.42 ± 0.33     1.15 ± 0.03     1.13 ± 0.00
ˆa1         −0.68 ± 0.05    −0.68 ± 0.01    −0.68 ± 0.00
ˆc0         −8.47 ± 4.9     −7.86 ± 0.65    −7.71 ± 0.07
ˆc1         0.27 ± 0.72     0.01 ± 0.11     0.02 ± 0.01
ˆd0         3.11 ± 9.28     6.49 ± 1.14     6.49 ± 0.11
ˆd1         −0.03 ± 0.39    0.07 ± 0.05     0.09 ± 0.01
ˆa          13.71 ± 12.51   6.5 ± 0.45      6.62 ± 0.04
−ˆa         1.81 ± 24.4     −7.42 ± 1.11    −7.09 ± 0.1

Table 7: The values and markings are the same as in Table 1. Nonlinearity—piecewise function with negative slopes. Input—the Gaussian white noise with variance equal to one. SNRv = 100.

Estimates   SNRe = 1        SNRe = 10       SNRe = 100
ˆb1         0.83 ± 0.21     0.99 ± 0.02     1.01 ± 0.00
ˆa1         −0.48 ± 0.17    −0.7 ± 0.01     −0.71 ± 0.00
ˆc0         −1.21 ± 0.17    −1.1 ± 0.03     −1.09 ± 0.00
ˆc1         0.19 ± 0.25     0.09 ± 0.02     0.09 ± 0.00
ˆd0         0.9 ± 0.39      1.09 ± 0.05     1.1 ± 0.01
ˆd1         −0.02 ± 0.3     −0.09 ± 0.03    −0.09 ± 0.00
ˆa          0.82 ± 0.28     0.99 ± 0.02     1 ± 0.00
−ˆa         −1.1 ± 0.28     −1.01 ± 0.04    −1 ± 0.00

Table 8: The values and markings are the same as in Table 1. Nonlinearity—piecewise function with negative slopes. Input—the Gaussian white noise with variance equal to one. SNRv = 10.

Estimates   SNRe = 1        SNRe = 10       SNRe = 100
ˆb1         0.79 ± 0.21     0.95 ± 0.02     0.97 ± 0.00
ˆa1         −0.48 ± 0.17    −0.71 ± 0.01    −0.72 ± 0.00
ˆc0         −1.17 ± 0.17    −1.04 ± 0.03    −1.03 ± 0.00
ˆc1         0.24 ± 0.3      0.12 ± 0.02     0.12 ± 0.00
ˆd0         0.89 ± 0.4      1.07 ± 0.05     1.09 ± 0.01
ˆd1         −0.02 ± 0.32    −0.09 ± 0.03    −0.1 ± 0.00
ˆa          1.03 ± 0.28     0.92 ± 0.04     0.91 ± 0.00
−ˆa         0.81 ± 0.29     −0.98 ± 0.02    −0.99 ± 0.00


7 Conclusions

One important type of hybrid systems encountered in practice is piecewise affine Wiener systems. As a rule, piecewise nonlinearities include a saturation-like function. They cannot be described by polynomials and are usually noninvertible. On the other hand, it is known that a PWA system consists of several subsystems, between which switchings occur at occasional time moments. They could be represented by different threshold regression models, including the model of the linear part of the PWA system. Of course, for the parametric identification of such systems ordinary least squares can be applied, if it is known how to recognize the observations that belong to the different subsystems. In reality, observations do not possess distinct signs that could help us to separate them into different sets. Usually such a problem is solved beforehand as the problem of a Wiener system with a noninvertible nonlinearity, with all the well-known consequences.

It is shown here (Section 4) analytically that the problem of identification of PWA systems could be reduced essentially by a simple rearrangement of the data in an ascending order according to their values. There follows a partition of the data into three data sets that correspond to distinct threshold regression models. Afterwards, estimates of the unknown parameters of the linear regression models are obtained by processing the respective sets of the input observations and of the output observations rearranged in an ascending order. A technique based on ordinary LS and on the expectation-maximization algorithm is proposed here for the identification of the parameters of the linear and nonlinear parts of the Wiener system, including the unknown threshold of the piecewise nonlinearity. During successive steps, missing values of observations of the respective particles are replaced by their estimates, determined at the E-step of EM using ordinary LS tuned for the missing data.
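The rearrangement and three-way partition described above can be sketched as follows; the saturation levels y_lo, y_hi and the function name are illustrative assumptions, not the report's notation:

```python
import numpy as np

def rearrange_and_partition(u, y, y_lo, y_hi):
    """Sort the output observations in ascending order (keeping the matching
    inputs) and split them into the three particles of the piecewise model:
    lower saturation, linear segment, upper saturation."""
    y = np.asarray(y, dtype=float)
    u = np.asarray(u, dtype=float)
    order = np.argsort(y)                 # ascending rearrangement
    u_s, y_s = u[order], y[order]
    left = y_s <= y_lo                    # lower-saturation particle
    right = y_s >= y_hi                   # upper-saturation particle
    middle = ~(left | right)              # linear-segment particle
    return (u_s[left], y_s[left]), (u_s[middle], y_s[middle]), (u_s[right], y_s[right])
```

Ordinary LS would then be applied to each particle separately, with the EM iteration supplying estimates for the observations treated as missing.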
Various results of numerical simulation, including a Monte Carlo one (see Figures 21—36 and Tables 1—8), prove the efficiency of the proposed approach for the identification of PWA systems.


References

[1] R. Pupeikis, D. Navakauskas, and L. Ljung. Identification of Wiener systems with hard and discontinuous nonlinearities. Technical Report LiTH-ISY-R-2501, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden, 2003.

[2] A. Hagenblad. Identification of Wiener models. Technical Report LiTH-ISY-R-2031, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden, May 1998.

[3] A. Hagenblad. Aspects of the Identification of Wiener Models. Licentiate thesis no. 793, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden, November 1999.

[4] J. Vörös. Parameter identification of Wiener systems with discontinuous nonlinearities. Systems & Control Letters, 44:363–372, 2001.

[5] J. Roll. Identification of piecewise affine Wiener models via mixed-integer programming. Technical Report LiTH-ISY-R-2346, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden, 2001.

[6] J. Roll. Local and piecewise affine approaches to system identification. PhD thesis, Department of Electrical Engineering, Linköping University, 2003.

[7] B.E. Hansen. Threshold effects in non-dynamic panels: Estimation, testing, and inference. Journal of Econometrics, 93:345–368, 1999.

[8] L. Ljung. Estimating linear time-invariant models of nonlinear time-varying systems. European Journal of Control, 7:203–219, 2001.

[9] B.E. Hansen and B. Seo. Testing for two-regime threshold cointegration in vector error-correction models. Journal of Econometrics, 110:293–318, 2002.

[10] P. Lancaster and M. Tismenetsky. The Theory of Matrices. Academic Press, Inc., second edition, 1985.

[11] L. Ljung. System identification. Theory for the user. Prentice–Hall PTR, second edition, 1999.

[12] L. Ljung. Recursive identification algorithms. Technical Report LiTH-ISY-R-2366, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden, 2001.
