
Asymptotic Variance Expressions for a Frequency Domain Subspace Based System Identification Algorithm

Tomas McKelvey
Dept. of Electrical Engineering, Linköping University, S-581 83 Linköping, Sweden.
Phone: +46 13 282461, Fax: +46 13 282622, Email: tomas@isy.liu.se

February 27, 1995. Submitted to 34th CDC. Report LiTH-ISY-R-1774, ISSN 1400-3902.
(This work was supported in part by the Swedish Research Council for Engineering Sciences (TFR), which is gratefully acknowledged.)

Abstract. This paper deals with the analysis of a frequency domain identification algorithm. The algorithm identifies state-space models given samples of the frequency response at equidistant frequencies. A first order perturbation analysis is performed, revealing an explicit expression for the resulting transfer function perturbation. A stochastic analysis shows that the estimate is asymptotically (in data) normally distributed, and an explicit expression for the resulting variance is given. Monte Carlo simulations illustrate the validity of the variance expression also in the non-asymptotic case.

Keywords: Identification, Subspace Method, Stochastic Analysis, Variance Expression.

1 Introduction

The present paper deals with a frequency domain based identification problem. In this formulation, the experimental data are taken to be noisy values of the frequency response of the system at a given set of frequencies. In a number of application cases frequency domain data are better suited for identification than time-domain data [18, 14]. Frequency domain subspace algorithms [7, 11, 12, 17, 16] are based on the famous Ho and Kalman realization algorithm [4] or Kung's version of it [8]. The realization algorithms [4, 8] find a minimal state-space realization given a finite sequence of Markov parameters.

The Markov parameters, or impulse response coefficients, of the system are estimated from the inverse discrete Fourier transform (DFT) of the frequency response data. The approach taken in [7] is exact only if the system has a finite pulse response, and therefore yields very poor estimates for lightly damped systems. In [17, 16], the inverse DFT technique is combined with a subspace identification step, yielding the true finite-dimensional system in spite of the bias introduced by the inverse DFT and possessed by the estimated Markov parameters. The topic of this paper is to present and analyze a variation of the algorithm in [16] which identifies the B and D matrices in a different way. The algorithm is analyzed under the assumption of bounded errors as well as from a stochastic perspective on the errors. In the stochastic analysis we show strong consistency of the method and show that the asymptotic estimate is normally distributed. An explicit expression for the asymptotic variance is also given.

The paper is outlined as follows. In the next section we formulate the identification problem, and in the third section the identification algorithm is presented. The fourth section is devoted to error analysis: first the case of bounded errors is considered, followed by a first order perturbation analysis; last, a stochastic analysis is performed where consistency and asymptotic normality are derived. In Section 5 a Monte Carlo simulation example is given, illustrating the validity of the derived asymptotic variance expression. The paper is concluded in Section 6.

2 Problem formulation

Assume that the true system G is a stable multivariable linear time-invariant discrete-time system with input/output properties characterized by the impulse response coefficients g_k through the equation

    y(t) = \sum_{k=0}^{\infty} g_k u(t-k)    (1)

where y(t) ∈ R^p, u(t) ∈ R^m and g_k ∈ R^{p×m}. If the system is of finite order n it can be described by a state-space model

    x(t+1) = A_0 x(t) + B_0 u(t),   y(t) = C_0 x(t) + D_0 u(t)    (2)

where x(t) ∈ R^n. The state-space model (2) is a special case of (1) with

    g_k = D_0,  k = 0;   g_k = C_0 A_0^{k-1} B_0,  k > 0.    (3)

The frequency response of (1) is

    G(e^{jω}) = \sum_{k=0}^{\infty} g_k e^{-jωk}    (4)

which for the state-space model (2) can be written as

    G(e^{jω}) = C_0 (e^{jω} I - A_0)^{-1} B_0 + D_0.    (5)
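The two frequency response formulas above are easy to exercise numerically. The following short sketch is not part of the original report: it assumes NumPy and an arbitrary example system, evaluates (5), and cross-checks it against a truncation of the series (4).

    import numpy as np

    def freq_response(A, B, C, D, omegas):
        """Evaluate G(e^{jw}) = C (e^{jw} I - A)^{-1} B + D as in (5).

        Returns an array of shape (len(omegas), p, m)."""
        I = np.eye(A.shape[0])
        return np.array([C @ np.linalg.solve(np.exp(1j * w) * I - A, B) + D
                         for w in omegas])

    if __name__ == "__main__":
        # Arbitrary stable example system (rho(A) = 0.5 < 1).
        A = np.array([[0.5, 0.2], [0.0, -0.3]])
        B = np.array([[1.0], [0.5]])
        C = np.array([[1.0, -1.0]])
        D = np.array([[0.1]])
        w = np.linspace(0, np.pi, 5)
        G_ss = freq_response(A, B, C, D, w)
        # Truncation of the impulse response series (4); g_k = C A^{k-1} B.
        K = 200
        G_ir = np.array([D + sum(C @ np.linalg.matrix_power(A, k - 1) @ B
                                 * np.exp(-1j * wk * k) for k in range(1, K))
                         for wk in w])
        assert np.allclose(G_ss, G_ir, atol=1e-8)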

The problem formulation is then: given a finite number of possibly noisy samples

    G_k = G(e^{jω_k}) + n_k,   ω_k ∈ [0, π]    (6)

of the frequency response of the system, find a finite dimensional state-space system (2) of order n, denoted by Ĝ, such that the true system and the identified model are "close". In (6), G(e^{jω_k}) represents the system frequency function (4) and n_k denotes the noise. Closeness between systems is quantified by the distance between the true and estimated transfer functions,

    ||Ĝ - G||_∞ = sup_ω σ̄(Ĝ(e^{jω}) - G(e^{jω})).    (7)

Here σ̄(A) denotes the largest singular value of the matrix A and (·)^H the complex conjugate transpose. Since σ̄(A) ≤ ||A||_F we notice that

    ||Ĝ - G||_∞ ≤ sup_ω ||Ĝ(e^{jω}) - G(e^{jω})||_F.

3 The Algorithm

If the impulse response coefficients (3) are given, well-known realization algorithms can be used to obtain state-space realizations [4, 20, 8, 6]. The algorithm to be presented is closely related to these results but does not require the true impulse response coefficients to be known. Assume that frequency response data G_k on a set of uniformly spaced frequencies ω_k = πk/M, k = 0, ..., M, are given. Since G is a transfer function with a real-valued impulse response (1), frequency response data on [0, π] can be extended to [0, 2π], which forms the first step of the algorithm.

Algorithm 1

1. Extend the data by conjugate symmetry:

    G_{M+k} := G*_{M-k},   k = 1, ..., M-1,    (8)

where (·)* denotes complex conjugation.

2. Let ĥ_i be defined by the 2M-point inverse discrete Fourier transform (IDFT)

    ĥ_i = (1/2M) \sum_{k=0}^{2M-1} G_k e^{j2πik/2M},   i = 0, ..., q+r-1.    (9)

3. Let the block Hankel matrix Ĥ_{qr} be defined as

    Ĥ_{qr} = [ ĥ_1     ĥ_2     ...  ĥ_r
               ĥ_2     ĥ_3     ...  ĥ_{r+1}
               ...                  ...
               ĥ_q     ĥ_{q+1} ...  ĥ_{q+r-1} ]  ∈ R^{qp×rm}    (10)

with the number of block rows q > n and block columns r ≥ n. The size of Ĥ_{qr} is limited by q + r ≤ 2M.

4. Let the singular value decomposition of Ĥ_{qr} be

    Ĥ_{qr} = [Û_s Û_o] [Σ̂_s 0; 0 Σ̂_o] [V̂_s^T; V̂_o^T]    (11)

where Σ̂_s contains the n largest singular values.

5. Determine A and C:

    Â = (J_1 Û_s)^† J_2 Û_s    (12)
    Ĉ = J_3 Û_s    (13)

6. Determine B and D:

    B̂ = (I - Â^{2M}) Σ̂_s V̂_s^T J_4    (14)
    D̂ = ĥ_0 - Ĉ Â^{2M-1} (I - Â^{2M})^{-1} B̂    (15)

where

    J_1 = [I_{(q-1)p}  0_{(q-1)p×p}],   J_2 = [0_{(q-1)p×p}  I_{(q-1)p}]    (16)
    J_3 = [I_p  0_{p×(q-1)p}],          J_4 = [I_m; 0_{(r-1)m×m}]    (17)

and I_i denotes the i×i identity matrix, 0_{i×j} the i×j zero matrix, and X^† = (X^T X)^{-1} X^T the Moore-Penrose pseudo-inverse of the full-rank matrix X of size q×r with q > r.

In the estimation of B and D, Algorithm 1 differs from the algorithm presented in [16], where

    (B̂, D̂) = arg min_{B,D} \sum_{k=0}^{2M-1} || G_k - D - Ĉ (e^{jω_k} I - Â)^{-1} B ||_F^2.    (18)

After the IDFT step 2, Algorithm 1 differs from Kung's algorithm [8] in two ways: the estimation of B and D, and the selection of state-space basis. The difference in B and D stems from the fact that Algorithm 1 uses the coefficients ĥ_k from the IDFT instead of the impulse response g_k. The realization given by the algorithm using noise-free data is balanced in the sense that the q-block-row observability matrix

    O_q = [Ĉ; Ĉ Â; ...; Ĉ Â^{q-1}]    (19)

and the r-block-column controllability matrix

    C_r = [B̂  Â B̂  ...  Â^{r-1} B̂]    (20)

satisfy

    O_q^T O_q = I,   C_r C_r^T = (I - Â^{2M}) Σ̂_s^2 (I - Â^{2M})^T.    (21)

As M, q and r jointly tend to infinity, the products (21) converge to the observability and controllability Gramians, respectively, and the diagonal elements of Σ̂_s converge to the Hankel singular values of the system. Using Û_s in (12) and (13), instead of Û_s Σ̂_s^{1/2} as in Kung's algorithm, results in a different state-space basis. However, the identified transfer functions are still the same if noise-free data is used.
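For concreteness, the following sketch transcribes Algorithm 1 into NumPy. It is an illustrative reading of steps 1-6 above, not the author's implementation; the function name and interface are invented for this note.

    import numpy as np

    def subspace_freq_id(G, n, q, r):
        """Algorithm 1: identify (A, B, C, D) of order n from samples
        G[k] = G(e^{j pi k / M}), k = 0..M, where G has shape (M+1, p, m).

        Requires q > n, r >= n and q + r <= 2*M."""
        M = G.shape[0] - 1
        p, m = G.shape[1], G.shape[2]
        assert q > n and r >= n and q + r <= 2 * M

        # Step 1: extend to [0, 2*pi) by conjugate symmetry, eq (8).
        G_ext = np.concatenate([G, np.conj(G[M-1:0:-1])], axis=0)  # 2M samples

        # Step 2: 2M-point IDFT, eq (9); real-valued by the symmetry of (8).
        h = np.fft.ifft(G_ext, axis=0)[:q + r].real

        # Step 3: block Hankel matrix of h_1 ... h_{q+r-1}, eq (10).
        H = np.vstack([np.hstack([h[i + j + 1] for j in range(r)])
                       for i in range(q)])

        # Step 4: SVD, eq (11); keep the n dominant directions.
        U, s, Vt = np.linalg.svd(H)
        Us, Ss, Vst = U[:, :n], np.diag(s[:n]), Vt[:n]

        # Step 5: A and C from the shift structure, eqs (12)-(13).
        A = np.linalg.pinv(Us[:-p]) @ Us[p:]   # (J1 Us)^dagger J2 Us
        C = Us[:p]                             # J3 Us

        # Step 6: B and D, eqs (14)-(15).
        A2M = np.linalg.matrix_power(A, 2 * M)
        B = (np.eye(n) - A2M) @ Ss @ Vst[:, :m]          # ... J4
        D = h[0] - C @ np.linalg.matrix_power(A, 2 * M - 1) @ \
            np.linalg.solve(np.eye(n) - A2M, B)
        return A, B, C, D

Note that step 2 maps directly onto np.fft.ifft, whose sign and 1/(2M) scaling conventions agree with (9), and that the row selections Us[:-p], Us[p:] and Us[:p] realize the multiplications by J_1, J_2 and J_3.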

3.1 Practical Aspects

When facing a practical identification problem, many models of different orders are estimated and compared in order to find a suitable "best" model. In the presented algorithm most of the computational effort lies in the SVD factorization (11). Given the factorization (11), all models of order less than q are easily obtained from the rest of the algorithm by letting n range from 1 to q-1. Hence, the choice of an appropriate model order can easily be accomplished by direct comparison of a wide range of models of different orders at a low computational cost.

Often stable models are sought, for various reasons. The algorithm, as it stands, does not guarantee stability of the estimated models when using a finite number of noisy frequency data. However, with an additional step which projects the unstable eigenvalues of Â into the unit disc, stability can be ensured for Algorithm 1: transform Â to complex Schur form; project any diagonal elements (eigenvalues) λ_i which lie outside the unit disc into the unit disc by replacing them with 1/λ_i; eigenvalues on the unit circle can be moved into the unit disc by changing the magnitude of the eigenvalue to 1 - ε for some small ε. Finally, transform Â back to its original form before proceeding further.
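The eigenvalue projection just described can be sketched as follows. This is an illustrative reading of the paragraph above, assuming SciPy for the Schur factorization; the margin parameter is an invented name for the ε above.

    import numpy as np
    from scipy.linalg import schur, rsf2csf

    def project_stable(A, margin=1e-6):
        """Project the eigenvalues of a real matrix A into the unit disc:
        complex Schur form, reflect |lambda| > 1 to 1/lambda, and pull
        unit-circle eigenvalues to magnitude 1 - margin."""
        T, Z = schur(A)              # real Schur form, A = Z T Z^T
        T, Z = rsf2csf(T, Z)         # complex Schur form: T upper triangular
        d = np.diag(T).copy()
        for i, lam in enumerate(d):
            if abs(lam) > 1.0:
                d[i] = 1.0 / lam     # reflection into the unit disc
            elif abs(abs(lam) - 1.0) < 1e-12:
                d[i] = (1.0 - margin) * lam / abs(lam)
        T[np.diag_indices_from(T)] = d
        # Conjugate eigenvalue pairs are projected symmetrically, so the
        # result is real up to rounding; .real strips the numerical residue.
        return (Z @ T @ Z.conj().T).real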

3.2 Noise Free Case

Let us now assume that n_k = 0 in (6), i.e. noise-free measurements. In the sequel we denote by X the noise-free counterpart of the symbol X̂. Furthermore, assume that the frequency responses G_k originate from an nth order system with a state-space representation (A_0, B_0, C_0, D_0). Algorithm 1 is based on the fact that the coefficients ĥ_i resulting from the finite IDFT of the frequency response form a Hankel matrix with the same range space as the observability matrix of the true system. It is well known that the Hankel matrix of the impulse response has this property [4, 8], but much less known that the Hankel matrix (10) also has it. A and C are thus estimated as in Kung's algorithm. However, the estimates of B and D have to take into account that the Hankel matrix is not based on the impulse response; the factor (I - Â^{2M}) in (14) accomplishes this. The usefulness of Algorithm 1 in the case of finite M follows from the following key theorem.

Theorem 1. Let G be an nth order stable discrete-time system represented by (2). Then n+2 noise-free equidistant frequency response measurements of G on [0, π] are sufficient to identify a state-space realization with a transfer function equal to G by Algorithm 1.

Proof: Denote by O_0 and 𝒞_0 the q-block-row extended observability (19) and r-block-column controllability (20) matrices of the system realization (A_0, B_0, C_0, D_0). Let ρ(A) denote the spectral radius [5] of A. Since G is a stable transfer function, it can be represented by the following Taylor series:

    G(z) = D_0 + C_0 (zI - A_0)^{-1} B_0 = D_0 + \sum_{k=1}^{\infty} C_0 A_0^{k-1} B_0 z^{-k} = \sum_{k=0}^{\infty} g_k z^{-k},   ρ(A_0) < |z|.

Notice that h_k defined by (9) can be written as

    h_k = (1/2M) \sum_{s=0}^{2M-1} \sum_{i=0}^{\infty} g_i e^{j2πs(k-i)/2M} = \sum_{i=0}^{\infty} g_{k+2iM}
        = C_0 A_0^{k-1} ( \sum_{i=0}^{\infty} A_0^{2iM} ) B_0 = C_0 A_0^{k-1} (I - A_0^{2M})^{-1} B_0,   k > 0,

and therefore H_{qr} can be factored as

    H_{qr} = O_0 (I - A_0^{2M})^{-1} 𝒞_0.    (22)

From the dimensions of the factors in (22) it is clear that H_{qr} has maximal rank n. Furthermore, since the system is stable, ρ(A_0) < 1 and (I - A_0^{2M}) is always of rank n. Minimality of the system also implies that both 𝒞_0 and O_0 are of rank n, and hence so is H_{qr} if r ≥ n and q > n. In (11) then Σ_o = 0 and the column range spaces of H_{qr}, O_0 and U_s are equal. A valid extended observability matrix O is then given by U_s, since there exists a non-singular n×n matrix S such that

    O = U_s = O_0 S.

U_s is thus an extended observability matrix of a state-space realization (A, B, C, D) which is similar to the original realization (A_0, B_0, C_0, D_0). Equations (12) and (13) yield A and C matrices related to the original realization by A = S^{-1} A_0 S and C = C_0 S, as shown in [8]. In terms of the estimated A and C we notice

    H_{qr} = O_0 (I - A_0^{2M})^{-1} 𝒞_0 = O S^{-1} (I - A_0^{2M})^{-1} 𝒞_0 = O (I - S^{-1} A_0^{2M} S)^{-1} S^{-1} 𝒞_0 = O (I - A^{2M})^{-1} 𝒞.

From this it follows that Σ_s V_s^T = (I - A^{2M})^{-1} 𝒞. Hence we obtain

    (I - A^{2M}) Σ_s V_s^T J_4 = B,

which proves (14). From the IDFT (9) we have

    h_0 = g_0 + \sum_{i=1}^{\infty} g_{2iM} = D_0 + \sum_{i=1}^{\infty} C_0 A_0^{2iM-1} B_0 = D_0 + C A^{2M-1} (I - A^{2M})^{-1} B,

which proves (15). Then the state-space realizations (A, B, C, D) and (A_0, B_0, C_0, D_0) are similar and have equal transfer functions. Letting q = n+1, r = n, M = n+2, we satisfy the condition q + r ≤ 2M, which completes the proof. □

3.3 Uniqueness of the Realization

The particular realization obtained by the algorithm is not completely unique. The non-uniqueness originates from the SVD (11). If the singular values in Σ_s are assumed to be distinct and in descending order, the only non-uniqueness originates from possible sign changes in the left and right singular vectors. A unique SVD can in this case be imposed with the additional constraint that the first nonzero element in each of the left singular vectors (the column vectors of Û_s) be positive. If some of the singular values are equal, a more involved procedure can be undertaken; a possible way is to use a modified version of the procedure outlined in the proof of Theorem 2.1 in [15]. For practical identification applications there is, however, no need for a unique realization, since the goal is only to obtain the transfer function.
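A quick numerical illustration of Theorem 1, reusing the freq_response and subspace_freq_id sketches introduced earlier (it runs only with those definitions in scope): with noise-free data from a random stable third order system and M = n + 2, the recovered transfer function matches the true one to machine precision.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 3
    A0 = rng.standard_normal((n, n))
    A0 *= 0.9 / max(abs(np.linalg.eigvals(A0)))   # enforce rho(A0) < 1
    B0 = rng.standard_normal((n, 1))
    C0 = rng.standard_normal((1, n))
    D0 = rng.standard_normal((1, 1))

    M = n + 2                                     # as in Theorem 1
    w = np.pi * np.arange(M + 1) / M
    G = freq_response(A0, B0, C0, D0, w)          # noise-free data
    A, B, C, D = subspace_freq_id(G, n=n, q=n + 1, r=n)

    w_test = np.linspace(0, np.pi, 50)
    err = freq_response(A, B, C, D, w_test) - freq_response(A0, B0, C0, D0, w_test)
    print(np.max(abs(err)))                       # ~ machine precision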

4 Error Analysis

In the previous section we demonstrated that an nth order system can be recovered by a simple algorithm involving only the IDFT, the SVD, and simple algebraic manipulations, using n+2 frequency response measurements. The assumption that the system which generated the data is finite dimensional with a known order is not realistic, and we will admit the possibility of some "small unmodeled dynamics" which is not captured by any finite dimensional model set. First, we derive expressions for the perturbation of the transfer function and demonstrate that the proposed algorithm is robust to unmodeled dynamics and noise. Secondly, we analyze the algorithm stochastically and show consistency, as well as derive an asymptotic variance expression for the estimated transfer function.

In this section we drop all subscripts denoting the size of a matrix. In the noisy case we can write Ĥ = H + ΔH, where ΔH represents the Hankel matrix of the noise part. In general Ĥ will be of full rank (= min(qp, mr)) because of the perturbation matrix ΔH. If the largest singular value of ΔH is significantly smaller than the smallest nonzero singular value of H, the n largest singular values Σ̂_s of Ĥ and the corresponding left singular vectors Û_s will be close to their unperturbed counterparts, and the estimated system will be close to the true system. The SVD step of the identification algorithm thus has a noise threshold: when the noise level increases beyond it, the resulting estimates are not reliable, since the singular vectors in Û_s might change places. In the next section we discuss how the estimated system is perturbed when ΔH is "small".

4.1 Bounded Noise

Suppose that frequency response data G_k as in (6) are given, where G is the finite dimensional nominal model and n_k captures unmodeled dynamics and noise. First we consider the case when n_k is uniformly bounded,

    ||n_k||_∞ := sup_k ||n_k||_F ≤ ε.    (23)

The noise term n_k yields an additive perturbation of the coefficients h_k, and we find

    ĥ_k = h_k + Δh_k    (24)

where Δh_k is the IDFT of n_k,

    Δh_k = (1/2M) \sum_{i=0}^{2M-1} n_i e^{j2πki/(2M)}.    (25)

The perturbed Hankel matrix is given by Ĥ = H + ΔH, where H is the unperturbed Hankel matrix and ΔH is the perturbation matrix with elements

    [ΔH]_{ij} = Δh_{i+j-1}.    (26)

The norm of ΔH is bounded by ||ΔH||_F ≤ √(qr) ε.

Let the singular value decomposition of the unperturbed Hankel matrix be

    H = [U_s U_o] [Σ_s 0; 0 0] [V_s^T; V_o^T].    (27)

Denote by σ_k(H) the singular values of H, ordered non-increasingly. In the identification algorithm the SVD plays an important role, or more precisely the left singular vectors U_s of the Hankel matrix. The following result describes how the noise influences this quantity.

Lemma 1. Let Ĥ = H + ΔH with ||ΔH||_F ≤ ε. Let Û_s be given by (11) and U_s by (27), and assume that

    ε < σ_n(H)/4.    (28)

Then there exists a nonsingular matrix T ∈ R^{n×n} such that

    Û_s = (U_s + U_o P) T    (29)

and

    ||P||_F ≤ 4ε/σ_n(H).    (30)

Proof. Let

    [U_s^T; U_o^T] ΔH [V_s V_o] = [E_11 E_12; E_21 E_22] = E.    (31)

From (31) we notice that ||E||_F = ||ΔH||_F ≤ ε, and also ||E_ij||_2 ≤ ||E_ij||_F ≤ ε, since the spectral norm is upper bounded by the Frobenius norm. Therefore it is clear that

    δ := σ_n(H) - ||E_11||_2 - ||E_22||_2 ≥ σ_n(H) - ||E_11||_F - ||E_22||_F ≥ σ_n(H) - 2ε ≥ σ_n(H)/2 > 0    (32)

by assumption (28). We also have

    ||E_21||_F / δ ≤ ε / (σ_n(H)/2) ≤ 1/2.

Then by Theorem 8.3.5 in [3] there exists a matrix P satisfying

    ||P||_F ≤ 2 ||E_21||_F / δ ≤ 4ε/σ_n(H)

such that range(Û_s) and range(U_s + U_o P) are equal. Since the range spaces are equal and U_s is of full rank, there exists a unique non-singular matrix T such that (29) is satisfied. □

It is clear from the result above that the subspace spanned by Û_s is close to the subspace spanned by U_s if the noise level is low. Since only the subspaces are close, we had to introduce the matrix T in order to relate Û_s and U_s directly. This a priori unknown T represents the "unknown" basis resulting from the identification. This would lead one to expect a difficulty in bounding the transfer function perturbation. However, we will show that this fact does not present any problem, since different state-space bases do not alter the resulting transfer function.

Theorem 2. Let the frequency data be given by (6) with the noise uniformly bounded, ||n_k||_∞ ≤ ε, and let G be a stable nth order linear system with transfer function G(z). Let (Â, B̂, Ĉ, D̂) be the identified state-space model given by Algorithm 1 and let Ĝ(z) be the corresponding transfer function. Then there exist ε_0, c > 0 such that for all ε ≤ ε_0

    sup_{||n_k||_∞ ≤ ε} sup_{|z|=1} ||Ĝ(z) - G(z)||_F ≤ c ε.    (33)

Proof. First we bound the perturbations of the estimated state-space matrices with the aid of a change of basis represented by the unknown but non-singular T from Lemma 1. These bounds are then used to bound the transfer function itself. Throughout the proof we let c denote a bounded real constant which may take different values. Let (A, B, C, D) denote the state-space realization of the true system which results from applying noise-free data to Algorithm 1. First consider the estimation of the A matrix (12).

Let ε be small enough for the result of Lemma 1 to apply. Then Û_s = (U_s + U_o P) T. Therefore

    Â = (J_1 (U_s + U_o P) T)^† J_2 (U_s + U_o P) T

which can be written

    T Â T^{-1} = (J_1 (U_s + U_o P))^† J_2 (U_s + U_o P),

where we used the fact that (XT)^† = T^{-1} X^†. Hence we can write

    ||T Â T^{-1} - A||_F = ||(J_1(U_s + U_oP))^† J_2 (U_s + U_oP) - (J_1 U_s)^† J_2 U_s||_F
      ≤ ||(J_1(U_s + U_oP))^† J_2 U_o P||_F + ||(J_1(U_s + U_oP))^† - (J_1 U_s)^†||_F ||J_2 U_s||_F.

Since U_s is an extended observability matrix of an nth order minimal system, J_1 U_s is of full rank. The matrix J_1(U_s + U_o P) is thus also of full rank for all sufficiently small values of ε. We can therefore use Lemma 4 in the appendix to obtain

    ||T Â T^{-1} - A||_F ≤ ||(J_1(U_s + U_oP))^† J_2 U_o P||_F + ||(J_1(U_s + U_oP))^†||_2 ||(J_1 U_s)^†||_2 ||J_1 U_o P||_F ||J_2 U_s||_F.

By Lemma 5 in the appendix the pseudo-inverses are bounded for all sufficiently small ε, and ||P||_F ≤ cε. All this yields

    ||T Â T^{-1} - A||_F ≤ c ε,   ∀ ε < ε_A    (34)

for some ε_A > 0. For convenience let ΔA = T Â T^{-1} - A. For B̂ from (14) we obtain

    B̂ = (I - Â^{2M}) Û_s^T Ĥ J_4 = T^{-1}(I - (A + ΔA)^{2M}) T T^T (U_s^T + P^T U_o^T)(H + ΔH) J_4.

Since Û_s^T Û_s = I = T^T(U_s^T + P^T U_o^T)(U_s + U_o P) T = T^T T + T^T P^T P T, we can write T^T = T^{-1} - T^T P^T P. Also we can write¹

    (A + ΔA)^{2M} = A^{2M} + \sum_{k=1}^{2M} \binom{2M}{k} A^{2M-k} ΔA^k.

With this we get

    B̂ = T^{-1} [ I - A^{2M} - \sum_{k=1}^{2M} \binom{2M}{k} A^{2M-k} ΔA^k ] (I - T T^T P^T P)(U_s^T + P^T U_o^T)(H + ΔH) J_4 = T^{-1} B + O(ε).

Hence we obtain

    ||T B̂ - B||_F = O(ε) ≤ c ε,   ∀ ε < ε_B    (35)

for some ε_B > 0.

¹ This is an abuse of notation unless A and ΔA commute.

The Ĉ matrix is straightforward: Ĉ = J_3 (U_s + U_o P) T, which directly gives

    ||Ĉ T^{-1} - C||_F ≤ c ε,   ∀ ε < ε_A.    (36)

Let ΔB := T B̂ - B and ΔC := Ĉ T^{-1} - C. For D̂ we have

    D̂ = D + C A^{2M-1}(I - A^{2M})^{-1} B + Δh_0 - Ĉ Â^{2M-1}(I - Â^{2M})^{-1} B̂

and we obtain

    ||D̂ - D||_F ≤ ||Δh_0||_F + ||C A^{2M-1}(I - A^{2M})^{-1}B - (C + ΔC)(A + ΔA)^{2M-1}(I - (A + ΔA)^{2M})^{-1}(B + ΔB)||_F.

The last term above requires some further study, and we obtain

    ||C A^{2M-1}(I - A^{2M})^{-1}B - (C + ΔC)(A + ΔA)^{2M-1}(I - (A + ΔA)^{2M})^{-1}(B + ΔB)||_F    (37)
      ≤ ||ΔC||_F ||(A + ΔA)^{2M-1}(I - (A + ΔA)^{2M})^{-1}(B + ΔB)||_F
      + ||C((A + ΔA)^{2M-1} - A^{2M-1})(I - (A + ΔA)^{2M})^{-1}(B + ΔB)||_F
      + ||C A^{2M-1}(I - (A + ΔA)^{2M})^{-1} ΔB||_F
      + ||C A^{2M-1}(I - A^{2M})^{-1}((A + ΔA)^{2M} - A^{2M})(I - (A + ΔA)^{2M})^{-1} B||_F.

We notice that

    ||(A + ΔA)^{2M} - A^{2M}||_F = || \sum_{k=1}^{2M} \binom{2M}{k} A^{2M-k} ΔA^k ||_F ≤ c ε.

Since (I - A^{2M}) is nonsingular, Lemma 5 is applicable to conclude that the inverses in expression (37) are bounded by some constant for all sufficiently small values of ε. Therefore we can conclude

    ||D̂ - D||_F ≤ c ε,   ∀ ε < ε_D    (38)

for some constant ε_D > 0. We have now established that the distances between the estimated state-space matrices, converted to some basis, and the state-space matrices of the true system are indeed bounded by ε whenever ε is sufficiently small.

Consider the transfer function error Ĝ(z) - G(z), whose norm can be written as

    ||Ĝ(z) - G(z)||_F = ||D̂ + Ĉ(zI - Â)^{-1} B̂ - D - C(zI - A)^{-1} B||_F
      = ||D̂ + Ĉ T^{-1} (zI - T Â T^{-1})^{-1} T B̂ - D - C(zI - A)^{-1}B||_F

for any nonsingular matrix T ∈ R^{n×n}. This expression can be bounded as

    ||Ĝ(z) - G(z)||_F ≤ ||D̂ - D||_F + ||ΔC (zI - TÂT^{-1})^{-1} T B̂||_F + ||C (zI - TÂT^{-1})^{-1} ΔB||_F + ||C (zI - A)^{-1} ΔA (zI - TÂT^{-1})^{-1} T B̂||_F.

Since G is a stable system, (zI - A) is nonsingular for all |z| = 1. Again we can use Lemma 5 to conclude that the norm of (zI - TÂT^{-1})^{-1} is bounded by some constant. By using the bounds (34), (35), (36) and (38) we obtain (33) with ε_0 = min(ε_A, ε_B, ε_D), which concludes the proof. □

4.2 First Order Perturbation Analysis

The perturbation analysis will be performed using a first order expansion. The analysis is then valid whenever ε is sufficiently small. The route we follow below is inspired by similar types of analysis of array signal processing algorithms [9, 10]. Although Û_s is highly nonlinear in the data Ĥ, we seek a first order perturbation matrix ΔU_s which is linear in the perturbation ΔH. An obvious but not tractable way is to find ΔU_s by differentiation. Instead, a backward error analysis [10] is employed to find ΔU_s. The following lemma gives the expression for the first order perturbation of the left singular vectors U_s.

Lemma 2. The perturbed subspace spanned by Û_s is spanned by U_s + U_o P, where P is a matrix with a norm of the order of ||ΔH||. Furthermore, a first order expression for Û_s is given by Û_s = U_s + ΔU_s, where

    ΔU_s = U_o U_o^T ΔH V_s Σ_s^{-1}.    (39)

Proof. See [10, 9]. □

The derived perturbation expression has the following two properties. It is exactly orthogonal to the unperturbed space, ΔU_s^T U_s = 0, and the perturbed subspace is orthonormal up to first order,

    (U_s + ΔU_s)^T (U_s + ΔU_s) = I + O(||ΔH||_F^2).

If we assume U_s = O we obtain the alternative expression

    ΔU_s = U_o U_o^T ΔH ((I - A^{2M})^{-1} 𝒞)^†

since Σ_s V_s^T = (I - A^{2M})^{-1} 𝒞, where 𝒞 is the controllability matrix (20).
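Lemma 2 is easy to probe numerically. The sketch below (illustrative only, assuming NumPy) compares the exact perturbed left singular subspace of a rank-n matrix with the first order expression (39); the projector error should decay as ||ΔH||².

    import numpy as np

    rng = np.random.default_rng(1)
    qp, rm, n = 8, 6, 2
    H = rng.standard_normal((qp, n)) @ rng.standard_normal((n, rm))  # rank n
    U, s, Vt = np.linalg.svd(H)
    Us, Uo, Vs, Ss = U[:, :n], U[:, n:], Vt[:n].T, np.diag(s[:n])

    for eps in [1e-2, 1e-3, 1e-4]:
        dH = eps * rng.standard_normal((qp, rm))
        dUs = Uo @ Uo.T @ dH @ Vs @ np.linalg.inv(Ss)       # eq (39)
        Uhat = np.linalg.svd(H + dH)[0][:, :n]
        # Compare subspaces via orthogonal projectors (basis independent).
        P_first = (Us + dUs) @ np.linalg.pinv(Us + dUs)
        P_exact = Uhat @ Uhat.T
        print(eps, np.linalg.norm(P_exact - P_first))       # ~ eps**2 decay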

With the aid of this lemma it is straightforward to derive the corresponding first order perturbation expressions for the estimated system matrices Â (12), B̂ (14), Ĉ (13) and D̂ (15). In the sequel we will encounter the inverse of a perturbed matrix and therefore present the following lemma.

Lemma 3. Assume X = Y^{-1}. Let Ŷ = Y + ΔY and Ŷ^{-1} = X̂ = X + ΔX. Then ΔX = -Y^{-1} ΔY Y^{-1} is a first order approximation of the perturbation of the inverse.

Proof. We have Ŷ^{-1} = X̂. Hence (Y + ΔY)(X + ΔX) = I. A first order approximation is then Y ΔX = -ΔY X = -ΔY Y^{-1}, which concludes the proof. □

In what follows we assume that (A, B, C, D) represents the state-space realization obtained from Algorithm 1 when n_k = 0. The only uniqueness imposed is that the extended observability matrix must satisfy O^T O = I, which is guaranteed by the algorithm since O = U_s.

Theorem 3. Consider the Hankel matrix Ĥ = H + ΔH, where ΔH is the perturbation of the matrix. The first order perturbations of the estimated system matrices Â, B̂, Ĉ and D̂ obtained from Ĥ using the algorithm (11)-(15) can be described by Â = A + ΔA, B̂ = B + ΔB, Ĉ = C + ΔC, D̂ = D + ΔD, where

    ΔA = (J_1 U_s)^† (J_2 ΔU_s - J_1 ΔU_s A)    (40)
    ΔB = [ (I - A^{2M}) U_s^T ΔH - 2M A^{2M-1} ΔA U_s^T H ] J_4    (41)
    ΔC = J_3 ΔU_s    (42)
    ΔD = Δh_0 - (ΔC A^{2M-1} + (2M-1) C A^{2M-2} ΔA)(I - A^{2M})^{-1} B
         - C A^{2M-1}(I - A^{2M})^{-1} [ 2M A^{2M-1} ΔA (I - A^{2M})^{-1} B + ΔB ]    (43)

and ΔU_s is given by (39). Furthermore, the first order perturbation of the transfer function is given by Ĝ(z) = G(z) + ΔG(z), where

    ΔG(z) = ΔD + ΔC (zI - A)^{-1} B + C (zI - A)^{-1} ΔA (zI - A)^{-1} B + C (zI - A)^{-1} ΔB.    (44)

Proof. In this proof all equality signs are valid up to first order. From the algorithm we have Â = (J_1 Û_s)^† J_2 Û_s. Hence we obtain

    J_1 (U_s + ΔU_s)(A + ΔA) = J_2 (U_s + ΔU_s).    (45)

By using the fact that J_1 U_s A = J_2 U_s and truncating the left-hand side of (45) to first order terms, we get

    J_1 U_s ΔA + J_1 ΔU_s A = J_2 ΔU_s,

which proves (40). B̂ is given by B̂ = (I - Â^{2M}) Σ̂_s V̂_s^T J_4. Notice that Û_s^T Ĥ = Σ̂_s V̂_s^T. Using this equality we obtain B̂ = (I - Â^{2M}) Û_s^T Ĥ J_4. For Â^{2M} we have (A + ΔA)^{2M} = A^{2M} + 2M A^{2M-1} ΔA up to first order terms. Using this yields

    B + ΔB = (I - (A + ΔA)^{2M})(U_s + ΔU_s)^T (H + ΔH) J_4
      = (I - A^{2M}) U_s^T H J_4 + [ (I - A^{2M}) U_s^T ΔH - 2M A^{2M-1} ΔA U_s^T H + (I - A^{2M}) ΔU_s^T H ] J_4
      = (I - A^{2M}) U_s^T H J_4 + [ (I - A^{2M}) U_s^T ΔH - 2M A^{2M-1} ΔA U_s^T H ] J_4,

where the last equality follows from the fact that ΔU_s^T H = 0. Ĉ = J_3 Û_s directly gives C + ΔC = J_3 U_s + J_3 ΔU_s. D̂ is given by (15). By using Lemma 3 we can simplify the factor

    (I - (A + ΔA)^{2M})^{-1} = (I - A^{2M} - 2M A^{2M-1} ΔA)^{-1} = (I - A^{2M})^{-1} + (I - A^{2M})^{-1} 2M A^{2M-1} ΔA (I - A^{2M})^{-1},

which occurs in (15). A simple collection of the first order terms of (15) proves (43). The transfer function is given by

    Ĝ(z) = D + ΔD + (C + ΔC)(zI - A - ΔA)^{-1}(B + ΔB).

By applying Lemma 3 to (zI - A - ΔA)^{-1} and collecting the first order terms we arrive at (44). □

Remark 1. Although we derived the expressions under the assumption that (A, B, C, D) is the noise-free realization, this particular choice of state-space basis does not influence the perturbation expression given in the theorem. This is easily seen by introducing an orthonormal similarity transformation in the expression for ΔG(z). It is also possible to drop the requirement of orthonormality of the similarity transformation; however, this would lead to a much more complex expression for (41).

4.3 Stochastic Noise

If we now adopt the view of n_k as stochastic variables, the estimated transfer function Ĝ(z) will also be a stochastic variable. The aim of this section is first to show that the elements of ΔH converge w.p. 1 to zero as M tends to infinity. This implies that the norm ||ΔH||_F also tends to zero, and strong consistency follows. Furthermore, we show that the estimate is asymptotically normal, and we derive the variance of the estimate.

4.3.1 Assumptions

Let us assume that the noise term n_k has the form

    n_k = ξ_k + j η_k

where ξ_k and η_k are assumed to be zero mean independent random variables with equal variance, E{ξ_k ξ_k^T} = E{η_k η_k^T} = (1/2) R_k, such that

    E{n_k n_k^H} = R_k.    (46)

Furthermore, assume that all terms have a common bound,

    R_k ≤ R.    (47)

In (46), (·)^H denotes the complex conjugate transpose of a complex matrix. We also assume that E{n_k n_l^H} = 0 for all l ≠ k, i.e. the noise terms at different frequencies are independent. For more information on complex noise models, see [1, 18]. These noise assumptions are rather weak and are, for example, valid asymptotically if the frequency response is obtained as the empirical transfer function estimate; see [13].

4.3.2 Consistency

In this section we show that the described method is strongly consistent, i.e. the limiting transfer function estimate is equal to the true transfer function with probability one as the number of frequency points M tends to infinity.

Theorem 4. Let G be a stable linear system of order n and let G_k be given by (6), where n_k satisfies the assumptions (46) and (47). Let Ĝ denote the transfer function obtained by Algorithm 1. Then

    lim_{M→∞} sup_{|z|=1} ||Ĝ(z) - G(z)||_F = 0  w.p. 1.    (48)

Proof. The elements of ΔH, given by (25), are all sums of zero mean independent random variables with a common bound on the second moments. Applying Theorem 5.1.2 in [2] we directly obtain

    lim_{M→∞} Δh_k = 0  w.p. 1,

which implies

    lim_{M→∞} ||ΔH||_F = 0  w.p. 1.

The result (48) now follows from Theorem 2. □

4.3.3 Asymptotic Distribution and Variance

The first order perturbation analysis has given us expressions for the estimation error's dependence on the matrix ΔH, and consequently on the sums Δh_k. We have

    Ĝ(z) - G(z) = ΔG(z) + O(||ΔH||²)    (49)

where, as shown in Theorem 3, ΔG(z) is a linear function of the noise n_k. This property can be utilized to prove asymptotic normality of the transfer function estimate.

Theorem 5. Let G be a stable linear system of order n and let G_k be given by (6), where n_k satisfies the assumptions (46) and (47) and has bounded moments of order six. Let Ĝ denote the transfer function obtained by Algorithm 1. Then

    √M (Ĝ(z) - G(z)) ∈ AsN(0, P(z))  as M → ∞,    (50)

where

    P(z) = lim_{M→∞} E M ΔG(z) ΔG(z)^H    (51)

and ΔG(z) is given by (44). If we modify the algorithm to guarantee estimated transfer functions satisfying

    sup_{|z|=1} ||Ĝ(z)|| < ∞,    (52)

the covariance of the estimated transfer function satisfies

    M E (Ĝ(z) - E Ĝ(z))(Ĝ(z) - E Ĝ(z))^H → P(z)  as M → ∞.    (53)

Proof. We conduct the proof under the assumption that the transfer function is scalar valued. In the following let c, possibly with one or more indices, denote a bounded real or complex constant. From Theorem 3 we know that we can write

    √M ΔG(z) = (√M/2M) \sum_{k=0}^{2M-1} \sum_{i=1}^{q} \sum_{j=1}^{r} c(i,j,k,z) n_k = (1/√M) \sum_{k=0}^{2M-1} c(k,z) n_k,

where the constants have been absorbed into c(k,z). Let

    X_{Mk} = c(k,z) n_k / √M   and   S_M = \sum_{k=0}^{2M-1} X_{Mk}.

Furthermore, using the properties of the noise n_k, we obtain

    E X_{Mk} = 0,   E|X_{Mk}|² = (1/M)|c(k,z)|² R_k < ∞   ∀ M, k.    (54)

For the third order moment of X_{Mk} we obtain

    γ_{Mk} = E|X_{Mk}|³ = M^{-3/2} E|c(k,z) n_k|³ ≤ c M^{-3/2}   ∀ M, k,

where the last inequality follows from the assumption of bounded sixth moments and Jensen's inequality [2]. This implies that

    lim_{M→∞} \sum_{k=0}^{2M-1} γ_{Mk} = 0.    (55)

(54) and (55) together with Theorem 7.1.2 in [2] imply that

    √M ΔG(z) ∈ AsN(0, P(z)).    (56)

Consider now the term √M O(||ΔH||_F²). Let

    X_M = √M ||ΔH||_F² = M^{-3/2} \sum_{i=0}^{2M-1} \sum_{j=0}^{2M-1} c(i,z) c(j,z)^H n_i n_j^H,

which gives the expectation

    E|X_M|³ = M^{-9/2} \sum_{i_1=0}^{2M-1} ... \sum_{i_6=0}^{2M-1} E{ c(i_1,z) ... c(i_6,z)^H n_{i_1} ... n_{i_6}^H }.

In this sum only terms for which there exists an index pairing such that each pair has equal indices yield non-zero values. The number of such terms is asymptotically of order M³, and together with the assumption of bounded moments of order six we conclude that

    E|X_M|³ ≤ c M³ / M^{9/2} = c M^{-3/2}

holds asymptotically. Chebyshev's inequality then gives

    P(|X_M| > ε) ≤ E|X_M|³ / ε³ ≤ c ε^{-3} M^{-3/2}   ∀ ε > 0,

and consequently

    \sum_{M=1}^{∞} P(|X_M| > ε) < ∞   ∀ ε.

Applying the Borel-Cantelli lemma [2] gives

    lim_{M→∞} X_M = 0  w.p. 1,

which implies that √M O(||ΔH||_F²) → 0 w.p. 1 as M → ∞, which proves (50).

The asymptotic distribution result above does not necessarily imply (53); the rest of the proof is devoted to this issue. The perturbation expression (49) is valid for all sufficiently small norms of ΔH. When ΔH becomes large, some of the n largest singular values in (11) may change places. As shown in Lemma 1, the perturbation expression for the singular vectors is valid as long as σ_n(H) > 4||ΔH||_F. Divide the event space Ω into two complementary sets

    Ω_1 = { ω | σ_n(H) > 4||ΔH||_F },   Ω_2 = Ω \ Ω_1,

where ω is the elementary event variable. To bound the probability of Ω_2 we have

    E ||ΔH||_F^4 ≤ c/M²,

where the inequality follows from the bounded sixth moments. Hence, by Chebyshev's inequality, we have

    P(||ΔH||_F > σ_n(H)/4) ≤ E||ΔH||_F^4 / (σ_n(H)/4)^4 ≤ c/M².    (57)

For events belonging to Ω_1 the expression Ĝ(z) = G(z) + ΔG(z) + O(||ΔH||_F²) holds, and for Ω_2 we have ||Ĝ(z)|| ≤ c from assumption (52). The variance of the estimated transfer function is given by

    \int_Ω M |Ĝ(z) - E Ĝ(z)|² dω.

By dividing the integral over the two event sets we obtain asymptotically, for Ω_2,

    lim_{M→∞} \int_{Ω_2} M |Ĝ(z) - E Ĝ(z)|² dω ≤ lim_{M→∞} c M / M² = 0,

where the inequality follows from (52) and (57). For the event set Ω_1 we thus obtain

    lim_{M→∞} \int_{Ω_1} M |ΔG(z) + O(||ΔH||_F²) - E O(||ΔH||_F²)|² dω = P(z),

which concludes the proof. □

Remark 2. The expressions given in Theorem 5 are valid for single-input multi-output transfer functions, i.e. when G_k is a column vector. The general case can be handled analogously by vectorizing the matrices using the vec operator.

4.3.4 Explicit Expression of the Asymptotic Variance

First we notice that the asymptotic covariance between the elements Δh_k is

    R_{Δh}(k-l) = lim_{M→∞} E M Δh_k Δh_l^T
                = lim_{M→∞} E (1/4M) \sum_{t=0}^{2M-1} \sum_{s=0}^{2M-1} n_t n_s^H e^{j2π(kt-ls)/(2M)}
                = lim_{M→∞} (1/4M) \sum_{t=0}^{2M-1} R_t e^{j2π(k-l)t/(2M)},    (58)

which can be interpreted as the inverse DFT of the sequence of noise covariances R_k. To simplify notation, denote by Ē the operator

    Ē = lim_{M→∞} E M,    (59)

where E is the expectation operator.

The asymptotic variance is given by the following expression:

    P(z) = Ē{ΔG(z) ΔG(z)^H}
         = Ē{ (ΔD + ΔC(zI-A)^{-1}B + C(zI-A)^{-1}ΔA(zI-A)^{-1}B + C(zI-A)^{-1}ΔB)(…)^H }
         = Ē ΔD ΔD^T + Ē ΔC(zI-A)^{-1}B(…)^H + Ē C(zI-A)^{-1}ΔA(zI-A)^{-1}B(…)^H + Ē C(zI-A)^{-1}ΔB(…)^H
           + 2 Re[ Ē ΔD (ΔC(zI-A)^{-1}B + C(zI-A)^{-1}ΔA(zI-A)^{-1}B + C(zI-A)^{-1}ΔB)^H
                   + Ē C(zI-A)^{-1}ΔA(zI-A)^{-1}B B^T(zI-A)^{-H} ΔC^T
                   + Ē C(zI-A)^{-1}ΔA(zI-A)^{-1}B ΔB^T(zI-A)^{-H} C^T
                   + Ē ΔC(zI-A)^{-1}B ΔB^T(zI-A)^{-H} C^T ]
         = R_{Δh}(0) + V_1 + V_2 + V_3 + 2 Re(V_4 + V_5 + V_6 + V_7 + V_8 + V_9),    (60)

where O and 𝒞 denote the extended observability and controllability matrices of the realization, Ē{ΔD ΔD^T} = R_{Δh}(0) (all terms of ΔD other than Δh_0 carry factors A^{2M} which vanish in the limit), and

    V_1 = Ē{ΔC(zI-A)^{-1}B(…)^H}
        = J_3 U_o U_o^T Ē{ΔH 𝒞^†(zI-A)^{-1}B B^T(zI-A)^{-H}(𝒞^T)^† ΔH^T} U_o U_o^T J_3^T    (61)

    V_2 = Ē{C(zI-A)^{-1}ΔA(zI-A)^{-1}B(…)^H}
        = C(zI-A)^{-1}(J_1 O)^† [ J_2 U_o U_o^T Ē{ΔH 𝒞^†(zI-A)^{-1}B B^T(zI-A)^{-H}(𝒞^T)^† ΔH^T} U_o U_o^T J_2^T
          - 2 Re{ J_1 U_o U_o^T Ē{ΔH 𝒞^† A(zI-A)^{-1}B B^T(zI-A)^{-H}(𝒞^T)^† ΔH^T} U_o U_o^T J_2^T }
          + J_1 U_o U_o^T Ē{ΔH 𝒞^† A(zI-A)^{-1}B B^T(zI-A)^{-H} A^T(𝒞^T)^† ΔH^T} U_o U_o^T J_1^T ] (O^T J_1^T)^† (zI-A)^{-H} C^T    (62)

    V_3 = Ē{C(zI-A)^{-1}ΔB(…)^H}
        = C(zI-A)^{-1} O^T Ē{ΔH J_4 J_4^T ΔH^T} O (zI-A)^{-H} C^T    (63)

    V_4 = Ē{ΔD (ΔC(zI-A)^{-1}B)^H}
        = Ē{Δh_0 B^T(zI-A)^{-H}(𝒞^T)^† ΔH^T} U_o U_o^T J_3^T    (64)

    V_5 = Ē{ΔD (C(zI-A)^{-1}ΔA(zI-A)^{-1}B)^H}
        = [ Ē{Δh_0 B^T(zI-A)^{-H}(𝒞^T)^† ΔH^T} U_o U_o^T J_2^T
          - Ē{Δh_0 B^T(zI-A)^{-H} A^T(𝒞^T)^† ΔH^T} U_o U_o^T J_1^T ] (O^T J_1^T)^† (zI-A)^{-H} C^T    (65)

    V_6 = Ē{ΔD (C(zI-A)^{-1}ΔB)^H} = Ē{Δh_0 J_4^T ΔH^T} O (zI-A)^{-H} C^T    (66)

    V_7 = Ē{C(zI-A)^{-1}ΔA(zI-A)^{-1}B B^T(zI-A)^{-H} ΔC^T}
        = C(zI-A)^{-1}(J_1 O)^† [ J_2 U_o U_o^T Ē{ΔH 𝒞^†(zI-A)^{-1}B B^T(zI-A)^{-H}(𝒞^T)^† ΔH^T}
          - J_1 U_o U_o^T Ē{ΔH 𝒞^† A(zI-A)^{-1}B B^T(zI-A)^{-H}(𝒞^T)^† ΔH^T} ] U_o U_o^T J_3^T    (67)

    V_8 = Ē{C(zI-A)^{-1}ΔA(zI-A)^{-1}B ΔB^T(zI-A)^{-H}C^T}
        = C(zI-A)^{-1}(J_1 O)^† [ J_2 U_o U_o^T Ē{ΔH 𝒞^†(zI-A)^{-1}B J_4^T ΔH^T}
          - J_1 U_o U_o^T Ē{ΔH 𝒞^† A(zI-A)^{-1}B J_4^T ΔH^T} ] O (zI-A)^{-H} C^T    (68)

    V_9 = Ē{ΔC(zI-A)^{-1}B ΔB^T(zI-A)^{-H}C^T}
        = J_3 U_o U_o^T Ē{ΔH 𝒞^†(zI-A)^{-1}B J_4^T ΔH^T} O (zI-A)^{-H} C^T.    (69)

In the expressions given above we have deliberately removed all factors A^{2M}, although the limit is not "completely evaluated". However, this is only an abuse of notation and does not change the final result.

Two expectations occur frequently in the expressions above. For simplicity, assume the single-input single-output case. These expectations are then given as follows. Let X be a matrix of size r×r. For matrix element (i, j) we obtain

    [Ē{ΔH X ΔH^T}]_{ij} = \sum_{k=1}^{r} \sum_{m=1}^{r} Ē{Δh_{i+k-1} x_{km} Δh_{m+j-1}}
                        = \sum_{k=1}^{r} \sum_{m=1}^{r} x_{km} R_{Δh}(i+k-m-j),    (70)

where [X]_{ij} denotes the element of the matrix X located at row i and column j. Let x denote a 1×r row vector, which gives the expectation

    [Ē{Δh_0 x ΔH^T}]_{1i} = \sum_{k=1}^{r} x_k R_{Δh}(k+i-1).    (71)

If we consider the special case when R_k = λ for all k, then

    R_{Δh}(k) = (λ/2) δ(k),

where δ(k) is Kronecker's delta function. In this case (70) and (71) simplify to

    [Ē{ΔH X ΔH^T}]_{ij} = (λ/2) \sum_{k=1}^{r} \sum_{m=1}^{r} x_{km} δ(i+k-m-j)

and

    [Ē{Δh_0 x ΔH^T}]_{1i} = (λ/2) \sum_{k=1}^{r} x_k δ(k+i-1) = 0   ∀ i.
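The two expectations (70) and (71) translate directly into code. The following SISO sketch is an illustrative transcription (not from the report), with R_dh supplied as a function of the integer lag; it reproduces, in particular, the fact that (71) vanishes for white noise.

    import numpy as np

    def expect_dH_X_dHT(X, R_dh, q):
        """[Ebar{dH X dH^T}]_{ij} = sum_{k,m} X[k,m] R_dh(i+k-m-j), eq (70),
        for a q x r perturbation Hankel block in the SISO case."""
        r = X.shape[0]
        out = np.empty((q, q))
        for i in range(1, q + 1):
            for j in range(1, q + 1):
                out[i-1, j-1] = sum(X[k-1, m-1] * R_dh(i + k - m - j)
                                    for k in range(1, r + 1)
                                    for m in range(1, r + 1))
        return out

    def expect_dh0_x_dHT(x, R_dh):
        """[Ebar{dh_0 x dH^T}]_{1i} = sum_k x[k] R_dh(k+i-1), eq (71)."""
        r = len(x)
        return np.array([sum(x[k-1] * R_dh(k + i - 1) for k in range(1, r + 1))
                         for i in range(1, r + 1)])

    # White noise R_k = lam gives R_dh(k) = (lam/2) * delta(k); (71) vanishes.
    lam = 0.1
    R_dh = lambda k: 0.5 * lam * (k == 0)
    print(expect_dh0_x_dHT(np.ones(4), R_dh))   # all zeros, as noted above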

5 Verification by Monte Carlo Simulation

In this section we verify the validity, for finite data, of the asymptotic variance expression derived in Section 4.3.3. We do this by means of Monte Carlo simulations, in which we estimate the variance by calculating the sample variance over a large number of noise realizations.

The variance P(z) given by (51) is only valid as M tends to infinity. However, it is common to use such an expression [13] also for finite data to get an estimate of the resulting variance. In our case we have, approximately,

    E |Ĝ_M(z) - G(z)|² ≈ (1/M) P(z),

where the subscript M in Ĝ_M(z) explicitly shows the dependence on the data record length. As an example, the following second order system will be used:

    y(t) = G(q) u(t),   G(q) = 10^{-2} (6.2376 q² + 9.140 q + 6.204) / (q² - 1.613 q + 0.828).    (72)

The magnitude of the transfer function is depicted in Figure 1. Using the transfer function G, 101 samples of the frequency response are generated on a linear frequency grid ranging from ω_0 = 0 to ω_M = π. Each frequency response sample is corrupted by independent complex Gaussian noise at three different variance levels: R = 1, R = 0.1 and R = 0.01. Using different realizations of the noise, 3 × 10000 estimation data sets are generated, and from these, 3 × 10000 second order models are estimated using Algorithm 1. Let the estimated models be denoted by Ĝ_k, where the subscript k indicates which data set is used in the identification. The estimated variance of the transfer function is evaluated as

    P_MC(ω_k) = (1/10000) \sum_{k=1}^{10000} |Ĝ_k(e^{jω_k}) - G(e^{jω_k})|².    (73)

The results of the simulations are depicted in Figure 2. The theoretical and simulated values are very close for the two lower noise levels; a somewhat larger discrepancy can be seen at the highest noise level. This is in line with the analysis, since the theoretical expression is derived under the assumption that the noise level is sufficiently low. However, a noise variance of 1 is quite high compared with the maximal magnitude of the transfer function, ||G||_∞ = 2.55. This example clearly shows that the asymptotic variance expression (51) can indeed be used to accurately estimate the resulting variance of the estimated transfer function when using Algorithm 1.
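A scaled-down version of this experiment can be sketched as follows, reusing the freq_response and subspace_freq_id sketches from earlier sections. The run count and the choice q = r = 8 are ours, not the report's; the state-space matrices below are a controller-canonical realization of (72).

    import numpy as np

    rng = np.random.default_rng(2)
    # Controller canonical form of the example system (72).
    A0 = np.array([[1.613, -0.828], [1.0, 0.0]])
    B0 = np.array([[1.0], [0.0]])
    C0 = 1e-2 * np.array([[9.140 + 6.2376 * 1.613, 6.204 - 6.2376 * 0.828]])
    D0 = 1e-2 * np.array([[6.2376]])

    M, n_runs, R = 100, 500, 0.01
    w = np.pi * np.arange(M + 1) / M
    G_true = freq_response(A0, B0, C0, D0, w)

    sq_err = np.zeros(M + 1)
    for _ in range(n_runs):
        # Complex Gaussian noise with E{n n^H} = R: each part has variance R/2.
        noise = np.sqrt(R / 2) * (rng.standard_normal((M + 1, 1, 1))
                                  + 1j * rng.standard_normal((M + 1, 1, 1)))
        A, B, C, D = subspace_freq_id(G_true + noise, n=2, q=8, r=8)
        Ghat = freq_response(A, B, C, D, w)
        sq_err += np.abs(Ghat[:, 0, 0] - G_true[:, 0, 0]) ** 2
    P_mc = sq_err / n_runs        # sample version of (73)
    print(P_mc[::20])

The resulting P_mc can then be compared against (1/M) P(z) from (51), in the spirit of Figure 2.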

6 Conclusions

We have presented a non-iterative frequency domain state-space identification algorithm. If the frequency data is noise free and generated by an nth order system, we show that only n+2 equidistant frequency samples are required to exactly recover the true system. If the measurements are contaminated by unmodeled dynamics and/or noise upper bounded by ε, we show that the resulting identification error is upper bounded by cε, and hence the algorithm is robust. An asymptotic stochastic analysis shows that the algorithm is strongly consistent if each measurement is perturbed by an independent stochastic noise term. Furthermore, we show that the resulting estimated transfer function is asymptotically normally distributed, and we give an explicit expression for the asymptotic variance.

[Figure 1: Magnitude of the transfer function G(z) defined in (72); log-magnitude |G(z)| versus frequency (rad/s) on [0, 3.5].]

References

[1] D. R. Brillinger. Time Series: Data Analysis and Theory. McGraw-Hill Inc., New York, 1981.
[2] K. L. Chung. A Course in Probability Theory. Academic Press, San Diego, CA, 1974.
[3] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, Maryland, second edition, 1989.
[4] B. L. Ho and R. E. Kalman. Effective construction of linear state-variable models from input/output functions. Regelungstechnik, 14(12):545-592, 1966.
[5] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, NY, 1985.
[6] J. N. Juang and R. S. Pappa. An eigensystem realization algorithm for modal parameter identification and model reduction. J. of Guidance, Control and Dynamics, 8(5):620-627, 1985.
[7] J. N. Juang and H. Suzuki. An eigensystem realization algorithm in frequency domain for modal parameter identification. Journal of Vibration, Acoustics, Stress, and Reliability in Design, 110:24-29, January 1988.

[Figure 2: Result of the Monte Carlo simulations for the three noise levels R = 1, R = 0.1 and R = 0.01. The solid lines show P_MC(z), the dashed lines show (1/M) P(z) given by (51), and the Cramer-Rao lower bound is indicated; covariance versus frequency (rad/s).]

[8] S. Y. Kung. A new identification and model reduction algorithm via singular value decomposition. In Proc. of 12th Asilomar Conference on Circuits, Systems and Computers, Pacific Grove, CA, pages 705-714, 1978.
[9] F. Li and R. J. Vaccaro. Unified analysis for DOA estimation algorithms in array signal processing. Signal Processing, 25:147-169, 1991.
[10] F. Li, R. J. Vaccaro, and D. W. Tufts. Performance analysis of the state-space realization (TAM) and ESPRIT algorithms for DOA estimation. IEEE Trans. on Antennas and Propagation, 39(3):418-423, March 1991.
[11] K. Liu, R. N. Jacques, and D. W. Miller. Frequency domain structural system identification by observability range space extraction. Technical report, Space Engineering Research Center, MIT, Cambridge, MA 02139, September 1993. Submitted for review for 1994 ACC and ASME Journal of Dynamic Systems, Measurement and Control.
[12] K. Liu, R. N. Jacques, and D. W. Miller. Frequency domain structural system identification by observability range space extraction. In Proc. of the American Control Conference, Baltimore, Maryland, volume 1, pages 107-111, June 1994.
[13] L. Ljung. System Identification: Theory for the User. Prentice-Hall, Englewood Cliffs, New Jersey, 1987.

[14] L. Ljung. Some results on identifying linear systems using frequency domain data. In Proc. 32nd IEEE Conference on Decision and Control, San Antonio, Texas, pages 3534-3538, December 1993.
[15] J. M. Maciejowski. Balanced realizations in system identification. In Proc. 7th IFAC Symposium on Identification and System Parameter Estimation, York, UK, 1985.
[16] T. McKelvey and H. Akçay. An efficient frequency domain state-space identification algorithm: Robustness and stochastic analysis. In Proc. 33rd IEEE Conference on Decision and Control, Lake Buena Vista, Florida, pages 3348-3353, December 1994.
[17] T. McKelvey, H. Akçay, and L. Ljung. Efficient construction of transfer functions from frequency response data. Technical Report LiTH-ISY-R-1609, Dept. of EE, Linköping University, S-581 83 Linköping, Sweden, 1994.
[18] J. Schoukens and R. Pintelon. Identification of Linear Systems: A Practical Guideline to Accurate Modeling. Pergamon Press, London, UK, 1991.
[19] G. W. Stewart and J. G. Sun. Matrix Perturbation Theory. Academic Press, Inc., 1990.
[20] H. P. Zeiger and A. J. McEwen. Approximate linear realizations of given dimension via Ho's algorithm. IEEE Trans. on Automatic Control, 19:153, April 1974.

A Some Matrix Results

The following lemma describes the resulting perturbation of the pseudo-inverse of a perturbed matrix.

Lemma 4 [19]. Let A ∈ C^{m×n}, where m ≥ n, and Â = A + ΔA. If rank(A) = rank(Â), then

    ||Â^† - A^†||_F ≤ ||A^†||_2 ||Â^†||_2 ||ΔA||_F.    (74)

Proof. See Theorem 3.9 in [19]. □

In the next lemma we show that the norm of the perturbed pseudo-inverse is always bounded if the perturbation is sufficiently small.

Lemma 5. Let A, ΔA ∈ R^{m×n}, m ≥ n, let ||ΔA|| = ε, and let ||·|| be any consistent matrix norm. Also let A be of full rank n. Then

    ||(A + ΔA)^†|| ≤ c,   ∀ ε < ε_0    (75)

with

    ε_0 = √( ||A||² + ||(A^T A)^{-1}||^{-1} ) - ||A||

and

    c = ||(A^T A)^{-1}|| (||A|| + ε_0) / ( 1 - ||(A^T A)^{-1}|| (ε_0 + 2||A||) ε_0 ).

Proof. Let X = A^T A and ΔX = ΔA^T ΔA + ΔA^T A + A^T ΔA. We then get

    ||(A + ΔA)^†|| = ||(X + ΔX)^{-1} (A + ΔA)^T|| ≤ ||(X + ΔX)^{-1}|| ||A + ΔA||.    (76)

From the construction of X we conclude that

    ||ΔX|| ≤ (||ΔA|| + 2||A||) ||ΔA|| ≤ (ε + 2||A||) ε.

X is of full rank since A is of full rank; ||X^{-1}|| is therefore bounded. Then, from the definition of ε_0,

    ||X^{-1} ΔX|| ≤ ||X^{-1}|| (ε + 2||A||) ε < 1,   ∀ ε < ε_0.

By the use of Theorem 2.3.4 in [3] we obtain

    ||(X + ΔX)^{-1}|| ≤ ||X^{-1}|| / (1 - ||X^{-1} ΔX||) ≤ c_2,   ∀ ε < ε_0,    (77)

where c_2 = ||X^{-1}|| / (1 - ||X^{-1}||(ε_0 + 2||A||) ε_0). By inserting (77) into (76) we obtain

    ||(A + ΔA)^†|| ≤ c_2 (||A|| + ε) ≤ c,   ∀ ε < ε_0.  □

Remark 3. For the inverse of a perturbed square matrix, Lemma 5 is also applicable by setting m = n.
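These matrix results, together with Lemma 3, are easy to check numerically. The following small sketch (not from the report) illustrates the first order accuracy of Lemma 3 and the boundedness of the perturbed pseudo-inverse asserted by Lemma 5.

    import numpy as np

    rng = np.random.default_rng(3)

    # Lemma 3: the first order inverse perturbation -X dY X.
    Y = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # well conditioned
    X = np.linalg.inv(Y)
    for eps in [1e-2, 1e-4]:
        dY = eps * rng.standard_normal((4, 4))
        dX_first = -X @ dY @ X
        exact = np.linalg.inv(Y + dY) - X
        print(eps, np.linalg.norm(exact - dX_first))  # error is O(eps**2)

    # Lemma 5: pinv of a full-rank tall matrix stays bounded under
    # small perturbations.
    A = rng.standard_normal((6, 3))
    print(max(np.linalg.norm(np.linalg.pinv(A + 1e-3 * rng.standard_normal((6, 3))))
              for _ in range(5)))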
