Issues in closed-loop identification

Urban Forssell and Lennart Ljung
Department of Electrical Engineering, Linkoping University, S-581 83 Linkoping, SWEDEN.
E-mail: ufo@isy.liu.se, ljung@isy.liu.se. Report: LiTH-ISY-R-1940.

Abstract: In this contribution we study the statistical properties of a number of closed-loop identification methods and parameterizations. A focus will be on asymptotic variance expressions for these methods. By studying the asymptotic variance of the parameter vector estimates we show that indirect methods fail to give better accuracy than the direct method. Some new results on the bias distribution in closed-loop experiments will also be presented. It is also shown how different methods correspond to different parameterizations and how direct and indirect identification can be linked together via the noise model.

1 Introduction

"Identification for control" has drawn significant interest in the past few years (see, e.g., [1, 6, 12]). The objective is to achieve a model that is suited for robust control design. Thus one has to tailor the experiment and the preprocessing of data so that the model is reliable in regions where the design process does not tolerate significant uncertainties. The use of closed-loop experiments has been a prominent feature in these approaches. Other reasons for performing identification experiments under output feedback (i.e., in closed loop) may be that the plant is unstable, or that it has to be controlled for production, economic, or safety reasons, or that it contains inherent feedback mechanisms. The task in closed-loop identification is to obtain good models of the open-loop system despite the feedback.

Figure 1: The closed-loop system (the reference signals r_1 and r_2 and the noise v enter a loop consisting of the plant G_0 and the output feedback regulator F_y).

In many cases we will not need to know the feedback mechanism, but for some of the analytic treatment we shall work with the linear output feedback setup depicted in Figure 1. The true system is
y(t) = G_0(q) u(t) + v(t) = G_0(q) u(t) + H_0(q) e(t)    (1)

Here {e(t)} is white noise with variance λ_0. The regulator is

u(t) = r_1(t) + F_y(q) (r_2(t) - y(t))

where r_1(t) may be regarded as an excitation signal and r_2(t) as a set-point signal. For our purposes here it will be sufficient to consider the reference signal

r(t) = r_1(t) + F_y(q) r_2(t)

which will be done in the sequel. Thus the regulator simplifies to

u(t) = r(t) - F_y(q) y(t)    (2)

The reference signal {r(t)} is assumed independent of the noise {e(t)}. We also assume that the regulator stabilizes the system and that either G_0(q) or F_y(q) contains a delay, so that the closed-loop system is well defined. The closed-loop system can be written

y(t) = G_0(q) S_0(q) r(t) + S_0(q) H_0(q) e(t)    (3)

where S_0(q) is the sensitivity function,

S_0(q) = 1 / (1 + F_y(q) G_0(q))

With

G_cl0(q) = G_0(q) S_0(q),  H_cl0(q) = S_0(q) H_0(q)

we can rewrite (3) as

y(t) = G_cl0(q) r(t) + v_cl(t) = G_cl0(q) r(t) + H_cl0(q) e(t)
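The loop equations (1)-(3) can be illustrated with a short simulation. The first-order plant, regulator gain, and all numbers below are hypothetical choices for illustration only, not taken from the paper:

```python
# Sketch of the closed-loop setup (1)-(3), under assumed first-order dynamics:
#   plant:     y(t) = a*y(t-1) + b*u(t-1) + e(t)   -> G0(q) = b q^-1 / (1 - a q^-1)
#   regulator: u(t) = r(t) - f*y(t)                -> Fy(q) = f, as in (2)
a, b, f = 0.7, 1.0, 0.5

def simulate(r, e):
    """Run the loop; G0 contains a delay, so the loop is well defined."""
    y, u = [0.0], [r[0] - f * 0.0]
    for t in range(1, len(r)):
        yt = a * y[t-1] + b * u[t-1] + e[t]   # open-loop dynamics (1)
        y.append(yt)
        u.append(r[t] - f * yt)               # output feedback (2)
    return y, u

# Noise-free check of the closed-loop DC gain Gcl0(1) = G0(1)*S0(1):
N = 200
y, u = simulate([1.0] * N, [0.0] * N)
g0 = b / (1 - a)          # G0 evaluated at q = 1
s0 = 1 / (1 + f * g0)     # sensitivity S0 at q = 1
print(y[-1], g0 * s0)     # the simulated steady state matches G0*S0
```

The simulated steady-state output agrees with the closed-loop transfer function G_cl0 evaluated at zero frequency, which is exactly relation (3) with constant r and no noise.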

The input can be written as

u(t) = S_0(q) r(t) - F_y(q) v_cl(t)    (4)

To reduce the notational burden we will from here on suppress the arguments q, e^{iω} and t whenever there is no risk of confusion. The spectrum of the input is

Φ_u(ω) = |S_0|^2 Φ_r(ω) + |F_y|^2 |S_0|^2 Φ_v(ω)    (5)

where Φ_r(ω) is the spectrum of the reference signal and Φ_v(ω) = |H_0|^2 λ_0 the spectrum of the noise. We shall denote the two terms

Φ_u^r(ω) = |S_0|^2 Φ_r(ω),  Φ_u^e(ω) = |F_y|^2 |S_0|^2 Φ_v(ω)

Consider the model set

M = { M(θ) : y(t) = G(q, θ) u(t) + H(q, θ) e(t), θ ∈ D_M ⊂ R^d }

Here d = dim(θ). We say that the true system is contained in the model set if for some θ_0 ∈ D_M:

G(q, θ_0) = G_0(q),  H(q, θ_0) = H_0(q)

This will also be written S ∈ M. The case when the noise model cannot be correctly described, but where there is a θ_0 ∈ D_M such that

G(q, θ_0) = G_0(q)

will be denoted G_0 ∈ G.

2 Approaches to closed-loop identification

It is important to realize that a directly applied prediction error method, applied as if the feedback did not exist, will work well and give optimal accuracy if the true system can be described within the chosen model structure (i.e., if S ∈ M). Nevertheless, due to the pitfalls in closed-loop identification, several alternative methods have been suggested. One may distinguish between methods that

(a) assume no knowledge about the nature of the feedback mechanism, and do not use r even if known;
(b) assume the regulator and the signal r to be known (and typically of the linear form (2));
(c) assume the regulator to be unknown, but of a certain structure (like (2)).

If the regulator indeed has the form (2), there is no major difference between (a), (b) and (c): the noise-free relation (2) can be exactly determined based on a fairly short data record, and then r carries no further information about the system, if u is measured.
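The sensitivity function and the input spectrum split in (5) can be illustrated numerically by evaluating the transfer functions on the unit circle. The first-order plant, the constant regulator gain, and the white noise model (H_0 = 1) below are assumptions for illustration only:

```python
import cmath

# Illustrative frequency-domain evaluation of S0 and of the input spectrum (5)
# for an assumed plant G0(z) = b/(z - a) with proportional feedback Fy = f.
a, b, f, lam0 = 0.7, 1.0, 0.5, 0.1

def G0(w):                       # G0 evaluated at z = e^{iw}
    z = cmath.exp(1j * w)
    return b / (z - a)

def S0(w):                       # sensitivity function S0 = 1/(1 + Fy*G0)
    return 1 / (1 + f * G0(w))

def phi_u(w, phi_r=1.0):
    # input spectrum (5) with H0 = 1, so phi_v = lam0:
    # the r-part |S0|^2*phi_r plus the e-part |Fy|^2*|S0|^2*phi_v
    return abs(S0(w))**2 * phi_r + f**2 * abs(S0(w))**2 * lam0

for w in (0.1, 1.0, 3.0):
    print(round(w, 1), round(abs(S0(w)), 3), round(phi_u(w), 4))
```

For this example the feedback attenuates at low frequencies (|S_0| < 1) and, by the waterbed effect, amplifies somewhere else (|S_0| > 1 near ω = π).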
The problem in industrial practice is rather that no regulator has this simple, linear form: various delimiters, anti-windup functions and other non-linearities will make the input deviate from (2), even if the regulator parameters (e.g., PID coefficients) are known. This strongly favors the first approach.

In model-based control one can argue that it is important that the model explains the closed-loop behavior of the plant as well as possible; correct modeling of the open-loop system is less critical, at least in some frequency ranges. This would for instance be the case if the regulator was known to contain an integral action in itself. Then the modeling of the low-frequency behavior of the open-loop system would be less important in the subsequent control design. This motivates the two latter approaches.

The methods for closed-loop identification correspondingly fall into the following main groups (see [3]):

1. The Direct Approach: Apply a prediction error method and identify the open-loop system using measurements of the input u and the output y.
2. The Indirect Approach: Identify the closed-loop system using measurements of the reference signal r and the output y, and use this estimate to solve for the open-loop system parameters using the knowledge of the controller.
3. The Joint Input-Output Approach: Identify the transfer functions from r to y and from r to u and use them to compute an estimate of the open-loop system.

In the following we will analyze several methods for closed-loop identification. In particular we will study several schemes for indirect and joint input-output identification.

2.1 Direct identification

In the direct approach one typically works with models of the form

y(t) = G(q, θ) u(t) + H(q, θ) e(t)

The prediction errors for this model are given by

ε(t, θ) = H(q, θ)^{-1} (y(t) - G(q, θ) u(t))

In general, the prediction error estimate is obtained as

θ̂_N = arg min_θ (1/N) Σ_{t=1}^{N} (1/2) ε_F^2(t, θ)

with ε_F = L ε, where L is some stable prefilter. We will assume L ≡ 1, since the prefilter can be included in the noise model. The resulting estimates of the dynamics model and the noise model will be denoted Ĝ_N and Ĥ_N,

Ĝ_N(q) = G(q, θ̂_N),  Ĥ_N(q) = H(q, θ̂_N)

The direct identification approach should be seen as the natural approach to closed-loop data analysis. The main reasons for this are:

- It works regardless of the complexity of the regulator, and requires no knowledge about the character of the feedback.
- No special algorithms and software are required.
- Consistency and optimal accuracy are obtained if the model structure contains the true system (including the noise properties).

There are two drawbacks with the direct approach. One is that we will need good noise models. In open-loop operation we can use output error models (and other models with fixed or independently parameterized noise models) to obtain consistent estimates (but not of optimal accuracy) of G even when the noise model H is insufficient. See Theorem 8.4 in [7]. The second drawback is a consequence of this and appears when a simple model is sought that should approximate the system dynamics in a pre-specified frequency norm. In open loop we can do so with the output error method and a fixed prefilter/noise model that matches the specifications. For closed-loop data, a prefilter/noise model that deviates from the true noise characteristics will introduce bias (cf. (22) below). The natural solution to this would be to first build a higher order model using the direct approach, with small bias, and then reduce this model to lower order with the proper frequency weighting.

Another case that shows the necessity of good noise models concerns unstable systems. For closed-loop data, the true system to be identified could very well be unstable, although the closed-loop system naturally is stable. The prediction error methods require the predictor to be stable. This means that any unstable poles of G must be shared by H, as in ARX, ARMAX and state-space models. Output error models cannot be used in this case. Just as in the open-loop case, models with common parameters between G and H require a consistent noise model for the G-estimate to be consistent.
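A minimal sketch of the direct method, under an assumed ARX(1,1) structure so that S ∈ M (all numbers illustrative): for an ARX model the prediction error estimate is ordinary least squares, applied here to closed-loop data while completely ignoring the feedback.

```python
import random

# Direct identification sketch: fit y(t) = a*y(t-1) + b*u(t-1) + e(t)
# by least squares on closed-loop data, using only u and y.
# Since the ARX structure matches the true system (S in M), the direct
# method is consistent despite the output feedback.
random.seed(0)
a, b, f, N = 0.7, 1.0, 0.5, 20000

y, u = [0.0], [0.0]
for t in range(1, N):
    e = random.gauss(0.0, 0.1)
    r = random.gauss(0.0, 1.0)           # persistently exciting reference
    yt = a * y[-1] + b * u[-1] + e
    y.append(yt)
    u.append(r - f * yt)                 # u(t) = r(t) - Fy*y(t), eq. (2)

# Normal equations for theta = (a, b) with regressors (y(t-1), u(t-1))
syy = sum(y[t-1] * y[t-1] for t in range(1, N))
syu = sum(y[t-1] * u[t-1] for t in range(1, N))
suu = sum(u[t-1] * u[t-1] for t in range(1, N))
sy  = sum(y[t-1] * y[t] for t in range(1, N))
su  = sum(u[t-1] * y[t] for t in range(1, N))
det = syy * suu - syu * syu
a_hat = (sy * suu - su * syu) / det
b_hat = (syy * su - syu * sy) / det
print(a_hat, b_hat)   # close to the true (0.7, 1.0)
```

The key point is that the equation noise e(t) is white and independent of the regressors y(t-1), u(t-1), so least squares remains consistent even though u is correlated with past noise through the feedback.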
2.2 Indirect identification

If the regulator F_y is known and r is measurable, we can use the indirect identification approach. It consists of two steps:

1. Identify the closed-loop system from the reference signal r to the output y.
2. Determine the open-loop system parameters from the closed-loop model obtained in step 1, using the knowledge of the regulator.

An advantage of the indirect approach is that any identification method can be applied in the first step, since estimating the closed-loop system G_cl from measured y and r is an open-loop problem. Therefore methods like spectral analysis, instrumental variables, and subspace methods, which may have problems with closed-loop data, can also be applied. One drawback with indirect identification, though, is that it is not clear, in general, how to perform the second step in an optimal way. In principle, we have to solve the equation

Ĝ_cl = Ĝ / (1 + F_y Ĝ)    (6)

Typically, this gives an over-determined system of equations in the parameters θ, which can be solved in many ways (see, e.g., Section 5 below). The exact solution to (6) is of course

Ĝ = Ĝ_cl / (1 - F_y Ĝ_cl)

but this will lead to a high-order estimate Ĝ: typically the order of Ĝ will equal the order of Ĝ_cl plus the order of the regulator F_y. For methods, like the prediction error method, that allow arbitrary parameterizations G_cl(q, θ) it is natural to let the parameters θ relate to properties of the open-loop system G, so that in the first step we should use a model

y(t) = G_cl(q, θ) r(t) + H_cl(q, θ) e(t)    (7)

with

G_cl(q, θ) = G(q, θ) / (1 + F_y(q) G(q, θ))    (8)

This way the second step becomes superfluous. Since identifying G_cl in (7) is an open-loop problem, consistency will not be lost if we choose a fixed noise model/prefilter H_cl(q, θ) = H_cl*(q) to shape the bias distribution of G_cl (cf. Section 3 below). The parameterization can be arbitrary, and we shall comment on it below.
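The exact inversion of (6) can be checked numerically frequency by frequency. The first-order plant and regulator gain below are illustrative; the closed-loop response is taken as exact, so the round trip should return G_0:

```python
import cmath

# Indirect step 2 sketch: given a closed-loop response Gcl and a known
# regulator Fy = f, recover the open-loop response by inverting (6):
#   G = Gcl / (1 - Fy*Gcl)
a, b, f = 0.7, 1.0, 0.5

errs = []
for w in (0.1, 0.7, 1.5, 2.8):
    z = cmath.exp(1j * w)
    G0 = b / (z - a)                 # assumed true plant
    Gcl = G0 / (1 + f * G0)          # closed-loop transfer function, cf. (6)
    G_back = Gcl / (1 - f * Gcl)     # exact solution of (6)
    errs.append(abs(G_back - G0))
print(max(errs))                     # round-trip error at machine precision
```

In practice Ĝ_cl is an estimate, not the exact G_cl0, and this division generically raises the model order, which is why parameterizing G_cl directly in the open-loop parameters as in (8) is attractive.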
It is quite important to realize that as long as the parameterizations describe the same set of models G, the resulting transfer function Ĝ(q, θ̂_N) will be the same, regardless of the parameterization. The choice of parameterization may thus be important for numerical and algebraic issues, but it does not affect the statistical properties of the estimated transfer function.

The dual-Youla method. A nice and interesting idea is to use the so-called dual-Youla parameterization, which parameterizes all systems that are stabilized by a certain regulator F_y (see, e.g., [14]). In the SISO case it works as follows. Let F_y = X/Y (X, Y stable, coprime) and let G_nom = N/D (N, D stable, coprime) be any system that is stabilized by F_y. Then, as R ranges over all stable transfer functions, the set

{ G : G(q, θ) = (N(q) + Y(q) R(q, θ)) / (D(q) - X(q) R(q, θ)) }

describes all systems that are stabilized by F_y. The unique value of R that corresponds to the true plant G_0 is given by

R_0 = D (G_0 - G_nom) / (Y (1 + F_y G_0))    (9)

This idea can now be used for identification (see, e.g., [4], [5], [12]): given an estimate R̂ of R_0 we can compute an estimate of the transfer function G as

Ĝ = (N + Y R̂) / (D - X R̂)

Note that, using the dual-Youla parameterization, we can write

G_cl(q, θ) = L(q) Y(q) (N(q) + Y(q) R(q, θ))

where L = 1/(Y D + N X) is stable and inversely stable. With this parameterization the identification problem (7) becomes

z(t) = R(q, θ) x(t) + H(q, θ) e(t)    (10)

where

z(t) = y(t) - L(q) N(q) Y(q) r(t)
x(t) = L(q) Y^2(q) r(t)

Thus the dual-Youla parameterization is a special parameterization of the general indirect method. Later we will show that this parameterization does not affect the statistical properties of Ĝ, as we emphasized above. The main advantage of this method is of course that the obtained estimate Ĝ is guaranteed to be stabilized by F_y, which clearly is a nice feature.

2.3 Joint input-output identification

The third main approach to closed-loop identification is the so-called joint input-output approach. Recall that we have the closed-loop relations

y(t) = G_0(q) S_0(q) r(t) + S_0(q) H_0(q) e(t)
u(t) = S_0(q) r(t) - F_y(q) S_0(q) H_0(q) e(t)    (11)

The basic principle is now that once we have obtained estimates of the closed-loop system G_cl0 = G_0 S_0 and the sensitivity function S_0, we can compute an estimate of G_0. Note that estimating G_cl0 and S_0 are open-loop problems, so in principle all open-loop methods can be used (cf. the first step of the indirect approach). However, just as in the indirect approach, computing an estimate of the open-loop system using estimates of closed-loop transfer functions, e.g., G_cl0 and S_0, can cause trouble. For instance, the straightforward approach of dividing Ĝ_cl by Ŝ, i.e.,

Ĝ(q) = Ĝ_cl(q) / Ŝ(q)
will lead to a high-order estimate of G_0; furthermore, the model order cannot be controlled, and typically the order of Ĝ will be the sum of the order of Ĝ_cl and the order of Ŝ. To deal with this, researchers have come up with various methods/parameterizations. In this paper we will study two of these: the so-called coprime factor identification scheme and the two-stage method.

Coprime factor identification. The coprime factor identification scheme [13] can be understood as follows. Rewrite (11) using the filtered signal x = F r:

y(t) = N_0F(q) x(t) + S_0(q) H_0(q) e(t)
u(t) = D_0F(q) x(t) - F_y(q) S_0(q) H_0(q) e(t)

Here

N_0F = G_0 S_0 F^{-1} and D_0F = S_0 F^{-1}

The choice of F is discussed in, e.g., [13]. For our purposes here F can be any stable, linear filter. Identifying N_0F and D_0F using measurements of y, u and x is still an open-loop problem, since x and e are uncorrelated. Typically, a standard prediction error method is used to find the estimates N̂ and D̂. Then the open-loop model is retrieved by

Ĝ(q) = N̂(q) / D̂(q)

A benefit of using prediction error methods is that N and D can be parameterized in a common-denominator form,

N(q) = b(q)/f(q),  D(q) = a(q)/f(q)

This will give us control over the model order of Ĝ, since then

Ĝ(q) = b̂(q)/â(q)

The two-stage method. It can be argued whether the two-stage method [11] is a joint input-output method or not. Below we will give an alternative interpretation of the algorithm that justifies this classification. The two-stage method is usually presented as follows.

1. Identify the sensitivity function S_0 using measurements of u and r.
2. Construct the signal û = Ŝ r and identify the open-loop system as the mapping from û to y.

This method deserves a couple of remarks.

- Exact knowledge of the controller F_y is not necessary, since in the first step only the sensitivity function is modeled.

- In the first step a high-order model of S_0 can be used, since in the second step we can control the open-loop model order independently.
- The simulated signal û will be the noise-free part of the input signal in the feedback system; thus û clearly is independent of the noise e.

The simplicity and robustness of the two-stage method make it an attractive alternative for closed-loop identification. For our analysis it will be convenient to study the following reformulation of the two-stage method. Consider the following single-input, two-output model (rows separated by semicolons):

[u(t); y(t)] = [S(q, η); G(q, ρ) S(q, η)] r(t) + [H_1(q) 0; 0 H_2(q)] e(t)    (12)

The parameter vector is

θ = [ρ; η]

Let

ε_u(t, η) = H_1^{-1}(q) (u(t) - S(q, η) r(t))    (13)

and

ε_y(t, θ) = H_2^{-1}(q) (y(t) - G(q, ρ) S(q, η) r(t))    (14)

The prediction error for the model (12) equals

ε(t, θ) = [ε_u(t, η); ε_y(t, θ)]

With these definitions we can reformulate the two-stage method as

min_θ lim_{w_2→∞} (1/N) Σ_{t=1}^{N} ε^T(t, θ) [1 0; 0 w_2] ε(t, θ)
  = min_θ lim_{w_2→∞} (1/N) Σ_{t=1}^{N} ( ε_u^2(t, η) + w_2 ε_y^2(t, θ) )    (15)

This follows since, for large w_2, the open-loop parameters ρ will minimize

(1/N) Σ_{t=1}^{N} ε_y^2(t, θ) = (1/N) Σ_{t=1}^{N} ( H_2^{-1}(q) (y(t) - G(q, ρ) S(q, η) r(t)) )^2

regardless of the value of η, which then will minimize

(1/N) Σ_{t=1}^{N} ε_u^2(t, η) = (1/N) Σ_{t=1}^{N} ( H_1^{-1}(q) (u(t) - S(q, η) r(t)) )^2

Except for the additional weighting w_2, this is equivalent to the standard joint input-output method, given that we parameterize the model as in (12).

2.4 A formal connection between direct and indirect methods

The noise model H in a linear dynamics model structure has often turned out to be a key to the interpretation of different "methods". The distinction between the models/"methods" ARX, ARMAX, output error, Box-Jenkins, etc., is entirely explained by the choice of the noise model. Also the practically important feature of prefiltering is equivalent to changing the noise model. Even the choice between minimizing one- or k-step prediction errors can be seen as a noise model issue. See, e.g., [7], for all this. Therefore it should not come as a surprise that also the distinction between the fundamental approaches of direct and indirect identification can be seen as a choice of noise model.

One important point of the prediction error approach is that the transfer functions G and H can be arbitrarily parameterized. Suppose that we have a closed-loop system with known regulator F_y as before. We parameterize G as G(q, θ) and H as

H(q, θ) = H_1(q, θ)(1 + F_y(q) G(q, θ))    (16)

We thus link the noise model to the dynamics model. There is nothing strange about that: so do ARX and ARMAX models. Note that this particular parameterization scales H_1 with the inverse model sensitivity function. Now, the predictor for

y(t) = G(q, θ) u(t) + H(q, θ) e(t)

is

ŷ(t|θ) = H^{-1}(q, θ) G(q, θ) u(t) + (1 - H^{-1}(q, θ)) y(t)
       = H_1^{-1}(q, θ) [G(q, θ)/(1 + F_y(q) G(q, θ))] (r(t) - F_y(q) y(t))
         + y(t) - H_1^{-1}(q, θ) [1/(1 + F_y(q) G(q, θ))] y(t)
       = H_1^{-1}(q, θ) [G(q, θ)/(1 + F_y(q) G(q, θ))] r(t) + (1 - H_1^{-1}(q, θ)) y(t)

Now, this is exactly the predictor for the model of the closed-loop system

y(t) = G_cl(q, θ) r(t) + H_1(q, θ) e(t)    (17)

with the closed-loop transfer function parameterized in terms of the open-loop one, as in (8). The indirect approach of estimating the system in terms of the closed-loop model (17) is thus identical to the direct approach with the noise model (16). This holds regardless of the parameterization of G and H_1. Among other things, this shows that we can use any theory developed for the direct approach (allowing for feedback) to evaluate properties of the indirect approach.
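This equivalence can be verified numerically: with the noise model (16), the filters that the direct predictor applies to r and to y coincide with those of the indirect predictor for (17). The particular G, H_1 and F_y below are arbitrary test choices, not system data:

```python
import cmath

# Check of the claim in Section 2.4: with H = H1*(1 + Fy*G) as in (16),
# the direct predictor  yhat = H^-1*G*u + (1 - H^-1)*y,  with u = r - Fy*y,
# applies the same filters to (r, y) as the indirect predictor for (17):
#   from r: H1^-1 * G/(1 + Fy*G),   from y: 1 - H1^-1.
diffs = []
for w in (0.3, 1.1, 2.5):
    z = cmath.exp(1j * w)
    G  = 1.0 / (z - 0.7)       # any parameterization works; this is one point
    H1 = z / (z - 0.2)         # any monic, stable H1
    Fy = 0.5
    H = H1 * (1 + Fy * G)      # noise model (16)

    # direct predictor: substitute u = r - Fy*y and collect terms
    from_r = (1 / H) * G
    from_y = (1 - 1 / H) - (1 / H) * G * Fy
    # indirect predictor for the closed-loop model (17)
    from_r_ind = (1 / H1) * G / (1 + Fy * G)
    from_y_ind = 1 - 1 / H1

    diffs.append(abs(from_r - from_r_ind))
    diffs.append(abs(from_y - from_y_ind))
print(max(diffs))   # zero up to rounding: the two predictors coincide
```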

3 Bias distribution

We shall now characterize in what sense the model approximates the true system when it cannot be exactly described within the model class. This will be a complement to the open-loop discussion in Section 8.5 of [7]. We start with the direct method.

3.1 The direct method

Consider the model

y(t) = G(q, θ) u(t) + H(q, θ) e(t)    (18)

where G(q, θ) is such that either F_y(q) or G(q, θ) contains a delay. We have the prediction errors

ε(t, θ) = (1/H(q, θ)) (y(t) - G(q, θ) u(t))
        = (1/H(q, θ)) { (G_0(q) - G(q, θ)) u(t) + H_0(q) e(t) }
        = (1/H(q, θ)) G̃(q) u(t) + (H_0(q)/H(q, θ) - 1) e(t) + e(t)
        = (1/H(q, θ)) ( G̃(q) u(t) + H̃(q) e(t) ) + e(t)

Here G̃(q) = G_0(q) - G(q, θ) and H̃(q) = H_0(q) - H(q, θ). Inserting (2) for u,

ε(t, θ) = (1/H(q, θ)) ( G̃(q)(S_0(q) r(t) - F_y(q) S_0(q) H_0(q) e(t)) + H̃(q) e(t) ) + e(t)    (19)

Our assumption that the closed-loop system is well defined implies that G̃ F_y contains a delay, as does H̃ (since both H and H_0 are monic). Therefore the last term of (19) is independent of the rest. Computing the spectrum of the first term we get ("*" denotes complex conjugation)

(1/|H|^2) [ |G̃|^2 Φ_u - 2 Re{ H̃ G̃* (F_y S_0 H_0)* } λ_0 + |H̃|^2 λ_0 ]
  = (Φ_u/|H|^2) |G̃ - B|^2 + (λ_0 |H̃|^2/|H|^2) (1 - Φ_u^e/Φ_u)

Let us introduce the notation (B for "bias")

B = (F_y S_0 H_0)* H̃ λ_0 / Φ_u(ω)

Then the spectral density of ε becomes

Φ_ε(ω) = (Φ_u(ω)/|H|^2) |G̃ - B|^2 + (λ_0 |H̃|^2/|H|^2) (1 - Φ_u^e(ω)/Φ_u(ω)) + λ_0    (20)

Note that

|B|^2 = (Φ_u^e(ω)/Φ_u(ω)) (λ_0/Φ_u(ω)) |H̃|^2    (21)

The limiting model will minimize the integral of Φ_ε(ω), according to standard prediction error identification theory. We see that if F_y = 0 (open-loop operation) we have B = 0 and Φ_u^e(ω) = 0, and we re-obtain expressions that are equivalent to those in Section 8.5 in [7]. Let us now focus on the case with a fixed noise model H(q, θ) = H_*(q). This case can be extended to the case of independently parameterized G and H. Recall that any prefiltering of the data or prediction errors is equivalent to changing the noise model. The expressions below therefore cover the case of arbitrary prefiltering.

For a fixed noise model, only the first term of (20) matters in the minimization, and we find that the limiting model is obtained as

G_opt = arg min_G ∫_{-π}^{π} |G_0 - G - B|^2 (Φ_u(ω)/|H_*|^2) dω    (22)

This is identical to the open-loop expression, except for the bias term B. Within the chosen model class, the model G will approximate the biased transfer function G_0 - B as well as possible in the weighted frequency-domain norm above. The weighting function Φ_u(ω)/|H_*|^2 is the same as in the open-loop case. The major difference is thus that an erroneous noise model (or unsuitable prefiltering) may cause the model to approximate a biased transfer function.

Let us comment on the bias function B. First, note that while G (in the fixed noise model case) is constrained to be causal and stable, the term B need not be so. Therefore B can be replaced by its stable, causal component (the "Wiener part") without any changes in the discussion. Next, from (21) we see that the bias-inclination will be small in frequency ranges where any (or all) of the following holds:

- The noise model is good (H̃ is small).
- The feedback contribution to the input spectrum (Φ_u^e(ω)/Φ_u(ω)) is small.
- The signal-to-noise ratio is good (λ_0/Φ_u(ω) is small).

In particular, it follows that if a reasonably flexible, independently parameterized noise model is used, then the bias-inclination of the G-estimate can be negligible.

3.2 Indirect methods

For a moment assume that G_cl is estimated using a prediction error method with a fixed noise model/prefilter H_* and that G_cl is parameterized according to (8). Our model can thus be written

y(t) = G_cl(q, θ) r(t) + H_*(q) e(t)

We then have the following result for the bias: the limiting estimate G_opt is given by

G_opt = arg min_G ∫_{-π}^{π} | G_0/(1 + F_y G_0) - G/(1 + F_y G) |^2 (Φ_r(ω)/|H_*|^2) dω
      = arg min_G ∫_{-π}^{π} |G_0 - G|^2 |S|^2 |S_0|^2 (Φ_r(ω)/|H_*|^2) dω    (23)

Now, this is no clear-cut minimization of the distance G_0 - G. The estimate G_opt will be a compromise between making G close to G_0 and making 1/(1 + F_y G) (the model sensitivity function S) small. There will thus be a "bias-pull" towards transfer functions that give a small sensitivity for the given regulator, but unlike (22) it is not easy to quantify this bias component. However, if the true system can be represented within the model set, this will always be the minimizing model, so there is no bias in this case. It can be questioned whether this "bias-pull" actually is harmful. Indeed, this shaping of the bias distribution by the unknown sensitivity function S_0, which we get automatically in the indirect methods, has been one of the main reasons why authors have recommended indirect methods for closed-loop identification.

The dual-Youla method. Before turning to the joint input-output methods we remark that the dual-Youla method (10), applied with a fixed noise model H_*, gives the following expression for the resulting R estimate:

R_opt = arg min_R ∫_{-π}^{π} |R_0 - R|^2 (Φ_x(ω)/|H_*|^2) dω

From (9) we get that

R_0 - R = (D/Y) [ (G_0 - G_nom)/(1 + F_y G_0) - (G - G_nom)/(1 + F_y G) ]
        = (D/Y) (G_0 - G)(1 + F_y G_nom) S_0 S
        = (1/(L Y^2)) (G_0 - G) S_0 S

Now, from Φ_x(ω) = |L|^2 |Y|^4 Φ_r(ω) it follows that expression (23) characterizes the bias distribution for the dual-Youla method as well. This is in line with the statement that the statistical properties are independent of the parameterization.

3.3 Joint input-output methods

Here we will only discuss the coprime factor identification scheme and the two-stage method.

Coprime factor identification. The bias distribution for the coprime factor identification scheme, when applied with fixed noise models, will be characterized by

min_{N,D} ∫_{-π}^{π} [ |N_0F - N|^2/|H_1|^2 + |D_0F - D|^2/|H_2|^2 ] Φ_x(ω) dω

By carefully choosing the filter F and the noise models H_1 and H_2 we can shape the resulting bias in a control-relevant way, as shown in, e.g., [13].

The two-stage method. For the two-stage method we obtain the following expression for the resulting estimate G_opt from the second step:

G_opt = arg min_G ∫_{-π}^{π} |G_0 S_0 - G S|^2 (Φ_r(ω)/|H_*|^2) dω

where S is the fixed estimate of the sensitivity function obtained from the first step of the algorithm. Note that

|G_0 S_0 - G S|^2 = |(G_0 - G) S_0 + G (S_0 - S)|^2    (24)

Thus it is clear that in cases where S ≠ S_0 we will have a bias-pull towards models G that minimize (24). On the other hand, if in the first step we have obtained a very accurate (high-order) estimate of the sensitivity function S_0, this effect is negligible, so that

|G_0 S_0 - G S|^2 ≈ |G_0 - G|^2 |S_0|^2

Thus the mismatch G_0 - G will be minimized in a frequency-dependent norm that is shaped by the true sensitivity function. Finally, we remark that it is interesting to compare this result with the corresponding results for the indirect methods, e.g., expression (23).

3.4 Arbitrary shaping of the bias distribution

The "dream result" for closed-loop experiments in connection with identification for control would be to have a method that allows fitting the model to the data in a fixed, model-independent and user-defined frequency-domain norm. This is possible for open-loop data using prefiltering and an output error model/method (as in (22) with B = 0). For closed-loop data we either get bias as in (22) or model-dependent norms as in (23). It is not clear whether such a method exists for the general case. However, [8] has pointed out and analyzed such a method for the case of periodic reference signals and time-invariant regulators: for a periodic reference signal, the parts of u and y that originate from r will be periodic after a transient. Now, average y and u over periods corresponding to the period of r. These averages will then converge to a correct, noise-free input-output relationship for the system over one period. Then use these averages as inputs and outputs in a direct output-error identification scheme, possibly with prefiltering. This gives a method with the desired properties.
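A rough sketch of this periodic-averaging idea follows. The plant, regulator, and all numbers are illustrative assumptions, and the final fit on the averaged record is done here with a simple least-squares regression rather than a general output-error method:

```python
import random

# Periodic-reference sketch: average u and y over the periods of r, then
# identify from the averages. Averaging keeps the periodic r-driven part
# and shrinks the noise-driven part by roughly 1/sqrt(M), so the averaged
# data approach a noise-free input-output record over one period.
random.seed(1)
a, b, f = 0.7, 1.0, 0.5
P, M = 50, 400                     # period of r, number of periods

r1 = [random.gauss(0.0, 1.0) for _ in range(P)]   # one period of r
y, u = [0.0], [0.0]
for t in range(1, P * M):
    e = random.gauss(0.0, 0.3)
    yt = a * y[-1] + b * u[-1] + e
    y.append(yt)
    u.append(r1[t % P] - f * yt)

# discard the first period (transient), average the rest period-by-period
ybar = [sum(y[k*P + i] for k in range(1, M)) / (M - 1) for i in range(P)]
ubar = [sum(u[k*P + i] for k in range(1, M)) / (M - 1) for i in range(P)]

# fit a, b on the (nearly noise-free) averaged record by least squares
syy = sum(ybar[i-1]**2 for i in range(1, P))
syu = sum(ybar[i-1] * ubar[i-1] for i in range(1, P))
suu = sum(ubar[i-1]**2 for i in range(1, P))
sy  = sum(ybar[i-1] * ybar[i] for i in range(1, P))
su  = sum(ubar[i-1] * ybar[i] for i in range(1, P))
det = syy * suu - syu * syu
a_hat = (sy * suu - su * syu) / det
b_hat = (syy * su - syu * sy) / det
print(a_hat, b_hat)   # close to the true (0.7, 1.0)
```

By linearity, the averaged signals satisfy the same difference equation as the raw data with the noise scaled down by the averaging, which is what makes a fixed, user-chosen fitting norm possible in this special case.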

4 Asymptotic variance expressions

Let us now consider the asymptotic variance of the estimated transfer function Ĝ_N, using the asymptotic black-box theory of Section 9.4 in [7]; see also [2] for related results.

4.1 The direct method

Note that the basic result

Cov [Ĝ_N; Ĥ_N] ≈ (n/N) Φ_v(ω) [Φ_u(ω) Φ_ue(ω); Φ_ue*(ω) λ_0]^{-1}    (25)

applies also to the closed-loop case. Here n is the model order, N the number of data, Φ_v the spectrum of v = H_0 e, and Φ_ue(ω) the cross spectrum between the input u and the noise source e. From this general expression we can directly solve for the upper left element:

Cov Ĝ_N ≈ (n/N) λ_0 Φ_v(ω) / (λ_0 Φ_u(ω) - |Φ_ue(ω)|^2)

From (4) we easily find that

λ_0 Φ_u - |Φ_ue|^2 = λ_0 |S_0|^2 Φ_r = λ_0 Φ_u^r

so

Cov Ĝ_N ≈ (n/N) Φ_v(ω)/Φ_u^r(ω)    (26)

The denominator of (26) is the spectrum of that part of the input that originates from the reference signal r; the open-loop expression has the total input spectrum here. The expression (26), which also is the asymptotic Cramer-Rao lower limit, tells us precisely "the value of information" of closed-loop experiments: it is the noise-to-signal ratio (where "signal" is what derives from the injected reference) that determines how well the open-loop transfer function can be estimated. From this perspective, that part of the input that originates from the feedback has no information value when estimating G. Since this property is, so to say, inherent in the problem, it should come as no surprise that several newly suggested methods for indirect identification can also be shown to give the same asymptotic variance, namely (26) (see, e.g., [2] and Sections 4.2 and 4.3 below).

The expression (26) also clearly points to the basic problem in closed-loop identification: the purpose of feedback is to make the sensitivity function small, especially at frequencies with disturbances and poor system knowledge. Feedback will thus worsen the measured data's information about the system at these frequencies. Note, though, that this "basic problem" is a practical and not a fundamental one: there are no difficulties, per se, in the closed-loop data; it is just that in practical use the information content is less. We could on purpose make closed-loop experiments with good information content (but poor control performance). Note that the output spectrum is, according to (3),

Φ_y(ω) = |G_0|^2 Φ_u^r(ω) + |S_0|^2 Φ_v(ω)

The corresponding spectrum in open-loop operation would be

Φ_y^open(ω) = |G_0|^2 Φ_u(ω) + Φ_v(ω)

This shows that it may still be desirable to perform a closed-loop experiment: if we have large disturbances at certain frequencies, we can reduce the output spectrum by (1 - |S_0|^2) Φ_v(ω) and still get the same variance for Ĝ_N according to (26).

Note that the basic result (26) is asymptotic as the orders of both G and H tend to infinity. Let us now turn to the case where the noise model is fixed, H(q, θ) = H_*(q). We will then only discuss the simple case where it is fixed to the true value, H_*(q) = H_0(q), and where the bias in Ĝ is negligible. In that case the covariance matrix of θ̂_N is given by the standard result

Cov θ̂_N ≈ (λ_0/N) ( Ē ψ(t, θ_0) ψ(t, θ_0)^T )^{-1}

where ψ(t, θ_0) is the negative gradient of

ε(t, θ) = (1/H_*(q)) (y(t) - G(q, θ) u(t))

evaluated at θ = θ_0. The covariance matrix is thus determined entirely by the second-order properties (the spectrum) of the input, and it is immaterial whether this spectrum is a result of open-loop or closed-loop operation. In particular, we obtain in the case where the model order tends to infinity that

Cov Ĝ_N ≈ (n/N) Φ_v(ω)/Φ_u(ω)

just as in the open-loop case.

4.2 Indirect methods

We will now turn to the variance properties of the indirect approach. According to the open-loop result, the asymptotic variance of Ĝ_clN will be

Cov Ĝ_clN ≈ (n/N) Φ_vcl(ω)/Φ_r(ω) = (n/N) |S_0|^2 Φ_v(ω)/Φ_r(ω)

regardless of the noise model H_*. Here Φ_vcl(ω) is the spectrum of the additive noise v_cl in the closed-loop

(9) system (3), which equals the open-loop additive noise, ltered through the true sensitivity function. To transform this result to variance of the open-loop transfer function, we use Gauss' approximation formula. dG Cov G^ dG  Cov G^ = dG cl dG. It is easy to verify that so. cl. cl. 0. Thus. The dual-Youla method. For the dual-Youla method we get 2 Cov R^N Nn vcl(!(!)) = Nn jS0j (!v ()!) x x Now, using Gauss' approximation formula and x (!) = jLj2jY j4 r (!) the relation we soon arrive at  2 2 2  = n rv (!) Cov G^ N Nn jLjjS20jYj j4v(!()!)   LY 2 S0 N u (!) r which is (26).. 4.3 Joint input-output methods. For the joint input-output methods we note that we may write (11) as      y (t) = N0 (q) r(t) + 1 S0 (q)H0 (q)e(t) u D0 (q) ;Fy (q) (27). where N0 = G0 S0 and D0 = S0 . Now applying the multivariable version of the standard result (25) (refer to 15] for details) to the situation (27) gives  ^  n  (!) NN. w Cov D (28) ^N N r (!) Here we have introduced the signal   w(t) = ;F1y (q) S0 (q)H0 (q)e(t) It follows that  ^  n jS j2  (!)   1 ;Fy NN. 0 v Cov D ^N N r (!) ;Fy jFy j2 Now, since G0 = N0 =D0 we get from Gauss' approximation formula 2 CovG^ N Nn jS0j (!v ()!)  r h i  1 ;Fy  "

(10) 1 # 1 N 0  1  N0 D0. D0. 0. CovG^ N Nn jS j2v(!()!) = Nn rv ((!!)) 0. which { not surprisingly { equals what the direct approach gives, i.e., (26).. ;Fy jFy j2. y. = jS1 j2 (j1 + Fy G0 j2 ) = jS1 j4. Cov G^ N Nn jS j2v(!()!) = Nn rv ((!!)) 0 r u. D0. y. 0. dG 1 dGcl = S02. jD0 j2. Carrying out the algebra and noting that D0 = S0 we see that #  " 1 h1 N0 i 1 ;Fy

(11) 1  = N0 D0 ;F jF j2 jD j2. r. u. just as in the previous cases. Analogous calculations will show that this result holds for the coprime factor identication scheme for arbritrary choices of the lter F . However, more interesting and novel is that this also shows that the two-stage method gives the same asymptotic variance expression as the other closed-loop methods. To see this we employ the alternative interpretation of the two-stage method provided by the criterion in (15). The weight w2 used there can actually be seen as a scaling of the noise model: H2 ! H2 =w2 (cf. (14)). Now, since the result (28) in fact is independent of the noise model (open-loop operation) it follows that the above calculations hold for the two-stage method as well.. 5 Covariance of the parameter vector estimate In this section we derive expressions for the covariance of the parameter vector estimate for the direct method and a number of indirect methods.. 5.1 The direct method. Consider again the model (18) and assume that the dynamics model and the noise model are independently parameterized, i.e. that. G(q ) = G(q ) and H (q ) = H (q  ) where and  refers to the following partitioning of the parameter vector :    =  Also assume that S 2 M (i.e., that the true system is. contained in the model set). It will be convienient to consider the following augmented signal: Let.  0 = ue. then its spectrum is. . .  0 (!) = u ((!!)) ue(!) 0 ue.
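As a numerical aside (a sketch of our own, with an illustrative first-order plant, static controller, and noise levels that do not come from the paper), the decomposition of the input spectrum behind this discussion follows from $u = S_0r - F_yS_0H_0e$: the total spectrum $\Phi_u$ is the sum of the reference part $\Phi_u^r = |S_0|^2\Phi_r$ and a feedback-noise part, and only the former carries information in the sense of (26):

```python
import numpy as np

# Illustrative (made-up) example: G0(q) = 0.1 q^-1 / (1 - 0.9 q^-1),
# static feedback Fy = 0.5, H0 = 1, white r and e with Phi_r = 1, lambda0 = 0.1.
def freq_resp(num, den, w):
    """Evaluate num/den at q = e^{iw}; coefficients are given in powers of q^-1."""
    z = np.exp(-1j * w)  # q^-1 on the unit circle
    return np.polyval(num[::-1], z) / np.polyval(den[::-1], z)

w = np.linspace(1e-3, np.pi, 512)
G0 = freq_resp([0.0, 0.1], [1.0, -0.9], w)
Fy, H0, phi_r, lam0 = 0.5, 1.0, 1.0, 0.1

S0 = 1.0 / (1.0 + Fy * G0)                  # true sensitivity function
phi_u_r = np.abs(S0) ** 2 * phi_r           # reference-induced part of Phi_u
phi_u_e = lam0 * np.abs(Fy * S0 * H0) ** 2  # feedback-noise-induced part of Phi_u
phi_u = phi_u_r + phi_u_e                   # total input spectrum

# The open-loop variance formula divides by phi_u, the closed-loop one (26)
# by phi_u_r only; the gap phi_u_e is the "information-free" part of the input.
print(bool(np.all(phi_u >= phi_u_r)))  # prints True
```

The ratio $\Phi_u/\Phi_u^r \ge 1$ then quantifies, frequency by frequency, how much of the input power is useless for estimating $G$.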

Now, since $\Phi_{ue}(\omega) = -F_yS_0H_0\lambda_0$, we may also write this as
$$\Phi_{\zeta_0}(\omega) = \Phi_{\zeta_0}^{r}(\omega) + \Phi_{\zeta_0}^{e}(\omega), \qquad (29)$$
where
$$\Phi_{\zeta_0}^{r}(\omega) = \begin{bmatrix}\Phi_u^r(\omega) & 0\\ 0 & 0\end{bmatrix}, \qquad \Phi_{\zeta_0}^{e}(\omega) = \lambda_0\begin{bmatrix} F_yS_0H_0\\ -1\end{bmatrix}\begin{bmatrix} F_yS_0H_0\\ -1\end{bmatrix}^{*}.$$
With $\psi$ being the negative gradient of the prediction errors
$$\varepsilon(t,\theta) = \frac{1}{H(q,\theta)}\bigl(y(t)-G(q,\theta)u(t)\bigr)$$
and with $\bar R$ defined as
$$\bar R = \frac{1}{\lambda_0}\,\bar E\,\psi(t,\theta_0)\psi^{T}(t,\theta_0),$$
we have
$$\operatorname{Cov}\hat\theta_N \approx \frac{1}{N}\,\bar R^{-1}. \qquad (30)$$
Using the frequency-domain results in Section 9.4 of [7], we see that $\bar R$ can be rewritten as
$$\bar R = \frac{1}{2\pi}\int_{-\pi}^{\pi} T_0'(e^{i\omega},\theta_0)\,\frac{\Phi_{\zeta_0}(\omega)}{\Phi_v(\omega)}\,T_0'^{\,*}(e^{i\omega},\theta_0)\,d\omega,$$
where $T = \begin{bmatrix} G & H\end{bmatrix}$ and $T_0' = dT/d\theta$ evaluated at $\theta_0$. From (29) it follows that
$$T_0'\,\Phi_{\zeta_0}(\omega)\,T_0'^{\,*} = T_0'\,\Phi_{\zeta_0}^{r}(\omega)\,T_0'^{\,*} + T_0'\,\Phi_{\zeta_0}^{e}(\omega)\,T_0'^{\,*}.$$
We may thus write $\bar R = \bar R^{r} + \bar R^{e}$, where $\bar R^{r}$ is given by
$$\bar R^{r} = \frac{1}{2\pi}\int_{-\pi}^{\pi} T_0'\,\frac{\Phi_{\zeta_0}^{r}(\omega)}{\Phi_v(\omega)}\,T_0'^{\,*}\,d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Phi_u^r(\omega)}{\Phi_v(\omega)}\,G_0'(e^{i\omega},\theta_0)G_0'^{\,*}(e^{i\omega},\theta_0)\,d\omega.$$
Note that $\bar R^{r}$ depends only on $\Phi_u^r(\omega)$ and not on the total input spectrum $\Phi_u(\omega)$, as in the open-loop case. If we partition $\bar R^{r}$ conformably with $\theta$ we see that, due to the chosen parameterization,
$$\bar R^{r} = \begin{bmatrix}\bar R_\rho^{r} & 0\\ 0 & 0\end{bmatrix},$$
where
$$\bar R_\rho^{r} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Phi_u^r(\omega)}{\Phi_v(\omega)}\,G_0'(e^{i\omega},\rho_0)G_0'^{\,*}(e^{i\omega},\rho_0)\,d\omega. \qquad (31)$$
Returning to $\bar R^{e}$ we see that
$$\bar R^{e} = \frac{1}{2\pi}\int_{-\pi}^{\pi} T_0'\,\frac{\Phi_{\zeta_0}^{e}(\omega)}{\Phi_v(\omega)}\,T_0'^{\,*}\,d\omega.$$
Now, $\bar R^{e}$ can be partitioned as
$$\bar R^{e} = \begin{bmatrix}\bar R_\rho^{e} & \bar R_{\rho\eta}^{e}\\ \bar R_{\eta\rho}^{e} & \bar R_\eta^{e}\end{bmatrix},$$
and explicit expressions for the blocks can be found by using
$$T_0'\begin{bmatrix} F_yS_0H_0\\ -1\end{bmatrix} = F_yS_0H_0\,G_0' - H_0'.$$
However, we will not carry out these calculations here. Instead we note that an expression for the covariance of $\hat\rho_N$ can be extracted from the top-left block of $\operatorname{Cov}\hat\theta_N$ (cf. (30)). Using the above expressions we soon arrive at
$$\operatorname{Cov}\hat\rho_N \approx \frac{1}{N}\,(\bar R_\rho^{r} + \Delta)^{-1}, \qquad (32)$$
where
$$\Delta = \bar R_\rho^{e} - \bar R_{\rho\eta}^{e}(\bar R_\eta^{e})^{-1}\bar R_{\eta\rho}^{e} \ge 0$$
is the Schur complement of $\bar R_\eta^{e}$ in the matrix $\bar R^{e}$. Note that, since $\Delta \ge 0$, this contribution has a positive effect on the accuracy. It is important to note that the term $\Delta$ is entirely due to the noise part of the input spectrum. We conclude that in the direct approach the noise in the loop is utilized to reduce the variance. Further insight into the origin of the term $\Delta$ can be gained through the following thought experiment: if the identification experiment is performed with no external reference signal present ($\Phi_r(\omega) = 0$), we get
$$\operatorname{Cov}\hat\rho_N \approx \frac{1}{N}\,\Delta^{-1}.$$
Thus $\Delta$ characterizes the lower limit of achievable accuracy for the direct method.

5.2 Indirect methods

Let us now try to derive similar expressions for indirect identification.

Independently parameterized noise model. Consider the following model:
$$y(t) = G_{cl}(q,\rho)r(t) + H_{cl}(q,\eta)e(t), \qquad (33)$$
where, as in (8), $G_{cl}$ is parameterized in terms of the open-loop parameters $\rho$. Estimating $\rho$ and $\eta$ in (33) is an open-loop problem, so all standard open-loop results can be applied also in this case. As an example, assume $\mathcal S\in\mathcal M$; then we can immediately write down the expression for the covariance of $\hat\rho_N$:
$$\operatorname{Cov}\hat\rho_N \approx \frac{1}{N}\,(\bar R_{cl})^{-1},$$
where
$$\bar R_{cl} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Phi_r(\omega)}{\Phi_{v_{cl}}(\omega)}\,G_{0,cl}'(e^{i\omega},\rho_0)G_{0,cl}'^{\,*}(e^{i\omega},\rho_0)\,d\omega.$$
Note that $G_{0,cl}' = S_0^2G_0'$ and, since $\Phi_{v_{cl}} = |S_0|^2\Phi_v$, we see that
$$\bar R_{cl} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Phi_u^r(\omega)}{\Phi_v(\omega)}\,G_0'(e^{i\omega},\rho_0)G_0'^{\,*}(e^{i\omega},\rho_0)\,d\omega.$$
This is in fact identical to the expression (31), hence
$$\operatorname{Cov}\hat\rho_N \approx \frac{1}{N}\,(\bar R_\rho^{r})^{-1}, \qquad (34)$$
and, as remarked before, this covariance will always be larger than the covariance obtained with the direct method, equation (32), the difference stemming from the term $\Delta$ that is missing in (34). Thus, in terms of accuracy of the parameter estimates, the direct method outperforms the indirect one.

Fixed noise model. Consider now the case where the noise model is fixed, $H(q,\eta) = H_*(q)$. Typically $H_* \ne H_0$, so that we are in the situation $\mathcal S\notin\mathcal M$ and the analysis above does not apply. However, since estimating $\rho$ in (33) with $H(q,\eta) = H_*(q)$ is an open-loop problem, we can use the standard open-loop covariance results for the case of an inconsistent noise model. To this end, assume $G_0\in\mathcal G$. Then we get (cf. expression (9.55) in [7])
$$\operatorname{Cov}\hat\rho_N \approx \frac{1}{N}\,R^{-1}QR^{-1},$$
where
$$R = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Phi_r(\omega)}{|H_*|^2}\,G_{0,cl}'G_{0,cl}'^{\,*}\,d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|S_0|^2\Phi_u^r(\omega)}{|H_*|^2}\,G_0'G_0'^{\,*}\,d\omega$$
and
$$Q = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Phi_{v_{cl}}(\omega)\Phi_r(\omega)}{|H_*|^4}\,G_{0,cl}'G_{0,cl}'^{\,*}\,d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|S_0|^4\Phi_v(\omega)\Phi_u^r(\omega)}{|H_*|^4}\,G_0'G_0'^{\,*}\,d\omega.$$
For all $H_*$,
$$R^{-1}QR^{-1} \ge (\bar R_\rho^{r})^{-1},$$
with equality for $H_* = H_{cl,0} = S_0H_0$. Thus we have the following ranking: the direct method gives better accuracy than the indirect method with an independently parameterized noise model, which in turn gives better accuracy than the indirect method with a fixed noise model.

The dual-Youla method. Suppose we use the following parameterization of the dual-Youla method (10):
$$z(t) = R(q,\rho)x(t) + H(q,\eta)e(t).$$
This is an open-loop problem, since $x(t)$ and $e(t)$ are independent. Hence, if $\mathcal S\in\mathcal M$, then
$$\operatorname{Cov}\hat\rho_N \approx \frac{1}{N}\,(\bar R_{Youla})^{-1},$$
where
$$\bar R_{Youla} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Phi_x(\omega)}{\Phi_{v_{cl}}(\omega)}\,R_0'(e^{i\omega},\rho_0)R_0'^{\,*}(e^{i\omega},\rho_0)\,d\omega.$$
Now, since
$$\Phi_x(\omega) = |L|^2|Y|^4\Phi_r(\omega), \qquad \Phi_{v_{cl}}(\omega) = |S_0|^2\Phi_v(\omega), \qquad S_0 = LY(D - XR_0), \qquad R_0' = L(D - XR_0)^2G_0',$$
we get
$$\bar R_{Youla} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Phi_u^r(\omega)}{\Phi_v(\omega)}\,G_0'(e^{i\omega},\rho_0)G_0'^{\,*}(e^{i\omega},\rho_0)\,d\omega.$$
Thus $\bar R_{Youla} = \bar R_\rho^{r}$, so that
$$\operatorname{Cov}\hat\rho_N \approx \frac{1}{N}\,(\bar R_\rho^{r})^{-1}.$$
Not surprisingly, the accuracy for this method is identical to the accuracy for the other indirect methods with independently parameterized noise models.

5.3 Indirect identification with optimal accuracy

So far in this section we have seen that the indirect methods we have considered give worse accuracy than the direct method. This is not the case in general for indirect identification, as we will see presently. In the following we review and extend a result first derived in [9] (see also [3] and [10] for related results), and we show that indirect identification can give the same level of accuracy as direct identification.

Suppose we identify the closed-loop system using an ARMAX model,
$$A_{cl}(q)y(t) = B_{cl}(q)r(t) + C_{cl}(q)e(t).$$
Thus, with $\varphi$ denoting the closed-loop parameter vector, we model the closed-loop dynamics as
$$G_{cl}(q,\varphi) = \frac{B_{cl}(q)}{A_{cl}(q)}$$

while the noise model becomes
$$H_{cl}(q,\varphi) = \frac{C_{cl}(q)}{A_{cl}(q)}.$$
Also assume $\mathcal S\in\mathcal M$. Next, let the regulator be given by
$$F_y(q) = \frac{X(q)}{Y(q)},$$
where the polynomials $X$ and $Y$ are assumed coprime. Then, if we in the second step of the indirect scheme model the open-loop system as
$$A(q)y(t) = B(q)u(t) + C(q)e(t),$$
it follows from
$$G_{cl} = \frac{G}{1+F_yG}$$
(cf. (6)) that
$$A_{cl} = AY + BX, \qquad B_{cl} = BY, \qquad C_{cl} = CY. \qquad (35)$$
Equation (35) may be interpreted as a system of linear equations in the open-loop parameters $\theta$:
$$\Gamma\theta = \vartheta, \qquad (36)$$
where $\Gamma$ is completely determined by $X$ and $Y$, and $\vartheta$ depends on the estimated closed-loop parameters $\hat\varphi$ and $Y$. The exact definitions of $\Gamma$ and $\vartheta$ are not important at this stage, but explicit expressions can be found in Appendix A. The best unbiased estimate of $\theta$ is the Markov estimate
$$\hat\theta = [\Gamma^{T}(\operatorname{Cov}\vartheta)^{-1}\Gamma]^{-1}\Gamma^{T}(\operatorname{Cov}\vartheta)^{-1}\vartheta,$$
which gives
$$\operatorname{Cov}\hat\theta = [\Gamma^{T}(\operatorname{Cov}\vartheta)^{-1}\Gamma]^{-1}. \qquad (37)$$
Now, by explicitly forming (36) and carrying out the algebra, it can be shown (the proof is given in Appendix A) that
$$\operatorname{Cov}\hat\theta \approx \frac{\lambda_0}{N}\,\bigl[\bar E\,\psi(t,\theta_0)\psi^{T}(t,\theta_0)\bigr]^{-1}. \qquad (38)$$
But (38) is equivalent to the open-loop expression (30). Thus, under certain circumstances, indirect identification gives the same accuracy as direct identification. As will become more apparent below, the fact that the dynamics model and the noise model share the same poles seems crucial for obtaining the same accuracy as with the direct approach.

Independently parameterized noise model. Suppose that we in the first step of the indirect method use the following model with an independently parameterized noise model:
$$y(t) = G_{cl}(q,\varphi)r(t) + H_{cl}(q,\eta)e(t) = \frac{B_{cl}(q)}{F_{cl}(q)}\,r(t) + \frac{C_{cl}(q)}{D_{cl}(q)}\,e(t).$$
This is also known as a Box-Jenkins model. We assume also in this case that $\mathcal S\in\mathcal M$. Let the open-loop model be
$$y(t) = G(q,\rho)u(t) + H(q,\eta)e(t) = \frac{B(q)}{F(q)}\,u(t) + \frac{C(q)}{D(q)}\,e(t).$$
To find the indirect estimate of $G_0$ we should thus solve for $B$ and $F$ in (cf. (35))
$$B_{cl} = BY, \qquad F_{cl} = FY + BX. \qquad (39)$$
This system of equations may be written in a form similar to (36):
$$\tilde\Gamma\rho = \tilde\vartheta. \qquad (40)$$
We may compute the Markov estimate
$$\hat\rho = [\tilde\Gamma^{T}(\operatorname{Cov}\tilde\vartheta)^{-1}\tilde\Gamma]^{-1}\tilde\Gamma^{T}(\operatorname{Cov}\tilde\vartheta)^{-1}\tilde\vartheta.$$
This time, however, we do not obtain the open-loop expression for the variance. Instead, as shown in Appendix B, we get
$$\operatorname{Cov}\hat\rho_N \approx \frac{1}{N}\,(\bar R_\rho^{r})^{-1}, \qquad (41)$$
where $\bar R_\rho^{r}$ is given by (31). We once again conclude that, with an independently parameterized noise model, this indirect method gives worse accuracy than the direct method; the difference is quantified by the term $\Delta$ (cf. (32)) which is missing in (41).

5.4 Joint input-output methods

We will now derive the corresponding covariance expressions for the joint input-output methods. We only give the results for the two-stage method.

The two-stage method. Consider the following formulation of the closed-loop system:
$$\begin{bmatrix} u(t)\\ y(t)\end{bmatrix} = \begin{bmatrix} S_0(q)\\ G_0(q)S_0(q)\end{bmatrix} r(t) + \begin{bmatrix} -F_y(q)S_0(q)H_0(q) & 0\\ 0 & S_0(q)H_0(q)\end{bmatrix} e(t). \qquad (42)$$
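To make the two-stage construction concrete, here is a minimal simulation sketch on a made-up first-order loop (plant gain 0.5, controller gain 0.4, and noise level are our own illustrative choices, not an example from the paper). Stage one fits an FIR model of the map $r \to u$, i.e., an estimate of $S_0$, and produces a noise-free input reconstruction $\hat u$; stage two regresses $y$ on $\hat u$, which is consistent because $e(t)$ is uncorrelated with $\hat u(t-1)$:

```python
import numpy as np

# Toy closed loop: y(t) = 0.5 u(t-1) + e(t), controller u(t) = r(t) - 0.4 y(t).
rng = np.random.default_rng(0)
N = 20000
r = rng.standard_normal(N)
e = 0.3 * rng.standard_normal(N)

y = np.zeros(N)
u = np.zeros(N)
for t in range(N):
    y[t] = (0.5 * u[t - 1] if t > 0 else 0.0) + e[t]
    u[t] = r[t] - 0.4 * y[t]

# Stage 1: FIR least-squares fit of r -> u (an estimate of S0(q)), then
# reconstruct the noise-free part of the input, u_hat = S0_hat(q) r.
m = 8
Phi = np.column_stack([np.roll(r, k) for k in range(m)])
Phi[:m, :] = 0.0  # discard rows contaminated by the circular shift
s_hat = np.linalg.lstsq(Phi, u, rcond=None)[0]
u_hat = Phi @ s_hat

# Stage 2: regress y(t) on u_hat(t-1); e(t) is independent of r and hence of
# u_hat(t-1), so the plant-gain estimate is consistent despite the feedback.
b_hat = np.linalg.lstsq(u_hat[:-1, None], y[1:], rcond=None)[0][0]
print(bool(abs(b_hat - 0.5) < 0.05))  # prints True: b_hat is close to 0.5
```

In this white-noise example a direct regression of $y(t)$ on the measured $u(t-1)$ would also happen to be consistent; the point of the reconstruction step is that $\hat u$ is built from $r$ only, which protects the second-stage fit when the noise is colored or the noise model is misspecified.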

Let
$$\Lambda = \bar E\,e(t)e^{T}(t) = \begin{bmatrix}\lambda_1 & \lambda_{12}\\ \lambda_{12} & \lambda_2\end{bmatrix}.$$
Our single-input, two-output model (12) is
$$\begin{bmatrix} u(t)\\ y(t)\end{bmatrix} = \begin{bmatrix} S(q,\beta)\\ G(q,\rho)S(q,\beta)\end{bmatrix} r(t) + \begin{bmatrix} H_1(q,\eta) & 0\\ 0 & H_2(q)\end{bmatrix} e(t),$$
with the combined parameter vector $\theta$ partitioned into the dynamics parameters $\rho$ and the remaining parameters $\beta$ (of $S$) and $\eta$ (of $H_1$). For convenience we introduce
$$V_N(\theta,w_2) = \frac{1}{N}\sum_{t=1}^{N}\frac{1}{2}\,\varepsilon^{T}(t,\theta)W\varepsilon(t,\theta), \qquad W = \begin{bmatrix}1 & 0\\ 0 & w_2^2\end{bmatrix},$$
and
$$\bar V_N(\theta) = \lim_{w_2\to\infty} V_N(\theta,w_2).$$
The resulting prediction error estimate for the two-stage method is given by
$$\hat\theta_N = \arg\min_\theta \bar V_N(\theta).$$
Let
$$H_u(q) = \frac{-F_y(q)S_0(q)H_0(q)}{H_1(q)} = \sum_{i=0}^{\infty} h_u(i)q^{-i}$$
and similarly
$$H_y(q) = \frac{S_0(q)H_0(q)}{H_2(q)} = \sum_{i=0}^{\infty} h_y(i)q^{-i}.$$
Now define
$$\psi(t,\theta) = -\frac{d}{d\theta}\varepsilon^{T}(t,\theta) = \begin{bmatrix} -\frac{d}{d\theta}\varepsilon_u(t,\theta) & -\frac{d}{d\theta}\varepsilon_y(t,\theta)\end{bmatrix} = \begin{bmatrix}\psi_u(t,\theta) & \psi_y(t,\theta)\end{bmatrix}$$
and
$$\tilde\psi(t,\theta) = \begin{bmatrix}\tilde\psi_u(t,\theta) & \tilde\psi_y(t,\theta)\end{bmatrix}, \qquad \tilde\psi_u(t,\theta) = \sum_{i=0}^{\infty} h_u(i)\psi_u(t+i,\theta), \qquad \tilde\psi_y(t,\theta) = \sum_{i=0}^{\infty} h_y(i)\psi_y(t+i,\theta).$$
Our goal is to derive an expression for the covariance of the estimate $\hat\theta_N$. Using the standard results in Chapter 9 of [7], we can immediately write down
$$\operatorname{Cov}\hat\theta_N \approx \frac{1}{N}\,R^{-1}QR^{-1},$$
where
$$R = \bar E\,\bar V_N''(\theta_0), \qquad Q = \lim_{N\to\infty} N\cdot E\bigl\{[\bar V_N'(\theta_0)][\bar V_N'(\theta_0)]^{T}\bigr\}.$$
Alternatively we can write
$$\operatorname{Cov}\hat\theta_N \approx \frac{1}{N}\lim_{w_2\to\infty} R_{w_2}^{-1}Q_{w_2}R_{w_2}^{-1},$$
where now
$$R_{w_2} = \bar E\,V_N''(\theta_0,w_2) = \bar E\,\psi(t,\theta_0)W\psi^{T}(t,\theta_0) = \bar E\bigl\{\psi_u\psi_u^{T} + w_2^2\,\psi_y\psi_y^{T}\bigr\}$$
and
$$Q_{w_2} = \lim_{N\to\infty} N\cdot E\bigl\{[V_N'(\theta_0,w_2)][V_N'(\theta_0,w_2)]^{T}\bigr\} = \bar E\,\tilde\psi(t,\theta_0)W\Lambda W\tilde\psi^{T}(t,\theta_0)$$
$$= \bar E\bigl\{\lambda_1\tilde\psi_u\tilde\psi_u^{T} + w_2^2\lambda_{12}(\tilde\psi_u\tilde\psi_y^{T} + \tilde\psi_y\tilde\psi_u^{T}) + w_2^4\lambda_2\tilde\psi_y\tilde\psi_y^{T}\bigr\}.$$
It follows that
$$\operatorname{Cov}\hat\theta_N \approx \frac{1}{N}\lim_{w_2\to\infty} R_{w_2}^{-1}Q_{w_2}R_{w_2}^{-1} = \frac{\lambda_2}{N}\,\bigl[\bar E\,\psi_y\psi_y^{T}\bigr]^{-1}\bigl[\bar E\,\tilde\psi_y\tilde\psi_y^{T}\bigr]\bigl[\bar E\,\psi_y\psi_y^{T}\bigr]^{-1}.$$
The covariance matrix for $\hat\rho_N$ is the top-left block of this quantity. An explicit expression for this covariance matrix can be obtained as follows. Let
$$R_y = \bar E\,\psi_y(t,\theta_0)\psi_y^{T}(t,\theta_0) = \begin{bmatrix} R_\rho & R_{\rho\beta}\\ R_{\beta\rho} & R_\beta\end{bmatrix}.$$
Then
$$R_y^{-1} = \begin{bmatrix}\Delta^{-1} & -\Delta^{-1}R_{\rho\beta}R_\beta^{-1}\\ -R_\beta^{-1}R_{\beta\rho}\Delta^{-1} & \hat\Delta^{-1}\end{bmatrix},$$
where
$$\Delta = R_\rho - R_{\rho\beta}R_\beta^{-1}R_{\beta\rho}, \qquad \hat\Delta = R_\beta - R_{\beta\rho}R_\rho^{-1}R_{\rho\beta}.$$
If we now introduce
$$Q_y = \bar E\,\tilde\psi_y(t,\theta_0)\tilde\psi_y^{T}(t,\theta_0) = \begin{bmatrix} Q_\rho & Q_{\rho\beta}\\ Q_{\beta\rho} & Q_\beta\end{bmatrix},$$
we get
$$\operatorname{Cov}\hat\rho_N \approx \frac{\lambda_2}{N}\,\Delta^{-1}\bigl[Q_\rho - R_{\rho\beta}R_\beta^{-1}Q_{\beta\rho} - Q_{\rho\beta}R_\beta^{-1}R_{\beta\rho} + R_{\rho\beta}R_\beta^{-1}Q_\beta R_\beta^{-1}R_{\beta\rho}\bigr]\Delta^{-1}.$$
A special case is when the noise models are correct, i.e., when $H_1(q) = -F_y(q)S_0(q)H_0(q)$ and $H_2(q) = S_0(q)H_0(q)$. Then $Q_y = R_y$, so that
$$\operatorname{Cov}\hat\theta_N \approx \frac{\lambda_2}{N}\,R_y^{-1}$$
and
$$\operatorname{Cov}\hat\rho_N \approx \frac{\lambda_2}{N}\,\Delta^{-1} = \frac{\lambda_2}{N}\,\bigl[R_\rho - R_{\rho\beta}R_\beta^{-1}R_{\beta\rho}\bigr]^{-1}.$$
Since $\Delta \le R_\rho$, we conclude that $\frac{\lambda_2}{N}R_\rho^{-1}$ is a lower bound on $\operatorname{Cov}\hat\rho_N$. In the frequency domain, $R_\rho$ can be written (given that $H_2(q) = S_0(q)H_0(q)$)
$$R_\rho = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\Phi_r(\omega)}{|H_0(e^{i\omega})|^2}\,G_0'(e^{i\omega},\rho_0)G_0'^{\,*}(e^{i\omega},\rho_0)\,d\omega.$$
This is the inverse of the covariance matrix that would have resulted had we identified the system in open loop, i.e., with $u \equiv r$. Thus it follows that the two-stage method gives worse accuracy than identification in open loop. The relation to the previous results for the closed-loop situation is not obvious from the above expressions.

6 Summarizing remarks

We may summarize the basic issues in closed-loop identification as follows:

- The basic problem with closed-loop data is that they typically contain less information about the open-loop system -- an important purpose of feedback is to make the closed-loop system insensitive to changes in the open-loop system.

- Prediction error methods, applied in a direct fashion, with a noise model that can describe the true noise properties, still give consistent estimates and optimal accuracy. No knowledge of the feedback is required. This should be regarded as the prime choice of method.

- Several methods that give consistent estimates for open-loop data may fail when applied in a direct way to closed-loop identification. This includes spectral and correlation analysis, the instrumental variable method, the subspace methods, and output error methods with an incorrect noise model.

- If the regulator mechanism is correctly known, indirect identification can be applied. Its basic advantage is that the dynamics model $G$ can be consistently estimated even without estimating any noise model. A drawback is that indirect methods typically give worse accuracy than the direct method.

- Joint input-output methods can be applied whenever the reference signal is measurable; knowledge of the regulator is not necessary. The joint input-output methods provide consistent estimates of $G$ even with fixed prefilters/noise models, just as the indirect methods, but give worse accuracy than the direct method.

Acknowledgments

The authors wish to thank Dr. Paul Van den Hof for inspiring discussions and his contributions to the derivation of the asymptotic variance results for the two-stage method.

A Proof of (38)

Let
$$A(q) = 1 + a_1q^{-1} + \dots + a_{n_a}q^{-n_a},$$
$$B(q) = b_1q^{-1} + \dots + b_{n_b}q^{-n_b},$$
$$C(q) = 1 + c_1q^{-1} + \dots + c_{n_c}q^{-n_c},$$
and similarly for the closed-loop polynomials. We can, without loss of generality, take the regulator polynomials to be
$$X(q) = x_0 + x_1q^{-1} + \dots + x_{n_x}q^{-n_x},$$
$$Y(q) = 1 + y_1q^{-1} + \dots + y_{n_y}q^{-n_y}.$$
It follows that $\Gamma$ in (36) can be written (the partitioning refers to the parameters in the $A$, $B$, and $C$ polynomials)
$$\Gamma = \begin{bmatrix}\Gamma_Y & \Gamma_X & 0\\ 0 & \Gamma_Y & 0\\ 0 & 0 & \Gamma_Y\end{bmatrix},$$
where $\Gamma_Y$ and $\Gamma_X$ are the lower-triangular Toeplitz (convolution) matrices
$$\Gamma_Y = \begin{bmatrix}1 & & \\ y_1 & 1 & \\ \vdots & y_1 & \ddots\\ & \vdots & \ddots\end{bmatrix}, \qquad \Gamma_X = \begin{bmatrix}x_0 & & \\ x_1 & x_0 & \\ \vdots & x_1 & \ddots\\ & \vdots & \ddots\end{bmatrix},$$
while
$$\vartheta = \begin{bmatrix}\hat a_{cl,1} - y_1 & \hat a_{cl,2} - y_2 & \cdots & \hat b_{cl,0} & \hat b_{cl,1} & \cdots & \hat c_{cl,1} - y_1 & \hat c_{cl,2} - y_2 & \cdots\end{bmatrix}^{T}.$$
We have (as in (37))
$$\operatorname{Cov}\hat\theta = [\Gamma^{T}(\operatorname{Cov}\vartheta)^{-1}\Gamma]^{-1}.$$
However,
$$\operatorname{Cov}\vartheta \approx \frac{\lambda_0}{N}\,\bigl[\bar E\,\psi_{cl}(t,\varphi_0)\psi_{cl}^{T}(t,\varphi_0)\bigr]^{-1},$$
where $\psi_{cl}$ is the negative gradient of
$$\varepsilon_{cl}(t,\varphi) = \frac{A_{cl}(q)}{C_{cl}(q)}\Bigl(y(t) - \frac{B_{cl}(q)}{A_{cl}(q)}\,r(t)\Bigr),$$
which, evaluated at $\varphi = \varphi_0$, becomes
$$\psi_{cl}(t,\varphi_0) = \frac{1}{C_{cl,0}(q)}\bigl[-y(t-1),\,\dots,\,-y(t-n_{a_{cl}}),\ r(t-1),\,\dots,\,r(t-n_{b_{cl}}),\ e(t-1),\,\dots,\,e(t-n_{c_{cl}})\bigr]^{T}. \qquad (43)$$
Thus
$$\operatorname{Cov}\hat\theta \approx \frac{\lambda_0}{N}\,\bigl[\bar E\,\Gamma^{T}\psi_{cl}(t,\varphi_0)\psi_{cl}^{T}(t,\varphi_0)\Gamma\bigr]^{-1}.$$
But since
$$\frac{Y(q)}{C_{cl,0}(q)} = \frac{1}{C_0(q)} \qquad\text{and}\qquad \frac{X(q)}{C_{cl,0}(q)}\,y(t) - \frac{Y(q)}{C_{cl,0}(q)}\,r(t) = -\frac{1}{C_0(q)}\,u(t),$$
we have
$$\Gamma^{T}\psi_{cl}(t,\varphi_0) = \frac{1}{C_0(q)}\bigl[-y(t-1),\,\dots,\,-y(t-n_a),\ u(t-1),\,\dots,\,u(t-n_b),\ e(t-1),\,\dots,\,e(t-n_c)\bigr]^{T} = \psi(t,\theta_0). \qquad (44)$$
It follows that
$$\operatorname{Cov}\hat\theta \approx \frac{\lambda_0}{N}\,\bigl[\bar E\,\psi(t,\theta_0)\psi^{T}(t,\theta_0)\bigr]^{-1},$$
which is (38). $\Box$

B Proof of (41)

Let
$$B(q) = b_1q^{-1} + \dots + b_{n_b}q^{-n_b},$$
$$F(q) = 1 + f_1q^{-1} + \dots + f_{n_f}q^{-n_f},$$
$$C(q) = 1 + c_1q^{-1} + \dots + c_{n_c}q^{-n_c},$$
$$D(q) = 1 + d_1q^{-1} + \dots + d_{n_d}q^{-n_d},$$
and similarly for the closed-loop polynomials, and let $X$ and $Y$ be as in Appendix A. $\tilde\Gamma$ and $\tilde\vartheta$ in (40) become
$$\tilde\Gamma = \begin{bmatrix}\Gamma_Y & 0\\ \Gamma_X & \Gamma_Y\end{bmatrix}, \qquad \tilde\vartheta = \begin{bmatrix}\hat b_{cl,1} & \hat b_{cl,2} & \cdots & \hat f_{cl,1} - y_1 & \hat f_{cl,2} - y_2 & \cdots\end{bmatrix}^{T}.$$
Similar to the derivation in Appendix A, we then get
$$\hat\rho = [\tilde\Gamma^{T}(\operatorname{Cov}\tilde\vartheta)^{-1}\tilde\Gamma]^{-1}\tilde\Gamma^{T}(\operatorname{Cov}\tilde\vartheta)^{-1}\tilde\vartheta,$$
while
$$\operatorname{Cov}\tilde\vartheta \approx \frac{\lambda_0}{N}\,\bigl[\bar E\,\psi_{cl}(t,\rho_0)\psi_{cl}^{T}(t,\rho_0)\bigr]^{-1}.$$
Here $\psi_{cl}$ is the negative gradient of the closed-loop prediction errors,
$$\varepsilon_{cl}(t,\rho) = \frac{1}{H_{cl}(q,\eta)}\bigl(y(t) - G_{cl}(q,\rho)r(t)\bigr) = \frac{1}{H_{cl}(q,\eta)}\Bigl(y(t) - \frac{B_{cl}(q)}{F_{cl}(q)}\,r(t)\Bigr),$$
taken with respect to the closed-loop parameters, giving
$$\psi_{cl}(t,\rho_0) = \frac{1}{H_{cl,0}(q)F_{cl,0}(q)}\Bigl[r(t-1),\,\dots,\,r(t-n_{b_{cl}}),\ -\frac{B_{cl,0}(q)}{F_{cl,0}(q)}\bigl(r(t-1),\,\dots,\,r(t-n_{f_{cl}})\bigr)\Bigr]^{T},$$
where $H_{cl,0} = S_0H_0$. Now, since
$$\frac{F_{cl,0}(q)Y(q) - B_{cl,0}(q)X(q)}{F_{cl,0}^2(q)} = \frac{S_0^2(q)}{F_0(q)}, \qquad \frac{B_{cl,0}(q)Y(q)}{F_{cl,0}^2(q)} = \frac{S_0^2(q)B_0(q)}{F_0^2(q)},$$
we get (after some calculations)
$$\tilde\Gamma^{T}\psi_{cl}(t,\rho_0) = \frac{S_0(q)}{H_0(q)F_0(q)}\Bigl[r(t-1),\,\dots,\,r(t-n_b),\ -\frac{B_0(q)}{F_0(q)}\bigl(r(t-1),\,\dots,\,r(t-n_f)\bigr)\Bigr]^{T},$$
so
$$\tilde\Gamma^{T}\psi_{cl}(t,\rho_0) = -\frac{S_0(q)}{H_0(q)}\,\frac{d}{d\rho}\Bigl(y(t) - \frac{B(q)}{F(q)}\,r(t)\Bigr)\Big|_{\rho=\rho_0} = \frac{S_0(q)}{H_0(q)}\,\frac{d}{d\rho}\,G(q,\rho)r(t)\Big|_{\rho=\rho_0},$$
and (41) follows immediately after switching to the frequency domain. $\Box$

References

[1] M. Gevers. Towards a joint design of identification and control. In H. L. Trentelman and J. C. Willems, editors, Essays on Control: Perspectives in the Theory and its Applications, pages 111-151. Birkhäuser, 1993.

[2] M. Gevers, L. Ljung, and P. Van den Hof. Asymptotic variance expressions for closed-loop identification and their relevance in identification for control. 1997. To be presented at SYSID'97, Fukuoka, Japan.

[3] I. Gustavsson, L. Ljung, and T. Söderström. Identification of processes in closed loop -- identifiability and accuracy aspects. Automatica, 13:59-75, 1977.

[4] F. R. Hansen. A fractional representation approach to closed-loop system identification and experiment design. PhD thesis, Stanford University, Stanford, CA, USA, 1989.

[5] F. R. Hansen, G. F. Franklin, and R. Kosut. Closed-loop identification via the fractional representation: Experiment design. In Proceedings of the American Control Conference, pages 1422-1427, Pittsburgh, PA, 1989.

[6] W. S. Lee, B. D. O. Anderson, I. M. Y. Mareels, and R. L. Kosut. On some key issues in the windsurfer approach to adaptive robust control. Automatica, 31(11):1619-1636, 1995.

[7] L. Ljung. System Identification: Theory for the User. Prentice-Hall, 1987.

[8] T. McKelvey. Periodic excitation for identification of dynamic errors-in-variables systems operating in closed loop. In Proc. 13th IFAC World Congress, volume J, pages 155-160, San Francisco, CA, 1996.

[9] T. Söderström, L. Ljung, and I. Gustavsson. On the accuracy of identification and the design of identification experiments. Technical Report 7428, Department of Automatic Control, Lund Institute of Technology, Lund, Sweden, 1974.

[10] T. Söderström, P. Stoica, and B. Friedlander. An indirect prediction error method for system identification. Automatica, 27:183-188, 1991.

[11] P. M. J. Van den Hof and R. J. P. Schrama. An indirect method for transfer function estimation from closed loop data. Automatica, 29(6):1523-1527, 1993.

[12] P. M. J. Van den Hof and R. J. P. Schrama. Identification and control -- closed-loop issues. Automatica, 31(12):1751-1770, 1995.

[13] P. M. J. Van den Hof, R. J. P. Schrama, R. A. de Callafon, and O. H. Bosgra. Identification of normalized coprime factors from closed-loop experimental data. European Journal of Control, 1(1):62-74, 1995.

[14] M. Vidyasagar. Control System Synthesis: A Factorization Approach. MIT Press, Cambridge, MA, 1985.

[15] Y.-C. Zhu. Black-box identification of MIMO transfer functions: Asymptotic properties of prediction error models. Int. Journal of Adaptive Control and Signal Processing, 3:357-373, 1989.
