
Speeding up the Training of Lattice–Ladder Multilayer Perceptrons

Dalius Navakauskas

Division of Automatic Control

Department of Electrical Engineering

Linköpings universitet, SE-581 83 Linköping, Sweden

WWW: http://www.control.isy.liu.se

E-mail: dalius@isy.liu.se

March 19, 2002


Report no.: LiTH-ISY-R-2417

Submitted to Neural Networks

Technical reports from the Control & Communication group in Linköping are available at http://www.control.isy.liu.se/publications.


Speeding up the Training of Lattice–Ladder Multilayer Perceptrons

Dalius Navakauskas

Radioelectronics Department, Electronics Faculty

Vilnius Gediminas Technical University

Aušros Vartų 7A, 2600 Vilnius, Lithuania

March 19, 2002

Abstract

A lattice–ladder multilayer perceptron (LLMLP) is an appealing structure for advanced signal processing in the sense that it is nonlinear, possesses an infinite impulse response, and its stability is simple to monitor during training. However, even a moderate implementation of LLMLP training is hindered by the fact that a lot of storage and computational power must be allocated. In this paper we deal with the computational efficiency of LLMLP training algorithms that are based on the computation of gradients, e.g., backpropagation, conjugate gradient or Levenberg-Marquardt. The paper aims to explore the most computationally demanding part of such training: the computation of gradients for the lattice (rotation) parameters. We find, and propose to use for the training of several LLMLP architectures, the computation of exact gradients that is simplest in terms of storage and the number of delay elements, under the assumption that the coefficients of the lattice–ladder filter are held stationary.

Keywords: Gradient adaptive lattice algorithms, lattice–ladder filter, lattice–ladder multilayer perceptron, training, estimation, adaptation, backpropagation.

1 Introduction

The ability of Artificial Neural Networks (ANNs) to perform complex nonlinear mappings using simple basis functions, without requiring deep knowledge about the process that generates the data, is the key reason for their success in signal processing applications. During the last decade a lot of new structures for ANNs were introduced, fulfilling the need for models capable of nonlinear processing of time-varying signals [8, 9, 17, 18].

Currently with the Automatic Control Division at ISY, Linköping University, whose financial support is gratefully acknowledged.

One of many, and perhaps the most straightforward, ways to insert temporal behaviour into ANNs is to use digital filters in place of the synapses of a multilayer perceptron (MLP). Following that route, a time-delay neural network [20], FIR/IIR MLPs [1, 22], the gamma MLP [10] and a cascade neural network [5], to name a few ANN architectures, were developed. All such architectures belong to the class of locally-recurrent globally-feedforward neural networks (for an overview of ANN architectures see [19]).

A lattice–ladder realization of IIR filters incorporated as MLP synapses forms the structure of the lattice–ladder multilayer perceptron (LLMLP), first introduced by A. Back and A. C. Tsoi [2] and followed by several simplified versions proposed by the author [11, 13]. An LLMLP is an appealing structure for advanced signal processing in the sense that it is nonlinear, possesses an infinite impulse response, and its stability is simple to monitor during training. However, even a moderate implementation of LLMLP training is hindered by the fact that a lot of storage and computation power must be allocated [12].

Well known neural network training algorithms such as backpropagation and its modifications, conjugate gradients, Quasi-Newton, Levenberg-Marquardt, etc., or their adaptive counterparts like temporal backpropagation [21], the IIR MLP training algorithm [3], recursive Levenberg-Marquardt [14], etc., are essentially based on the use of gradients (also called sensitivity functions), i.e., partial derivatives of the cost function with respect to the current weights. In fact, the main reason backpropagation became so popular was that it offered an efficient way of computing these gradients. Here we are not going to consider why and which adaptive training algorithm applied to the LLMLP is the best (such studies, e.g., for FIR MLP training algorithms, can be found in [4, 6]). Rather, we will try to explore the backbone of all the mentioned training algorithms: the computation of gradients specific to the LLMLP. In order not to obscure the main ideas, we will first work only with one LLF and afterwards generalize the results to the particular ANN structures.

The main way of simplifying the computation of gradients that we have chosen here is not new: it is known mostly from the theory of adaptation of lattice–ladder filters (LLF) [15]. However, so far it has not been used to simplify training algorithms for ANNs. Moreover, here we present all possible rearrangements of the simplified algorithm, trying to identify the most efficient one in terms of the number of multiplications and delay elements.

The organization of the paper is as follows. First, in Section 2, some previous results on the computation of LLF gradients are described. They uncover the main drawback of the usual computation of gradients and introduce the simplification we have chosen. In Section 3, a general statement of the problem and a roadmap of the explored ways of simplifying the computations are presented. A typical example of the simplification of the computation of LLF gradients is presented in Section 4, while the results of all undertaken explorations are summarized in Appendix A, with a discussion given in Section 5. Finally, in Section 6, applications of the chosen way of gradient computation to simplify the training algorithms for the LLMLP, Reduced Size LLMLP and Extra Reduced Size LLMLP structures are presented.

2 Some Previous Results on Coefficient Adaptation of Lattice–Ladder Filter

In order not to obscure the main ideas, we will first work only with one LLF and afterwards, in Section 6, generalize the results to the particular LLMLP structures. Although the final LLF training algorithm is fairly simple, many intermediate steps involved in the derivation could be confusing. Thus, here we will present previous results on the calculation of LLF gradients, introduce the simplifications we have chosen, and in the next sections actually state the problem and derive all the simplified expressions.

Let us consider here a class of lattice–ladder gradient descent algorithms used to compute the LLF coefficient update based on the minimization of the instantaneous mean square error criterion

$$E(n) = E\big[e(n)^2\big] = E\big[(s_{out}(n) - d(n))^2\big], \tag{1}$$

where s_out(n) is the current output signal of the LLF, d(n) is the desired signal, e(n) is the instantaneous error signal and E[·] is the expected value operator. Usually, gradient descent algorithms seek a minimum point of the cost function E(n) by adjusting the parameters in the negative gradient direction of this cost function. For our study it really does not matter how the gradients are used in the coefficient update, but it is important that the gradients are calculated instantaneously.
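For concreteness, here is a minimal sketch (assuming a user-chosen step size mu, which the report does not specify) of how such instantaneous gradients would typically enter a stochastic gradient descent update of the lattice and ladder coefficients:

```python
import numpy as np

def sgd_update(theta, v, grad_theta, grad_v, mu=1e-3):
    """One stochastic gradient descent step on the LLF coefficients.

    theta, v           : current lattice (rotation) and ladder coefficients
    grad_theta, grad_v : instantaneous gradients of E(n) with respect to them
    mu                 : step size (an assumption for illustration)
    """
    theta_new = np.asarray(theta) - mu * np.asarray(grad_theta)
    v_new = np.asarray(v) - mu * np.asarray(grad_v)
    return theta_new, v_new
```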

2.1 Standard Gradient Expressions

Consider one lattice–ladder filter as used in the LLMLP structure, whose computations are expressed by

$$\begin{bmatrix} f_{j-1}(n) \\ b_j(n) \end{bmatrix} =
\begin{bmatrix} \cos\Theta_j & -\sin\Theta_j \\ \sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n) \\ z\,b_{j-1}(n) \end{bmatrix}, \qquad j = 1, 2, \dots, M; \tag{2a}$$

$$s_{out}(n) = \sum_{j=0}^{M} v_j\, b_j(n), \tag{2b}$$

with boundary conditions

$$b_0(n) = f_0(n); \qquad f_M(n) = s_{in}(n). \tag{2c}$$

Here we used the following notations: s_in(n) and s_out(n) are the signals at the input and the output of the LLF, f_j(n) and b_j(n) are the forward and backward signals flowing in the j-th section of the LLF, Θ_j and v_j are the lattice and ladder coefficients, respectively, and z denotes the one-sample delay operator.
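To make the data flow of (2) concrete, here is a minimal per-sample sketch of the lattice–ladder filter in Python; the helper name, argument layout and use of NumPy are my own choices for illustration:

```python
import numpy as np

def llf_step(s_in, theta, v, b_delayed):
    """One time step of the lattice-ladder filter (2a)-(2c).

    s_in      : input sample s_in(n)
    theta     : lattice coefficients Theta_1..Theta_M   (length M)
    v         : ladder coefficients v_0..v_M            (length M+1)
    b_delayed : backward signals b_0(n-1)..b_M(n-1)     (length M+1), the delay states
    Returns (s_out, f, b) with f[j] = f_j(n) and b[j] = b_j(n).
    """
    M = len(theta)
    f = np.zeros(M + 1)
    b = np.zeros(M + 1)
    f[M] = s_in                                   # boundary condition f_M(n) = s_in(n)
    for j in range(M, 0, -1):                     # sections j = M, ..., 1
        f[j - 1] = np.cos(theta[j - 1]) * f[j] - np.sin(theta[j - 1]) * b_delayed[j - 1]
        b[j]     = np.sin(theta[j - 1]) * f[j] + np.cos(theta[j - 1]) * b_delayed[j - 1]
    b[0] = f[0]                                   # boundary condition b_0(n) = f_0(n)
    s_out = float(np.dot(v, b))                   # ladder part (2b)
    return s_out, f, b
```

The returned array b becomes the delay state b_delayed for the next sample.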

It can be shown (see for example [7]) that the gradient expressions for the LLF coefficients are

$$\nabla v_j(n) = \frac{\partial E(n)}{\partial v_j} = e(n)\, b_j(n); \tag{3a}$$

$$\nabla\Theta_j(n) = \frac{\partial E(n)}{\partial \Theta_j} = e(n) \sum_{r=0}^{M} v_r\, \mathrm{D}_j b_r(n); \tag{3b}$$

Figure 1: Lattice–ladder filter realizations: (a) normal form; (b) transpose form.

$$\begin{bmatrix} \mathrm{D}_j f_{r-1}(n) \\ \mathrm{D}_j b_r(n) \end{bmatrix} =
\begin{bmatrix} \cos\Theta_r & -\sin\Theta_r \\ \sin\Theta_r & \cos\Theta_r \end{bmatrix}
\begin{bmatrix} \mathrm{D}_j f_r(n) \\ z\,\mathrm{D}_j b_{r-1}(n) \end{bmatrix} +
\begin{cases} \begin{bmatrix} -b_r(n) \\ f_{r-1}(n) \end{bmatrix}, & r = j, \\[4pt] 0, & \text{otherwise}, \end{cases} \tag{3c}$$

with boundary conditions

$$\mathrm{D}_j f_M(n) = 0, \qquad \mathrm{D}_j b_0(n) = \mathrm{D}_j f_0(n), \tag{3d}$$

where we have introduced the new notation

$$\mathrm{D}_j = \frac{\partial}{\partial \Theta_j}. \tag{4}$$

From (3) it is evident that the calculation of each D_j b_r(n) requires M recursions, yielding a training algorithm with a total complexity proportional to M^2.
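A direct per-sample implementation of (3) makes the O(M^2) cost explicit: one M-section recursion (3c) has to be run for every lattice coefficient Θ_j. The sketch below follows the conventions of the previous listing and reflects my reading of (3c), in which the rotation of the traversed section r is applied and the extra term is injected at r = j:

```python
import numpy as np

def llf_standard_gradients(e, f, b, theta, v, Db_delayed):
    """Instantaneous gradients (3a)-(3b) computed via the recursions (3c)-(3d).

    e          : error e(n)
    f, b       : forward/backward signals of the primary lattice at time n
    theta      : lattice coefficients Theta_1..Theta_M
    v          : ladder coefficients v_0..v_M
    Db_delayed : Db_delayed[j-1, r] = D_j b_r(n-1), delay states of the M recursions
    Returns (grad_v, grad_theta, Db), where Db holds D_j b_r(n) for the next sample.
    """
    M = len(theta)
    grad_v = e * np.asarray(b)                      # (3a)
    Db = np.zeros((M, M + 1))
    grad_theta = np.zeros(M)
    for j in range(1, M + 1):                       # one recursion per Theta_j ...
        Df = np.zeros(M + 1)                        # D_j f_r(n); boundary D_j f_M(n) = 0
        for r in range(M, 0, -1):                   # ... of M sections each: O(M^2) total
            c, s = np.cos(theta[r - 1]), np.sin(theta[r - 1])
            Df[r - 1]    = c * Df[r] - s * Db_delayed[j - 1, r - 1]
            Db[j - 1, r] = s * Df[r] + c * Db_delayed[j - 1, r - 1]
            if r == j:                              # injection term of (3c)
                Df[r - 1]    += -b[r]
                Db[j - 1, r] += f[r - 1]
        Db[j - 1, 0] = Df[0]                        # boundary (3d)
        grad_theta[j - 1] = e * float(np.dot(v, Db[j - 1]))    # (3b)
    return grad_v, grad_theta, Db
```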

2.2 Towards Simplified Computation of the Gradients

One possible way to simplify the calculation of the LLF gradients was presented by J. A. Rodriguez-Fonollosa and E. Masgrau [16]. Here we introduce their method, while in the next sections we explore all possible ways of simplification.

We assume that the concept of flowgraph transposition is already known (if not, consult, e.g., [15, pages 291–293]). Applying the flowgraph transposition rules to the LLF equations (2a) and (2b), which for convenience are sketched in Figure 1(a), we obtain the LLF transpose realization shown in Figure 1(b). The resulting system gives rise to the following recurrent relation

$$\begin{bmatrix} g_j(n) \\ t_{j-1}(n) \end{bmatrix} =
\begin{bmatrix} 1 & 0 \\ 0 & z \end{bmatrix}
\begin{bmatrix} \cos\Theta_j & \sin\Theta_j \\ -\sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} g_{j-1}(n) \\ t_j(n) \end{bmatrix} +
\begin{bmatrix} 0 \\ v_{j-1} \end{bmatrix}, \qquad j = M, \dots, 2, 1, \tag{5a}$$

with boundary conditions

$$t_M(n) = v_M, \qquad g_0(n) = t_0(n), \qquad g_M(n) = s_{out}(n). \tag{5b}$$

After simple re-arrangements it can be shown that, alternatively to (3b)–(3d), the filtered regressor components can be expressed as

$$\nabla\Theta_j(n) = e(n)\,\frac{f_j(n)t_j(n) - b_j(n)g_j(n)}{\cos\Theta_j}. \tag{6}$$

The main idea for simplifying the gradient computations for the lattice parameters is to find a recurrence relation that realizes the mapping

$$\begin{bmatrix} f_{j-1}(n)t_{j-1}(n) \\ b_{j-1}(n)g_{j-1}(n) \end{bmatrix} \longrightarrow \begin{bmatrix} f_j(n)t_j(n) \\ b_j(n)g_j(n) \end{bmatrix}, \tag{7}$$

in such a way that all the necessary transfer functions may be obtained from one single filter.

3 A General Roadmap of Gradient Calculation

Multiplying (2a) by g_{j-1}(n) and t_{j-1}(n), respectively, and similarly multiplying (5a) by f_j(n) and b_j(n), a system of eight equations is obtained:

$$\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_j(n)g_{j-1}(n) \end{bmatrix} =
\begin{bmatrix} \cos\Theta_j & -\sin\Theta_j \\ \sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n)g_{j-1}(n) \\ z\,b_{j-1}(n)g_{j-1}(n) \end{bmatrix}; \tag{8a}$$

$$\begin{bmatrix} f_{j-1}(n)t_{j-1}(n) \\ b_j(n)t_{j-1}(n) \end{bmatrix} =
\begin{bmatrix} \cos\Theta_j & -\sin\Theta_j \\ \sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n)t_{j-1}(n) \\ z\,b_{j-1}(n)t_{j-1}(n) \end{bmatrix}; \tag{8b}$$

$$\begin{bmatrix} f_j(n)g_j(n) \\ f_j(n)t_{j-1}(n) \end{bmatrix} =
\begin{bmatrix} 1 & 0 \\ 0 & z \end{bmatrix}
\begin{bmatrix} \cos\Theta_j & \sin\Theta_j \\ -\sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n)g_{j-1}(n) \\ f_j(n)t_j(n) \end{bmatrix} +
\begin{bmatrix} 0 \\ v_{j-1} \end{bmatrix} f_j(n); \tag{8c}$$

$$\begin{bmatrix} b_j(n)g_j(n) \\ b_j(n)t_{j-1}(n) \end{bmatrix} =
\begin{bmatrix} 1 & 0 \\ 0 & z \end{bmatrix}
\begin{bmatrix} \cos\Theta_j & \sin\Theta_j \\ -\sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} b_j(n)g_{j-1}(n) \\ b_j(n)t_j(n) \end{bmatrix} +
\begin{bmatrix} 0 \\ v_{j-1} \end{bmatrix} b_j(n). \tag{8d}$$

The main objective of the paper is to try to find all possible systems that can be obtained by re-arranging the initial system described by (8).

A representation of the system we will deal with from now on is shown as a flowgraph in Figure 2. It consists of four independent blocks, ending in some information flow directions that are in conflict and showing that the initial system is not implementable. Note that in general this system has 4 interdependent paths, and thus 2^4 = 16 possible ways for information to flow through them. Two un-implementable cases (being more specific, cases I and XVI) must be excluded from the following discussion. They correspond to the situations where information flows in the same direction in all the paths. All remaining possibilities are indicated in Table 1 and will be studied here.

As seen, in principle there are 14 different possibilities of information flow in the system at hand. In Table 1, binary 1 (0) indicates that the particular direction must be obtained by reversing (taking) the initial information flow as stated in (8). The last row serves as a guide to the expressions given in Appendix A and will be discussed later.
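As a quick illustration of the bookkeeping behind Table 1, the following sketch (illustrative only; it reproduces the count, not the table's column ordering) enumerates the 2^4 = 16 possible direction assignments for the four interdependent paths and discards the two uniform ones:

```python
from itertools import product

# 0 = path keeps its original direction, 1 = path is reversed; the two patterns
# where all four paths point the same way (cases I and XVI) are not implementable.
patterns = [p for p in product((0, 1), repeat=4) if len(set(p)) > 1]
print(len(patterns))   # -> 14, matching the cases II..XV of Table 1
```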

4 A Case Study of Gradient Calculation

Let us pick case XI and show, in a step-by-step fashion, the re-arrangements involved in the derivation. According to Table 1, we are seeking a simple filter that realizes the following mapping:

$$\begin{bmatrix} f_j(n)g_j(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_j(n)t_j(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix} \longrightarrow
\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_j(n)g_j(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_j(n)t_j(n) \end{bmatrix}. \tag{9}$$

In order to achieve this mapping, two information flow directions must be reversed: now f_j(n)g_{j-1}(n) must be computed based on f_j(n)g_j(n), and b_j(n)t_j(n) must be computed based on b_j(n)t_{j-1}(n). For convenience we rewrite the unchanged expressions here:

$$\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_j(n)g_{j-1}(n) \end{bmatrix} =
\begin{bmatrix} \cos\Theta_j & -\sin\Theta_j \\ \sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n)g_{j-1}(n) \\ z\,b_{j-1}(n)g_{j-1}(n) \end{bmatrix}; \tag{10a}$$

$$\begin{bmatrix} f_{j-1}(n)t_{j-1}(n) \\ b_j(n)t_{j-1}(n) \end{bmatrix} =
\begin{bmatrix} \cos\Theta_j & -\sin\Theta_j \\ \sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n)t_{j-1}(n) \\ z\,b_{j-1}(n)t_{j-1}(n) \end{bmatrix}. \tag{10b}$$

For the first re-direction to be fulfilled we take (8c) and re-arrange it as follows:

$$\begin{bmatrix} f_j(n)g_{j-1}(n) \\ f_j(n)t_{j-1}(n) \end{bmatrix} =
\begin{bmatrix} 1 & 0 \\ 0 & z^{-1} \end{bmatrix}
\begin{bmatrix} \dfrac{1}{\cos\Theta_j} & \dfrac{\sin\Theta_j}{\cos\Theta_j} \\[6pt] \dfrac{\sin\Theta_j}{\cos\Theta_j} & \dfrac{1}{\cos\Theta_j} \end{bmatrix}
\begin{bmatrix} f_j(n)g_j(n) \\ f_j(n)t_j(n) \end{bmatrix}. \tag{10c}$$

Similarly, taking (8d) and re-arranging, we get

$$\begin{bmatrix} b_j(n)g_j(n) \\ b_j(n)t_j(n) \end{bmatrix} =
\begin{bmatrix} 1 & 0 \\ 0 & z^{-1} \end{bmatrix}
\begin{bmatrix} \dfrac{1}{\cos\Theta_j} & \dfrac{\sin\Theta_j}{\cos\Theta_j} \\[6pt] \dfrac{\sin\Theta_j}{\cos\Theta_j} & \dfrac{1}{\cos\Theta_j} \end{bmatrix}
\begin{bmatrix} b_j(n)g_{j-1}(n) \\ b_j(n)t_{j-1}(n) \end{bmatrix}. \tag{10d}$$

Figure 2: Representation of the order recursion between product transfer functions, where each sub-block labelled by (a)–(d) corresponds to one pair of equations in (8).

Table 1: The general roadmap of all derived re-arrangements for the computation of the LLF lattice gradients. Each of the fourteen cases of possible re-arrangements of information flow in the system of equations (8) is listed as a column. Binary 1 states that the particular direction of information flow was reversed, while binary 0 states that it remained as stated in the system of equations. In each case the flow directions encoded by rows 1–4 and 5–8 are the same, because uniformity of the information flow is needed, while the binary 1/0 patterns differ because of the different set-up of each sub-block. The last row links each case to the simplification results presented in Appendix A; a dash marks the cases whose expressions are not presented (in the original table a gray background additionally indicates results that contain advance operation(s) z^{-1} and are thus unimplementable).

Direction                                      II III IV  V VI VII VIII IX  X XI XII XIII XIV XV
f_j(n)g_j(n)     <-> f_j(n)g_{j-1}(n)           0   1  0  1  0   1    0  1  0  1   0    1   0  1
b_j(n)g_j(n)     <-> b_j(n)g_{j-1}(n)           1   0  0  1  1   0    0  1  1  0   0    1   1  0
f_j(n)t_j(n)     <-> f_j(n)t_{j-1}(n)           0   0  0  1  1   1    1  0  0  0   0    1   1  1
b_j(n)t_j(n)     <-> b_j(n)t_{j-1}(n)           0   0  0  0  0   0    0  1  1  1   1    1   1  1
f_j(n)g_{j-1}(n) <-> f_{j-1}(n)g_{j-1}(n)       1   0  1  0  1   0    1  0  1  0   1    0   1  0
b_j(n)g_{j-1}(n) <-> b_{j-1}(n)g_{j-1}(n)       1   0  0  1  1   0    0  1  1  0   0    1   1  0
f_j(n)t_{j-1}(n) <-> f_{j-1}(n)t_{j-1}(n)       0   0  0  1  1   1    1  0  0  0   0    1   1  1
b_j(n)t_{j-1}(n) <-> b_{j-1}(n)t_{j-1}(n)       1   1  1  1  1   1    1  0  0  0   0    0   0  0
Result                             (A.2) (A.3) (A.4) (A.5) (A.6)  —  (A.8) (A.9)  —  (A.11) (A.12) (A.13) (A.14) (A.15)

Now, (10) describes a system that has no conflicting information flow directions. However, there are two advance operations to be performed, one in (10c) and one in (10d), making the system non-causal and hence un-implementable. There is, however, more to this computation than first meets the eye.

Notice first that the mapping (9) needs only four expressions to be specified. Thus a simplification of (10) becomes plausible. It can be shown that (10) can be simplified drastically and finally expressed by

$$\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_j(n)g_j(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_j(n)t_j(n) \end{bmatrix} =
\operatorname{diag}(1,1,z,1)\left[ I_4 +
\begin{bmatrix} 0 & -\sin\Theta_j & -\sin\Theta_j & 0 \\ \sin\Theta_j & 0 & 0 & \sin\Theta_j \\ -\sin\Theta_j & 0 & 0 & -\sin\Theta_j \\ 0 & \sin\Theta_j & \sin\Theta_j & 0 \end{bmatrix}\right]
\operatorname{diag}(1,z,1,1)
\begin{bmatrix} f_j(n)g_j(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_j(n)t_j(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix} +
v_{j-1}\begin{bmatrix} 0 \\ -\sin\Theta_j\, b_{j-1}(n) \\ \cos\Theta_j\, f_j(n) \\ -b_{j-1}(n) \end{bmatrix}. \tag{11a}$$

Notice that the simplified system is already causal. In order to finish the derivation, the boundary conditions when M such systems are cascaded must be considered. Based on (2c) and (5b) we get the following new boundary conditions for the reduced system:

$$f_M(n)g_M(n) = s_{out}(n); \qquad b_0(n)g_0(n) = f_0(n)t_0(n); \tag{11b}$$
$$f_M(n)t_M(n) = v_M; \qquad b_0(n)t_0(n) = f_0(n)g_0(n).$$

5 Selecting the Simplest Calculation of Gradients

Based on the results presented in Appendix A and summarized in the last row of Table 1, it appears that the most promising ways of computing the LLF lattice gradients are cases III, IV, XI and XII. That is justified by the fact that the remaining cases contain one or several un-implementable advance operators. Here we do not consider the possibility of eliminating the advance operators by adequately delaying the remaining signals, because of the unnecessary additional complexity of the resulting system.

In order to narrow our choice and select the simplest calculation of gradients, let us look into cases III, IV, XI and XII more closely.

5.1 Comparison of Order Recursions

Figures 3, 4, 5 and 6 graphically show the order recursions for cases III, IV, XI and XII, respectively. By the use of dashed boxes we separate the "main order" calculations from the calculations involving the delays and the additional signals coming from the lattice part of the LLF.

Table 2: A summary of the complexity of LLF lattice gradient computation for cases III, IV, XI and XII, in terms of the number of addition operations, constants and delay operations. Data are shown for one order recursion and for the complete regressor lattice of order M.

Case | One order recursion: Additions / Constants / Delays | Regressor lattice of order M: Additions / Constants / Delays
III  |  9 / 3 / 2  | 10M-1 / 4M   / 2M-1
IV   |  8 / 3 / 2  | 10M   / 4M   / 2M
XI   |  9 / 5 / 2  | 10M-1 / 6M+1 / 2M
XII  | 10 / 5 / 2  | 10M-1 / 6M+1 / 2M

It can be seen that all the calculations in the dashed boxes are original, while the outside counterparts are similar in pairs, namely case III with IV and case XI with XII. Note also that the boxed calculations in all the cases have the same number of constants, i.e., only two, but a different number of additions: six additions in cases IV and XII, and seven additions in cases III and XI (here we do not take into account the difference between addition and subtraction operations). From the aforementioned figures it also appears that all the cases have the same number (two) of delay elements, while cases III and IV differ from cases XI and XII in one way: the latter have two additional constants (sin Θ_j and cos Θ_j). A summary of the calculations involved in a single order recursion is shown in the leftmost four columns of Table 2.
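The counts in Table 2 are simple linear functions of the filter order; a tiny helper (illustrative only) reproducing the complete regressor-lattice columns of the table:

```python
# Operation counts for the complete regressor lattice of order M,
# taken directly from Table 2 (additions, constants, delays).
def regressor_lattice_cost(case, M):
    table = {
        "III": (10 * M - 1, 4 * M,     2 * M - 1),
        "IV":  (10 * M,     4 * M,     2 * M),
        "XI":  (10 * M - 1, 6 * M + 1, 2 * M),
        "XII": (10 * M - 1, 6 * M + 1, 2 * M),
    }
    return dict(zip(("additions", "constants", "delays"), table[case]))

print(regressor_lattice_cost("IV", M=8))   # {'additions': 80, 'constants': 32, 'delays': 16}
```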

5.2 Comparison of Regressor Lattices

The complete calculation of the LLF lattice gradients forms a regressor lattice; these are graphically represented in Figures 7, 8, 9 and 10 for cases III, IV, XI and XII, respectively. Each regressor lattice is formed by cascading M sections of the order recursion blocks and taking into account the boundary conditions. The gradients for the lattice (rotation) parameters are calculated based on (6), using one extra addition and one multiplication by a constant (1/cos Θ_j) for each of them.

Closer examination of the aforementioned figures reveals that case III has one delay less, and case IV one addition operation more, than the other cases under consideration. Note also that cases III and XI have regressor lattices driven by the output of the primary lattice, which could in some cases be considered a drawback. More importantly, cases XI and XII have approximately 50% more constants than the other two cases (see the summary in the last three columns of Table 2).

5.3 Simplest Calculation of Gradients

Our study reveals that, in terms of the number of additions, delays and constants, the simplest calculations of the LLF lattice gradients are offered by cases III and IV. It is a very close run between them. Although case III has one addition and one delay less than case IV, for the application to simplified LLMLP training we finally select case IV. The main reasons for that are the outstanding regularity of the structure of the involved computations and the fact that case IV does not require feedback of the output signal of the primary lattice to the input of the regressor lattice, as case III does.

Figure 3: A representation of the order recursion corresponding to the LLF lattice gradient calculation for case III as defined in Table 1.

Figure 4: A representation of the order recursion corresponding to the LLF lattice gradient calculation for case IV as defined in Table 1.

Figure 5: A representation of the order recursion corresponding to the LLF lattice gradient calculation for case XI as defined in Table 1.

Figure 6: A representation of the order recursion corresponding to the LLF lattice gradient calculation for case XII as defined in Table 1.

Figure 7: A representation of the calculations involved in the primary and regressor lattices for case III as defined in Table 1. The regressor lattice is outlined by a dashed box, while the computations of Θ_i are sketched in Figure 3.

Figure 8: A representation of the calculations involved in the primary and regressor lattices for case IV as defined in Table 1. The regressor lattice is outlined by a dashed box, while the computations of Θ_i are sketched in Figure 4.

Figure 9: A representation of the calculations involved in the primary and regressor lattices for case XI as defined in Table 1. The regressor lattice is outlined by a dashed box, while the computations of Θ_i are sketched in Figure 5.

Figure 10: A representation of the calculations involved in the primary and regressor lattices for case XII as defined in Table 1. The regressor lattice is outlined by a dashed box, while the computations of Θ_i are sketched in Figure 6.

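To make the selected scheme concrete before moving to the MLP structures, below is a per-sample sketch of the case IV regressor lattice, written directly from the order recursion (A.4a) and its boundary conditions (A.4b) under my reading of the delay placement (z is the one-sample delay); the class name and state layout are my own, and the result is the exact gradient only under the stationarity assumption stated in the abstract.

```python
import numpy as np

class CaseIVRegressorLattice:
    """Per-sample gradient generator for the LLF lattice coefficients,
    following the case IV order recursion (A.4a) with boundary conditions (A.4b).
    The state arrays hold the product-transfer-function signals of the regressor
    lattice (denoted f_j g_j, b_j g_j, f_j t_j, b_j t_j in the text), one sample old."""

    def __init__(self, M):
        self.M = M
        self.fg = np.zeros(M + 1)   # (f_j g_j)(n-1), j = 0..M
        self.bg = np.zeros(M + 1)   # (b_j g_j)(n-1)
        self.ft = np.zeros(M + 1)   # (f_j t_j)(n-1)
        self.bt = np.zeros(M + 1)   # (b_j t_j)(n-1)

    def step(self, e, f, b, theta, v):
        """e     : error e(n) of the primary filter
           f, b  : forward/backward signals of the primary lattice at time n
           theta : lattice coefficients Theta_1..Theta_M
           v     : ladder coefficients v_0..v_M
           Returns grad_theta[j-1], the gradient of E(n) w.r.t. Theta_j, per (6)."""
        M = self.M
        s, c = np.sin(theta), np.cos(theta)
        fg, bg, ft, bt = (np.zeros(M + 1) for _ in range(4))

        ft[M] = v[M]                 # boundary: f_M t_M = v_M
        bt[M] = v[M] * b[M]          # boundary: b_M t_M = v_M b_M(n)
        for j in range(M, 0, -1):    # downward pass: rows 3 and 4 of (A.4a)
            ft[j - 1] = (-s[j - 1] * self.fg[j - 1] + self.ft[j]
                         - s[j - 1] * self.bt[j] + v[j - 1] * f[j - 1])
            bt[j - 1] = (-s[j - 1] * self.bg[j - 1] - s[j - 1] * ft[j]
                         + bt[j] + v[j - 1] * b[j - 1])

        fg[0], bg[0] = ft[0], bt[0]  # boundary: f_0 g_0 = f_0 t_0, b_0 g_0 = b_0 t_0
        for j in range(1, M + 1):    # upward pass: rows 1 and 2 of (A.4a)
            fg[j] = fg[j - 1] + s[j - 1] * self.bg[j - 1] + s[j - 1] * ft[j]
            bg[j] = s[j - 1] * fg[j - 1] + self.bg[j - 1] + s[j - 1] * bt[j]

        grad_theta = e * (ft[1:] - bg[1:]) / c                 # gradient formula (6)
        self.fg, self.bg, self.ft, self.bt = fg, bg, ft, bt    # update delay states
        return grad_theta
```

Per sample one would run the primary lattice (e.g., with the llf_step sketch of Section 2.1), form e(n) = s_out(n) - d(n), and call step(...) to obtain the rotation gradients; the ladder gradients remain e(n) b_j(n) as in (3a). Note that the sketch stores four state arrays for clarity rather than the minimal 2M delays counted in Table 2.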

6 Simplified Training Algorithms

After selecting what is, in our opinion, the most promising way (case IV) of calculating the LLF lattice gradients, we here present simplified training algorithms for three LLMLP structures: the LLMLP, the Reduced Size LLMLP and the Extra Reduced Size LLMLP.

6.1 Lattice–Ladder Multilayer Perceptron

An LLMLP [2] of size L layers, {N_0, N_1, ..., N_L} nodes and {M_1, M_2, ..., M_L} filter orders can be expressed by

$$s_h^l(n) = \Phi^l\Big(\underbrace{\sum_{i=1}^{N_l}\sum_{j=0}^{M_l} v_{ijh}^l\, b_{ijh}^l(n)}_{=\,\hat s_h^l(n)}\Big), \qquad l = 1, 2, \dots, L, \tag{12a}$$

where the local flow of information in the lattice part of the filters is

$$\begin{bmatrix} f_{i,j-1,h}^l(n) \\ b_{ijh}^l(n) \end{bmatrix} =
\begin{bmatrix} \cos\Theta_{ijh}^l & -\sin\Theta_{ijh}^l \\ \sin\Theta_{ijh}^l & \cos\Theta_{ijh}^l \end{bmatrix}
\begin{bmatrix} f_{ijh}^l(n) \\ z\,b_{i,j-1,h}^l(n) \end{bmatrix}, \qquad j = 1, 2, \dots, M_l, \tag{12b}$$

with initial and boundary conditions

$$b_{i0h}^l(n) = f_{i0h}^l(n); \qquad f_{iM_lh}^l(n) = s_i^{l-1}(n), \tag{12c}$$

where s_h^l(n) is the output signal of the neuron; ŝ_h^l(n) is the signal before the activation function of the neuron; Θ_{ijh}^l and v_{ijh}^l are the weights of the lattice and ladder parts of the filter, respectively; f_{ijh}^l(n) and b_{ijh}^l(n) are the forward and backward prediction errors of the filter; and i, j and h index the inputs, filter coefficients and outputs, respectively.
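A minimal sketch of the forward pass (12), reusing the llf_step helper sketched in Section 2.1; it assumes tanh as the activation Φ^l (the report does not fix a particular activation) and the names are illustrative:

```python
import numpy as np

def llmlp_layer_forward(x, Theta, V, b_delayed, phi=np.tanh):
    """One time step of one LLMLP layer, per (12a)-(12c).

    x         : previous-layer outputs s_i^{l-1}(n)
    Theta     : Theta[i][h] -> lattice coefficients of the synapse filter i -> h
    V         : V[i][h]     -> ladder coefficients of the same filter
    b_delayed : b_delayed[i][h] -> backward-signal delay state of that filter
    Returns (s, new_states) with s[h] = s_h^l(n).
    """
    N_in, N_out = len(Theta), len(Theta[0])
    s_hat = np.zeros(N_out)
    new_states = [[None] * N_out for _ in range(N_in)]
    for i in range(N_in):
        for h in range(N_out):
            y, f, b = llf_step(x[i], Theta[i][h], V[i][h], b_delayed[i][h])
            new_states[i][h] = b          # b_j(n) becomes the next sample's delay state
            s_hat[h] += y                 # accumulate the synapse filter outputs of (12a)
    return phi(s_hat), new_states
```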

Let us consider the calculation of the sensitivity functions using the backpropagation algorithm for the neurons of the l-th hidden layer of the LLMLP. It can be shown that the sensitivity functions for the LLMLP can be expressed by

$$\nabla v_{ijh}^l(n) = \delta_h^l(n)\, b_{ijh}^l(n); \tag{13a}$$

$$\nabla\Theta_{ijh}^l(n) = \delta_h^l(n) \sum_{r=0}^{M_l} v_{irh}^l\, \mathrm{D}_{ijh}^l b_{irh}^l(n). \tag{13b}$$

Note that these expressions are similar to (3a) and (3b), in the sense that here we use additional indexes to state the position of the LLF within the whole LLMLP architecture (to be precise, we show the sensitivity functions for the j-th coefficients of the LLF connecting the i-th input neuron with the h-th output neuron in the l-th layer of the LLMLP) and replace the output error term e(n) by the generalized local instantaneous error δ_h^l(n) = ∂E(n)/∂ŝ_h^l(n), which can be explicitly expressed by

$$\delta_h^l(n) = \begin{cases} -e_h^l(n)\,\Phi^{L\prime}\big(\hat s_h^l(n)\big), & l = L; \\[6pt]
\Phi^{l\prime}\big(\hat s_h^l(n)\big) \displaystyle\sum_{p=1}^{N_{l+1}} \delta_p^{l+1}(n) \sum_{j=0}^{M_{l+1}} v_{hjp}^{l+1}\, \gamma_{hjp}^{l+1}(n), & l \neq L, \end{cases} \tag{13c}$$

with γ and its companion φ given by

$$\begin{bmatrix} \varphi_{h,j-1,p}^{l+1}(n) \\ \gamma_{hjp}^{l+1}(n) \end{bmatrix} =
\begin{bmatrix} \cos\Theta_{hjp}^{l+1} & -\sin\Theta_{hjp}^{l+1} \\ \sin\Theta_{hjp}^{l+1} & \cos\Theta_{hjp}^{l+1} \end{bmatrix}
\begin{bmatrix} \varphi_{hjp}^{l+1}(n) \\ z\,\gamma_{h,j-1,p}^{l+1}(n) \end{bmatrix}, \tag{13d}$$

which is nothing else but filtering of the back-propagated error signals.

In order to simplify the training of the LLMLP we replace (13b) with an expression similar to (6):

$$\nabla\Theta_{ijh}^l(n) = \frac{f_{ijh}^l(n)\,t_{ijh}^l(n) - b_{ijh}^l(n)\,g_{ijh}^l(n)}{\cos\Theta_{ijh}^l}\;\delta_h^l(n), \tag{14}$$

where the order recursion computation is done according to case IV (A.4) by

$$\begin{bmatrix} f_{ijh}^l(n)g_{ijh}^l(n) \\ b_{ijh}^l(n)g_{ijh}^l(n) \\ f_{i,j-1,h}^l(n)t_{i,j-1,h}^l(n) \\ b_{i,j-1,h}^l(n)t_{i,j-1,h}^l(n) \end{bmatrix} =
\begin{bmatrix} 1 & z\sin\Theta_{ijh}^l & \sin\Theta_{ijh}^l & 0 \\ \sin\Theta_{ijh}^l & z & 0 & \sin\Theta_{ijh}^l \\ -z\sin\Theta_{ijh}^l & 0 & z & -z\sin\Theta_{ijh}^l \\ 0 & -z\sin\Theta_{ijh}^l & -\sin\Theta_{ijh}^l & 1 \end{bmatrix}
\begin{bmatrix} f_{i,j-1,h}^l(n)g_{i,j-1,h}^l(n) \\ b_{i,j-1,h}^l(n)g_{i,j-1,h}^l(n) \\ f_{ijh}^l(n)t_{ijh}^l(n) \\ b_{ijh}^l(n)t_{ijh}^l(n) \end{bmatrix} +
v_{i,j-1,h}^l \begin{bmatrix} 0 \\ 0 \\ f_{i,j-1,h}^l(n) \\ b_{i,j-1,h}^l(n) \end{bmatrix}, \tag{15a}$$

with boundary conditions:

$$f_{i0h}^l(n)g_{i0h}^l(n) = f_{i0h}^l(n)t_{i0h}^l(n); \tag{15b}$$
$$b_{i0h}^l(n)g_{i0h}^l(n) = b_{i0h}^l(n)t_{i0h}^l(n);$$
$$f_{iM_lh}^l(n)t_{iM_lh}^l(n) = v_{iM_lh}^l;$$
$$b_{iM_lh}^l(n)t_{iM_lh}^l(n) = v_{iM_lh}^l\, b_{iM_lh}^l(n).$$

6.2 Reduced Size LLMLP

A Reduced Size LLMLP (RLLMLP) [11] of size L layers, {N_0, N_1, ..., N_L} nodes and {M_1, M_2, ..., M_L} filter orders can be expressed using the previously presented equation (12a), when the local flow of information in the lattice part of the filters changes as follows:

$$\begin{bmatrix} f_{i,j-1}^l(n) \\ b_{ij}^l(n) \end{bmatrix} =
\begin{bmatrix} \cos\Theta_{ij}^l & -\sin\Theta_{ij}^l \\ \sin\Theta_{ij}^l & \cos\Theta_{ij}^l \end{bmatrix}
\begin{bmatrix} f_{ij}^l(n) \\ z\,b_{i,j-1}^l(n) \end{bmatrix}, \qquad j = 1, 2, \dots, M_l, \tag{16a}$$

with initial and boundary conditions

$$b_{i0}^l(n) = f_{i0}^l(n); \qquad f_{iM_l}^l(n) = s_i^{l-1}(n). \tag{16b}$$

The difference of the RLLMLP expressions from those of the LLMLP is that the index h is dropped in the b, f and Θ terms. It means that the lattice parts of the filters do not depend on the outputs, i.e., they are common for all outputs but unique for each input. It can be shown [12] that the expressions (13) in the case of the RLLMLP change in the following way:

$$\nabla v_{ijh}^l(n) = \delta_h^l(n)\, b_{ij}^l(n); \tag{17a}$$

$$\nabla\Theta_{ij}^l(n) = \sum_{h=1}^{N_l} \delta_h^l(n) \sum_{r=0}^{M_l} v_{irh}^l\, \mathrm{D}_{ij}^l b_{ir}^l(n); \tag{17b}$$

$$\delta_h^l(n) = \begin{cases} -e_h^l(n)\,\Phi^{L\prime}\big(\hat s_h^l(n)\big), & l = L, \\[6pt]
\Phi^{l\prime}\big(\hat s_h^l(n)\big) \displaystyle\sum_{p=1}^{N_{l+1}} \delta_p^{l+1}(n) \sum_{j=0}^{M_{l+1}} v_{hjp}^{l+1}\, \gamma_{hj}^{l+1}(n), & l \neq L; \end{cases} \tag{17c}$$

$$\begin{bmatrix} \varphi_{h,j-1}^{l+1}(n) \\ \gamma_{hj}^{l+1}(n) \end{bmatrix} =
\begin{bmatrix} \cos\Theta_{hj}^{l+1} & -\sin\Theta_{hj}^{l+1} \\ \sin\Theta_{hj}^{l+1} & \cos\Theta_{hj}^{l+1} \end{bmatrix}
\begin{bmatrix} \varphi_{hj}^{l+1}(n) \\ z\,\gamma_{h,j-1}^{l+1}(n) \end{bmatrix}. \tag{17d}$$

Aiming to present the simplified training of the RLLMLP, we replace (17b) with

$$\nabla\Theta_{ij}^l(n) = \frac{f_{ij}^l(n)\,t_{ij}^l(n) - b_{ij}^l(n)\,g_{ij}^l(n)}{\cos\Theta_{ij}^l} \sum_{h=1}^{N_l} \delta_h^l(n), \tag{18}$$

where the order recursion computation is done according to case IV (A.4) by

$$\begin{bmatrix} f_{ij}^l(n)g_{ij}^l(n) \\ b_{ij}^l(n)g_{ij}^l(n) \\ f_{i,j-1}^l(n)t_{i,j-1}^l(n) \\ b_{i,j-1}^l(n)t_{i,j-1}^l(n) \end{bmatrix} =
\begin{bmatrix} 1 & z\sin\Theta_{ij}^l & \sin\Theta_{ij}^l & 0 \\ \sin\Theta_{ij}^l & z & 0 & \sin\Theta_{ij}^l \\ -z\sin\Theta_{ij}^l & 0 & z & -z\sin\Theta_{ij}^l \\ 0 & -z\sin\Theta_{ij}^l & -\sin\Theta_{ij}^l & 1 \end{bmatrix}
\begin{bmatrix} f_{i,j-1}^l(n)g_{i,j-1}^l(n) \\ b_{i,j-1}^l(n)g_{i,j-1}^l(n) \\ f_{ij}^l(n)t_{ij}^l(n) \\ b_{ij}^l(n)t_{ij}^l(n) \end{bmatrix} +
\sum_{h=1}^{N_l} v_{i,j-1,h}^l \begin{bmatrix} 0 \\ 0 \\ f_{i,j-1}^l(n) \\ b_{i,j-1}^l(n) \end{bmatrix}, \tag{19a}$$

with boundary conditions:

$$f_{i0}^l(n)g_{i0}^l(n) = f_{i0}^l(n)t_{i0}^l(n); \tag{19b}$$
$$b_{i0}^l(n)g_{i0}^l(n) = b_{i0}^l(n)t_{i0}^l(n);$$
$$f_{iM_l}^l(n)t_{iM_l}^l(n) = \sum_{h=1}^{N_l} v_{iM_lh}^l;$$
$$b_{iM_l}^l(n)t_{iM_l}^l(n) = b_{iM_l}^l(n) \sum_{h=1}^{N_l} v_{iM_lh}^l.$$
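In code, the only changes with respect to the case IV sketch of Section 5.3 are the driving and boundary ladder terms, which are summed over the outputs, and the error factor, which becomes the sum of the local errors. A hypothetical fragment, where reg_i is a CaseIVRegressorLattice for input i, V_i holds v_{ijh}^l with assumed shape (M_l+1, N_l), deltas holds δ_h^l(n), and f_i, b_i, theta_i come from the shared lattice of that input:

```python
# RLLMLP rotation gradients for one input i of layer l, per (18) and (19):
v_eff = V_i.sum(axis=1)                            # effective ladder vector, sum_h v_{ijh}^l
grad_theta_i = reg_i.step(e=float(deltas.sum()),   # sum_h delta_h^l(n)
                          f=f_i, b=b_i, theta=theta_i, v=v_eff)
```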

6.3 Gradients for Extra Reduced Size LLMLP

Another structure reduction can be achieved by restricting each neuron to have only one 'output' lattice–ladder filter, while connecting the layers through conventional synaptic coefficients. This yields the extra reduced size lattice–ladder multilayer perceptron structure (XLLMLP) [13].

An XLLMLP of size L layers, {N_0, N_1, ..., N_L} nodes and {M_1, M_2, ..., M_L} filter orders is defined by

$$s_h^l(n) = \Phi^l\Big(\sum_{i=1}^{N_l} w_{ih}^l \underbrace{\sum_{j=0}^{M_l} v_{ij}^l\, b_{ij}^l(n)}_{=\,\tilde s_i^l(n)}\Big), \qquad l = 1, 2, \dots, L, \tag{20}$$

where the local flow of information in the lattice part of the filters is defined as in (16a), with the same initial and boundary conditions as in (16b), while the additional variable w_{ih}^l represents the single weight connecting two neurons in a layer.

Like in the previous cases, let us start from the gradient expressions

$$\nabla w_{ih}^l(n) = \delta_h^l(n)\,\tilde s_i^l(n); \tag{21a}$$

$$\nabla v_{ij}^l(n) = b_{ij}^l(n) \sum_{h=1}^{N_l} \delta_h^l(n); \tag{21b}$$

$$\nabla\Theta_{ij}^l(n) = \sum_{r=0}^{M_l} v_{ir}^l(n)\, \mathrm{D}_{ij}^l b_{ir}^l(n) \sum_{h=1}^{N_l} w_{ih}^l\, \delta_h^l(n); \tag{21c}$$

$$\delta_h^l(n) = \begin{cases} -e_h^l(n)\,\Phi^{L\prime}\big(\hat s_h^l(n)\big), & l = L, \\[6pt]
\Phi^{l\prime}\big(\hat s_h^l(n)\big) \displaystyle\sum_{j=0}^{M_{l+1}} v_{hj}^{l+1}\, \gamma_{hj}^{l+1}(n) \sum_{p=1}^{N_{l+1}} w_{hp}^{l+1}\, \delta_p^{l+1}(n), & l \neq L, \end{cases} \tag{21d}$$

$$\begin{bmatrix} \varphi_{h,j-1}^{l+1}(n) \\ \gamma_{hj}^{l+1}(n) \end{bmatrix} =
\begin{bmatrix} \cos\Theta_{hj}^{l+1} & -\sin\Theta_{hj}^{l+1} \\ \sin\Theta_{hj}^{l+1} & \cos\Theta_{hj}^{l+1} \end{bmatrix}
\begin{bmatrix} \varphi_{hj}^{l+1}(n) \\ z\,\gamma_{h,j-1}^{l+1}(n) \end{bmatrix}. \tag{21e}$$

Aiming to present the simplified training of the XLLMLP, we replace (21c) with

$$\nabla\Theta_{ij}^l(n) = \frac{f_{ij}^l(n)\,t_{ij}^l(n) - b_{ij}^l(n)\,g_{ij}^l(n)}{\cos\Theta_{ij}^l} \sum_{h=1}^{N_l} w_{ih}^l\, \delta_h^l(n), \tag{22}$$

where the order recursion computation is done according to case IV (A.4) by

$$\begin{bmatrix} f_{ij}^l(n)g_{ij}^l(n) \\ b_{ij}^l(n)g_{ij}^l(n) \\ f_{i,j-1}^l(n)t_{i,j-1}^l(n) \\ b_{i,j-1}^l(n)t_{i,j-1}^l(n) \end{bmatrix} =
\begin{bmatrix} 1 & z\sin\Theta_{ij}^l & \sin\Theta_{ij}^l & 0 \\ \sin\Theta_{ij}^l & z & 0 & \sin\Theta_{ij}^l \\ -z\sin\Theta_{ij}^l & 0 & z & -z\sin\Theta_{ij}^l \\ 0 & -z\sin\Theta_{ij}^l & -\sin\Theta_{ij}^l & 1 \end{bmatrix}
\begin{bmatrix} f_{i,j-1}^l(n)g_{i,j-1}^l(n) \\ b_{i,j-1}^l(n)g_{i,j-1}^l(n) \\ f_{ij}^l(n)t_{ij}^l(n) \\ b_{ij}^l(n)t_{ij}^l(n) \end{bmatrix} +
v_{i,j-1}^l \begin{bmatrix} 0 \\ 0 \\ f_{i,j-1}^l(n) \\ b_{i,j-1}^l(n) \end{bmatrix}, \tag{23a}$$

with boundary conditions:

$$f_{i0}^l(n)g_{i0}^l(n) = f_{i0}^l(n)t_{i0}^l(n); \tag{23b}$$
$$b_{i0}^l(n)g_{i0}^l(n) = b_{i0}^l(n)t_{i0}^l(n);$$
$$f_{iM_l}^l(n)t_{iM_l}^l(n) = v_{iM_l}^l;$$
$$b_{iM_l}^l(n)t_{iM_l}^l(n) = b_{iM_l}^l(n)\, v_{iM_l}^l.$$
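The XLLMLP maps onto the same case IV machinery; a hypothetical fragment (assuming numpy as np and the names of the previous snippets, with w_i holding the synaptic weights w_{ih}^l and v_i the single ladder vector of input i), where only the error scaling differs from (18):

```python
# XLLMLP rotation gradients for one input i of layer l, per (22) and (23):
grad_theta_i = reg_i.step(e=float(np.dot(w_i, deltas)),   # sum_h w_{ih}^l delta_h^l(n)
                          f=f_i, b=b_i, theta=theta_i, v=v_i)
```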

7 Conclusions

In this paper we dealt with the computational efficiency of LLMLP training algorithms that are based on the computation of gradients, e.g., backpropagation, conjugate gradient or Levenberg-Marquardt. We explored the most computationally demanding part of LLMLP training: the computation of gradients for the lattice (rotation) parameters.

In total, 14 different ways of computing the lattice gradients were investigated. It was found that only 4 of them are feasible candidates for further study. Based on the minimal number of constants, addition and delay operations involved in the aforementioned computations, and also on the regularity of the structure, case IV (as described in Table 1 and derived in Appendix A by (A.4)) was selected for the final application. Based on it, three training algorithms, for the LLMLP, the Reduced Size LLMLP and the Extra Reduced Size LLMLP, were derived. All these algorithms require fewer computations (10M additions, 4M constants and 2M delays for each filter of order M), while they follow the exact gradient path when the coefficients of the LLF are assumed to be stationary.

References

[1] A. D. Back and A. C. Tsoi. FIR and IIR synapses, a new neural network architecture for time series modeling. Neural Computation, 3:375–385, 1991.

[2] A. D. Back and A. C. Tsoi. An adaptive lattice architecture for dynamic multilayer perceptrons. Neural Computation, 4:922–931, 1992.

[3] A. D. Back and A. C. Tsoi. A simplified gradient algorithm for IIR synapse multilayer perceptrons. Neural Computation, 5:456–462, 1993.

[4] A. D. Back and A. C. Tsoi. Aspects of adaptive learning algorithms for FIR feedforward networks. In Proc. of 1996 Int. Conf. on Neural Information Processing, volume 2, pages 1311–1316, 1996.

[5] A. D. Back and A. C. Tsoi. A cascade neural network model with nonlinear poles and zeros. In Proc. of 1996 Int. Conf. on Neural Information Processing, volume 1, pages 486–491, 1996.

[6] A. D. Back, E. A. Wan, S. Lawrence, and A. C. Tsoi. A unifying view of some training algorithms for multilayer perceptrons with FIR filter synapses. In J. Vlontzos, J. Hwang, and E. Wilson, editors, Neural Networks for Signal Processing 4: Proceedings of the 1994 IEEE Workshop, pages 146–154. IEEE Press, 1994.

[7] S. Haykin. Adaptive Filter Theory. Prentice-Hall International, Inc., 3rd edition, 1996.

[8] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice-Hall, Upper Saddle River, N.J., 2nd edition, 1999.

[9] A. Juditsky, H. Hjalmarsson, A. Benveniste, B. Delyon, L. Ljung, J. Sjöberg, and Q. Zhang. Nonlinear black-box modeling in system identification: Mathematical foundations. Automatica, 31:1725–1750, 1995.

[10] S. Lawrence, A. D. Back, A. C. Tsoi, and C. L. Giles. The Gamma MLP: using multiple temporal resolutions for improved classification. In Neural Networks for Signal Processing 7, pages 256–265. IEEE Press, 1997.

[11] D. Navakauskas. A reduced size lattice-ladder neural network. In Signal Processing Society Workshop on Neural Networks for Signal Processing, Cambridge, England, pages 313–322. IEEE, 1998.

[12] D. Navakauskas. Artificial Neural Network for the Restoration of Noise Distorted Songs Audio Records. Doctoral dissertation, Vilnius Gediminas Technical University, No. 434, September 1999.

[13] D. Navakauskas. Reducing implementation input of lattice-ladder multilayer perceptrons. In Proceedings of the 15th European Conference on Circuit Theory and Design, Espoo, Finland, volume 3, pages 297–300, 2001.

[14] L. S. H. Ngia and J. Sjöberg. Efficient training of neural nets for nonlinear adaptive filtering using a recursive Levenberg-Marquardt algorithm. IEEE Trans. on Signal Processing, 48(7):1915–1927, 2000.

[15] P. A. Regalia. Adaptive IIR Filtering in Signal Processing and Control. Marcel Dekker, Inc., 1995.

[16] J. A. Rodriguez-Fonollosa and E. Masgrau. Simplified gradient calculation in adaptive IIR lattice filters. IEEE Trans. on Signal Processing, 39(7):1702–1705, 1991.

[17] J. Sjöberg, Q. Zhang, L. Ljung, A. Benveniste, B. Delyon, P.-Y. Glorennec, H. Hjalmarsson, and A. Juditsky. Nonlinear black-box modeling in system identification: A unified overview. Automatica, 31:1691–1724, 1995.

[18] A. C. Tsoi and A. Back. Discrete time recurrent neural network architectures: A unifying review. Neurocomputing, 15:183–223, 1997.

[19] A. C. Tsoi and A. D. Back. Locally recurrent globally feedforward networks: A critical review of architectures. IEEE Trans. on Neural Networks, 5(2):229–239, 1994.

[20] A. Waibel, T. Hanazawa, G. Hinton, et al. Phoneme recognition using time-delay neural networks. IEEE Trans. on ASSP, 37(3):328–339, 1989.

[21] E. A. Wan. Temporal backpropagation for FIR neural networks. In Proc. of International Joint Conf. on Neural Networks, pages 575–580, 1990.

[22] E. A. Wan. Finite Impulse Response Neural Networks with Applications in Time Series Prediction. PhD thesis, Department of Electrical Engineering, Stanford University, 1993.

A Summary of Different Order Recursions

Here we summarize the derived simplified expressions for the calculation of the LLF lattice gradients as defined by (6) and (8). For the numbering of the cases, please refer to Table 1, and see Section 3 for a more thorough explanation.

In order to represent the following results compactly, we write diag(d_1, d_2, d_3, d_4) for the diagonal matrix with the entries d_1, ..., d_4 on its main diagonal, and we use the three recurring 0/1 matrices

$$P_a = \begin{bmatrix} 0&1&1&0 \\ 1&0&0&1 \\ 1&0&0&1 \\ 0&1&1&0 \end{bmatrix}, \qquad
P_b = \begin{bmatrix} 1&1&1&1 \\ 1&0&0&1 \\ 1&0&0&1 \\ 1&1&1&1 \end{bmatrix}, \qquad
P_c = \begin{bmatrix} 0&1&1&0 \\ 1&1&1&1 \\ 1&1&1&1 \\ 0&1&1&0 \end{bmatrix}.$$

Non-causal parts of the expressions (advance operators z^{-1}) were emphasized in the original report by a gray background; below they are pointed out explicitly.

A.1 No Order Recursions for Cases I and XVI

As stated in Section 3, it is impossible to derive order recursions when all the information flow directions in the system (8) are the same, i.e., for cases I and XVI.

A.2 Case II Order Recursion

$$\begin{bmatrix} f_j(n)g_j(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix} =
\operatorname{diag}(1, z^{-1}, 1, 1)\Bigg\{
\operatorname{diag}(1,1,z,1)\Big[ I_4 + \operatorname{diag}(\sin\Theta_j, 1, 1, -\sin\Theta_j)\, P_b\, \operatorname{diag}(-\sin\Theta_j, 1, 1, -\sin\Theta_j) \Big]
\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_j(n)g_j(n) \\ f_j(n)t_j(n) \\ b_j(n)t_j(n) \end{bmatrix}
+ v_{j-1}\begin{bmatrix} 0 \\ 0 \\ f_{j-1}(n) \\ b_{j-1}(n) \end{bmatrix} \Bigg\}, \tag{A.2a}$$

which contains the advance operator z^{-1}, with boundary conditions:

$$f_0(n)g_0(n) = f_0(n)t_0(n) = b_0(n)t_0(n) = b_0(n)g_0(n); \tag{A.2b}$$
$$b_M(n)g_M(n) = s_{out}(n)\,b_M(n);$$
$$f_M(n)t_M(n) = v_M; \qquad b_M(n)t_M(n) = v_M\,b_M(n).$$

A.3 Case III Order Recursion

$$\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_j(n)g_j(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix} =
\operatorname{diag}(1,1,z,1)\Big[ I_4 + \operatorname{diag}(1, \sin\Theta_j, -\sin\Theta_j, 1)\, P_c\, \operatorname{diag}(1, -\sin\Theta_j, -\sin\Theta_j, 1) \Big]
\operatorname{diag}(1,z,1,1)
\begin{bmatrix} f_j(n)g_j(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_j(n)t_j(n) \\ b_j(n)t_j(n) \end{bmatrix}
+ v_{j-1}\begin{bmatrix} 0 \\ 0 \\ f_{j-1}(n) \\ b_{j-1}(n) \end{bmatrix}, \tag{A.3a}$$

with boundary conditions:

$$f_M(n)g_M(n) = s_{out}(n); \tag{A.3b}$$
$$b_0(n)g_0(n) = b_0(n)t_0(n) = f_0(n)t_0(n) = f_0(n)g_0(n);$$
$$f_M(n)t_M(n) = v_M; \qquad b_M(n)t_M(n) = v_M\,b_M(n).$$

A.4 Case IV Order Recursion

$$\begin{bmatrix} f_j(n)g_j(n) \\ b_j(n)g_j(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix} =
\operatorname{diag}(1,1,z,1)\Big[ I_4 + \operatorname{diag}(\sin\Theta_j, \sin\Theta_j, -\sin\Theta_j, -\sin\Theta_j)\, P_a \Big]
\operatorname{diag}(1,z,1,1)
\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_j(n)t_j(n) \\ b_j(n)t_j(n) \end{bmatrix}
+ v_{j-1}\begin{bmatrix} 0 \\ 0 \\ f_{j-1}(n) \\ b_{j-1}(n) \end{bmatrix}, \tag{A.4a}$$

with boundary conditions:

$$f_0(n)g_0(n) = f_0(n)t_0(n); \tag{A.4b}$$
$$b_0(n)g_0(n) = b_0(n)t_0(n);$$
$$f_M(n)t_M(n) = v_M; \qquad b_M(n)t_M(n) = v_M\,b_M(n).$$
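As a sanity check on this reconstruction, the factored form of (A.4a) can be expanded symbolically (treating the delay operator z as a commuting symbol, which is harmless here since it only multiplies constants); the result is the matrix used in (15a), (19a) and (23a). A small illustrative sympy sketch:

```python
import sympy as sp

z, s = sp.symbols('z s')          # delay operator z and s = sin(Theta_j), as symbols
P_a = sp.Matrix([[0, 1, 1, 0],
                 [1, 0, 0, 1],
                 [1, 0, 0, 1],
                 [0, 1, 1, 0]])
factored = (sp.diag(1, 1, z, 1)
            * (sp.eye(4) + sp.diag(s, s, -s, -s) * P_a)
            * sp.diag(1, z, 1, 1))
expected = sp.Matrix([[1,    z*s,  s,  0   ],
                      [s,    z,    0,  s   ],
                      [-z*s, 0,    z,  -z*s],
                      [0,    -z*s, -s, 1   ]])
assert (factored - expected).expand() == sp.zeros(4, 4)
```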

A.5 Case V Order Recursion

$$\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_j(n)t_j(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix} =
\operatorname{diag}(1, z^{-1}, 1, 1)\Bigg\{
\Big[ I_4 + \operatorname{diag}(1, -\sin\Theta_j, \sin\Theta_j, 1)\, P_c\, \operatorname{diag}(1, -\sin\Theta_j, -\sin\Theta_j, 1) \Big]
\operatorname{diag}(1,1,z^{-1},1)
\begin{bmatrix} f_j(n)g_j(n) \\ b_j(n)g_j(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_j(n)t_j(n) \end{bmatrix}
+ z^{-1} v_{j-1}\begin{bmatrix} \sin\Theta_j\,f_{j-1}(n) \\ -\sin^2\Theta_j\,f_{j-1}(n) \\ -f_{j-1}(n)/\cos^2\Theta_j \\ \cos\Theta_j\,b_j(n) \end{bmatrix} \Bigg\}, \tag{A.5a}$$

which contains advance operators z^{-1}, with boundary conditions:

$$f_M(n)g_M(n) = s_{out}(n); \tag{A.5b}$$
$$b_M(n)g_M(n) = s_{out}(n)\,b_M(n);$$
$$f_0(n)t_0(n) = b_0(n)t_0(n); \qquad b_M(n)t_M(n) = v_M\,b_M(n).$$

A.6 Case VI Order Recursion

$$\begin{bmatrix} f_j(n)g_j(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_j(n)t_j(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix} =
\operatorname{diag}(1, z^{-1}, 1, 1)\Bigg\{
\Big[ I_4 + \operatorname{diag}(\sin\Theta_j, -\sin\Theta_j, \sin\Theta_j, -\sin\Theta_j)\, P_a \Big]
\operatorname{diag}(1,1,z^{-1},1)
\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_j(n)g_j(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_j(n)t_j(n) \end{bmatrix}
+ z^{-1} v_{j-1}
\begin{bmatrix} \sin\Theta_j\cos\Theta_j & \sin^2\Theta_j/\cos\Theta_j \\ 0 & 0 \\ \cos\Theta_j & \sin\Theta_j/\cos\Theta_j \\ 0 & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n) \\ b_j(n) \end{bmatrix} \Bigg\}, \tag{A.6a}$$

which contains advance operators z^{-1}, with boundary conditions:

$$f_0(n)g_0(n) = b_0(n)t_0(n); \tag{A.6b}$$
$$b_M(n)g_M(n) = s_{out}(n)\,b_M(n);$$
$$f_0(n)t_0(n) = b_0(n)g_0(n); \qquad b_M(n)t_M(n) = v_M\,b_M(n).$$

A.7 Case VII Order Recursion

The expressions of this case involve z^{-2} operations, thus to save space we do not present them here.

A.8 Case VIII Order Recursion

$$\begin{bmatrix} f_j(n)g_j(n) \\ b_j(n)g_j(n) \\ f_j(n)t_j(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix} =
\Big[ I_4 + \operatorname{diag}(\sin\Theta_j, 1, 1, -\sin\Theta_j)\, P_b\, \operatorname{diag}(\sin\Theta_j, 1, 1, \sin\Theta_j) \Big]
\operatorname{diag}(1, z, z^{-1}, 1)
\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_j(n)t_j(n) \end{bmatrix}
+ z^{-1} v_{j-1}\begin{bmatrix} -\sin\Theta_j\,f_{j-1}(n) \\ 0 \\ -f_{j-1}(n) \\ \cos\Theta_j\,b_j(n) \end{bmatrix}, \tag{A.8a}$$

which contains advance operators z^{-1}, with boundary conditions:

$$f_0(n)g_0(n) = b_0(n)t_0(n); \tag{A.8b}$$
$$b_0(n)g_0(n) = b_0(n)t_0(n);$$
$$f_0(n)t_0(n) = b_0(n)t_0(n); \qquad b_M(n)t_M(n) = v_M\,b_M(n).$$

A.9 Case IX Order Recursion

$$\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_j(n)t_j(n) \end{bmatrix} =
\operatorname{diag}(1, z^{-1}, 1, 1)\Bigg\{
\operatorname{diag}(1,1,z,1)\Big[ I_4 + \operatorname{diag}(-\sin\Theta_j, 1, 1, \sin\Theta_j)\, P_b\, \operatorname{diag}(-\sin\Theta_j, 1, 1, -\sin\Theta_j) \Big]
\begin{bmatrix} f_j(n)g_j(n) \\ b_j(n)g_j(n) \\ f_j(n)t_j(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix}
+ z^{-1} v_{j-1}
\begin{bmatrix} \sin\Theta_j\cos\Theta_j & \sin^2\Theta_j/\cos\Theta_j \\ 0 & 0 \\ \cos\Theta_j & \sin\Theta_j/\cos\Theta_j \\ 0 & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n) \\ b_j(n) \end{bmatrix} \Bigg\}, \tag{A.9a}$$

which contains advance operators z^{-1}, with boundary conditions:

$$f_M(n)g_M(n) = s_{out}(n); \tag{A.9b}$$
$$b_M(n)g_M(n) = s_{out}(n)\,b_M(n);$$
$$f_M(n)t_M(n) = v_M;$$
$$b_0(n)t_0(n) = b_0(n)g_0(n) = f_0(n)t_0(n) = f_0(n)g_0(n).$$

A.10 Case X Order Recursion

The expressions of this case involve z^{-2} operations, thus to save space we do not present them here.

A.11 Case XI Order Recursion

$$\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_j(n)g_j(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_j(n)t_j(n) \end{bmatrix} =
\operatorname{diag}(1,1,z,1)\Big[ I_4 + \operatorname{diag}(-\sin\Theta_j, \sin\Theta_j, -\sin\Theta_j, \sin\Theta_j)\, P_a \Big]
\operatorname{diag}(1,z,1,1)
\begin{bmatrix} f_j(n)g_j(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_j(n)t_j(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix}
+ v_{j-1}\begin{bmatrix} 0 \\ -\sin\Theta_j\,b_{j-1}(n) \\ \cos\Theta_j\,f_j(n) \\ -b_{j-1}(n) \end{bmatrix}, \tag{A.11a}$$

with boundary conditions:

$$f_M(n)g_M(n) = s_{out}(n); \tag{A.11b}$$
$$b_0(n)g_0(n) = f_0(n)t_0(n);$$
$$f_M(n)t_M(n) = v_M; \qquad b_0(n)t_0(n) = f_0(n)g_0(n).$$

A.12 Case XII Order Recursion

$$\begin{bmatrix} f_j(n)g_j(n) \\ b_j(n)g_j(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_j(n)t_j(n) \end{bmatrix} =
\operatorname{diag}(1,1,z,1)\Big[ I_4 + \operatorname{diag}(1, \sin\Theta_j, -\sin\Theta_j, 1)\, P_c\, \operatorname{diag}(1, \sin\Theta_j, \sin\Theta_j, 1) \Big]
\operatorname{diag}(1,z,1,1)
\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_j(n)t_j(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix}
+ v_{j-1}\begin{bmatrix} 0 \\ -\sin\Theta_j\,b_{j-1}(n) \\ \cos\Theta_j\,f_j(n) \\ -b_{j-1}(n) \end{bmatrix}, \tag{A.12a}$$

with boundary conditions:

$$f_0(n)g_0(n) = f_0(n)t_0(n); \tag{A.12b}$$
$$b_0(n)g_0(n) = b_0(n)t_0(n);$$
$$f_M(n)t_M(n) = v_M; \qquad b_M(n)t_M(n) = v_M\,b_M(n).$$

A.13 Case XIII Order Recursion

$$\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_j(n)t_j(n) \\ b_j(n)t_j(n) \end{bmatrix} =
\operatorname{diag}(1, z^{-1}, 1, 1)\Bigg\{
\Big[ I_4 + \operatorname{diag}(-\sin\Theta_j, -\sin\Theta_j, \sin\Theta_j, \sin\Theta_j)\, P_a \Big]
\operatorname{diag}(1,1,z^{-1},1)
\begin{bmatrix} f_j(n)g_j(n) \\ b_j(n)g_j(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix}
+ z^{-1} v_{j-1}\begin{bmatrix} \sin\Theta_j\,f_{j-1}(n) \\ z\,\sin\Theta_j\,b_{j-1}(n) \\ -\cos\Theta_j\,f_j(n) \\ -\cos\Theta_j\,b_j(n) \end{bmatrix} \Bigg\}, \tag{A.13a}$$

which contains advance operators z^{-1}, with boundary conditions:

$$f_M(n)g_M(n) = s_{out}(n); \tag{A.13b}$$
$$b_M(n)g_M(n) = s_{out}(n)\,b_M(n);$$
$$f_0(n)t_0(n) = f_0(n)g_0(n); \qquad b_0(n)t_0(n) = b_0(n)g_0(n).$$

A.14 Case XIV Order Recursion

$$\begin{bmatrix} f_j(n)g_j(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_j(n)t_j(n) \\ b_j(n)t_j(n) \end{bmatrix} =
\operatorname{diag}(1, z^{-1}, 1, 1)\Bigg\{
\Big[ I_4 + \operatorname{diag}(1, -\sin\Theta_j, \sin\Theta_j, 1)\, P_c\, \operatorname{diag}(1, \sin\Theta_j, \sin\Theta_j, 1) \Big]
\operatorname{diag}(1,1,z^{-1},1)
\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_j(n)g_j(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix}
+ z^{-1} v_{j-1}
\begin{bmatrix} -\sin\Theta_j/\cos\Theta_j & \sin^2\Theta_j/\cos^2\Theta_j \\ 0 & \sin\Theta_j\cos\Theta_j \\ -1/\cos\Theta_j & \sin^3\Theta_j/\cos\Theta_j \\ 0 & -\cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n) \\ b_j(n) \end{bmatrix} \Bigg\}, \tag{A.14a}$$

which contains advance operators z^{-1}, with boundary conditions:

$$f_0(n)g_0(n) = b_0(n)g_0(n); \tag{A.14b}$$
$$b_M(n)g_M(n) = s_{out}(n)\,b_M(n);$$
$$f_0(n)t_0(n) = b_0(n)g_0(n); \qquad b_0(n)t_0(n) = b_0(n)g_0(n).$$

A.15 Case XV Order Recursion

$$\begin{bmatrix} f_{j-1}(n)g_{j-1}(n) \\ b_j(n)g_j(n) \\ f_j(n)t_j(n) \\ b_j(n)t_j(n) \end{bmatrix} =
\Big[ I_4 + \operatorname{diag}(-\sin\Theta_j, 1, 1, \sin\Theta_j)\, P_b\, \operatorname{diag}(\sin\Theta_j, 1, 1, \sin\Theta_j) \Big]
\operatorname{diag}(1, z, z^{-1}, 1)
\begin{bmatrix} f_j(n)g_j(n) \\ b_{j-1}(n)g_{j-1}(n) \\ f_{j-1}(n)t_{j-1}(n) \\ b_{j-1}(n)t_{j-1}(n) \end{bmatrix}
+ z^{-1} v_{j-1}
\begin{bmatrix} \sin\Theta_j\cos\Theta_j & 0 \\ \sin^2\Theta_j/\cos\Theta_j & -\sin\Theta_j/\cos\Theta_j \\ -\cos\Theta_j & 0 \\ \sin^3\Theta_j/\cos\Theta_j & -1/\cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n) \\ b_j(n) \end{bmatrix}, \tag{A.15a}$$

which contains advance operators z^{-1}, with boundary conditions:

$$f_M(n)g_M(n) = s_{out}(n); \tag{A.15b}$$
$$b_0(n)g_0(n) = f_0(n)g_0(n);$$
$$f_0(n)t_0(n) = f_0(n)g_0(n); \qquad b_0(n)t_0(n) = f_0(n)g_0(n).$$
