PARAMETRIZATION AND CONDITIONING OF HINGING HYPERPLANE MODELS - PRESENTED AT IFAC 96

Predrag Pucar and Jonas Sjöberg

Department of Electrical Engineering, Linköping University
S-581 83 Linköping, Sweden

Email: predrag@isy.liu.se, sjoberg@isy.liu.se

Abstract. Recently a new model class has emerged in the field of nonlinear black-box modeling: the hinging hyperplane (HH) models. The HH model is closely related to the well known neural net model. In this contribution the parametrization and conditioning of HH models is addressed. It is shown that the original setting is overparametrized and a new parametrization involving fewer parameters is suggested.

Moreover, it is shown that there is nothing to lose in terms of negative effects in the numerical search when fewer parameters are used.

Keywords. Nonlinear models, identification algorithms, function approximation, parametrization.

1. INTRODUCTION

Recently a new approach to nonlinear function approximation named hinging hyperplanes (HH) was reported (Breiman, 1993). In that article a number of advantages of HH were pointed out. For example, they have the same nice feature as, e.g., neural nets (NN) of avoiding the curse of dimensionality. In (Breiman, 1993) there is also an estimation algorithm suggested which is inspired by linear regression trees and projection pursuit. However, (Pucar and Sjöberg, 1995b) [1] shows that this is a Newton-type algorithm. Here the parametrization of the HH model is considered. It will be shown that the number of parameters can be reduced without restricting the model's capability of approximating unknown functions.

[1] All cited technical reports are available at http://www.control.isy.liu.se/AutomaticControl/Reports/index.html

The objective is to explain, or predict, a variable y using some other known variables placed in a regressor vector φ ∈ IR^d. A model structure is proposed to approximate the relation between y and φ, and in this paper the HH model, which is a nonlinear black-box model structure, is investigated.

Most nonlinear black-box models can be expressed as a basis function expansion

f(φ) = Σ_{i=1}^{K} h_i(φ, θ),    (1)

where θ is a parameter vector. The basis function h_i in the expansion above is crucial and is the detail by which a number of approaches differ, e.g., NN, projection pursuit, etc.

Fig. 1. Hinge, hinging hyperplane and hinge function: (a) 2-dimensional, (b) 3-dimensional.

The idea is that although relatively simple building blocks, the basis functions h_i, are used, a broad class of nonlinear functions can be well approximated. In (Sjöberg et al., 1995) and (Juditsky et al., 1995) a general framework for nonlinear black-box models is further developed.

When the specific form of the basis expansion (1) has been decided upon, it remains to estimate the parameters by using data collected from the unknown system. This is typically performed by computing the parameter value that minimizes a criterion of fit, i.e.,

θ̂_N = arg min_θ V_N(θ),

where N is the number of data available. In the original algorithm for HH the criterion V_N(θ) is the sum of squared errors, see (Pucar and Sjöberg, 1995b),

V_N(θ) = (1/N) Σ_{t=1}^{N} (y(t) - f(φ(t), θ))²,    (2)

where {y(t), φ(t)}_{t=1}^{N} is the available data. Since the approximating function is nonlinear, the minimization problem fits into the nonlinear least squares setting and the minimization is typically performed with a numerical search routine, see Section 2.1.
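As an illustration (not from the paper), the criterion (2) can be computed with the following minimal sketch, assuming NumPy and a user-supplied model function f(phi, theta):

import numpy as np

def V_N(theta, f, phi, y):
    # Sum-of-squared-errors criterion (2): V_N(theta) = (1/N) sum_t (y(t) - f(phi(t), theta))^2.
    residuals = y - np.array([f(p, theta) for p in phi])
    return np.mean(residuals ** 2)

# Hypothetical usage with a linear model f(phi, theta) = phi' theta:
phi = np.c_[np.ones(100), np.linspace(0.0, 1.0, 100)]
y = 2.0 + 3.0 * phi[:, 1]
print(V_N(np.array([2.0, 3.0]), lambda p, th: p @ th, phi, y))   # essentially zero at the true parameters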

The HH approach uses hinge functions as basis functions, see Figure 1. As can be seen in Figure 1 the hinge function consists of two joined hyperplanes. Assume the two hyperplanes are given by y_1 = φᵀθ⁺ and y_2 = φᵀθ⁻, where φ = [1 φ_1 φ_2 ⋯ φ_{d-1}]ᵀ is the regressor vector and y is the output. These two hyperplanes are joined at

{φ : φᵀ(θ⁺ - θ⁻) = 0},

which is defined as the hinge of the two hyperplanes. The solid/shaded part of the two hyperplanes in Figure 1 is explicitly given by

y = max(φᵀθ⁺, φᵀθ⁻)  or  y = min(φᵀθ⁺, φᵀθ⁻)    (3)

and is defined as the hinge function. An HH model consists of a sum of hinge functions of the sort (3).
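As an illustration of (3), a minimal sketch (not part of the paper; NumPy and the regressor convention φ = [1 φ_1 ⋯ φ_{d-1}]ᵀ are assumed) of a hinge function and of an HH model formed as a sum of such hinge functions:

import numpy as np

def hinge(phi, theta_plus, theta_minus, use_max=True):
    # Hinge function (3): the max (or min) of the two hyperplanes phi' theta+ and phi' theta-.
    y1, y2 = phi @ theta_plus, phi @ theta_minus
    return np.maximum(y1, y2) if use_max else np.minimum(y1, y2)

def hh_model(phi, hinges):
    # HH model: a sum of hinge functions of the sort (3).
    return sum(hinge(phi, tp, tm, um) for tp, tm, um in hinges)

# One hinge with regressor [1, x]: y = max(1 + x, 2 - x), which has its hinge at x = 0.5.
phi = np.c_[np.ones(5), np.linspace(-2, 2, 5)]
print(hh_model(phi, [(np.array([1.0, 1.0]), np.array([2.0, -1.0]), True)]))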

The above presentation of the HH model follows the original way the model was introduced in (Breiman, 1993). In the next section the parametrization of the HH model is discussed, and it is shown that the original presentation is overparametrized. A new parametrization, where the similarity between feed-forward NN and HH becomes striking, is suggested. Further, in Section 3 the conditioning of HH models is investigated.

2. PARAMETRIZATION

When discussing overparametrization there are two cases that can occur which are fundamentally different. The first case, which is called truly overparametrized, means that the model structure can be described by fewer parameters. Then there exists a mapping from the original parameter space to a parameter space with lower dimension. The other case of overparametrization actually concerns the balancing of the bias and variance contributions to the total error, see, e.g., (Ljung, 1987) or (Sjöberg et al., 1995) for a general discussion on this. Whether a model is truly overparametrized or not is only a matter of how the model structure is parametrized, and it does not influence the flexibility of the model structure. However, for the second type of overparametrization the problem is that the model structure is unnecessarily flexible: the model contains too many parameters with respect to the unknown relationship to be identified and the number of available data for the identification.

There are indications that truly overparametrized models, in some cases, perform better than non-overparametrized models when a numerical search routine is used for minimization of the loss function (McKelvey, 1995; McKelvey, 1994). This will be investigated for the HH model and it will be shown that there are no such advantages with an overparametrized HH model. This means that the HH model becomes identical for a manifold of parameters, i.e., the parameter vector can itself be parametrized as θ = θ(η), where dim(η) < dim(θ), and where only the parameters in η influence the model.

Let us describe the HH model (3) in a slightly different manner. Assume that two parameter vectors θ⁺ and θ⁻ are estimated. The parameter vectors define a division of the input space into two half-spaces S⁺ and S⁻. The hinge function (3) can be rewritten as

h(φ) = φᵀθ⁺ I_{S⁺}(φ) + φᵀθ⁻ (1 - I_{S⁺}(φ)),    (4)

where the indicator function I_S is defined as

I_S(φ) = 1 if φ ∈ S,  I_S(φ) = 0 if φ ∉ S.

An HH model with M hinge functions is a sum of functions like (4),

f(φ) = Σ_{i=1}^{M} [ φᵀθ_i⁺ I_{S_i⁺} + φᵀθ_i⁻ (1 - I_{S_i⁺}) ],    (5)

where the dependence on φ of the indicator function is suppressed. The model above can be rewritten in the following equivalent way

f(φ) = Σ_{i=1}^{M} φᵀθ_i I_{S_i} + φᵀθ_0,    (6)

where θ_i = θ_i⁺ - θ_i⁻ and θ_0 = Σ_{i=1}^{M} θ_i⁻. The rewriting results in using d(M+1) parameters. For M > 1 this is less than the 2dM parameters of the original description.

The border between the two half-spaces, across which the indicator function switches from zero to one, is φᵀ(θ⁺ - θ⁻) = 0, or equivalently φᵀθ_i = 0.

The similarity with NN is striking. Both the HH model and the NN model can be described by

f(φ) = Σ_{i=1}^{M} σ(φᵀθ_i) + φᵀθ_0,

where

σ(x) = 0 for x < 0,  σ(x) = x for x > 0

for the HH model, and σ(x) = β κ(x) for the NN model with a direct term from input to output. The activation function κ(·) is usually chosen as κ(x) = 1/(1 + e^{-x}). The parameters β_i, θ_i and θ_0 are stored in θ. The HH model is built up by a basis which is zero on one half-space and linear on the other half-space. The basis function for the NN model also divides the input space into two half-spaces, but it takes a constant value on the half-space where the model is non-zero, and makes a smooth transition between these values.
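To make the comparison concrete, a small sketch (an illustration only; NumPy is assumed, and β plays the role of the NN output weight mentioned above) of the two choices of σ(·):

import numpy as np

def sigma_hh(x):
    # HH basis: zero for x < 0 and linear for x > 0 (a ramp).
    return np.maximum(x, 0.0)

def sigma_nn(x, beta=1.0):
    # NN basis: a scaled sigmoid, which saturates to constant values far from the dividing hyperplane.
    return beta / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 11)
print(sigma_hh(x))
print(sigma_nn(x))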

The HH model does not have a smooth derivative. It is, however, possible to replace the indicator function in (6) with a sigmoid function, which is a "smooth indicator", to obtain smooth HH, see (Pucar and Millnert, 1995).

The resulting HH model (6) is no longer a sum of hinge functions as the one depicted in Figure 1, but rather a sum of one hyperplane and a number of basis functions which are zero for {φ : φ ∉ S_i} and a hyperplane otherwise. See Figure 2, where such functions with φ ∈ IR and φ ∈ IR^2 are depicted.

2.1 Equivalence of Parametrizations

The relation between the two parameter vectors in the two parametrizations (5) and (6) turns out to be a projection of the original parameter space onto a reduced one.

Fig. 2. Form of hyperplanes when reparametrized hinging hyperplane models are used.

Consider the case of M hinges. The relation between the original parameter vector and the reduced one is then given by

[ θ_1 ]   [ I  -I   0   0  ⋯   0   0 ]  [ θ_1⁺ ]
[  ⋮  ] = [ ⋮            ⋱           ⋮ ]  [ θ_1⁻ ]
[ θ_M ]   [ 0   0   0   0  ⋯   I  -I ]  [  ⋮   ]
[ θ_0 ]   [ 0   I   0   I  ⋯   0   I ]  [ θ_M⁺ ]
                                         [ θ_M⁻ ]    (7)

where I, in this case, is a d × d identity matrix. The relation between the two parameter vectors can thus be written as θ^R = Aθ, where the superscript R indicates that the vector lies in the reduced parameter space. The following questions arise:

(1) Are the outputs of the original model and the reduced model, using the parameter vector θ^R according to (7), equivalent for every value of the parameter vector θ in the original parametrization?

(2) Are the properties of the numerical search routine used for minimizing the loss function and finding the optimal parameter values affected?

Question (1) is readily answered using (5) and (6). Since the reparametrization in fact is a reordering and lumping of some parameters in the original parametrization, the output space is not changed.
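A small numerical illustration (not from the paper; NumPy is assumed) that builds the projection matrix A of (7) and checks that the original model (5) and the reduced model (6), with θ^R = Aθ, produce identical outputs:

import numpy as np

rng = np.random.default_rng(0)
d, M, N = 3, 2, 5                        # regressor dimension, number of hinges, number of test points

# Original parametrization (5): theta_i^+ and theta_i^- for each hinge, 2dM parameters in total.
theta_plus = rng.normal(size=(M, d))
theta_minus = rng.normal(size=(M, d))
theta_orig = np.concatenate([np.r_[tp, tm] for tp, tm in zip(theta_plus, theta_minus)])

# Projection matrix A of (7): theta_i = theta_i^+ - theta_i^-, theta_0 = sum_i theta_i^-.
I, A = np.eye(d), np.zeros((d * (M + 1), 2 * d * M))
for i in range(M):
    A[i*d:(i+1)*d, 2*i*d:(2*i+1)*d] = I          # +I acting on theta_i^+
    A[i*d:(i+1)*d, (2*i+1)*d:(2*i+2)*d] = -I     # -I acting on theta_i^-
    A[M*d:, (2*i+1)*d:(2*i+2)*d] = I             # theta_0 collects all theta_i^-
theta_red = A @ theta_orig                        # (theta_1, ..., theta_M, theta_0)

def f_orig(Phi):  # model (5), with S_i^+ = {phi : phi'(theta_i^+ - theta_i^-) > 0}
    out = np.zeros(len(Phi))
    for tp, tm in zip(theta_plus, theta_minus):
        out += np.where(Phi @ (tp - tm) > 0, Phi @ tp, Phi @ tm)
    return out

def f_red(Phi):   # model (6)
    thetas, theta0 = theta_red[:M*d].reshape(M, d), theta_red[M*d:]
    out = Phi @ theta0
    for th in thetas:
        out += np.where(Phi @ th > 0, Phi @ th, 0.0)
    return out

Phi = np.c_[np.ones(N), rng.normal(size=(N, d - 1))]   # regressors [1, phi_1, ..., phi_{d-1}]
print(np.allclose(f_orig(Phi), f_red(Phi)))            # True: the two parametrizations give the same output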

Question (2) is somewhat more complicated than question (1). It originates in the experience that in some model structures the risk of getting stuck at local minima decreases if truly overparametrized models are used. We will show that in the case discussed here, no such advantages are present.

The routine that we use in the search for the optimal parameter estimate is the well known damped Newton method for minimization of the criterion (2), see (Dennis and Schnabel, 1983), and (Pucar and Sjöberg, 1995b) for connections with HH models. Newton's scheme for finding the minimum of the criterion (2) is

θ_{k+1} = θ_k - μ (∇²_θ V)† ∇_θ V,    (8)

Fig. 3. Idea of proof of "numerical equivalence" of the two parameter spaces. The question is if the rightmost projection arrow will point at θ_1^R.

where A† denotes the pseudo-inverse of a matrix A, ∇_θ V is the gradient of V with respect to θ, and ∇²_θ V is the Hessian of V. Question (2) can now be restated as follows. Assume that θ_0 is chosen as the initial value for the search algorithm used in the original parameter space, and that the projection of θ_0, i.e., θ_0^R = Aθ_0, is used as the initial value of the algorithm for the reduced parameter space. Will the two algorithms executed in parallel give the same path in the reduced parameter set if the projection of θ onto the θ^R-space is compared to the path of θ^R at every step of the algorithm? Straightforward calculations lead to the following relations between the gradients and the Hessians in the two parameter spaces:

∇V = Aᵀ ∇V^R  and  ∇²V = Aᵀ (∇²V^R) A.    (9)

Given an arbitrary θ_0 the next step is

θ_1 = θ_0 - μ (∇²V)† ∇V.    (10)

If the above equation is multiplied by the projection matrix A we obtain

θ_1^R = θ_0^R - μ A (Aᵀ ∇²V^R A)† Aᵀ ∇V^R,

where the gradient and Hessian are expressed in the θ^R-space. The strategy is to show that the parameter update term on the right hand side of (10) is equivalent to the parameter update term of the algorithm using the reparametrized parameter vector. The parameter update term in the algorithm executed in the reduced parameter space is (∇²V^R)† ∇V^R. If the equivalence of the parameter update terms can be shown, then, using the assumption that θ_0 is arbitrary, it follows that the result does not depend on which parametrization is chosen. The idea of the proof is illustrated in Figure 3.

If the expressions in (9) are substituted into the parameter update term, straightforward calculations, see (Pucar and Sjöberg, 1995a), give the equivalence of the two spaces with respect to the behavior of the numerical algorithm. There is, thus, nothing to gain by using the truly overparametrized model. There is an obvious disadvantage though, namely the computational complexity. In the numerical algorithms that are used, one of the most time consuming steps is taking the pseudo-inverse/inverse of the Hessian in the parameter update equation. For large M the complexity of the numerical algorithm when using the reparametrized model is about 30% of the complexity when using the original model.
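As a numerical sanity check of this equivalence (an illustration only, not the calculations of the paper; NumPy is assumed, A is the full-row-rank projection of (7), and a generic positive definite matrix stands in for the reduced Hessian), the projected update of (10) can be compared with the update computed directly in the reduced space:

import numpy as np

rng = np.random.default_rng(1)
d, M = 2, 3
r, n = d * (M + 1), 2 * d * M                 # reduced and original parameter dimensions

# Structured projection matrix A of (7).
I, A = np.eye(d), np.zeros((r, n))
for i in range(M):
    A[i*d:(i+1)*d, 2*i*d:(2*i+1)*d] = I
    A[i*d:(i+1)*d, (2*i+1)*d:(2*i+2)*d] = -I
    A[M*d:, (2*i+1)*d:(2*i+2)*d] = I

# Stand-ins for the reduced-space Hessian (positive definite) and gradient.
B = rng.normal(size=(r, r))
H_R = B @ B.T + r * np.eye(r)
g_R = rng.normal(size=r)

step_projected = A @ np.linalg.pinv(A.T @ H_R @ A) @ (A.T @ g_R)  # projected update term of (10)
step_reduced = np.linalg.solve(H_R, g_R)                          # update term in the reduced space
print(np.allclose(step_projected, step_reduced))                  # True: the two updates coincide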

3. ILL-CONDITIONING OF THE ESTIMATION ALGORITHM

The second kind of overparametrization, which was discussed in the introduction, is addressed in this section. It might imply numerical problems in the minimization and it typically also indicates that the model structure is too flexible. NN models are often ill-conditioned, see (Sjöberg and Ljung, 1992; Moody, 1992; Saarinen et al., 1993), which means that some of the directions in the parameter space have little influence on the model. These parameters decrease the bias part of the error little, but their contribution to the variance part of the error is as large as that of the important parameters.

In this section it is shown that the HH model also can be expected to be ill-conditioned. Before going into a discussion on the reasons for ill-conditioning of the HH models, the necessary definitions are given.

Definition 1. The rank of A ∈ IR^{m×n} (m ≥ n) is defined by the number of non-zero singular values in the singular value decomposition A = PΣQᵀ, where Σ ∈ IR^{m×n} is a diagonal matrix with singular values σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_n. A is rank deficient, i.e., has rank r < n, if some of the singular values are equal to zero, σ_{r+1} = ⋯ = σ_n = 0.

Definition 2. The condition number of A is defined as κ(A) = σ_1/σ_n. A is said to be ill-conditioned if κ(A) is large.

An estimation procedure for a parameterized model is a numerical minimization algorithm applied to minimize a specified loss function. An often occurring case is that the loss function is quadratic; the minimization algorithm is then a nonlinear least squares algorithm where the estimate is given by

θ̂ = arg min_θ (1/2) εᵀε,

where ε = [y_1 - f(φ_1, θ), ⋯, y_N - f(φ_N, θ)]ᵀ and f is the sum of basis functions used for approximation.


To minimize the criterion function by means of a numerical search algorithm, the gradient and the Hessian are needed. The gradient of the loss function is ∇V = Jᵀε, where J is the Jacobian matrix of the criterion function, J = ∇_θ ε. The Hessian of the loss function is H = JᵀJ + S, where S = Σ_{i=1}^{N} ε_i ∇²ε_i.

Definition 3. A minimization problem is ill-conditioned if the Hessian of the criterion function is ill-conditioned near a local (or global) minimum of the criterion function.

There are a number of algorithms for finding the estimate that minimizes the quadratic loss function. Here (8) is used as the algorithm for minimization. The Jacobian plays an important role in the minimization algorithm (for HH models the Hessian of V is JᵀJ). The consequence of having an ill-conditioned Jacobian in the minimization routine is bad convergence properties, i.e., the time for finding the estimate θ̂ dramatically increases, see (Dennis and Schnabel, 1983).
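For the quadratic loss above, one damped step of (8) with the Gauss-Newton Hessian JᵀJ can be sketched as follows (an illustration only; J, ε and the step length μ are assumed given):

import numpy as np

def newton_step(theta, J, eps, mu=1.0):
    # One damped step of (8) with gradient J'eps and Gauss-Newton Hessian J'J.
    # An ill-conditioned J makes pinv(J'J) poorly determined and slows the search down.
    return theta - mu * np.linalg.pinv(J.T @ J) @ (J.T @ eps)

# Hypothetical example with a linear residual eps(theta) = J theta - y:
J = np.c_[np.ones(50), np.linspace(0.0, 1.0, 50)]
y = 1.0 + 2.0 * J[:, 1]
theta = newton_step(np.zeros(2), J, J @ np.zeros(2) - y)
print(theta)   # one full Newton step solves the linear least squares problem: approximately [1, 2]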

A closer look at how the Jacobian is constructed in the HH case will now be taken, and expressions that relate the directions of the Jacobian's columns to how ill-conditioned the Jacobian is are derived. Firstly, the connection between the condition number of a matrix and the angle between two columns of the same matrix is stated.

Theorem 4. (Golub and van Loan, 1989) Let B be a submatrix of A consisting of columns of A. Then κ(B) ≤ κ(A), where κ denotes the condition number of a matrix.

Theorem 5. (Saarinen et al., 1993) Let A = [x y] ∈ IR^{n×2} and suppose that the angle α between x and y satisfies cos(α) = 1 - δ for some δ ∈ (0, 1). Then

κ_2^2(A) ≥ (1/(4δ(2-δ))) ( ‖y‖_2/‖x‖_2 + ‖x‖_2/‖y‖_2 + 2 ).

It is, thus, sufficient to investigate angles between columns in the Jacobian of an HH model to get an insight into when the Jacobian will be ill-conditioned.

The HH model f(φ(i), θ) is defined in (6) and the elements of the Jacobian [J_{ij}] are defined as ∂ε(φ(i), θ)/∂θ_j = -∂f(φ(i), θ)/∂θ_j. The first index in [J_{ij}] denotes the time instant or sample number. Hence, J has the dimensions N × (d+1)(M+1), where N is the number of available data and (d+1)(M+1) is the number of parameters. The following expression is obtained when taking the derivative of f:

∇_θ f(i) = [ ∂f(i)/∂θ(1)  ⋯  ∂f(i)/∂θ(M)  ∂f(i)/∂θ(0) ]ᵀ = [ φᵀ I_{S_1}(φ)  ⋯  φᵀ I_{S_M}(φ)  φᵀ ]ᵀ,

so that row i of J equals -∇_θ f(i)ᵀ. Assume that the dimension of the input space is one, i.e., d = 1. The following four columns of a matrix are an example of a part of the Jacobian for the above HH model. The columns correspond to two of the M hinge functions:

J = - [ 0      0          0      0        ]
      [ ⋮      ⋮          ⋮      ⋮        ]
      [ 0      0          1      φ_1(i)   ]
      [ ⋮      ⋮          ⋮      ⋮        ]    (11)
      [ 1      φ_1(j+1)   1      φ_1(j+1) ]
      [ ⋮      ⋮          ⋮      ⋮        ]
      [ 1      φ_1(N)     1      φ_1(N)   ]

The subscript of φ denotes the number of the input; in the example above, where d = 1, there is only one input. The number in parentheses is the sample number.
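The structure of (11) can be reproduced with a short sketch (not from the paper; NumPy is assumed) that builds the four columns for two hinges when d = 1 and compares the conditioning for well separated and nearly coinciding hinge positions:

import numpy as np

def jacobian_columns(phi1, i, j):
    # Four columns as in (11) for d = 1: each hinge contributes a column of indicators and a column
    # of indicators times phi_1; one hinge is active from sample i on, the other from sample j on.
    t = np.arange(1, len(phi1) + 1)
    ind_i, ind_j = (t >= i).astype(float), (t >= j).astype(float)
    return -np.column_stack([ind_j, ind_j * phi1, ind_i, ind_i * phi1])

rng = np.random.default_rng(2)
phi1 = 1.0 + 0.1 * rng.normal(size=200)          # input with small variance relative to its mean
for i, j in [(50, 150), (100, 102)]:             # well separated vs. nearly coinciding hinges
    print(i, j, np.linalg.cond(jacobian_columns(phi1, i, j)))   # nearly coinciding hinges give a much larger condition number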

The cosine of the angle between two vectors a and b is given by cos(α) = aᵀb / √((aᵀa)(bᵀb)), and three cases that can occur in the Jacobian of the HH model will now be discussed. The first case corresponds to checking the angle between, for example, columns two and four in the example above. Note that the two columns originate from the same input φ_1, which is the only one in this example. If d > 1 it would be possible to check angles between columns originating from different inputs, but nothing consistent can be said regarding the angle between such columns. The cosine of the angle between columns two and four is

cos(α) = 1 / √( 1 + ((j-i)/(N-j)) (σ_{j-i}² + μ_{j-i}²)/(σ_{N-j}² + μ_{N-j}²) ).    (12)

The notation above needs further explanation. Except for the subscripts on φ, the subscripts denote the interval over which a quantity is calculated. For example, μ_{N-j} denotes the mean of {φ_1(k)}_{k=j}^{N}. Similarly, σ_{N-j} denotes the standard deviation of {φ_1(k)}_{k=j}^{N} over the same interval.

Similar calculations, see (Pucar and Sjöberg, 1995a), give the result for the other two combinations of columns, namely columns one and three,

cos(α) = 1 / √( 1 + (j-i)/(N-j) ),    (13)

and columns one and four,

cos(α) = μ_{N-j} / ( √( 1 + (j-i)/(N-j) ) (σ_{N-j}² + μ_{N-j}²)^{1/2} ).    (14)

Expressions (12) and (13) will be close to one if j - i is small, or in other words if two hinges are located very close to each other. In expression (14) it is not sufficient to have a small j - i to get an ill-conditioned Jacobian; the variance of the input also has to be small compared with its mean. This is intuitively clear: if we assume that j - i = 0, then a column in (11) that is almost constant will be parallel with the column in (11) containing ones only.
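For columns one and three, formula (13) can be checked directly with a small sketch (an illustration under the same setup as (11); NumPy is assumed):

import numpy as np

def cos_angle(a, b):
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

N, i, j = 200, 100, 110
t = np.arange(1, N + 1)
col_three = (t >= i).astype(float)       # ones from sample i on
col_one = (t > j).astype(float)          # ones from sample j+1 on

print(cos_angle(col_one, col_three))                 # directly computed cosine
print(1.0 / np.sqrt(1.0 + (j - i) / (N - j)))        # formula (13); both approach one as j - i shrinks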

A second phenomenon that can occur and result in ill-conditioning is if, e.g., two consecutive (in time) inputs, say u(t-1) and u(t-2), are used in the regressor, and the input contains little information so that u(t-1) ≈ u(t-2). This phenomenon is not unique to HH models. It has to do with the excitation capabilities of the input, which is a well-known issue in system identification.

4. CONCLUSIONS

This paper has considered a number of topics relating to the parametrization of HH models. We have seen that:

1) HH models are truly overparametrized in the original form used in (Breiman, 1993). The number of parameters can be reduced from 2dM in the original parametrization to d(M+1), where d is the dimension of the input space and M is the number of hinge functions used, without restricting the flexibility of the model class.

2) The properties of the parameter estimation algorithm (Newton's algorithm) are not negatively affected by the new parametrization. The benefit of using fewer parameters is a decrease in computational load.

3) From the new parametrization it is evident that HH models are very similar to NN models. Both models divide the input space into half-spaces, and the difference can be described in the way they model the behavior in the directions perpendicular to the hyperplanes dividing the space.

4) Similar to NN models, HH models can be overparametrized. The model structure is then too flexible and can be expected to be ill-conditioned, which means that some directions in the parameter space are much more important than others.

5. REFERENCES

Breiman, L. (1993). "Hinging Hyperplanes for Regression, Classification, and Function Approximation". IEEE Trans. on Information Theory.

Dennis, J.E. and R.B. Schnabel (1983). Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs, New Jersey.

Golub, G. and C. van Loan (1989). Matrix Computations. The Johns Hopkins University Press, Baltimore, MD, USA.

Juditsky, A., H. Hjalmarsson, A. Benveniste, B. Deylon, L. Ljung, J. Sjöberg and Q. Zhang (1995). "Nonlinear Black-Box Models in System Identification: Mathematical Foundations". Automatica.

Ljung, L. (1987). System Identification: Theory for the User. Prentice-Hall, Englewood Cliffs, NJ.

McKelvey, T. (1994). Fully parametrized state-space models in system identification. In: Proc. of the 10th IFAC Symposium on System Identification. Vol. 2. Copenhagen, Denmark. pp. 373-378.

McKelvey, T. (1995). Personal communication.

Moody, J.E. (1992). The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems. In: Advances in Neural Information Processing Systems 4 (J.E. Moody, S.J. Hanson and R.P. Lippmann, Eds.). Morgan Kaufmann Publishers, San Mateo, CA.

Pucar, P. and J. Sjöberg (1995a). On the parameterization of hinging hyperplane models. Technical report LiTH-ISY-R-1727, Dept. of Electrical Engineering, S-581 83 Linköpings universitet, Sweden.

Pucar, P. and J. Sjöberg (1995b). Parameterization and conditioning of hinging hyperplane models. Technical report LiTH-ISY-R-1727, Dept. of Electrical Engineering, S-581 83 Linköpings universitet, Sweden.

Pucar, P. and M. Millnert (1995). Smooth hinging hyperplanes - an alternative to neural networks. In: Proc. 3rd ECC, Italy.

Saarinen, S., R. Bramley and G. Cybenko (1993). "Ill-Conditioning in Neural Network Training Problems". SIAM Journal on Scientific Computing 14(3), 693-714.

Sjöberg, J. and L. Ljung (1992). Overtraining, regularization, and searching for minimum in neural networks. In: Preprint 4th IFAC Symposium on Adaptive Systems in Control and Signal Processing. Grenoble, France. pp. 669-674.

Sjöberg, J., Q. Zhang, L. Ljung, A. Benveniste, B. Deylon, P-Y. Glorennec, H. Hjalmarsson and A. Juditsky (1995). "Non-Linear Black-Box Modeling in System Identification: a Unified Overview". Automatica.
