
Orientation Estimation Based on Weighted Projection onto Quadratic Polynomials

Gunnar Farnebäck
Computer Vision Laboratory
Department of Electrical Engineering
Linköping University
SE-581 83 Linköping, Sweden
Email: gf@isy.liu.se

Abstract

Essentially all Computer Vision strategies require initial computation of orientation structure or motion estimation. Although much work has been invested in this subfield, methods have so far been very computationally demanding and/or not very robust.

In this paper we present a novel method for computation of orientation tensors for signals of any dimensionality. The method is based on local weighted least squares approximations of the signal by second degree polynomials. It is shown how this can be implemented very efficiently by means of separable convolutions and that the method gives very accurate orientation estimates. We also introduce the new concept of orientation functionals, of which orientation tensors are a subclass. Finally we demonstrate the critical importance of using a proper weighting function in the local projection of the signal onto polynomials.

1 Introduction

Local orientation is a feature of multidimensional signals, which roughly can be described as the direction along which the signal has the largest variation. This is well-defined for (locally) simple signals, i.e. signals which can be written f(x) = h(x^T n̂) for some non-constant function h of one variable and some unit vector n̂. A complication here is that both n̂ and −n̂ are valid representations for the same orientation. A way to get around this ambiguity is to form the orientation tensor^1

    T = A \hat{n} \hat{n}^T,    (1)

where A is some constant that may encode other information than orientation, such as certainty or local signal energy. It is obvious that this representation maps n̂ and −n̂ to the same tensor and that the orientation can be recovered from the eigensystem of T.
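To make (1) and the resolved sign ambiguity concrete, here is a minimal numpy sketch (illustrative code, not part of the original implementation):

```python
import numpy as np

# Build T = A * n n^T for a known unit direction and recover it again.
n_hat = np.array([1.0, 2.0, 2.0])
n_hat /= np.linalg.norm(n_hat)        # unit vector along the largest variation
A = 5.0                               # scalar encoding e.g. certainty or energy
T = A * np.outer(n_hat, n_hat)        # n_hat and -n_hat give the same tensor

w, V = np.linalg.eigh(T)              # eigensystem of the symmetric matrix T
e1 = V[:, np.argmax(w)]               # orientation, recovered up to sign
assert np.isclose(abs(e1 @ n_hat), 1.0)
```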

There are different ways to estimate local orientation tensors. Bigün [2] computes weighted averages of outer products of gradients over a neighborhood (inertia tensor). Knutsson [6, 10] estimates orientation tensors from the responses of a set of quadrature filters in different directions.

This paper presents a novel method to estimate orientation tensors, based on weighted local projection of the signal onto a polynomial basis. The latter means that for each neighborhood we find the best approximation of the signal by a second degree polynomial, in the sense that we minimize a weighted norm of the residual. The result is that for each point of the signal we have to solve a weighted linear least squares problem, but it turns out that in practice this can be done by means of convolutions. These convolutions can also be made separable, which leads to a very efficient implementation.

^1 In practice this is just a symmetric matrix and we use ordinary matrix algebra in the computations. The term tensor is used because it is established in this context.


The local signal model was developed within the framework of normalized convolution [12, 13, 3], which also allows analysis of uncertain signals, but this is outside the scope of this paper. In the simplified form presented here, the signal model is almost equivalent to Haralick's facet model [7, 8]. There is one crucial difference, however, and that is that we use a weighted least squares approximation. As is shown in Section 7, the use of a proper weighting function is critical for obtaining accurate orientation estimates.

2 Orientation Functionals

Most methods to estimate orientation tensors are guaranteed to produce rank one tensors, as in (1), only for signals which are indeed locally simple. For non-simple neighborhoods we usually obtain higher rank tensors. This is not a drawback, however, because the deviations from being rank one give additional information about the local structure of the signal.

Higher rank tensors can be analyzed, as in [6], by means of the eigenvalue decomposition, which can be written as

    T = \sum_k \lambda_k \hat{e}_k \hat{e}_k^T,    (2)

where \lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_N are the eigenvalues and \{\hat{e}_k\} are the corresponding eigenvectors. In 3D, e.g., this can be rewritten as

    T = (\lambda_1 - \lambda_2) \hat{e}_1 \hat{e}_1^T + (\lambda_2 - \lambda_3)(\hat{e}_1 \hat{e}_1^T + \hat{e}_2 \hat{e}_2^T) + \lambda_3 I.    (3)

The intuitive interpretation is that the tensor is represented as a linear combination of three tensors. The first one corresponds to a simple neighborhood, i.e. locally planar, the second to a rank two neighborhood, i.e. locally constant on lines, and the last term corresponds to an isotropic neighborhood, e.g. non-directed noise.
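The decomposition (3) is easy to compute numerically; the following sketch (illustrative, not from the paper) splits an estimated 3D tensor into its planar, linear and isotropic parts:

```python
import numpy as np

def decompose_3d_tensor(T):
    """Split a 3D orientation tensor according to (3) into a planar
    (simple neighborhood), a linear (constant on lines) and an
    isotropic part, using eigenvalues l1 >= l2 >= l3."""
    w, V = np.linalg.eigh(T)
    order = np.argsort(w)[::-1]              # decreasing eigenvalue order
    (l1, l2, l3), E = w[order], V[:, order]
    e1e1 = np.outer(E[:, 0], E[:, 0])
    e2e2 = np.outer(E[:, 1], E[:, 1])
    planar = (l1 - l2) * e1e1
    linear = (l2 - l3) * (e1e1 + e2e2)
    isotropic = l3 * np.eye(3)
    return planar, linear, isotropic         # the three parts sum back to T
```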

To obtain a stricter interpretation of the relation between higher rank tensors and the geometry of non-simple signals, we introduce a new concept called orientation functionals, of which orientation tensors are a subclass.

Let U denote the set of unit vectors in R^N,

    U = \{ u \in \mathbb{R}^N ; \|u\| = 1 \}.    (4)

An orientation functional φ is a mapping

    \phi : U \to \mathbb{R}_+ \cup \{0\}    (5)

that to each direction vector assigns a non-negative real value. The value is interpreted as a measure of how well the signal locally is consistent with an orientation hypothesis in the given direction, i.e. how much the signal varies along the direction. Since we do not distinguish between two opposite directions, we require that φ be even, i.e. that

    \phi(-u) = \phi(u), \quad \text{for all } u \in U.    (6)

We also set some restrictions on the mapping from signal neighborhoods to the associated orientation functionals. The signal f is assumed to be expressed in a local coordinate system, so that we always discuss the local orientation at the origin.

1. Assume that the signal is rotated around the origin, so that f(x) is replaced by \tilde{f}(x) = f(Rx), where R is a rotation matrix. Then the orientation functional \tilde{\phi} associated to \tilde{f} should relate to φ by \tilde{\phi}(u) = \phi(Ru), i.e. be rotated in the same way. This relation should also hold for other orthogonal matrices R, characterized by R^T R = I.

2. In directions along which the signal is constant, φ should be zero.

3. For a simple signal in the direction n̂, φ should have its maximum value for n̂ and −n̂. It should also decrease monotonically as the angle to the closer of these two directions increases.

4. If a constant is added to the signal, φ should not change, i.e. the orientation functional should be invariant to the DC level.

5. If the signal is multiplied by a positive constant α, \tilde{f}(x) = \alpha f(x), the new orientation functional should be proportional to the old one, \tilde{\phi}(u) = \beta \phi(u), where the positive constant β is not necessarily equal to α but should vary monotonically with α.


6. If the signal values are negated, φ should remain unchanged.

To transform an orientation tensor into an orientation functional, we use the construction

    \phi_T(u) = u^T T u.    (7)

Hence the orientation tensors are the subclass of orientation functionals which are positive semidefinite quadratic forms in u.
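In code, the functional induced by a tensor is a one-liner (an illustrative sketch):

```python
def phi_T(u, T):
    """Orientation functional of eq. (7): phi(u) = u^T T u.
    It is even, phi(-u) == phi(u), as required above."""
    return u @ T @ u
```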

3 Signal Model

For estimation of orientation we use the assumption that local projection onto second degree polynomials gives sufficient information. Thus we have the local signal model, expressed in a local coordinate system,

    f(x) \sim x^T A x + b^T x + c,    (8)

where A is a symmetric matrix, b a vector and c a scalar. To compute the parameters of the model we use a weighted least squares approximation of the local neighborhood by a linear combination of the monomials up to the second degree. In 2D this means that we have the (subspace) basis functions

    \{1, x, y, x^2, y^2, xy\}    (9)

and if we let r_1, \ldots, r_6 be the corresponding expansion coefficients, we have the relation

    c = r_1, \quad b = \begin{pmatrix} r_2 \\ r_3 \end{pmatrix}, \quad A = \begin{pmatrix} r_4 & r_6/2 \\ r_6/2 & r_5 \end{pmatrix}.    (10)

The generalization to higher dimensionalities is straightforward. The details of the least squares solutions are presented in Section 5.
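Relation (10) translates directly into code; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def model_params_2d(r):
    """Map the 2D expansion coefficients r1..r6 for the basis
    {1, x, y, x^2, y^2, xy} to the model parameters of eq. (8),
    following eq. (10)."""
    r1, r2, r3, r4, r5, r6 = r
    c = r1
    b = np.array([r2, r3])
    A = np.array([[r4,     r6 / 2],
                  [r6 / 2, r5    ]])
    return A, b, c
```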

The choice of weighting function can in principle be made arbitrarily, but as we will see in Section 7 it is important that it is isotropic. In Section 5 it is also shown that we can obtain a very efficient implementation under the requirement that the weighting function is Cartesian separable. These two requirements limit our choice to the Gaussians, but since they in practice work very well, this is not a problematic constraint.

Local orientation can be estimated for signal structures of different scale and the spatial extent of the weighting function is directly related to this scale.

We can notice that A captures information about the even part of the signal, excluding DC, that b captures information about the odd part of the signal, and that c varies with the local DC level. While the latter gives no information about orientation, it is necessary to include it in the signal model because otherwise the DC level would affect the computation of A.

4 Construction of the Orientation Tensor

Given the local signal model (8), we compute the orientation tensor T by the construction

    T = A A^T + \gamma b b^T,    (11)

where γ is a non-negative weight factor. A rationale for this construction can be obtained by studying ideal linear and quadratic neighborhoods. This is done in [3] but is omitted here due to space constraints. The factor γ, which decides the relative weight between the linear and quadratic parts of the signal, is unfortunately necessary and should be chosen with respect to the spatial extent of the weighting function and the scale of the structures we want to estimate orientation for.
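Construction (11) is likewise a one-liner given A and b (a sketch; the default for gamma is an arbitrary illustrative value, not a recommendation from the paper):

```python
import numpy as np

def orientation_tensor(A, b, gamma=0.1):
    """Tensor construction of eq. (11): T = A A^T + gamma * b b^T.
    gamma balances the quadratic part A against the linear part b."""
    return A @ A.T + gamma * np.outer(b, b)
```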

It is straightforward to show that this construction together with (7) gives an orientation functional satisfying the requirements in Section 2, provided that we have an isotropic weighting function and ignore the discretization errors. The details can be found in [3].

5 Implementation

Projection onto a local signal model, using the weighted least squares method, may seem like a computationally demanding operation. This is not the case, however. In practice the computations are reduced to simple convolutions and if the weighting function is Cartesian separable, the correlation kernels obtain this property too.

In this Section we assume that we have a discretized signal. We denote the basis functions of the local signal model b_1, \ldots, b_m, let a be the weighting function and f the signal values in a local neighborhood. All these are finite dimensional vectors, containing the values in the neighborhood in some fixed order. What we want to solve is the weighted linear least squares problem

    \arg\min_{r_1,\ldots,r_m} \| W (r_1 b_1 + \cdots + r_m b_m - f) \|,    (12)

where r_1, \ldots, r_m are the expansion coefficients and W^2 = \mathrm{diag}(a), so W is a diagonal matrix containing the square roots of the weights.^2 To simplify this expression we collect the basis vectors in the matrix

    B = \begin{pmatrix} b_1 & b_2 & \ldots & b_m \end{pmatrix},    (13)

and the expansion coefficients in the vector r = (r_1, \ldots, r_m)^T. Thus we can rewrite (12) as

    \arg\min_r \| W (B r - f) \|    (14)

with the well known solution

    r = (B^T W^2 B)^{-1} B^T W^2 f.    (15)

Now we introduce the dual subspace basis [3, 4]

    \tilde{B} = \begin{pmatrix} \tilde{b}_1 & \tilde{b}_2 & \ldots & \tilde{b}_m \end{pmatrix},    (16)

by the expression

    \tilde{B} = W^2 B G^{-1}, \quad G = B^T W^2 B.    (17)

With this we can compute the expansion coefficients as r = \tilde{B}^T f, or

    r_i = \tilde{b}_i^T f.    (18)

^2 This convention has historical reasons and turns out practical in the end.
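Equations (15), (17) and (18) can be checked with a few lines of numpy (a sketch; the columns of B are the basis vectors and a holds the weights of one neighborhood):

```python
import numpy as np

def dual_basis(B, a):
    """Dual subspace basis of eq. (17): B_tilde = W^2 B G^{-1} with
    G = B^T W^2 B and W^2 = diag(a). Multiplication by diag(a) is
    realized as elementwise scaling of the rows of B."""
    G = B.T @ (a[:, None] * B)
    return (a[:, None] * B) @ np.linalg.inv(G)

# Per eq. (18), the coefficients for a neighborhood f are single inner
# products with the dual basis vectors: r = dual_basis(B, a).T @ f,
# which agrees with the direct least squares solution (15).
```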

The interpretation of (18) is that each expansion coefficient at a given point can be computed from a single inner product between a dual basis vector and the signal values in the neighborhood. Assuming that we have the signal values on a regular grid, this means that we can compute a given r_i everywhere by convolving the signal with the dual basis vector b̃_i as convolution kernel.^3 Notice that we do not have to compute r_1, since this coefficient is not used in the tensor construction.

Basis functions in 2D and dual basis functions when using a Gaussian weighting function can be found in Figures 1 and 2 respectively.

[Figure 1: Basis functions in 2D. Panels: 1, x, y, x^2, y^2, xy.]

[Figure 2: Dual basis functions in 2D. Panels: 1, x, y, x^2, y^2, xy.]

If we restrict ourselves to Cartesian separable weighting functions, we can make further improvements. To see this, we need to study (17) in further detail. The expression for the dual basis functions can be rewritten

    \begin{pmatrix} \tilde{b}_1 & \ldots & \tilde{b}_m \end{pmatrix} = \begin{pmatrix} a \cdot b_1 & \ldots & a \cdot b_m \end{pmatrix} G^{-1},    (19)

where a · b denotes component-wise multiplication of two vectors.

^3 To be exact we obtain correlation kernels this way, which need to be reflected before they can be used as convolution kernels. In order to simplify the presentation we ignore this distinction.

[Figure 3: Convolver structure. There is understood to be a Gaussian factor in each box as well.]

From this we can conclude that instead of convolving with b̃_i, we may convolve with a · b_i and then use G^{-1} to transform the convolution results into expansion coefficients. Since the basis functions are monomials they are obviously Cartesian separable and together with a separable a we have a · b_i separable and thus the convolutions become separable as well. The significance of this is that we only need to compute a number of 1D convolutions, which reduces the computational complexity drastically, especially in 3D and higher. It can also be shown [3] that G^{-1} turns out to be very sparse. In fact all dual basis vectors except b̃_1 become themselves separable.

The final step to get an efficient convolver structure is to notice that the decompositions into 1D convolution kernels have a lot of common factors. Figure 3 shows how the convolutions for 3D can be structured hierarchically in three levels, where the first level contains convolutions in the x direction, the second in the y direction, and the third in the z direction. The results are the convolutions \{((a \cdot b_k) * f)(x)\} and the desired expansion coefficients are then computed by transformation according to the precomputed G^{-1}. This convolver structure can straightforwardly be extended to any dimensionality.
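To illustrate the scheme, here is a 2D sketch of the separable correlator with a Gaussian weighting function (assumes scipy; the kernel size and σ are arbitrary choices, this is an illustration of the structure rather than the paper's Matlab/C implementation, and per footnote 3 the correlation/convolution distinction is ignored):

```python
import numpy as np
from scipy.ndimage import correlate1d

def poly_exp_2d(f, sigma=1.2, n=4):
    """Separable 2D polynomial expansion with Gaussian applicability.
    Correlate with the separable kernels a*b_k (cf. eq. 19 and Figure 3),
    then map the results through G^{-1} to get the expansion coefficients
    for the basis {1, x, y, x^2, y^2, xy} at every pixel."""
    x = np.arange(-n, n + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))      # 1D Gaussian applicability factor
    k = [g, x * g, x**2 * g]                # 1D kernels for degrees 0, 1, 2

    def corr(ky, kx):                       # separable pass: x (axis 1), then y (axis 0)
        return correlate1d(correlate1d(f, kx, axis=1), ky, axis=0)

    # Six separable correlations; a tuned version would reuse the three
    # x-direction passes across branches, as in the Figure 3 structure.
    v = np.stack([corr(k[0], k[0]),         # 1
                  corr(k[0], k[1]),         # x
                  corr(k[1], k[0]),         # y
                  corr(k[0], k[2]),         # x^2
                  corr(k[2], k[0]),         # y^2
                  corr(k[1], k[1])],        # xy
                 axis=-1)

    # G = B^T W^2 B over the window (eq. 17), computed once.
    X, Y = np.meshgrid(x, x)
    a = np.exp(-(X**2 + Y**2) / (2 * sigma**2)).ravel()
    B = np.stack([np.ones_like(X), X, Y, X**2, Y**2, X * Y],
                 axis=-1).reshape(-1, 6)
    G = B.T @ (a[:, None] * B)
    return v @ np.linalg.inv(G).T           # r = G^{-1} v at each pixel
```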

When the expansion coefficients are computed, it remains to construct orientation tensors according to (11). This step is trivial.

Method    Time          Memory
C         (d^2/2) n^d   d^2/2
SC        (d^3/6) n     d^2/2

Table 1: Asymptotic complexities, d and n large, leading terms.

Method    Time (d = 2)   Time (d = 3)   Memory (d = 2)   Memory (d = 3)
C         5n^2 + 12      9n^3 + 30      6                10
SC        9n + 19        19n + 42       6                10

Table 2: Time complexity and memory overhead for 2D and 3D.

6 Computational Complexity

In this analysis we distinguish between computation of expansion coefficients by convolution (C) according to (18) and separable convolution (SC) using the convolver structure in Figure 3.

Let d be the dimensionality of the signal space and n the spatial extent of the weighting function per dimension. Independent of method we have (d+1)(d+2)/2 basis functions and the computation of the tensor from the expansion coefficients requires d(d+1)(d+2)/2 multiplications and slightly fewer additions per point. In general we count the complexity per computed tensor and only the number of multiplications involved; usually there is a slightly lower number of additions as well. This is consistent with the traditional count of coefficients for convolution kernels. Without going into details we present the asymptotic complexities, for both d and n large, in Table 1. Memory overhead is measured in floating point values per tensor to be computed.

Usually, however, we are more interested in small values of d rather than in large values. A more detailed estimation of the complexity for 2D and 3D can be found in Table 2. The first term of the time complexities is the total number of coefficients involved in the convolution kernels, while the second is the count for the tensor construction from the convolution results.

[Figure 4: Slices from the 64-cube test volume. (a) slice 5, (b) slice 14, (c) slice 21, (d) slice 32, (e) SNR 10 dB, (f) SNR 0 dB.]

The separable convolution method has been implemented in Matlab, with convolution as a C mex-file. On a 440 MHz SUN Ultra 10, computation of 3D orientation tensors for a 64 × 64 × 64 volume with effective filter size 15 × 15 × 15 takes 7.5 seconds.

7 Evaluation

The tensor estimation algorithm has been evaluated on a 3D test volume consisting of concentric spherical shells. The volume is 64 × 64 × 64 and selected slices are displayed in Figure 4(a)–(d). Except at the center of the volume the signal is locally planar and all possible orientations are present. As in [1, 11] the performance of the tensors is measured by an angular RMS error

    \Delta\phi = \arccos\left( \sqrt{ \frac{1}{L} \sum_{l=1}^{L} (\hat{x}^T \hat{e}_1)^2 } \right),    (20)

where x̂ is a unit vector in the correct orientation, ê_1 is the eigenvector corresponding to the largest eigenvalue of the estimated tensor, and L is the number of points. To avoid border effects and irregularities at the center of the volume, the sum is only computed for points at a radius between 0.16 and 0.84, with respect to normalized coordinates.
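In code, (20) is a few lines (a sketch; the inputs are illustrative (L, 3) arrays of unit vectors):

```python
import numpy as np

def angular_rms_error(x_true, e1):
    """Angular RMS error of eq. (20). x_true and e1 hold, per evaluation
    point, the correct orientation and the principal eigenvector of the
    estimated tensor, both as unit vectors. Returns degrees."""
    dots = np.sum(x_true * e1, axis=1)                 # x^T e1 per point
    return np.degrees(np.arccos(np.sqrt(np.mean(dots**2))))
```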

The algorithm has also been tested on degraded versions of the test volume, where white noise has been added to obtain a signal to noise ratio of 10 dB and 0 dB respectively. One slice of each of these is shown in Figure 4(e)–(f).

[Figure 5: Weighting functions. (a) Cube 3 × 3 × 3, (b) Cube 5 × 5 × 5, (c) Cube 7 × 7 × 7, (d) Sphere, radius 3.5, (e) Sphere, oversampled, (f) Cone, radius 4, (g) Cone, oversampled, (h) Tent, oversampled, (i) Gaussian, σ = 1.2.]

As we mentioned in Section 4, isotropy is a theoretically important property of the weighting function. To test this in practice a number of different weighting functions have been evaluated. The test set consists of:

• Cubes of three different sizes, with sides being 3, 5, and 7 pixels wide.

• A sphere of radius 3.5 pixels.

• The same sphere but oversampled, i.e. sampled regularly at 10 × 10 points per pixel and then averaged. The result is a removal of jaggies at the edges and a more isotropic weighting function (see the sketch after this list).

• A 3D cone of radius 4 pixels.

• The same cone oversampled.

• A "tent" shape, 8 pixels wide, oversampled.

• A Gaussian with standard deviation 1.2, sampled at 9 × 9 × 9 points.

These are illustrated in Figure 5 in the form of their 2D counterparts.
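One way to realize the oversampling described above (a sketch; the function name and parameters are illustrative, and the paper does not spell out this exact procedure):

```python
import numpy as np

def oversampled_sphere(radius=3.5, size=9, oversampling=10):
    """Spherical weighting function with oversampled (antialiased) edges:
    sample the sphere's indicator function at oversampling^3 subvoxel
    positions per voxel and average, which removes the jaggies."""
    zyx = np.mgrid[0:size, 0:size, 0:size].astype(float)
    z, y, x = zyx - (size - 1) / 2.0        # voxel centers around the origin
    w = np.zeros((size, size, size))
    sub = (np.arange(oversampling) + 0.5) / oversampling - 0.5
    for dz in sub:
        for dy in sub:
            for dx in sub:
                r = np.sqrt((x + dx)**2 + (y + dy)**2 + (z + dz)**2)
                w += (r <= radius)          # inside-sphere indicator
    return w / oversampling**3
```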

The results are listed in Table 3 and we can see that the cube and tent shapes, which are highly anisotropic,^4 perform significantly worse than the more isotropic ones. This is of particular interest since the cube weighting functions correspond to the naive use of an unweighted subspace projection over a rectangular neighborhood.

shape                  ∞        10 dB    0 dB
cube 3 × 3 × 3         3.74°    7.27°    24.06°
cube 5 × 5 × 5         13.50°   14.16°   18.48°
cube 7 × 7 × 7         22.99°   23.57°   27.30°
sphere                 6.69°    8.20°    15.34°
sphere, oversampled    0.85°    5.78°    14.30°
cone                   1.39°    6.10°    13.89°
cone, oversampled      0.28°    5.89°    14.13°
tent, oversampled      21.38°   21.86°   25.16°
Gaussian, σ = 1.2      0.17°    3.53°    10.88°

Table 3: Evaluation of different weighting functions (angular RMS error at three noise levels).

The main reason that the cubes are anisotropic is that they extend farther into the corners than along the axes. The spheres and the cones eliminate this phenomenon by being cut off at some radius. Still there is a marked improvement in isotropy when these shapes are oversampled, which can clearly be seen in the results from the noise-free volume. The difference between the spheres and the cones is that the latter have a gradual decline in the importance given to points farther away from the center. We can see that this makes a difference, primarily when there is no noise, but that the significance of isotropy is much larger can clearly be seen from the poor results of the tent shape.

The Gaussian, finally, turns out to have superior performance, which is fortunate considering that this shape is separable and can be used with the convolver structure in Figure 3. The effects of varying the values of the design parameters σ (standard deviation of the Gaussian weighting function) and γ (relative weight between linear and quadratic parts of the tensors) can be found in [3] but are omitted here. The most interesting result is that neither the linear part nor the quadratic is very good on its own. The significance of this comes from the fact that only using the linear part can be shown to be roughly equivalent to computing tensors as outer products of gradients.

^4 The 3 × 3 × 3 cube is actually too small to be significantly anisotropic.

n     ∞        10 dB    0 dB     kernel coeff.   total operations
3     0.87°    6.69°    23.18°   57              99
5     0.37°    3.51°    10.70°   85              127
7     0.15°    3.05°    10.26°   133             175
9     0.11°    3.03°    10.24°   171             213
11    0.11°    3.03°    10.24°   209             251

Table 4: Smallest angular errors for different kernel sizes.

           Andersson et al.   Farnebäck
SNR        345 coeff.         171 coeff.
∞          0.76°              0.11°
10 dB      3.02°              3.03°
0 dB       9.35°              10.24°

Table 5: Comparison with Andersson, Wiklund & Knutsson [1, 11].

Table 4 lists the best results obtained for different extents of the weighting function. All computations have been made with the separable algorithm and σ and γ have been tuned for each weighting function size, n.

The results for 9 × 9 × 9 weighting functions, and equivalently kernels of the same effective size, can readily be compared to the results given in [1, 11] for a sequential filter implementation of a quadrature filter based estimation algorithm. As we can see in Table 5, the algorithm proposed in this paper performs quite favorably in the absence of noise while being somewhat more noise sensitive. Additionally it uses only half the number of kernel coefficients.

Further proof of the usefulness of this method can be found in [3, 5], where it is used as the basis for a velocity estimation algorithm, with very good results. Another application is detection of rotational symmetries [9].


8 Conclusions

We have introduced a novel concept called orientation functionals, which generalizes orientation tensors. We have also described a novel method to estimate orientation tensors from local projections of the signal onto quadratic polynomials. It has been demonstrated that it is very important that this projection be done with a proper weighting function. Finally it has also been shown how the estimation method can very efficiently be implemented by means of separable convolutions, for signals of any dimensionality, and that the method gives very accurate orientation estimates.

Acknowledgments

The author wants to acknowledge the financial support of WITAS: the Wallenberg Laboratory for Information Technology and Autonomous Systems.

References

[1] M. Andersson, J. Wiklund, and H. Knutsson. Sequential Filter Trees for Efficient 2D, 3D and 4D Orientation Estimation. Report LiTH-ISY-R-2070, ISY, SE-581 83 Linköping, Sweden, November 1998.

[2] J. Bigün. Local Symmetry Features in Image Processing. PhD thesis, Linköping University, Sweden, 1988. Dissertation No 179, ISBN 91-7870-334-4.

[3] G. Farnebäck. Spatial Domain Methods for Orientation and Velocity Estimation. Lic. Thesis LiU-Tek-Lic-1999:13, Dept. EE, Linköping University, SE-581 83 Linköping, Sweden, March 1999. Thesis No. 755, ISBN 91-7219-441-3.

[4] G. Farnebäck. A unified framework for bases, frames, subspace bases, and subspace frames. In Proceedings of the 11th Scandinavian Conference on Image Analysis, pages 341–349, Kangerlussuaq, Greenland, June 1999. SCIA.

[5] G. Farnebäck. Fast and accurate motion estimation using orientation tensors and parametric motion models. In Proceedings of the 15th International Conference on Pattern Recognition, Barcelona, Spain, September 2000. IAPR.

[6] G. H. Granlund and H. Knutsson. Signal Processing for Computer Vision. Kluwer Academic Publishers, 1995. ISBN 0-7923-9530-1.

[7] R. M. Haralick and L. G. Shapiro. Computer and Robot Vision, volume 2. Addison-Wesley, 1993.

[8] R. M. Haralick and L. Watson. A facet model for image data. Computer Graphics and Image Processing, 15(2):113–129, February 1981.

[9] B. Johansson and G. Granlund. Fast selective detection of rotational symmetries using normalized inhibition. In Proceedings of ECCV-2000, June 2000.

[10] H. Knutsson. Representing local structure using tensors. In The 6th Scandinavian Conference on Image Analysis, pages 244–251, Oulu, Finland, June 1989. Report LiTH-ISY-I-1019, Computer Vision Laboratory, Linköping University, Sweden, 1989.

[11] H. Knutsson and M. Andersson. Optimization of Sequential Filters. In Proceedings of the SSAB Symposium on Image Analysis, pages 87–90, Linköping, Sweden, March 1995. SSAB.

[12] H. Knutsson and C-F. Westin. Normalized and Differential Convolution: Methods for Interpolation and Filtering of Incomplete and Uncertain Data. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 515–523, New York City, USA, June 1993. IEEE.

[13] C-F. Westin. A Tensor Framework for Multidimensional Signal Processing. PhD thesis, Linköping University, SE-581 83 Linköping, Sweden, 1994. Dissertation No 348, ISBN 91-7871-421-4.
