
Representing Multiple Orientations in 2D

with Orientation Channel Histograms

Björn Johansson

November 4, 2002

Technical report LiTH-ISY-R-2475 ISSN 1400-3902

Computer Vision Laboratory, Department of Electrical Engineering
Linköping University, SE-581 83 Linköping, Sweden
bjorn@isy.liu.se

Abstract

The channel representation is a simple yet powerful representation of scalars and vectors. It is especially suited for representation of several scalars at the same time without mixing them up.

This report is partly intended to serve as a simple illustration of the channel representation. The report shows how the channels can be used to represent multiple orientations in two dimensions. The idea is to make a channel representation of the local orientation angle computed from the image gradient. The representation basically becomes an orientation histogram with overlapping bins.

The channel histogram is compared with the orientation tensor, which is another representation of orientation. The performance is comparable to that of the tensors in the simple signal case, but decreases slightly for an increasing number of channels. The channel histogram outperforms the tensors on non-simple signals.


Contents

1 Introduction
2 The Channel Representation
2.1 Channel encoding
2.2 Channel decoding
3 Orientation representations
3.1 Gradient
3.2 Averaged Double angle
3.3 Orientation Tensor
3.4 Orientation Channel Histogram
4 Experiments
4.1 Experiment 1
4.2 Experiment 2
5 Summary and discussion


1 Introduction

Lately, the channel representation has been explored as a representation of scalars and vectors in a number of applications, e.g. edge detection [1], color image representation and adaptive filtering [7, 3], and learning recognition tasks [6, 9, 8]. This report shows how the channel representation can be used to represent multiple orientations in two dimensions. The idea is to make a channel representation of the local orientation angle computed from the image gradient. The representation basically becomes an orientation histogram with overlapping bins. A similar idea has been discussed in [4], where a global channel representation of the image orientation is used to find global image rotations.

The channel representation is compared with the orientation tensor, which is another representation of orientation. These tensors can be computed using different algorithms. One of them, sometimes called the structure tensor, is based on the image gradient. We will show that the orientation channel histograms are a generalization of this tensor.

Section 2 describes the channel representation for modular scalars. The tensor and the channel histograms are described in section 3. Section 4 compares the tensor and the channel histograms on some test images containing simple and non-simple signals. The results are discussed in section 5.

2 The Channel Representation

The channel representation is a monopolar information representation, where each unit of data is called a channel. Each channel value is the output from a kernel function, H_k(s), possibly weighted with some relevance measure. The kernel functions are typically localized in the input space, i.e. they have a zero value outside some local region of the input space. Monopolar means that the kernel function is non-negative.

We will restrict the discussion to representation of modular variables. Assume s is a modular variable between 0 and 2π. One example of a kernel function is the cos²-basis function,

$$H_c(s) = \begin{cases} \cos^2\!\big(\omega(s - c)\big) & \text{if } \omega\, d(s, c) \le \frac{\pi}{2} \\ 0 & \text{otherwise,} \end{cases} \qquad (1)$$

where c is the kernel center, ω is the kernel bandwidth, and d is the distance modulo 2π, i.e.

$$d(s, c) = \min_{n \in \mathbb{Z}} |s - c + 2\pi n|. \qquad (2)$$


Figure 1: Illustration of the kernel functions for K = 3 (kernel centers at 0, 2π/3, and 4π/3).

2.1 Channel encoding

A modular variable s is represented, or encoded, by a monopolar vector with channel values,

$$s \;\mapsto\; \mathbf{v}(s) = \begin{pmatrix} v_1(s) \\ v_2(s) \\ \vdots \\ v_K(s) \end{pmatrix} = \begin{pmatrix} H_{c_1}(s) \\ H_{c_2}(s) \\ \vdots \\ H_{c_K}(s) \end{pmatrix}, \qquad (3)$$

where K is the number of channels. Assume that the kernel centers are evenly spread, i.e. c_k = 2π(k−1)/K. Also assume that ω = K/(2N) (i.e. the kernels overlap such that N of them are non-zero at each point). This is illustrated in figure 1 for the case K = 3 and N = 3. It can be shown that we have the properties

$$\sum_k H_{c_k}(s) = \frac{N}{2} \quad \text{and} \quad \sum_k H_{c_k}^2(s) = \frac{3N}{8}. \qquad (4)$$

(For a proof, see [4].)

The channel representation has the advantage that several values can be represented in the same channel vector without interference, as long as the distance between the values is sufficiently large. For example, if we choose K = 8, s₁ = 0, and s₂ = 7π/8 we get

$$\mathbf{v}(s_1) + \mathbf{v}(s_2) = \begin{pmatrix} 1 \\ 0.25 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0.25 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0.75 \\ 0.75 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0.25 \\ 0 \\ 0.75 \\ 0.75 \\ 0 \\ 0 \\ 0.25 \end{pmatrix}. \qquad (5)$$

This channel vector is also illustrated in figure 2, where the kernel functions have been multiplied with their output values. Hence, it should be possible to extract both s₁ and s₂ from the vector if we apply a local decoding algorithm.

It should be noted that we get interference between the two scalars if they come too close to each other. The number of channels and the channel overlap thus set a limit on how many scalars we can represent simultaneously. The theoretical issues of interference distance and descriptive power of the channel representation will not be discussed further here.

Figure 2: Example of the channel representation of two values.
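To make the encoding concrete, a minimal numpy sketch of (3)-(5) could look as follows (the function name and the choice K = 8, N = 3 are only for illustration):

```python
import numpy as np

def channel_encode(s, K=8, N=3):
    """Encode a modular scalar s in [0, 2*pi) into K cos^2 channels with overlap N."""
    centers = 2 * np.pi * np.arange(K) / K             # evenly spread kernel centers c_k
    omega = K / (2 * N)                                 # kernel bandwidth
    d = np.abs(np.angle(np.exp(1j * (s - centers))))    # distance modulo 2*pi to each center
    return np.where(omega * d <= np.pi / 2, np.cos(omega * d) ** 2, 0.0)

# Two well-separated scalars can share one channel vector, cf. (5):
v = channel_encode(0.0) + channel_encode(7 * np.pi / 8)
print(np.round(v, 2))   # approximately [1, 0.25, 0, 0.75, 0.75, 0, 0, 0.25]
```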

2.2 Channel decoding

The decoding algorithm used here is adopted from [4]. The idea is to compute an estimate in each local region of the channel representation. For each group of N neighboring kernel values we compute an estimate of s,

$$\hat{s}_l = c_l + \frac{1}{2\omega}\,\arg\!\left(\sum_{k=l}^{l+N-1} v_k\, e^{\,i\,2\omega(c_k - c_l)}\right). \qquad (6)$$

The estimate is rejected if it falls outside the valid range [c_l + (N−2)π/K, c_l + Nπ/K] (the central interval covered by all N kernels in the group); otherwise we compute a relevance measure r_l as a local sum of channels,

$$r_l = \sum_{k=l}^{l+N-1} v_k. \qquad (7)$$

At most we get as many estimates as there are channels, but many of them may be disregarded because they fall outside the valid range or because their relevance is too small. The relevance measure is also used to find the most dominant scalar.
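A corresponding sketch of the local decoding, under the same assumptions and using the reconstruction of (6)-(7) above (channel_encode is the sketch from section 2.1), could look as follows:

```python
import numpy as np

def channel_decode(v, N=3, min_relevance=1e-6):
    """Local decoding of a cos^2 channel vector: one candidate estimate per group of N channels."""
    K = len(v)
    centers = 2 * np.pi * np.arange(K) / K
    omega = K / (2 * N)
    spacing = 2 * np.pi / K
    estimates = []
    for l in range(K):                                     # groups wrap around (s is modular)
        idx = (l + np.arange(N)) % K
        z = np.sum(v[idx] * np.exp(1j * 2 * omega * spacing * np.arange(N)))
        s_hat = (centers[l] + np.angle(z) / (2 * omega)) % (2 * np.pi)
        r = np.sum(v[idx])                                 # relevance, eq. (7)
        # accept only estimates inside the central interval covered by all N kernels
        lo = (centers[l] + (N - 2) * np.pi / K) % (2 * np.pi)
        if (s_hat - lo) % (2 * np.pi) < spacing and r > min_relevance:
            estimates.append((s_hat, r))
    return sorted(estimates, key=lambda e: -e[1])          # most relevant first

v = channel_encode(0.0) + channel_encode(7 * np.pi / 8)
print(channel_decode(v))   # candidate (orientation, relevance) pairs; both 7*pi/8 and 0 are recovered
```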

Comment:

The choice of kernel function should not be a crucial issue as long as there exists a simple decoding algorithm. We can for instance choose Gaussians or splines instead of the cos²-basis function. Decoding strategies for these two choices can be found in [5] and [3] respectively. We choose cos² here because the channel histograms can then be viewed as a generalization of the gradient tensor.

3 Orientation representations

We will now describe the orientation channel histograms. They will be compared with the gradient and the structure tensor. Their behavior for simple and non-simple signals will be illustrated with a simple image containing two simple regions with different orientations, see the left image in figure 3. Further evaluations are found in the experiments in section 4.


Figure 3: Left: Simple image I. Middle: A sample of image gradients ∇I. Right: Locally averaged double angle representation z̄ of the gradient.

3.1 Gradient

The gradient is computed by a differentiated Gaussian. The result on the illustration image is found in the middle image in figure 3.

The gradient varies in magnitude but gives the correct orientation for simple signals (except where ∇I = 0). The orientations are mixed for non-simple signals.
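A minimal sketch of such a differentiated-Gaussian gradient, using scipy's separable Gaussian derivative filters (sigma is a tuning parameter, not a value prescribed by the report):

```python
from scipy.ndimage import gaussian_filter1d

def gradient(I, sigma=1.0):
    """Image gradient from separable derivative-of-Gaussian filters."""
    Ix = gaussian_filter1d(gaussian_filter1d(I, sigma, axis=1, order=1), sigma, axis=0, order=0)
    Iy = gaussian_filter1d(gaussian_filter1d(I, sigma, axis=0, order=1), sigma, axis=1, order=0)
    return Ix, Iy
```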

3.2 Averaged Double angle

We may want to average the orientation over a local region to get a more robust estimate. Obviously we cannot average the gradient directly. Instead we can for example average the double angle representation z. The double angle can for instance be computed as

$$z = |\nabla I|^2\, e^{\,i 2\angle\nabla I}, \qquad (8)$$

and we get an average as

$$\bar{z} = \int w(\mathbf{x})\, |\nabla I(\mathbf{x})|^2\, e^{\,i 2\angle\nabla I(\mathbf{x})}\, d\mathbf{x}, \qquad (9)$$

where w is a Gaussian function serving as a spatial window. The result on the illustration image is found in the right image in figure 3.

The magnitude of z̄ is high for simple signals with high energy. We cannot distinguish between a signal with multiple orientations and a signal with low energy; we get a low magnitude |z̄| in both cases. Again we get interference for non-simple signals.
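For illustration, a sketch of the averaged double angle (8)-(9), reusing the gradient sketch above (sigma_w is the standard deviation of the Gaussian window w, an assumed parameter):

```python
from scipy.ndimage import gaussian_filter

def averaged_double_angle(Ix, Iy, sigma_w=2.0):
    """Locally averaged double-angle representation, cf. (9)."""
    z = (Ix + 1j * Iy) ** 2                  # |grad I|^2 * exp(i * 2 * angle(grad I)), eq. (8)
    # Gaussian spatial window w, applied to real and imaginary parts separately
    return gaussian_filter(z.real, sigma_w) + 1j * gaussian_filter(z.imag, sigma_w)
```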

3.3 Orientation Tensor

A more powerful representation than the double angle is the orientation tensor. There exist several methods to construct an orientation tensor in practice. A short overview and further references can be found in [11]. They are constructed from different filter outputs. The first one is based on the image gradient and is sometimes called the structure tensor. It is computed from the outer product of the image gradient,

$$T_{\mathrm{grad}} = \int w(\mathbf{x})\, \nabla I(\mathbf{x})\, \nabla I(\mathbf{x})^T\, d\mathbf{x}. \qquad (10)$$

Figure 4: Geometric representation of a tensor (an ellipse with principal axes λ₁ê₁ and λ₂ê₂).

Figure 5: A sample of orientation tensors, represented as ellipses. Left: Gradient-based (structure) tensor. Middle: Quadrature-based tensor. Right: Polynomial-based tensor.

The second tensor is based on quadrature filter outputs and will be referred to as T_quad, and the third is based on polynomial filter outputs and is referred to as T_pol. The first one will be of most interest in this report. The tensors can be defined in higher dimensions than two, but we will focus on the 2D case. The tensor contains information about the dominant orientation and its orthogonal direction. We can write

$$T = \lambda_1 \hat{e}_1\hat{e}_1^T + \lambda_2 \hat{e}_2\hat{e}_2^T, \qquad (11)$$

where {λ_k, ê_k} is the eigensystem of T with λ₁ being the largest eigenvalue. Then ê₁ is interpreted as the dominant orientation of the signal, and λ₁ is proportional to the energy in the dominant direction (λ₂ is proportional to the energy in the orthogonal direction).

The tensor is sometimes represented by an ellipse, see figure 4. Figure 5 shows the result of computing the three different tensors on the illustration image. The three tensors behave slightly differently (partly depending on the respective tuning parameters), but their representational power is the same; we can determine if the signal is simple or non-simple, and we can compute a dominant direction, which may be difficult to interpret for non-simple signals.

We can get another representation of the tensor, instead of the geometric one, if we take the view that the orientation tensor is an instance of the concept of orientation functionals. An orientation functional is a mapping from the set of unit vectors to a set of non-negative real values. The value is interpreted as a measure of how well the signal is locally consistent with an orientation hypothesis in the given direction v̂. This idea was introduced in [2].

Figure 6: Same tensors as in figure 5, represented by their corresponding orientation functionals. Left: Gradient-based (structure) tensor. Middle: Quadrature-based tensor. Right: Polynomial-based tensor.

The orientation functional based on the gradient is

$$\phi(\hat{v}) = \int w(\mathbf{x})\,\big(\nabla I(\mathbf{x}) \cdot \hat{v}\big)^2\, d\mathbf{x}, \qquad (12)$$

where w is a Gaussian function serving as a spatial window. φ can be rewritten as

$$\phi(\hat{v}) = \hat{v}^T \left(\int w\, \nabla I\, \nabla I^T\, d\mathbf{x}\right) \hat{v} = \hat{v}^T T_{\mathrm{grad}}\, \hat{v}. \qquad (13)$$

T_pol can be derived using the same functional where the gradient is computed from a local polynomial model, and T_quad can be derived using a similar functional in the Fourier domain. We can now illustrate the orientation tensors by drawing their corresponding orientation functionals (12), see figure 6. The reason for this change in representation is that it then becomes easier to relate the orientation tensors to the orientation channel histograms described below.
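As an illustration of (10)-(13), a per-pixel structure tensor and its orientation functional might be sketched as follows (function names and sigma_w are only illustrative; the dominant orientation uses the standard double-angle form of the principal eigenvector):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(Ix, Iy, sigma_w=2.0):
    """T_grad in (10): Gaussian-windowed outer products of the gradient, per pixel."""
    Txx = gaussian_filter(Ix * Ix, sigma_w)
    Txy = gaussian_filter(Ix * Iy, sigma_w)
    Tyy = gaussian_filter(Iy * Iy, sigma_w)
    return Txx, Txy, Tyy

def orientation_functional(Txx, Txy, Tyy, theta):
    """phi(v) = v^T T v for v = (cos theta, sin theta), cf. (13)."""
    c, s = np.cos(theta), np.sin(theta)
    return Txx * c * c + 2 * Txy * c * s + Tyy * s * s

def dominant_orientation(Txx, Txy, Tyy):
    """Angle of the eigenvector e_1 belonging to the largest eigenvalue."""
    return 0.5 * np.arctan2(2 * Txy, Txx - Tyy)
```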

We can elaborate on the functional idea somewhat more. Let

$$D_{\hat{v}}(\nabla I) = (\nabla I \cdot \hat{v})^2 = |\nabla I|^2\, (\widehat{\nabla I} \cdot \hat{v})^2. \qquad (14)$$

D_v̂(∇I) can be interpreted as a direction sensitive function of ∇I, multiplied with the relevance measure |∇I|². D_v̂(∇I) can also be interpreted as a kernel function, even if it is not especially local. Figure 7 illustrates this kernel function. Let {n̂_k}, k = 1, ..., K, K ≥ 3, denote a set of fixed unit vectors, e.g.

$$\hat{n}_k = \begin{pmatrix} \cos\frac{2\pi(k-1)}{K} \\ \sin\frac{2\pi(k-1)}{K} \end{pmatrix}, \quad k = 1, \ldots, K. \qquad (15)$$

Furthermore, let N_k = n̂_k n̂_kᵀ, and let {Ñ_k} denote the dual basis to {N_k}. Dual basis theory then gives

$$\hat{v}\hat{v}^T = \sum_k \alpha_k\, \hat{n}_k\hat{n}_k^T = \sum_k \alpha_k N_k, \quad \text{where } \alpha_k = \hat{v}\hat{v}^T \bullet \tilde{N}_k. \qquad (16)$$


Figure 7: D_v̂(û) in (14) as a function of ∠û.

We use this to rewrite D_v̂(∇I) as

$$D_{\hat{v}}(\nabla I) = (\nabla I \cdot \hat{v})^2 = \nabla I\nabla I^T \bullet \hat{v}\hat{v}^T = \nabla I\nabla I^T \bullet \sum_k \alpha_k \hat{n}_k\hat{n}_k^T = \sum_k \alpha_k \left(\nabla I\nabla I^T \bullet \hat{n}_k\hat{n}_k^T\right) = \sum_k \alpha_k (\nabla I \cdot \hat{n}_k)^2 = \sum_k \alpha_k D_k(\nabla I). \qquad (17)$$

Hence, D_v̂(∇I) can be computed from a fixed set of kernel functions (denoted D_k for simplicity). We rewrite (12) as

$$\phi(\hat{v}) = \int w\, D_{\hat{v}}(\nabla I)\, d\mathbf{x} = \sum_k \alpha_k \int w\, D_k(\nabla I)\, d\mathbf{x} = \begin{pmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_K \end{pmatrix} \begin{pmatrix} \int w D_1(\nabla I)\, d\mathbf{x} \\ \int w D_2(\nabla I)\, d\mathbf{x} \\ \vdots \\ \int w D_K(\nabla I)\, d\mathbf{x} \end{pmatrix} = \begin{pmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_K \end{pmatrix} \begin{pmatrix} \phi_1 \\ \phi_2 \\ \vdots \\ \phi_K \end{pmatrix}. \qquad (18)$$

This means that the entire orientation functional can be interpolated from the set of samples {φ_k}.
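A small sketch of the dual basis computation in (15)-(18), using a vectorization of symmetric 2x2 matrices and a pseudo-inverse to obtain the dual coordinates alpha_k (the helper names are made up here):

```python
import numpy as np

def sym2vec(M):
    """Vectorize a symmetric 2x2 matrix so that dot products equal Frobenius inner products."""
    return np.array([M[0, 0], M[1, 1], np.sqrt(2) * M[0, 1]])

def dual_basis_weights(v_hat, K=3):
    """alpha_k in (16): coordinates of v v^T in terms of the basis N_k = n_k n_k^T."""
    angles = 2 * np.pi * np.arange(K) / K
    n = np.stack([np.cos(angles), np.sin(angles)], axis=1)    # fixed unit vectors n_k, eq. (15)
    B = np.stack([sym2vec(np.outer(nk, nk)) for nk in n])     # vectorized basis N_k
    B_dual = np.linalg.pinv(B).T                              # rows correspond to the dual basis
    return B_dual @ sym2vec(np.outer(v_hat, v_hat))

# The functional can then be interpolated from the samples as in (18):
# phi_v = dual_basis_weights(v_hat, K) @ phi_samples
```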


Figure 8: An example of D_v̂(û) in (19) as a function of ∠û.

3.4 Orientation Channel Histogram

The kernel functions D_k above give a very crude orientation functional with little descriptive power. We can construct more descriptive orientation functionals if we exchange the kernel functions D_k for more local ones. We use the kernel function H_k in (1) and choose

$$D_{\hat{v}}(\nabla I) = |\nabla I|^2\, H_{2\angle\hat{v}}(2\angle\nabla I). \qquad (19)$$

Note that the angle of ∇I is doubled. Figure 8 shows an example of such a function. Also note that the previous tensor kernel function (14) is a special case of (19) where K = N and ω = 1/2 (i.e. full overlap): with ω = 1/2 we get H_{2∠v̂}(2∠∇I) = cos²(∠∇I − ∠v̂) = (∇̂I · v̂)². The structure tensor is therefore a special case of orientation channel histograms. We construct an orientation functional based on the new kernel function as

$$\phi(\hat{v}) = \int w\, D_{\hat{v}}(\nabla I)\, d\mathbf{x}. \qquad (20)$$

As before we compute a number of samples, φk, from the orientation functional,

$$\boldsymbol{\phi} = \begin{pmatrix} \phi_1 \\ \phi_2 \\ \vdots \\ \phi_K \end{pmatrix} = \begin{pmatrix} \int w D_1(\nabla I)\, d\mathbf{x} \\ \int w D_2(\nabla I)\, d\mathbf{x} \\ \vdots \\ \int w D_K(\nabla I)\, d\mathbf{x} \end{pmatrix}. \qquad (21)$$

Unfortunately, this time we cannot interpolate the entire functional φ(v̂) from the sample vector φ. But this is not necessary if we assume that the image region only contains a few, sufficiently separated, orientations. Then we can view the sample vector φ as a channel representation of the orientations, and use the local decoding scheme described in section 2.2 to find the orientations (e.g. the dominant orientation). Figure 9 gives an overview of the algorithm.
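Putting the pieces together, a sketch of the per-pixel channel histogram in (19)-(21) could look like this (gradient images Ix, Iy as in the sketches above; K, N, sigma_w are example values only). The per-pixel vector phi[..., :] can then be decoded with the scheme from section 2.2, remembering that the decoded values are doubled angles:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_channel_histogram(Ix, Iy, K=10, N=3, sigma_w=2.0):
    """Per-pixel channel histogram phi_k of the doubled gradient angle, cf. (19)-(21)."""
    mag2 = Ix * Ix + Iy * Iy                        # relevance weight |grad I|^2
    angle2 = 2.0 * np.arctan2(Iy, Ix)               # doubled gradient angle
    centers = 2.0 * np.pi * np.arange(K) / K
    omega = K / (2.0 * N)
    phi = np.empty(Ix.shape + (K,))
    for k in range(K):
        d = np.abs(np.angle(np.exp(1j * (angle2 - centers[k]))))   # modular distance
        Dk = np.where(omega * d <= np.pi / 2, mag2 * np.cos(omega * d) ** 2, 0.0)
        phi[..., k] = gaussian_filter(Dk, sigma_w)                  # spatial window w
    return phi
```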

Since we cannot compute the entire orientation functional we have to find another suitable way to illustrate the histogram representation. One way is simply to plot the sample vector. Another way is to compute an approximation, φ_approx(v̂), using some ad hoc interpolation method on φ, e.g. modular cubic interpolation. Figure 10 shows the result of computing the channel histograms and decoding them into orientations on the illustration image, using 10 channels with π/3 overlap. This is to be compared with the tensor result in figure 6.


Figure 9: Overview of the orientation channel histogram algorithm: image → gradient → channel encoding (D_k) → local averaging (φ_k) → channel decoding → orientation(s).

Figure 10: A sample of orientation channel histograms using 10 channels. Left: Sample vector φ (note that φ_k is sensitive to the double angle). Middle: Approximation of φ(v̂) using cubic interpolation on φ. Right: Decoded orientations from φ. The color of each orientation represents the relevance measure (dark means high relevance, white means low relevance, and orientations with zero relevance have been removed).


Figure 11: 64×64 test image for simple signals with different amounts of noise added (SNR = ∞, 10 dB, and 0 dB).

To summarize, the multiple orientation representation is basically a locally averaged orientation histogram, but we use overlapping bins of a type that gives the structure tensor as a special case (we need at least 3 channels in the tensor case). We also use a local decoding scheme that is optimal for this type of bin. It is fairly easy to show that the local decoding (6) gives the same result as finding the eigenvector with the largest eigenvalue in the structure tensor case. Also note that although the averaged double angle in section 3.2 is not a special case, it is still constructed from orientation sensitive functions, namely D₁ = |∇I|² cos 2∠∇I and D₂ = |∇I|² sin 2∠∇I.

4 Experiments

The channel representation is evaluated and compared with the gradient and the tensor on two test images. The goal is to find the dominant orientation in each region of the image. The first experiment evaluates the performance on a simple signal. We only represent one scalar in this case, and it is therefore sufficient to use the double angle representation or the tensor representation. But we still include this experiment to explore how the channel histograms behave on simple signals. The second experiment evaluates the performance on an image containing multiple orientations. This time we get a better performance with the channel histograms compared to the tensor.

4.1 Experiment 1

The test image for simple signals is displayed in figure 11. We have a simple signal in each local area, except in the middle and at the edges. Different amounts of noise are added and the dominant local orientation is then estimated using the various algorithms mentioned in this report.

Table 1: Short description of the tuning parameters for each algorithm.

∇I: σ = std of the Gaussian differential filter; sz = filter size.
Tgrad: σ∇I = std of the Gaussian differential filter; sz∇I = filter size of the Gaussian differential filter; σw = std of the Gaussian smoothing filter; szw = filter size of the Gaussian smoothing filter.
Tquad: u0 = center frequency of the quadrature log-normal filter; B = bandwidth of the log-normal filter; sz = filter size.
Tpol: σ = std of the Gaussian applicability; γ = weight factor between even and odd filters; sz = filter size.
φ: σ∇I, sz∇I, σw, szw same as for Tgrad; K = number of channels.

The performance of each method is measured by an angular RMS error

$$\delta\phi = \arcsin\!\left(\sqrt{\frac{1}{2L}\sum_{l=1}^{L} \big\|\hat{x}\hat{x}^T - \hat{e}_1\hat{e}_1^T\big\|^2}\right) = \arccos\!\left(\sqrt{\frac{1}{L}\sum_{l=1}^{L} (\hat{x}^T\hat{e}_1)^2}\right), \qquad (22)$$

where x̂ is the true orientation, ê₁ is the estimated dominant orientation, and L is the number of points. To avoid border effects and irregularities at the center of the image, the sum is only computed for points at a radius between 0.16 and 0.84.
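As a sketch, the error measure in (22) (and its weighted variant (23) in experiment 2) can be computed directly from angles, since (x̂ᵀê₁)² = cos²(θ_true − θ_est):

```python
import numpy as np

def angular_rms(theta_true, theta_est, weights=None):
    """Angular RMS error as in (22); pass weights = |x| for the weighted variant (23)."""
    c2 = np.cos(theta_true - theta_est) ** 2      # (x^T e_1)^2 for unit orientation vectors
    w = np.ones_like(c2) if weights is None else weights
    return np.degrees(np.arccos(np.sqrt(np.sum(w * c2) / np.sum(w))))
```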

Near-optimal parameters for each method were found by computing the estimate for all combinations of the method parameters. They are called near-optimal because only a discrete set of values was used for each parameter. Table 1 contains a very short description of the parameters.

The choice of filter sizes is a bit tricky for this test image. It is optimal to choose as large a filter as possible for simple signals, but then we run into border problems, which some algorithms can handle better than others. Therefore, for a fair comparison, we choose parameters so that each algorithm is allowed to use a region of at most 11×11 pixels for each estimate.

The result for each method is shown in figure 12. The gradient-based tensor has the best performance among all the algorithms on this image, especially for the noisy versions. This indicates that it is better to combine many small, poor estimates, like 3×3 image gradients, than to use information from a few large filters as in the quadrature- and polynomial-based tensors (i.e. using local linear models). We can also see that the performance of the channel histograms is comparable to the tensors, but decreases for an increasing number of channels. This means that one should not use more channels than necessary (note that Tgrad is equivalent to φK=3).


Figure 12: Angular RMS error and near-optimal parameters for each method at SNR = ∞, 10 dB, and 0 dB (in that order).

∇I: 1.0° (σ = 0.8, sz = 7); 17.3° (σ = 0.9, sz = 5); 29.1° (σ = 1, sz = 5)
Tgrad (= φK=3): 0.18° (σ∇I = 1, sz∇I = 7, σw = 1.8, szw = 5); 0.95° (σ∇I = 0.7, sz∇I = 3, σw = 3.2, szw = 9); 3.7° (σ∇I = 0.7, sz∇I = 3, σw = 5.6, szw = 9)
Tquad: 0.35° (u0 = π/2, B = 1.5, sz = 11); 4.1° (u0 = 3π/8, B = 2, sz = 11); 5.9° (u0 = 3π/8, B = 1.75, sz = 11)
Tpol: 0.16° (σ = 1, γ = 1, sz = 11); 5.0° (σ = 0.9, γ = 1, sz = 11); 17.8° (σ = 1, γ = 1, sz = 11)
φK=4: 0.18° (σ∇I = 1, sz∇I = 7, σw = 1, szw = 5); 0.98° (σ∇I = 0.7, sz∇I = 3, σw = 3.2, szw = 9); 3.93° (σ∇I = 0.7, sz∇I = 3, σw = 5.6, szw = 9)
φK=8: 0.21° (σ∇I = 1, sz∇I = 7, σw = 1, szw = 5); 1.6° (σ∇I = 0.7, sz∇I = 3, σw = 3.2, szw = 9); 6.3° (σ∇I = 0.8, sz∇I = 3, σw = 5.6, szw = 9)
φK=16: 0.33° (σ∇I = 1, sz∇I = 7, σw = 1, szw = 5); 2.7° (σ∇I = 0.8, sz∇I = 5, σw = 1.8, szw = 7); 9.7° (σ∇I = 0.9, sz∇I = 3, σw = 10, szw = 9)

4.2 Experiment 2

The test image for non-simple signals is displayed in figure 13. The image is composed of a number of small patches with different orientations and frequencies. The orientations and frequencies are chosen such that we do not introduce false edges on the borders between the patches (which would ruin the ground truth), see the zoomed part in figure 13. The magnitude of the true orientation image is used as a weight in the RMS measure, see below.

The same setup as in experiment 1 was used again, except that no restriction on region sizes was made. The RMS measure is modified to

$$\delta\phi = \arccos\!\left(\sqrt{\frac{\sum_l |\mathbf{x}|\,(\hat{x}^T\hat{e}_1)^2}{\sum_l |\mathbf{x}|}}\right). \qquad (23)$$

This is the same measure as in (22), except that we have introduced a weight |x|. The result is shown in figure 14. We see that the channel histograms give a better result than the tensors.

The left and middle figures in figure 15 show the RMS error for each pixel, in the same zoomed region as in figure 13, for the gradient-based tensor and the channel histogram. We can see that the channel histograms give a better estimate near discontinuities than the tensors. The channel histograms encode more orientations than the dominant one. For example, the histogram gives two estimates with high relevance in regions near borders between two patches. It is not always appropriate to choose the most dominant one, because we sometimes pick out the wrong one due to noise. The true orientation may for instance be the second most relevant orientation in the histogram. This can be seen if we evaluate the histograms by choosing the best estimate among the most relevant ones (we choose all estimates with a relevance above 0.8 times that of the most relevant one), see the right figure in figure 15. Then many of the extreme outliers are removed.
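The evaluation variant described above could be sketched as follows (estimates are (orientation, relevance) pairs from the channel decoding; the threshold 0.8 is the one used in the text, the helper name is made up):

```python
import numpy as np

def best_of_relevant(estimates, theta_true, frac=0.8):
    """Among estimates with relevance >= frac * max relevance, pick the one closest to the truth."""
    r_max = max(r for _, r in estimates)
    candidates = [(a, r) for a, r in estimates if r >= frac * r_max]
    # orientation distance modulo pi, measured via the doubled angle
    return min(candidates, key=lambda ar: abs(np.angle(np.exp(2j * (ar[0] - theta_true)))))
```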

5 Summary and discussion

This report illustrates how the channel representation can be used to represent multiple orientations. This representation has been compared with the orientation tensors on one simple and one non-simple signal. The performance decreases for an increasing number of channels in the simple signal case. But the channel histograms outperform the tensors in the non-simple signal case. This is not a surprise, since we extract more information from the gradient in the channel histogram case compared to the tensor case.

A bit surprising, though, when comparing the different tensor algorithms, is that it seems to be better to combine many small crude local estimates than to use larger linear models, especially in the presence of noise. There may be better local estimates than the gradient, but the gradient is one of the fastest ones to compute. The choice is probably a trade-off between speed and accuracy.

There have been other attempts to represent multiple orientations, see [10]. They use harmonic functions as orientation basis functions, but these are not local and the representation does not become sparse. It may also be difficult to interpret and decode the information.


Figure 13: 512×512 test image for non-simple signals: the image, the true orientation (color coded), a zoomed part of the image, its true orientation, and the true orientation superimposed on the zoomed image. By courtesy of Hans Knutsson and Mats Andersson, Medical Informatics, Department of Biomedical Engineering, Linköping University.


Figure 14: Angular RMS error and near-optimal parameters for each method on the non-simple test image.

∇I: 19.3° (σ = 0.6, sz = 3)
Tgrad (= φK=3): 11.2° (σ∇I = 0.6, sz∇I = 3, σw = 1, szw = 5)
Tquad: 11.3° (u0 = 3π/4, B = 2.5, sz = 5)
Tpol: 12.8° (σ = 0.6, γ = 0.56, sz = 3)
φK=4: 10.8° (σ∇I = 0.6, sz∇I = 3, σw = 1, szw = 5)
φK=8: 9.4° (σ∇I = 0.7, sz∇I = 3, σw = 1, szw = 7)
φK=16: 9.4° (σ∇I = 0.7, sz∇I = 3, σw = 1, szw = 9)

Figure 15: Local RMS errors (the figures have the same normalization). Left: Optimal result for Tgrad. Middle: Optimal result for φK=16. Right: Result for φK=16 when choosing the best estimate among the most relevant ones.

The channel representation used in this report, on the other hand, makes interpretation and decoding an almost trivial task (which is a good thing). Still, there are some issues that can be discussed for practical and generalization reasons, for example:

• One should not use more channels than necessary. Perhaps it is possible to find a split strategy to decide if more channels are needed for a certain region.

• Complexity has not been discussed. More channels require more computations and memory, but the increase may be less than linear due to the sparsity property.

• How can we generalize the channels to higher dimensions? Higher dimensional histograms could for example be useful in velocity estimation. It is fairly easy to define kernel functions in higher dimensions, but it is difficult to have them evenly spread. We also have to design a different decoding algorithm.

6 Acknowledgment

This work was supported by the Swedish Foundation for Strategic Research, project VISIT - VIsual Information Technology.


References

[1] M. Borga, H. Malmgren, and H. Knutsson. FSED - Feature Selective Edge Detection. In Proceedings of 15th International Conference on Pattern Recognition, volume 1, pages 229–232, Barcelona, Spain, September 2000. IAPR.

[2] G. Farnebäck. Spatial Domain Methods for Orientation and Velocity Estimation. Lic. Thesis LiU-Tek-Lic-1999:13, Dept. EE, Linköping University, SE-581 83 Linköping, Sweden, March 1999. Thesis No. 755, ISBN 91-7219-441-3.

[3] Michael Felsberg, Hanno Scharr, and Per-Erik Forssén. The B-spline channel representation: Channel algebra and channel based diffusion filtering. Technical Report LiTH-ISY-R-2461, Dept. EE, Linköping University, SE-581 83 Linköping, Sweden, September 2002.

[4] Per-Erik Forssén. Sparse Representations for Medium Level Vision. Lic. Thesis LiU-Tek-Lic-2001:06, Dept. EE, Linköping University, SE-581 83 Linköping, Sweden, February 2001. Thesis No. 869, ISBN 91-7219-951-2.

[5] Per-Erik Forssén. Observations Concerning Reconstructions with Local Support. Technical Report LiTH-ISY-R-2425, Dept. EE, Linköping University, SE-581 83 Linköping, Sweden, April 2002.

[6] Per-Erik Forssén. Successive Recognition using Local State Models. In Proceedings SSAB02 Symposium on Image Analysis, pages 9–12, Lund, March 2002. SSAB.

[7] Per-Erik Forssén, Gösta Granlund, and Johan Wiklund. Channel Representation of Colour Images. Technical Report LiTH-ISY-R-2418, Dept. EE, Linköping University, SE-581 83 Linköping, Sweden, March 2002.

[8] Gösta Granlund, Per-Erik Forssén, and Björn Johansson. HiperLearn: A High Performance Learning Architecture. Technical Report LiTH-ISY-R-2409, Dept. EE, Linköping University, SE-581 83 Linköping, Sweden, January 2002.

[9] Gösta H. Granlund and Anders Moe. Unrestricted recognition of 3-D objects using multi-level triplet invariants. In Proceedings of the Cognitive Vision Workshop, Zürich, Switzerland, September 2002.

[10] L. Haglund and D. J. Fleet. Stable Estimation of Image Orientation. In Proceedings of the IEEE-ICIP, pages 68–72. IEEE, 1994.

[11] Björn Johansson and Gunnar Farnebäck. A Theoretical Comparison of Different Orientation Tensors. In Proceedings SSAB02 Symposium on Image Analysis, pages 69–73, Lund, March 2002. SSAB.
