
Equalization of Time Errors in Time Interleaved ADC System - Part I: Theory


Equalization of Time Errors in Time Interleaved ADC System – Part I: Theory

Jonas Elbornsson, Fredrik Gustafsson, Jan-Erik Eklund

Control & Communication
Department of Electrical Engineering
Linköpings universitet, SE-581 83 Linköping, Sweden
WWW: http://www.control.isy.liu.se
E-mail: jonas@isy.liu.se, fredrik@isy.liu.se

3rd March 2003

Report no.: LiTH-ISY-R-2494
Submitted to

Technical reports from the Control & Communication group in Linköping are available at http://www.control.isy.liu.se/publications.


To significantly increase the sampling rate of an A/D converter (ADC), a time interleaved ADC system is a good option. The drawback of a time interleaved ADC system is that the ADCs are not exactly identical due to errors in the manufacturing process. This means that time, gain and offset mismatch errors are introduced in the ADC system. These errors cause distortion in the sampled signal.

In this paper we present a method for estimation and compensation of the time mismatch errors. The estimation method requires no knowledge about the input signal except that it should be band limited to the Nyquist frequency for the complete ADC system. This means that the errors can be estimated while the ADC is running. The method is also adaptive to slow changes in the time errors.

Keywords: A/D conversion, nonuniform sampling, equalization, estimation


Equalization of Time Errors in Time Interleaved ADC System – Part I: Theory

Jonas Elbornsson, Fredrik Gustafsson, Jan-Erik Eklund

Abstract— To significantly increase the sampling rate of an A/D converter (ADC), a time interleaved ADC system is a good option. The drawback of a time interleaved ADC system is that the ADCs are not exactly identical due to errors in the manufacturing process. This means that time, gain and offset mismatch errors are introduced in the ADC system. These errors cause distortion in the sampled signal.

In this paper we present a method for estimation and compensation of the time mismatch errors. The estimation method requires no knowledge about the input signal except that it should be band limited to the Nyquist frequency for the complete ADC system. This means that the errors can be estimated while the ADC is running. The method is also adaptive to slow changes in the time errors.

Index Terms— A/D conversion, nonuniform sampling, equalization, estimation

I. INTRODUCTION

THERE is an ever increasing need for faster A/D converters (ADCs) in modern communications technology, such as radio base stations and VDSL modems. To achieve high enough sample rates, an array of M ADCs, interleaved in time, can be used [1], [2], see Figure 1. The time interleaved ADC system works as follows:

• The input signal is connected to all the ADCs.
• Each ADC works with a sampling interval of M Ts, where M is the number of ADCs in the array and Ts is the desired sampling interval.
• The clock signal to the ith ADC is delayed with iTs. This gives an overall sampling interval of Ts.

Fig. 1. A time interleaved ADC system. M parallel ADCs are used with the same master clock. The clock is delayed by the nominal sampling interval to each ADC. The outputs are then multiplexed together to form a signal sampled M times faster than the output from each ADC.
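The sampling model above can be sketched in a few lines. This is a minimal simulation, not taken from the paper: the function name `interleaved_adc` and all numeric values are made up for illustration, and the gain and offset mismatches discussed below are included as optional arguments.

```python
import numpy as np

def interleaved_adc(u, M, Ts, dt, gain=None, offset=None, N=1024):
    """Sample u(t) with an M-way time interleaved ADC.

    dt[i] is the (hypothetical) static time error of channel i;
    sample number n is taken by channel n mod M at t = n*Ts + dt[n mod M].
    """
    gain = np.ones(M) if gain is None else np.asarray(gain)
    offset = np.zeros(M) if offset is None else np.asarray(offset)
    y = np.empty(N)
    for n in range(N):
        i = n % M                      # channel that takes sample n
        t = n * Ts + dt[i]             # nominal instant plus channel skew
        y[n] = gain[i] * u(t) + offset[i]
    return y

# Example: 4 channels, small made-up timing skews, tone well below Nyquist.
M, Ts = 4, 1.0
dt = np.array([0.0, 0.03, -0.02, 0.05]) * Ts
u = lambda t: np.sin(0.2 * np.pi * t)
y = interleaved_adc(u, M, Ts, dt)
```

With all error vectors zero, the output reduces to ideal uniform sampling at interval Ts.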

The drawback with the interleaved structure is that, due to the manufacturing process, the ADCs are not all identical and mismatch errors will occur in the system. Three kinds of mismatch errors will occur:

• Time errors (static jitter): The delay times of the clock between the different ADCs are not equal. This means that the signal will be periodically but non-uniformly sampled.
• Amplitude offset errors: The ground level differs between the different ADCs. This means that there is a constant amplitude offset in each ADC.
• Gain errors: The gain, from analog input to digital output, differs between the different ADCs.

The errors listed above are static or slowly time varying, meaning that they can be assumed to be constant for the same ADC from one cycle to the next over an interval of some million samples.

With a sinusoidal input, the mismatch errors can be seen in the output spectrum as non-harmonic distortion [3]. With input signal frequency ω0, the gain and time errors cause distortion at the frequencies

\frac{i}{M}\omega_s \pm \omega_0, \quad i = 1, \ldots, M-1,

where ωs is the sampling frequency. The offset errors cause distortion at the frequencies

\frac{i}{M}\omega_s, \quad i = 1, \ldots, M-1.

An example of an output spectrum from an interleaved ADC system with four ADCs with sinusoidal input signal is shown in Figure 2. This distortion causes problems for instance in a radio receiver, where a weak carrier cannot be distinguished from the mismatch distortion from a strong carrier. It is therefore important to remove the mismatch errors. However, calibration of an ADC system is time consuming and costly. Furthermore, the mismatch errors may change slowly with for instance temperature and aging. Therefore we want to estimate the mismatch errors while the ADC is used. Methods for estimation of timing errors have been published in for instance [4] and [5]. These methods require a known calibration signal, which means that the operation of the ADC must be stopped during calibration. A blind time error estimation method was presented in [6] and validated on measurements in [7]. This method works well, but gives a bias error in the time error
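The offset-error case of the distortion frequencies above is easy to verify numerically. The sketch below (all values made up for illustration) adds a period-M offset pattern to an ideal sine and inspects the FFT: spurs appear at the bins corresponding to (i/M)ωs regardless of where the input tone sits; gain and time errors would analogously put spurs at (i/M)ωs ± ω0.

```python
import numpy as np

# Offset mismatch repeats with period M, so it shows up at multiples
# of fs/M in the spectrum, independent of the input frequency.
M, N = 4, 1024
offsets = np.array([0.0, 0.1, -0.05, 0.02])   # per-channel offsets (made up)
n = np.arange(N)
f0_bin = 100                                   # input tone bin (arbitrary)
u = np.sin(2 * np.pi * f0_bin * n / N)
y = u + offsets[n % M]                         # ideal samples plus offset error
Y = np.abs(np.fft.fft(y))

# Spurs at bins i*N/M, i = 1,...,M-1; other bins stay near zero.
spur_bins = [N // M, N // 2, 3 * N // 4]
```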



Fig. 2. Simulated output spectrum from interleaved ADC system with four ADCs. The input signal is a single sinusoid. The distortion is caused by mismatch errors.

estimates. A blind amplitude offset error estimation method was presented in [8].

We will in this paper present a method for blind equalization of the time mismatch errors in a time interleaved ADC system. The estimation method requires only that the input signal is band limited to the Nyquist frequency of the complete ADC system. This method gives no bias in the estimates. The joint estimation of all three mismatch error types is described in [9], where the time error estimation presented in this paper is one part, described from a system perspective. In this paper the time error estimation method is described in more detail. In [10], analysis, simulation results and validation of the time error estimation method on a real time interleaved ADC system are given.

II. NOTATION AND DEFINITIONS

We will in this section introduce the notation that will be used in this paper. The nominal sampling interval, that we would have without time errors, is denoted Ts. M denotes the number of ADCs in the time interleaved array, which means that the sampling interval for each ADC is M Ts. The time error parameters are denoted ∆ti, i = 0, …, M−1. The estimates of these errors are denoted ∆̂ti, and the true errors are denoted ∆0ti. The vector notation ∆t = [∆t0 ⋯ ∆tM−1] is used for all the time error parameters.

We use the following notation for the signals involved:

u(t) is the analog input signal.

u[k] denotes the ideal signal, sampled without mismatch errors.

ui[k], i = 0, …, M−1, denotes the M subsequences of u[k],

u_i[k] = u[kM + i]. \quad (1)

yi[k], i = 0, …, M−1, denotes the output subsequences from the M A/D converters, sampled with time errors,

y_i[k] = u\big((kM + i)T_s + \Delta_{t_i}^0\big).

y[k] is the multiplexed output signal from all the ADCs,

y[k] = y_{(k \bmod M)}\big[\lfloor k/M \rfloor\big],

where ⌊·⌋ denotes rounding towards −∞.

z(∆t)[k] denotes the output signal, y[k], reconstructed with the error parameters ∆t.

zi(∆t)[k] are the subsequences of z(∆t)[k].

We assume throughout this paper that u(t) is band limited to the Nyquist frequency, π/Ts, of the complete ADC system. We will next establish a few definitions which will be used later in the paper. A discrete time signal u[k] is said to be quasi-stationary [11] if

\bar{m}_u = \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} E(u[k]), \qquad \bar{R}_u[n] = \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} E(u[k+n]u[k])

both exist, where the expectation is taken over possible stochastic parts of the signal. Analogously, a continuous time signal u(t) is quasi-stationary if

\bar{m}_u = \lim_{T\to\infty} \frac{1}{T} \int_0^T E(u(t))\,dt, \qquad \bar{R}_u(\tau) = \lim_{T\to\infty} \frac{1}{T} \int_0^T E(u(t+\tau)u(t))\,dt

both exist. A stationary stochastic process is quasi-stationary, with \bar{m}_u and \bar{R}_u[n] being the mean value and covariance function respectively.

Definition 1 (Modulo M quasi-stationary): Assume that

\bar{g}_{u_{i_1},u_{i_2},\cdots} = \lim_{N\to\infty} \frac{1}{N} \sum_{t=1}^{N} g(u_{i_1}[t], u_{i_2}[t], \ldots), \quad i_1, i_2, \cdots = 0, \ldots, M-1

exists for a function g(·, ·, ⋯). Then u is modulo M quasi-stationary with respect to g if

\bar{g}_{i_1,i_2,\cdots} = \bar{g}_{(i_1+l) \bmod M,\, (i_2+l) \bmod M,\, \cdots} \quad \forall l \in \{\ldots, -1, 0, 1, \ldots\}

The modulo M quasi-stationarity property guarantees that the input signal has the same statistical properties for all the ADCs in the time interleaved system.

Example 1 (Modulo M quasi-stationary): Consider first the function g(ui[k]) = ui²[k]. The modulo M quasi-stationary property then means that the mean square value should be equal for all subsequences, i.e., if

\bar{\sigma}_i^2 = \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} u_i^2[k]

then

\bar{\sigma}_i^2 = \bar{\sigma}_j^2, \quad \text{for } i, j = 0, \ldots, M-1.

In this example this is true for most quasi-stationary signals, but some periodic signals are not modulo M quasi-stationary. Consider the deterministic signal

u[k] = \cos\big(\tfrac{\pi}{2} k\big)

and M = 2. Then we have

\bar{\sigma}^2 = \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} \cos^2\big(\tfrac{\pi}{2} k\big) = \frac{1}{2}

so the signal is quasi-stationary, but

\bar{\sigma}_0^2 = \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} \cos^2(\pi k) = 1 \qquad \text{and} \qquad \bar{\sigma}_1^2 = \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} \cos^2\big(\pi k + \tfrac{\pi}{2}\big) = 0

i.e., the signal is not modulo 2 quasi-stationary with respect to g(ui[k]) = ui²[k], but it is with respect to g(ui[k]) = ui[k].
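Example 1 can be checked numerically. A small sketch (variable names made up), estimating the subsequence mean squares over a finite N:

```python
import numpy as np

# Check modulo-2 quasi-stationarity of u[k] = cos(pi*k/2) with respect
# to g(u_i) = u_i^2: the subsequence mean squares differ, as in Example 1.
N = 10**5
k = np.arange(N)
u = np.cos(np.pi * k / 2)

sigma2_0 = np.mean(u[0::2] ** 2)   # u[2k] = cos(pi*k) = +/-1 -> mean square 1
sigma2_1 = np.mean(u[1::2] ** 2)   # u[2k+1] = 0              -> mean square 0
sigma2 = np.mean(u ** 2)           # overall mean square      -> 1/2
```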

We use further the following notation for the mean square and mean square difference of a quasi-stationary signal:

\bar{\sigma}_u^2 = \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} E\{u^2[k]\}

\bar{R}_{u_i,u_j}[l] = \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} E\left\{ \left( u_{(i \bmod M)}\!\left[\left\lfloor \tfrac{k+i}{M} \right\rfloor + l\right] - u_{(j \bmod M)}\!\left[\left\lfloor \tfrac{k+j}{M} \right\rfloor\right] \right)^2 \right\}. \quad (2)

The following notation is used to simplify the expressions involving the reconstructed signals:

\bar{\sigma}_{z_i}^{2,(\Delta_t)} = \bar{\sigma}^2_{z_i^{(\Delta_t)}}, \qquad \bar{R}^{(\Delta_t)}_{z_i,z_j}[l] = \bar{R}_{z_i^{(\Delta_t)},\, z_j^{(\Delta_t)}}[l]

III. SIGNAL RECONSTRUCTION

If the time error parameters are known, and the input signal u(t) is band limited to the Nyquist frequency, u(t) can be exactly reconstructed from the sampled signal y[k]. We will in this section describe the signal reconstruction.

The time errors can be compensated for by many different interpolation techniques, for instance splines [12], polynomial interpolation or filter bank interpolation [13]. We will here describe a method for exact interpolation by filtering the signal with a non-causal IIR filter. If the input signal is band limited to the Nyquist frequency, π/Ts, and the time error parameters are known, the input signal can be perfectly reconstructed from the irregular samples [14]. In a real application, the interpolation is of course approximate since we cannot use a filter of infinite length, but we can come arbitrarily close to the exact interpolation by choosing the length of the filter large enough. In [14] the interpolation is done at an arbitrary time instance according to the following:

Solve the equation system

\sum_{i=0}^{M-1} e^{j(-\frac{M-1}{2}+i+\Delta_{t_i})\omega} H_i(\omega,t) = 1
\sum_{i=0}^{M-1} e^{j(-\frac{M-1}{2}+i+\Delta_{t_i})(\omega+\frac{2\pi}{MT_s})} H_i(\omega,t) = e^{j\frac{2\pi}{MT_s}t} \quad (3)
\vdots
\sum_{i=0}^{M-1} e^{j(-\frac{M-1}{2}+i+\Delta_{t_i})(\omega+\frac{2\pi(M-1)}{MT_s})} H_i(\omega,t) = e^{j\frac{2\pi(M-1)}{MT_s}t}

for Hi(ω, t). The input signal can then be calculated at any time instance as

u(t) = \sum_{k=-\infty}^{\infty} \sum_{i=0}^{M-1} y_i[k]\, h_i(t - kMT_s)

where

h_i(t) = \frac{MT_s}{2\pi} \int_{-\pi/T_s}^{-\pi/T_s + 2\pi/(MT_s)} H_i(\omega, t)\, e^{j\omega t}\, d\omega

The reconstruction described in [14] is done at an arbitrary time instance. If we only need to reconstruct the signal at the nominal sampling instances

t = (kM + l)T_s, \quad l = 0, \ldots, M-1, \quad k = \ldots, -1, 0, 1, \ldots \quad (4)

we can simplify the reconstruction. Here we introduce the notation αi = −(M−1)/2 + i + ∆ti to simplify the equation system (3). The right hand side of (3) is then independent of k in (4) and depends only on l. Further, the left hand side can be factorized into one diagonal matrix which depends on ω, one matrix independent of ω, and H(ω, t), which now also is independent of k:

A(\alpha) E(\alpha, \omega) H^{(l)}(\omega) = B_l

Here

A(\alpha) = \begin{pmatrix} 1 & \cdots & 1 \\ e^{j\alpha_0 \frac{2\pi}{MT_s}} & \cdots & e^{j\alpha_{M-1} \frac{2\pi}{MT_s}} \\ \vdots & \ddots & \vdots \\ e^{j\alpha_0 \frac{2\pi(M-1)}{MT_s}} & \cdots & e^{j\alpha_{M-1} \frac{2\pi(M-1)}{MT_s}} \end{pmatrix}

E(\alpha, \omega) = \operatorname{diag}\big(e^{j\alpha_0 \omega}, e^{j\alpha_1 \omega}, \ldots, e^{j\alpha_{M-1} \omega}\big)

and

B_l = \big(1 \;\; e^{j2\pi l/M} \;\; \cdots \;\; e^{j2\pi(M-1)l/M}\big)^T

(6)

Since only E(α, ω) depends on ω and the time dependence in the right hand side of (3) is removed, we can easily calculate the coefficients h^{(l)}_i[k] = h_i((kM + l)T_s):

h^{(l)}[k] = \frac{MT_s}{2\pi} \int_{-\pi/T_s}^{-\pi/T_s + 2\pi/(MT_s)} E^{-1}(\alpha, \omega)\, e^{j\omega(kM+l)T_s}\, d\omega \; A^{-1}(\alpha) B_l

From here on we assume M to be even; M odd gives similar calculations. Calculating the TDFT of the subsequences h^{(l)}[k] gives

H^{(l)}(e^{j\omega MT_s}) = MT_s \sum_{k=-\infty}^{\infty} h^{(l)}[k]\, e^{-j\omega k MT_s}
= \frac{(MT_s)^2}{2\pi} \int_{-\pi/T_s}^{-\pi/T_s + 2\pi/(MT_s)} E^{-1}(\alpha, \gamma)\, e^{j\gamma l T_s} \sum_{k=-\infty}^{\infty} e^{j\gamma k MT_s} e^{-j\omega k MT_s}\, d\gamma \; A^{-1}(\alpha) B_l
= MT_s \sum_{r=-\infty}^{\infty} \int_{-\pi/T_s}^{-\pi/T_s + 2\pi/(MT_s)} E^{-1}(\alpha, \gamma)\, e^{j\gamma l T_s}\, \delta\big(\gamma - \omega + \tfrac{2\pi r}{MT_s}\big)\, d\gamma \; A^{-1}(\alpha) B_l
= MT_s\, E^{-1}\big(\alpha, \omega - \tfrac{\pi}{T_s}\big)\, e^{j\omega l T_s} (-1)^l A^{-1}(\alpha) B_l, \quad 0 \le \omega < \tfrac{2\pi}{MT_s}

The subsequences Z^{(\Delta_t^0)}_l(e^{j\omega MT_s}) can then be calculated as

Z^{(\Delta_t^0)}_l(e^{j\omega MT_s}) = Y^T(e^{j\omega MT_s})\, MT_s\, E^{-1}\big(\alpha, \omega - \tfrac{\pi}{T_s}\big)\, e^{j\omega l T_s} (-1)^l A^{-1}(\alpha) B_l \quad (5)

where

Y^T(e^{j\omega MT_s}) = \big(Y_0(e^{j\omega MT_s}) \;\; \cdots \;\; Y_{M-1}(e^{j\omega MT_s})\big)

The TDFT of the time error compensated signal, Z^{(\Delta_t^0)}(e^{j\omega T_s}), can then be calculated from its subsequences [15]:

Z^{(\Delta_t^0)}(e^{j\omega T_s}) = \sum_{l=0}^{M-1} Z^{(\Delta_t^0)}_l(e^{j(\omega MT_s \bmod 2\pi)})\, e^{-jl\omega T_s} \quad (6)

With the inverse Fourier transform we get the time error reconstructed signal

z^{(\Delta_t^0)}[k] = \mathrm{TDFT}^{-1}\big(Z^{(\Delta_t^0)}(e^{j\omega T_s})\big) \quad (7)

In practice, (5), (6) and (7) are calculated on finite sequences using the DFT instead of the Fourier transform.
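The exact reconstruction above inverts all M channels jointly. As a simplified illustration only, not the paper's method: if the input happens to be band limited below each channel's own Nyquist rate π/(M Ts), every channel can be corrected independently with a windowed-sinc fractional delay. The helper name `frac_delay_taps` and all numeric values are made up.

```python
import numpy as np

def frac_delay_taps(d, half=32):
    """Hamming-windowed sinc taps approximating a delay of d (channel) samples."""
    n = np.arange(-half, half + 1)
    h = np.sinc(n - d) * np.hamming(2 * half + 1)
    return h / h.sum()                              # normalize DC gain

M, Ts, N = 4, 1.0, 512
dt = np.array([0.0, 0.04, -0.03, 0.06]) * Ts        # made-up channel skews
w0 = 0.1 * np.pi / Ts                               # well below pi/(M*Ts)
u = lambda t: np.sin(w0 * t)

k = np.arange(N)
z = np.empty((M, N))
for i in range(M):
    yi = u((k * M + i) * Ts + dt[i])                # channel i, sampled with skew
    # Delay channel i by dt[i]/(M*Ts) channel samples to undo the skew.
    z[i] = np.convolve(yi, frac_delay_taps(dt[i] / (M * Ts)), mode="same")

ideal = np.array([u((k * M + i) * Ts) for i in range(M)])
err = np.max(np.abs(z[:, 50:-50] - ideal[:, 50:-50]))
```

This per-channel shortcut breaks down as soon as the input occupies the full band π/Ts, which is exactly why the filter bank (5)-(7) is needed in general.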

IV. TIME ERROR ESTIMATION

We will in this section present a method to estimate the time errors in a time interleaved ADC system. The estimation is done without a special calibration signal and without knowledge of the input signal. The idea for the time error estimation is to study the mean square difference between the outputs of adjacent ADCs. Assuming that the input signal is band limited to the Nyquist frequency, the signal cannot change arbitrarily fast. If the time interval between two ADCs is shorter than Ts, the signal will change less on average between the samples compared to a time difference of Ts, and vice versa if the time interval is longer than Ts. In Figure 3 this is illustrated for a dual ADC system.

The method that we will use to estimate the time error parameters is minimization of a loss function which has its global minimum for ∆t = ∆0t. From the discussion above we can see that the function

V_t^N(\Delta_t) = \sum_{i=1}^{M-1} \sum_{j=0}^{i-1} \left( \bar{R}^{N,(\Delta_t)}_{z_i,z_{i-1}}[0] - \bar{R}^{N,(\Delta_t)}_{z_j,z_{j-1}}[0] \right)^2 \quad (8)

where

\bar{R}^{N,(\Delta_t)}_{z_i,z_{i-1}}[0] = \frac{1}{N} \sum_{k=1}^{N} \left( z_i^{(\Delta_t)}[k] - z_{i-1}^{(\Delta_t)}\!\left[k + \left\lfloor \tfrac{i-1}{M} \right\rfloor\right] \right)^2

might be a good candidate for a loss function. We can see that the function lim_{N→∞} R̄^{N,(∆t)}_{zi,zi−1}[0] should have the same value for i = 0, …, M−1 if the time error parameters are correct. This means that V_t^N(∆0t) = 0. If the time error parameters are not correct, the time differences between the samples are not equal, and lim_{N→∞} R̄^{N,(∆t)}_{zi,zi−1}[0] then has different values for different values of i. This means that V_t^N(∆t) > 0 for ∆t ≠ ∆0t.

We will in the following assume that the time error in the first ADC is zero, i.e., ∆0t0 = 0. This is no loss of generality, since only the distance between the samples, and not the absolute sampling instances, needs to be correct. We will in this section show that the loss function (8) is minimized for ∆t = ∆0t when N tends to infinity. We will start with the case of a dual ADC system, i.e., M = 2, in which case we can also show that the loss function is monotonously increasing around ∆t = ∆0t. The signal reconstruction described in Section III is quite complicated and not linear in the parameters. However, it can be considered locally linear in the parameters since it is a continuous mapping and the time errors normally are


Fig. 3. The idea for time error estimation, here an example with two ADCs. If the sample of the second ADC is taken before the nominal sampling instance, the signal changes less on average between the samples, and vice versa.


small. The proofs will therefore be made for the case of a reconstruction that is linear in the parameters.

A. Dual ADC system

We will here consider the output signals from a dual ADC system. Since we here assume that the reconstruction is linear in the time error parameters, we do not need to involve the reconstruction at this stage. Instead we can study the output signals, yi[k], parameterized in the time error parameters, ∆ti. Since M = 2 and ∆t0 = 0 we only have one parameter here, which we denote ∆t:

y_0[k] = u(2kT_s), \qquad y_1[k] = u((2k+1)T_s + \Delta_t)

From this we can calculate the mean square difference functions, here depending on ∆t:

R^{(\Delta_t)}_{y_1,y_0}[0] = \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} (y_1[k] - y_0[k])^2, \qquad R^{(\Delta_t)}_{y_0,y_1}[0] = \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} (y_0[k] - y_1[k-1])^2
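The two mean square differences above can be evaluated numerically for a toy sinusoidal input. The sketch below (function name and numeric values made up) confirms that their mismatch, squared, grows with the skew ∆t:

```python
import numpy as np

def V(dt, w0=1.0, Ts=1.0, N=200_000):
    """Empirical dual-ADC loss (R_{y1,y0}[0] - R_{y0,y1}[0])^2 for u(t) = sin(w0*t)."""
    k = np.arange(N)
    y0 = np.sin(w0 * 2 * k * Ts)                 # channel 0: t = 2k*Ts
    y1 = np.sin(w0 * ((2 * k + 1) * Ts + dt))    # channel 1: skewed by dt
    R10 = np.mean((y1 - y0) ** 2)                # forward difference
    R01 = np.mean((y0[1:] - y1[:-1]) ** 2)       # backward difference
    return (R10 - R01) ** 2

vals = [V(d) for d in (0.0, 0.1, 0.2)]
```

For dt = 0 the two averages agree up to finite-N fluctuations, so the loss is near zero; larger skews give a larger loss, consistent with Theorems 1 and 2 below.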

We will start with a lemma that will be needed in the theorems later.

Lemma 1: Assume that u(t) is quasi-stationary, band limited to ωc and not constant. Then

R_u(T_s - \Delta_t) > R_u(T_s + \Delta_t)

if Ts < π/ωc and 0 < ∆t < Ts.

Proof: see Appendix I.

Next we will prove that the time error loss function has its global minimum for ∆t = 0.

Theorem 1: Assume that u(t) is quasi-stationary and band limited to ωc. Assume further that u[k] is modulo 2 quasi-stationary with respect to g1(ui, ui−1) = (ui[k] − u_{(i−1) mod 2}[k + ⌊(i−1)/2⌋])², i = 0, 1, and g2(ui) = ui²[k], i = 0, 1. Then

V(\Delta_t) = \left( R^{(\Delta_t)}_{y_1,y_0}[0] - R^{(\Delta_t)}_{y_0,y_1}[0] \right)^2 > 0

if Ts < π/ωc and 0 < |∆t| < Ts. Further, V(0) = 0.

Proof: V(0) = 0 follows directly from the definition of a modulo 2 quasi-stationary signal. Further,

V(\Delta_t) = \left( \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} \big[u((2k+1)T_s + \Delta_t) - u(2kT_s)\big]^2 - \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} \big[u((2k+2)T_s) - u((2k+1)T_s + \Delta_t)\big]^2 \right)^2
= \big( R_u(T_s - \Delta_t) - R_u(T_s + \Delta_t) \big)^2 > 0, \quad \text{if } \Delta_t \ne 0.

The last expression is strictly greater than zero according to Lemma 1.

Theorem 2: Assume that the requirements of Theorem 1 are fulfilled. Then

|\Delta_t^{(1)}| < |\Delta_t^{(2)}| < \frac{T_s}{2} \;\Rightarrow\; V(\Delta_t^{(1)}) < V(\Delta_t^{(2)})

if Ts < π/ωc.

Proof: see Appendix I.

To summarize, we have in this section shown that V(∆t) has its global minimum for ∆t = 0 and that it is monotonically increasing around ∆t = 0, for a dual ADC system.

B. Extension of time error loss function

So far we have only considered the function R^{(∆t)}_{yi,yi−1}[0]. It can also be interesting to extend this to include terms with a larger time difference, R^{(∆t)}_{yi,yi−1}[l], l = 0, 1, …. We can then extend the definition of the time error loss function to the partial time error loss functions

\tilde{V}^{(l)}(\Delta_t) = \left( R^{(\Delta_t)}_{y_1,y_0}[l] - R^{(\Delta_t)}_{y_0,y_1}[l] \right)^2 \quad (9)

and the time error loss function

V^{(L)}(\Delta_t) = \sum_{l=0}^{L} \beta_l \tilde{V}^{(l)}(\Delta_t). \quad (10)

With the same calculations as for the case l = 0 we get

\tilde{V}^{(l)}(\Delta_t) = \big( R_u((2l+1)T_s - \Delta_t) - R_u((2l+1)T_s + \Delta_t) \big)^2.

Here we can only guarantee that a partial loss function has a unique global minimum if u(t) is band limited to π/((2l+1)Ts). But we still have the property that Ṽ(l)(0) = 0, so these loss functions can still be used to improve the numerical properties of the minimum when we have a noisy signal.

Intuitively, the partial loss functions should be weighted so that the contribution from noise is the same from each part. Since the noise is additive on the signal we find, by studying the definition of the loss functions, that the noise gives the same contribution to the partial loss functions, independent of l. This means that we should have βl = 1, ∀l.

We will next study the partial loss functions Ṽ(l)(∆t) for three special input signals: sinusoidal input, multisine input and band limited white noise input.

Sinusoidal input: With a sinusoidal input, u(t) = sin(ω0 t), we have

\Phi_u(\omega) = \delta(\omega - \omega_0) + \delta(\omega + \omega_0)

From this, we can calculate Ru(τ) = 2 cos(ω0 τ) and

\tilde{V}^{(l)}(\Delta_t) = 4 \big( \sin((2l+1)\omega_0 T_s) \sin(\omega_0 \Delta_t) \big)^2

This means that for a sinusoidal signal we still have the property that Ṽ(l)(∆t) is monotonically increasing around ∆t = 0 as long as (2l+1)ω0Ts ≠ nπ, n = 0, 1, …, in which case Ṽ(l)(∆t) ≡ 0. For low frequencies, Ṽ(l)(∆t) increases with larger values of l. Thus, the loss function will become more "peaky" when more partial loss functions are added, which means that the numerical properties are improved.

Multisine input: With a multisine input signal with I tones we have the input spectrum

\Phi_u(\omega) = \sum_{i=0}^{I} \{\delta(\omega - \omega_i) + \delta(\omega + \omega_i)\}

which gives the partial loss functions

\tilde{V}^{(l)}(\Delta_t) = 4 \left( \sum_{i=0}^{I} \sin((2l+1)\omega_i T_s) \sin(\omega_i \Delta_t) \right)^2.

Band limited white noise input: With a band limited white noise input we have the input spectrum

\Phi_u(\omega) = \begin{cases} 1 & \text{if } |\omega| < \omega_c \\ 0 & \text{otherwise} \end{cases}

From this we can calculate the loss functions

\tilde{V}^{(l)}(\Delta_t) = \left( \frac{\sin[((2l+1)T_s - \Delta_t)\omega_c]}{((2l+1)T_s - \Delta_t)\omega_c} - \frac{\sin[((2l+1)T_s + \Delta_t)\omega_c]}{((2l+1)T_s + \Delta_t)\omega_c} \right)^2.

1) Infinite sum of partial loss functions: As we have seen in the previous examples, we do not, in general, have the property that the partial loss functions, Ṽ(l)(∆t), are monotonically increasing around ∆t = 0. But if we study an infinite sum of partial loss functions we have this property, independent of the input signal spectrum. Here we have to distinguish two cases:

1) Φu(ω) does not include δ-spikes
2) Φu(ω) includes δ-spikes

We start with the first case.

Theorem 3: Assume that u(t) is quasi-stationary, not constant and band limited to ωc, and that u[k] is modulo 2 quasi-stationary with respect to g1(ui, ui−1) = (ui[k] − u_{(i−1) mod 2}[k + i − 1])², i = 0, 1, and g2(ui) = ui²[k], i = 0, 1. Assume also that Φu(ω) does not include δ-spikes. Then V^{(∞)}(∆t) = Σ_{l=0}^{∞} Ṽ^{(l)}(∆t) has the following properties:

(i): V^{(∞)}(0) = 0
(ii): V^{(∞)}(∆t) > 0, if ∆t ≠ 0

Proof: see Appendix I.

Theorem 4: Assume that the conditions of Theorem 3 are fulfilled. Then

|\Delta_t^{(1)}| < |\Delta_t^{(2)}| < \frac{T_s}{2} \;\Rightarrow\; V^{(\infty)}(\Delta_t^{(1)}) < V^{(\infty)}(\Delta_t^{(2)})

Proof: see Appendix I.

For the case where Φu(ω) includes δ-spikes we have to redefine the loss function in order to make it converge:

V^{(\infty)}(\Delta_t) = \lim_{L\to\infty} \frac{1}{L} \sum_{l=0}^{L} \tilde{V}^{(l)}(\Delta_t)

We will in the following split the spectrum into two parts, one part with δ-spikes and one part without them:

\Phi_u(\omega) = \sum_{i=1}^{I} \alpha_i \big( \delta(\omega - \omega_i) + \delta(\omega + \omega_i) \big) + \tilde{\Phi}_u(\omega), \quad \alpha_i > 0, \; 0 < \omega_i < \omega_c \quad (11)

With this input spectrum we can show that V^{(∞)}(∆t) is monotonically increasing around ∆t = 0.

Theorem 5: Assume that we have an input signal spectrum of the form (11). Assume also that u(t) is band limited to ωc and that u[k] is modulo 2 quasi-stationary with respect to g1(ui, ui−1) = (ui[k] − u_{(i−1) mod 2}[k + i − 1])², i = 0, 1, and g2(ui) = ui²[k], i = 0, 1. Then V^{(∞)}(∆t) = lim_{L→∞} (1/L) Σ_{l=0}^{L} Ṽ^{(l)}(∆t) has the following properties:

(i): V^{(∞)}(0) = 0
(ii): V^{(∞)}(∆t) > 0, if ∆t ≠ 0

Proof: see Appendix I.

Theorem 6: With the conditions of Theorem 5 fulfilled, V^{(∞)}(∆t) also has the following property:

|\Delta_t^{(1)}| < |\Delta_t^{(2)}| < \frac{T_s}{2} \;\Rightarrow\; V^{(\infty)}(\Delta_t^{(1)}) < V^{(\infty)}(\Delta_t^{(2)})

Proof: see Appendix I.

C. General M ≥ 2

In this section we generalize some of the results to M > 2. We will start with a lemma that is needed to prove the theorems later.

Lemma 2: Assume that u(t) is quasi-stationary, band limited to ωc and not constant. Then Ru(τ) is monotonically decreasing if 0 < τ < π/ωc.

Proof: see Appendix II.

Theorem 7: Assume that u(t) is quasi-stationary, band limited to ωc and not constant. Assume further that u[k] is modulo M quasi-stationary with respect to g1(ui, ui−1) = (ui[k] − u_{(i−1) mod M}[k + ⌊(i−1)/M⌋])², i = 0, …, M−1, and g2(ui) = ui²[k], i = 0, …, M−1. Then

V(\Delta_t) = \sum_{i=1}^{M-1} \sum_{j=0}^{i-1} \left( R^{(\Delta_t)}_{y_i,y_{i-1}}[0] - R^{(\Delta_t)}_{y_j,y_{j-1}}[0] \right)^2 > 0 \quad (12)

if

T_s < \frac{1}{1+\beta/2} \cdot \frac{\pi}{\omega_c} \quad \text{and} \quad 0 < |\Delta_{t_i}| < \beta T_s, \; i = 1, \ldots, M-1,

where 0 < β < 1.


Further, V(0) = 0.

Proof: see Appendix II.

Here we cannot guarantee the correct global minimum for an input signal arbitrarily close to the Nyquist frequency. Therefore we have introduced a parameter β, as a tradeoff between the input signal bandwidth and the maximum time error size. From Theorem 7 we see that in order to guarantee that the loss function has its global minimum for ∆t = 0 for any ∆t such that |∆ti| < Ts/2, i = 0, …, M−1, we have to assume that the input signal is band limited to 4/5 of the Nyquist frequency. However, usually the time errors are much smaller, and if for instance |∆ti| < 0.1 Ts we can allow the input signal to be band limited to around 95% of the Nyquist frequency.

In this section we have only considered the loss function including R_{yi,yi−1}[0]. However, as for the case M = 2, this can be generalized to a loss function including R_{yi,yi−1}[l], l > 0, to get higher resolution around ∆t = 0.

D. Time Error Estimation Algorithm

In this section we will discuss how the time errors can be estimated using the theory from the previous sections, but with some modifications to make the estimation more practical. With a finite amount of data, the general time error loss function is calculated as

V^{N,(L)}_{t,R}(\Delta_t) = \sum_{l=0}^{L} \sum_{i=1}^{M-1} \sum_{j=0}^{i-1} \left( \bar{R}^{N,(\Delta_t)}_{z_i,z_{i-1}}[l] - \bar{R}^{N,(\Delta_t)}_{z_j,z_{j-1}}[l] \right)^2 \quad (13)

where

\bar{R}^{N,(\Delta_t)}_{z_i,z_j}[l] = \frac{1}{N} \sum_{k=1}^{N} \left( z^{(\Delta_t)}_{(i \bmod M)}\!\left[\left\lfloor \tfrac{k+i}{M} \right\rfloor + l\right] - z^{(\Delta_t)}_{(j \bmod M)}\!\left[\left\lfloor \tfrac{k+j}{M} \right\rfloor\right] \right)^2 \quad (14)

and z^{(∆t)}[k] is calculated according to (5), (6) and (7). As we have proved in the previous section, this loss function would have its global minimum at ∆t = ∆0t if the interpolation was linear in the time error parameters. However, the interpolation method described in Section III is not linear in the parameters, so the loss function evaluation (13) is exactly valid only for ∆t = ∆0t and is only approximately true for ∆t ≠ ∆0t. However, interpolation is a continuous mapping in ∆t, so it can locally be considered as linear. Simulations show that there are local minima in the loss function V^{(N)}_{t,R}(∆t). A contour plot of V^{(N)}_{t,R}(∆t) is shown in Figure 4. Here M = 4, but ∆t1 and ∆t3 vary while ∆t0 and ∆t2 are fixed to their true values to generate a two-dimensional plot. The input signal is here sinusoidal. We can see that there are local minima along a line, ∆t1 − ∆t3 = constant, in this figure. However, when ∆t ≠ ∆0t in the interpolation, simulations show that the gains of the subsequences of the interpolated signals are changed. Consider instead the loss function

V^{(N)}_{t,\sigma}(\Delta_t) = \sum_{i=1}^{M-1} \sum_{j=0}^{i-1} \left( \frac{1}{N} \sum_{k=1}^{N} \left( z_i^{(\Delta_t)}[k]^2 - z_j^{(\Delta_t)}[k]^2 \right) \right)^2. \quad (15)

If we plot the same contour plot for this function, see Figure 5, we see that again there are local minima along a line. But this line, ∆t1 + ∆t3 = constant, is perpendicular to the line in

Figure 4. This means that adding the two loss functions (13) and (15),

V^{(N)}_t(\Delta_t) = V^{(N)}_{t,R}(\Delta_t) + V^{(N)}_{t,\sigma}(\Delta_t) \quad (16)

eliminates the local minima, see Figure 6. This is just an example with a sinusoidal input, but simulations of many different input signals with different frequency range and


Fig. 4. A contour plot of the time error loss function, Vt,R(N )(∆t), with M = 4 and sinusoidal input. ∆t0and ∆t2are fixed to their true values.


Fig. 5. A contour plot of the time error loss function, Vt,σ(N )(∆t), with M = 4 and sinusoidal input. ∆t0and ∆t2are fixed to their true values.


different values of M indicate that this loss function works for a wide range of signals.

The minimizing argument of the loss function (16) gives the time error estimates. Since the minimizing argument cannot be calculated analytically, a numerical minimization algorithm is used. Further, the mismatch errors may change slowly with for instance temperature and aging. Therefore the parameter estimates should be adaptively updated with new data. There are many minimization algorithms available with


Fig. 6. A contour plot of the time error loss function, Vt(N )(∆t) = Vt,R(N )(∆t) + Vt,σ(N )(∆t), with M = 4 and sinusoidal input. ∆t0 and ∆t2 are fixed to their true values.

fast convergence, for instance Newton's method [16]. However, the fast converging methods are usually computationally demanding. Therefore a stochastic gradient search method is chosen here, which has a somewhat slower convergence rate than other methods, but is computationally very efficient. In a stochastic gradient minimization algorithm, the parameters are updated by a step in the negative gradient direction:

\hat{\Delta}_t^{(i+1)} = \hat{\Delta}_t^{(i)} - \mu \nabla V(\hat{\Delta}_t^{(i)})

The magnitude of the function V_t(∆̂t^{(i)}) may be very different depending on the input signal. Therefore it is hard to choose the step length µ. A normalized version of the stochastic gradient method can be used to make the choice of µ easier, for instance

\hat{\Delta}_t^{(i+1)} = \hat{\Delta}_t^{(i)} - \mu \frac{\nabla V(\hat{\Delta}_t^{(i)})}{\max \big| \nabla V(\hat{\Delta}_t^{(i)}) \big|}

To avoid taking too long steps, we can check that the loss function decreases in every iteration, and otherwise backtrack the step size until it does [16]. The next iteration is then started with doubled step length, so that the step length does not get unnecessarily small. To summarize, the adaptive equalization algorithm is given by:

Algorithm 1 (Interleaved ADC equalization) Initialization:

Choose a batch size, N , for each iteration.

Initialize the step lengths of the stochastic gradient algorithm, µt. If the order of magnitude of the mismatch errors is known, this information can be used for the initialization.

Fig. 7. Time interleaved ADC system with time errors. The time errors, ∆̂t, are estimated by a blind adaptive algorithm and the signal is corrected by a filter.

Initialize the parameter estimates for i = 0, …, M−1:

\hat{\Delta}_{t_i}^{(0)} = 0

Adaptation:

1) Collect a batch of N data from each ADC, yi[k], i = 0, …, M−1.
2) Calculate the reconstructed signals z_i^{(∆̂t^{(j)})}[k], i = 0, …, M−1, according to (5), (6) and (7).
3) Calculate the gradient of the loss function, ∇V_t^{(N)}(∆̂t^{(j)}). The gradients can be calculated numerically by a finite difference approximation from the loss functions, or by analytically differentiating the loss function. The loss function is defined in (16).
4) Update the parameter estimates:

\hat{\Delta}_t^{(j+1)} = \hat{\Delta}_t^{(j)} - \mu_t \frac{\nabla V_t^{(N)}(\hat{\Delta}_t^{(j)})}{\max \big| \nabla V_t^{(N)}(\hat{\Delta}_t^{(j)}) \big|}

5) If the loss function has increased since the last iteration,

V_t^{(N)}(\hat{\Delta}_t^{(j+1)}) > V_t^{(N)}(\hat{\Delta}_t^{(j)}),

backtrack the step size, µt := µt/2, and change the parameter estimates in step 4) until the loss function decreases. Otherwise double the step lengths for the next iteration: µt := 2µt.
6) Return to step 1).

Figure 7 illustrates the operation of the adaptive equalization algorithm.
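Steps 4) and 5) of Algorithm 1 can be sketched generically. The code below is an illustration only: a quadratic toy loss stands in for the real loss function (16), and the function name `adapt_step` and all numeric values are invented.

```python
import numpy as np

def adapt_step(loss, grad, params, mu):
    """One normalized-gradient iteration with backtracking (steps 4-5)."""
    g = grad(params)
    if np.max(np.abs(g)) == 0:
        return params, mu                        # already at a stationary point
    new = params - mu * g / np.max(np.abs(g))    # normalized gradient step
    while loss(new) > loss(params) and mu > 1e-12:
        mu /= 2                                  # backtrack until loss decreases
        new = params - mu * g / np.max(np.abs(g))
    return new, 2 * mu                           # doubled step length for next batch

# Toy quadratic loss standing in for V_t^(N) in (16).
true_dt = np.array([0.02, -0.01, 0.03])
loss = lambda d: np.sum((d - true_dt) ** 2)
grad = lambda d: 2 * (d - true_dt)

dt_hat, mu = np.zeros(3), 0.05
for _ in range(60):
    dt_hat, mu = adapt_step(loss, grad, dt_hat, mu)
```

In the real algorithm, each call corresponds to one batch of N samples, so the estimates track slow drifts in the mismatch errors.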

V. CONCLUSION

A time interleaved ADC system is a good option to significantly increase the sampling rate of A/D conversion. However, due to errors in the manufacturing process, the ADCs in the time interleaved system are not exactly identical. This means that mismatch errors in time, gain and offset are introduced. The mismatch errors cause distortion in the sampled signal. Calibration of ADCs is time consuming and costly. Further, the mismatch errors may change slowly, with for instance temperature and aging. Therefore it is preferable to continuously estimate the mismatch errors while the ADC is used.

We have in this paper presented a method for estimation and compensation of the time mismatch errors in a time interleaved ADC system. The estimation method is blind, so that it does not require any information about the input signal, except that it should be band limited to the Nyquist frequency of the complete ADC system. The method is also adaptive, so the estimates are updated if the mismatch errors change slowly. The method gives unbiased estimates, so that the estimation accuracy can be made arbitrarily good by increasing the amount of estimation data.

In the accompanying paper [10], examples and simulations of the time error estimation algorithm are given. The simulation results are compared to the Cramér-Rao bound, and the estimation algorithm is also tested on measured data.

APPENDIX I
DUAL ADC SYSTEM

Proof of Lemma 1:
$$
\begin{aligned}
R_u(T_s-\Delta_t) - R_u(T_s+\Delta_t)
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} \Phi_u(\omega)\, e^{j\omega(T_s-\Delta_t)}\,d\omega
 - \frac{1}{2\pi}\int_{-\infty}^{\infty} \Phi_u(\omega)\, e^{j\omega(T_s+\Delta_t)}\,d\omega \\
&= \frac{1}{2\pi}\int_{-\omega_c}^{\omega_c} \Phi_u(\omega)\, e^{j\omega T_s}\bigl(e^{-j\omega\Delta_t}-e^{j\omega\Delta_t}\bigr)\,d\omega \\
&= -\frac{j}{\pi}\int_{-\omega_c}^{\omega_c} \Phi_u(\omega)\cos(\omega T_s)\sin(\omega\Delta_t)\,d\omega
 + \frac{1}{\pi}\int_{-\omega_c}^{\omega_c} \Phi_u(\omega)\sin(\omega T_s)\sin(\omega\Delta_t)\,d\omega
\end{aligned}
$$
Consider the first term: $\Phi_u(\omega)$ and $\cos(\omega T_s)$ are even functions of $\omega$, while $\sin(\omega\Delta_t)$ is odd, so
$$\int_{-\omega_c}^{\omega_c} \Phi_u(\omega)\cos(\omega T_s)\sin(\omega\Delta_t)\,d\omega = 0.$$
Now consider the second term:
$$0 < T_s < \frac{\pi}{\omega_c} \text{ and } 0 < \omega < \omega_c \;\Rightarrow\; 0 < \omega T_s < \pi \;\Rightarrow\; \sin(\omega T_s) > 0, \qquad 0 < \Delta_t < T_s \;\Rightarrow\; \sin(\omega\Delta_t) > 0,$$
$$-\frac{\pi}{\omega_c} < T_s < 0 \text{ and } 0 < \omega < \omega_c \;\Rightarrow\; -\pi < \omega T_s < 0 \;\Rightarrow\; \sin(\omega T_s) < 0, \qquad T_s < \Delta_t < 0 \;\Rightarrow\; \sin(\omega\Delta_t) < 0.$$
These two statements and $\Phi_u(\omega) > 0$ give
$$\int_{-\omega_c}^{\omega_c} \Phi_u(\omega)\sin(\omega T_s)\sin(\omega\Delta_t)\,d\omega > 0 \;\Rightarrow\; R_u(T_s-\Delta_t) > R_u(T_s+\Delta_t),$$
unless $\Phi_u(\omega) = a\delta(\omega)$. But that would imply that $u(t)$ must be constant.
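Lemma 1 can be sanity-checked numerically (this is an illustration, not part of the proof): pick any positive, even, band-limited PSD, compute $R_u$ by numerical integration, and compare $R_u(T_s-\Delta_t)$ with $R_u(T_s+\Delta_t)$. The PSD, band limit, and grid below are arbitrary choices of ours.

```python
import numpy as np

# Numerical sanity check of Lemma 1: for a PSD Phi_u(w) > 0 supported on
# (-wc, wc), R_u(Ts - dt) > R_u(Ts + dt) whenever 0 < dt < Ts < pi/wc.
wc = 1.0                          # band limit (arbitrary for this check)
w = np.linspace(-wc, wc, 200001)  # integration grid over the support
dw = w[1] - w[0]
phi = 1.0 + 0.5 * np.cos(w)       # an arbitrary positive, even PSD

def R(tau):
    # R_u(tau) = (1/2pi) * int Phi_u(w) e^{jw tau} dw; real since Phi_u is even
    return np.sum(phi * np.cos(w * tau)) * dw / (2.0 * np.pi)

Ts = 0.9 * np.pi / wc             # sampling interval inside (0, pi/wc)
checks = [R(Ts - dt) > R(Ts + dt) for dt in (0.1 * Ts, 0.5 * Ts, 0.9 * Ts)]
```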

Proof of Theorem 2: According to the proof of Theorem 1, we have
$$V(\Delta_t) = \bigl(R_u(T_s-\Delta_t) - R_u(T_s+\Delta_t)\bigr)^2.$$
Due to the symmetry in $\Delta_t$, it is sufficient to show that
$$V(\Delta_t^{(2)}) > V(\Delta_t^{(1)}) \quad \text{for } \frac{T_s}{2} > \Delta_t^{(2)} > \Delta_t^{(1)} \ge 0.$$
Since $R_u(T_s-\Delta_t) > R_u(T_s+\Delta_t)$ according to Lemma 1, we can instead prove that
$$R_u(T_s-\Delta_t^{(2)}) - R_u(T_s+\Delta_t^{(2)}) > R_u(T_s-\Delta_t^{(1)}) - R_u(T_s+\Delta_t^{(1)}).$$
We have from the proof of Lemma 1 that
$$
\begin{aligned}
&R_u(T_s-\Delta_t^{(2)}) - R_u(T_s+\Delta_t^{(2)}) - \bigl(R_u(T_s-\Delta_t^{(1)}) - R_u(T_s+\Delta_t^{(1)})\bigr) \\
&\qquad= \frac{1}{\pi}\int_{-\omega_c}^{\omega_c} \Phi_u(\omega)\sin(\omega T_s)\bigl(\sin(\omega\Delta_t^{(2)}) - \sin(\omega\Delta_t^{(1)})\bigr)\,d\omega
\end{aligned}
$$
and
$$\frac{T_s}{2} > \Delta_t^{(2)} > \Delta_t^{(1)} \ge 0 \;\Rightarrow\; \sin(\omega\Delta_t^{(2)}) > \sin(\omega\Delta_t^{(1)}) \text{ if } 0 < \omega < \omega_c.$$
The same motivation as in Lemma 1 gives that the above expression is positive, i.e.,
$$V(\Delta_t^{(2)}) > V(\Delta_t^{(1)}).$$

Proof of Theorem 3:
(i): $\tilde{V}^{(l)}(0) = 0\;\forall l \Rightarrow V^{(\infty)}(0) = 0$.
(ii): $\tilde{V}^{(0)}(\Delta_t) > 0$ if $\Delta_t \ne 0$ (17), together with $\tilde{V}^{(l)}(\Delta_t) \ge 0\;\forall l$ (18), gives $V^{(\infty)}(\Delta_t) > 0$ if $\Delta_t \ne 0$ (19).

Proof of Theorem 4:
$$
\begin{aligned}
V^{(\infty)}(\Delta_t) &= \sum_{l=0}^{\infty}\bigl\{R_u((2l+1)T_s-\Delta_t) - R_u((2l+1)T_s+\Delta_t)\bigr\}^2 \\
&= \frac{1}{\pi^2}\sum_{l=0}^{\infty}\left(\int_{-\omega_c}^{\omega_c} \Phi_u(\omega)\, e^{j(2l+1)\omega T_s}\sin(\omega\Delta_t)\,d\omega\right)^2 \\
&= \frac{1}{\pi^2}\int_{-\omega_c}^{\omega_c}\int_{-\omega_c}^{\omega_c} \Phi_u(\omega)\Phi_u(\gamma)\sin(\omega\Delta_t)\sin(\gamma\Delta_t)
 \sum_{l=0}^{\infty}\Bigl\{e^{j(2l+1)T_s\omega}\, e^{j(2l+1)T_s\gamma}\Bigr\}\,d\omega\,d\gamma \\
&= \frac{1}{\pi^2}\int_{-\omega_c}^{\omega_c}\int_{-\omega_c}^{\omega_c} \Phi_u(\omega)\Phi_u(\gamma)\sin(\omega\Delta_t)\sin(\gamma\Delta_t)
 \sum_{n=-\infty}^{\infty}\Bigl\{(-1)^n \delta\Bigl(\omega+\gamma+\frac{n\pi}{T_s}\Bigr)\Bigr\}\,d\omega\,d\gamma \\
&= \frac{2}{\pi^2}\int_{0}^{\omega_c}\Bigl\{\Phi_u^2(\omega)\sin^2(\omega\Delta_t)
 + \Phi_u(\omega)\Phi_u\Bigl(\frac{\pi}{T_s}-\omega\Bigr)\sin(\omega\Delta_t)\sin\Bigl(\Bigl(\frac{\pi}{T_s}-\omega\Bigr)\Delta_t\Bigr)\Bigr\}\,d\omega > 0.
\end{aligned}
$$

First, note that $V^{(\infty)}(\Delta_t)$ is symmetric around $\Delta_t = 0$, so we only have to consider the case $0 < \Delta_t^{(1)} < \Delta_t^{(2)} < \frac{T_s}{2}$. Next, differentiate $V^{(\infty)}(\Delta_t)$ with respect to $\Delta_t$:
$$
\begin{aligned}
\frac{dV^{(\infty)}(\Delta_t)}{d\Delta_t} = \frac{2}{\pi^2}\int_0^{\omega_c}\Bigl\{
 &2\Phi_u^2(\omega)\sin(\omega\Delta_t)\cos(\omega\Delta_t)\,\omega \\
 &+ \Phi_u(\omega)\Phi_u\Bigl(\frac{\pi}{T_s}-\omega\Bigr)\cos(\omega\Delta_t)\sin\Bigl(\Bigl(\frac{\pi}{T_s}-\omega\Bigr)\Delta_t\Bigr)\,\omega \\
 &+ \Phi_u(\omega)\Phi_u\Bigl(\frac{\pi}{T_s}-\omega\Bigr)\sin(\omega\Delta_t)\cos\Bigl(\Bigl(\frac{\pi}{T_s}-\omega\Bigr)\Delta_t\Bigr)\Bigl(\frac{\pi}{T_s}-\omega\Bigr)\Bigr\}\,d\omega
\end{aligned} \tag{20}
$$
Each term in (20) is strictly positive except for $\Phi_u(\omega) = \delta(\omega)$, so the derivative is strictly positive for $0 < \Delta_t < \frac{T_s}{2}$, which concludes the proof.

Proof of Theorem 5:
(i): $\tilde{V}^{(l)}(0) = 0\;\forall l \Rightarrow V^{(\infty)}(0) = 0$.
(ii): $\tilde{V}^{(0)}(\Delta_t) > 0$ if $\Delta_t \ne 0$ (21), together with $\tilde{V}^{(l)}(\Delta_t) \ge 0\;\forall l$ (22), gives $V^{(\infty)}(\Delta_t) > 0$ if $\Delta_t \ne 0$ (23).

Proof of Theorem 6:
$$
\begin{aligned}
V^{(\infty)}(\Delta_t) &= \lim_{L\to\infty}\frac{1}{L}\sum_{l=0}^{L}\left(\sum_{i=1}^{I}\alpha_i\sin((2l+1)\omega_i T_s)\sin(\omega_i\Delta_t)
 + j\int_{-\omega_c}^{\omega_c}\tilde{\Phi}_u(\omega)\, e^{j(2l+1)\omega T_s}\sin(\omega\Delta_t)\,d\omega\right)^2 \\
&= \sum_{i=1}^{I}\sum_{k=1}^{I}\alpha_i\alpha_k\sin(\omega_i\Delta_t)\sin(\omega_k\Delta_t)\lim_{L\to\infty}\frac{1}{L}\sum_{l=0}^{L}\sin((2l+1)\omega_i T_s)\sin((2l+1)\omega_k T_s) &(24)\\
&\quad- \int_{-\omega_c}^{\omega_c}\int_{-\omega_c}^{\omega_c}\tilde{\Phi}_u(\omega)\tilde{\Phi}_u(\gamma)\sin(\omega\Delta_t)\sin(\gamma\Delta_t)\lim_{L\to\infty}\frac{1}{L}\sum_{l=0}^{L}e^{j(2l+1)\omega T_s}e^{j(2l+1)\gamma T_s}\,d\omega\,d\gamma &(25)\\
&\quad+ 2j\sum_{i=1}^{I}\int_{-\omega_c}^{\omega_c}\alpha_i\tilde{\Phi}_u(\gamma)\sin(\omega_i\Delta_t)\sin(\gamma\Delta_t)\lim_{L\to\infty}\frac{1}{L}\sum_{l=0}^{L}e^{j(2l+1)\gamma T_s}\sin((2l+1)\omega_i T_s)\,d\gamma &(26)
\end{aligned}
$$
Next, we evaluate each of the three terms above separately.

Term (24):
$$
\lim_{L\to\infty}\frac{1}{L}\sum_{l=0}^{L}\sin((2l+1)\omega_i T_s)\sin((2l+1)\omega_k T_s)
= \begin{cases} \tfrac12 & \text{if } \omega_i = \omega_k \\ \tfrac12 & \text{if } \omega_i+\omega_k = \frac{\pi}{T_s} \\ 0 & \text{otherwise} \end{cases}
$$
so
$$
\sum_{i=1}^{I}\sum_{k=1}^{I}\alpha_i\alpha_k\sin(\omega_i\Delta_t)\sin(\omega_k\Delta_t)\lim_{L\to\infty}\frac{1}{L}\sum_{l=0}^{L}\sin((2l+1)\omega_i T_s)\sin((2l+1)\omega_k T_s)
= \frac12\sum_{i=1}^{I}\alpha_i^2\sin^2(\omega_i\Delta_t) + \frac12\sum_{\{i,k:\,\omega_i+\omega_k=\pi/T_s\}}\alpha_i\alpha_k\sin(\omega_i\Delta_t)\sin(\omega_k\Delta_t).
$$

Term (25):
$$
\lim_{L\to\infty}\frac{1}{L}\sum_{l=0}^{L}e^{j(2l+1)\omega T_s}e^{j(2l+1)\gamma T_s}
= \begin{cases} 1 & \text{if } \omega+\gamma = 0 \\ -1 & \text{if } \omega+\gamma = \pm\frac{\pi}{T_s} \\ 0 & \text{otherwise} \end{cases}
$$
so
$$
\int_{-\omega_c}^{\omega_c}\int_{-\omega_c}^{\omega_c}\tilde{\Phi}_u(\omega)\tilde{\Phi}_u(\gamma)\sin(\omega\Delta_t)\sin(\gamma\Delta_t)\lim_{L\to\infty}\frac{1}{L}\sum_{l=0}^{L}e^{j(2l+1)\omega T_s}e^{j(2l+1)\gamma T_s}\,d\omega\,d\gamma = 0.
$$

Term (26):
$$
\lim_{L\to\infty}\frac{1}{L}\sum_{l=0}^{L}e^{j(2l+1)\gamma T_s}\sin((2l+1)\omega_i T_s)
= \begin{cases} \tfrac{1}{2j} & \text{if } \gamma = -\omega_i \\ -\tfrac{1}{2j} & \text{if } \gamma = \frac{\pi}{T_s}-\omega_i \\ -\tfrac{1}{2j} & \text{if } \gamma = \omega_i \\ \tfrac{1}{2j} & \text{if } \gamma = \omega_i-\frac{\pi}{T_s} \\ 0 & \text{otherwise} \end{cases}
$$
so
$$
2j\sum_{i=1}^{I}\int_{-\omega_c}^{\omega_c}\alpha_i\tilde{\Phi}_u(\gamma)\sin(\omega_i\Delta_t)\sin(\gamma\Delta_t)\lim_{L\to\infty}\frac{1}{L}\sum_{l=0}^{L}e^{j(2l+1)\gamma T_s}\sin((2l+1)\omega_i T_s)\,d\gamma = 0.
$$

That is, only the first term is nonzero, and we get
$$
V^{(\infty)}(\Delta_t) = \frac12\sum_{i=1}^{I}\alpha_i^2\sin^2(\omega_i\Delta_t) + \frac12\sum_{\{i,k:\,\omega_i+\omega_k=\pi/T_s\}}\alpha_i\alpha_k\sin(\omega_i\Delta_t)\sin(\omega_k\Delta_t).
$$
First, note that $V^{(\infty)}(\Delta_t)$ is symmetric around $\Delta_t = 0$, so we only have to consider the case $0 < \Delta_t^{(1)} < \Delta_t^{(2)} < \frac{T_s}{2}$. Next, differentiate $V^{(\infty)}(\Delta_t)$:
$$
\frac{dV^{(\infty)}(\Delta_t)}{d\Delta_t} = \sum_{i=1}^{I}\alpha_i^2\sin(\omega_i\Delta_t)\cos(\omega_i\Delta_t)\,\omega_i
+ \sum_{\{i,k:\,\omega_i+\omega_k=\pi/T_s\}}\alpha_i\alpha_k\sin(\omega_i\Delta_t)\cos(\omega_k\Delta_t)\,\omega_k > 0.
$$
The positive derivative together with the symmetry in $\Delta_t$ concludes the proof.
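The time-average identity used for term (24) can be checked numerically (an illustration of ours, not part of the proof; $T_s$ and the frequencies below are arbitrary):

```python
import numpy as np

# Check of the averaging identity used for term (24) in the proof of Theorem 6:
# lim_{L->inf} (1/L) sum_l sin((2l+1) wi Ts) sin((2l+1) wk Ts)
# equals 1/2 for wi = wk, 1/2 for wi + wk = pi/Ts, and 0 otherwise.
Ts = 0.4
L = 200_000
arg = (2 * np.arange(L) + 1) * Ts   # the odd multiples (2l+1) Ts

def time_avg(wi, wk):
    return np.mean(np.sin(arg * wi) * np.sin(arg * wk))

wi = 1.3
a_equal = time_avg(wi, wi)                # case wi == wk, expect ~1/2
a_mirror = time_avg(wi, np.pi / Ts - wi)  # case wi + wk == pi/Ts, expect ~1/2
a_other = time_avg(wi, 2.1)               # generic case, expect ~0
```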

APPENDIX II
GENERAL INTERLEAVED ADC SYSTEM

Proof of Lemma 2:
$$R_u(\tau) = \frac{1}{2\pi}\int_{-\omega_c}^{\omega_c}\Phi_u(\omega)\,e^{j\omega\tau}\,d\omega$$
$$R_u'(\tau) = \frac{1}{2\pi}\int_{-\omega_c}^{\omega_c} j\omega\,\Phi_u(\omega)\,e^{j\omega\tau}\,d\omega
= -\frac{1}{2\pi}\int_{-\omega_c}^{\omega_c}\Phi_u(\omega)\,\omega\sin(\omega\tau)\,d\omega < 0$$
when $0 < \omega_c\tau < \pi$.
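Lemma 2 can also be sanity-checked numerically (an illustration of ours, not part of the proof): for any positive, even PSD on $(-\omega_c,\omega_c)$, the derivative $R_u'(\tau)$ should be negative for $0 < \tau < \pi/\omega_c$. The PSD below is an arbitrary choice.

```python
import numpy as np

# Numerical sanity check of Lemma 2: R_u'(tau) < 0 for 0 < tau < pi/wc
# when Phi_u(w) > 0 on (-wc, wc).
wc = 1.0
w = np.linspace(-wc, wc, 200001)
dw = w[1] - w[0]
phi = np.exp(-w**2)               # arbitrary positive, even PSD

def dR(tau):
    # R_u'(tau) = -(1/2pi) * int Phi_u(w) w sin(w tau) dw
    return -np.sum(phi * w * np.sin(w * tau)) * dw / (2.0 * np.pi)

taus = np.linspace(0.05, 0.95, 10) * np.pi / wc
derivs = [dR(t) for t in taus]
```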

Proof of Theorem 7: Calculations similar to the proof of Theorem 1 give
$$V(\Delta_t) = \sum_{i=1}^{M-1}\sum_{j=0}^{i-1}\bigl[R_u(T_s+\Delta_{t_i}-\Delta_{t_{i-1}}) - R_u(T_s+\Delta_{t_j}-\Delta_{t_{j-1 \bmod M}})\bigr]^2.$$
We can clearly see that $V(0) = 0$ from the modulo $M$ quasi-stationary property. From here on in the proof, we introduce the notation $\gamma_i = \Delta_{t_i} - \Delta_{t_{i-1 \bmod M}}$ to simplify notation. Since we have the constraint $\Delta_{t_0} = 0$, this is a one-to-one mapping. In order to get $V(\Delta_t) = 0$, all terms in the sum (12) must be zero, which requires
$$R_u(T_s+\gamma_i) = R_u(T_s+\gamma_j), \quad i,j = 0,\ldots,M-1. \tag{27}$$
We have that
$$\sum_{i=0}^{M-1}\gamma_i = \sum_{i=0}^{M-1}\bigl(\Delta_{t_i} - \Delta_{t_{i-1 \bmod M}}\bigr) = 0, \tag{28}$$
which means that, if we want to find a solution other than $\gamma_i = 0$, $i = 0,\ldots,M-1$, at least one of the $\gamma_i$'s must be negative, and we can assume
$$\gamma_i = -\gamma^{(1)} < 0$$
for some $i$ and a positive constant $\gamma^{(1)}$. From Lemma 2 we have that if $\gamma_j < 0$, $j \ne i$, then $\gamma_j = \gamma_i$ if (27) should be fulfilled. Further, we have, with a slight modification of Lemma 1, that if $\gamma_j > 0$ and $T_s < \alpha\frac{\pi}{\omega_c}$, then
$$\gamma_j > 2T_s\Bigl(\frac{1}{\alpha}-1\Bigr) + \gamma^{(1)} \tag{29}$$
in order to fulfill (27). Equation (29) together with (28) gives that at least half of the $\gamma_i$'s must be smaller than zero. The ordering is here irrelevant, so we can assume that
$$\gamma_k = -\gamma^{(1)} < 0, \quad k = 0,\ldots,\frac{M}{2}+1+n, \tag{30}$$
where $0 \le n \le \frac{M}{2}-2$. This means that the rest of the $\gamma_k$'s must be larger than zero, and we have
$$\gamma_k > 2T_s\Bigl(\frac{1}{\alpha}-1\Bigr) + \gamma^{(1)}, \quad k = \frac{M}{2}+2+n,\ldots,M.$$
The worst case, which gives the closest bounds on the sampling interval, is that all $\gamma_k > 0$ are equal, so we can assume
$$\gamma_k = \gamma^{(2)} > 2T_s\Bigl(\frac{1}{\alpha}-1\Bigr) + \gamma^{(1)}, \quad k = \frac{M}{2}+2+n,\ldots,M, \tag{31}$$
where $\gamma^{(2)}$ is a positive constant. To summarize these requirements, we have from (28), (29), (30) and (31):
$$\gamma^{(2)} = \frac{M/2+1+n}{M/2-1-n}\,\gamma^{(1)} \tag{32}$$
$$\gamma^{(2)} > 2T_s\Bigl(\frac{1}{\alpha}-1\Bigr) + \gamma^{(1)} \tag{33}$$
$$\gamma^{(1)} < \beta T_s, \quad \gamma^{(2)} < \beta T_s \tag{34}$$
Putting (32) into (33) gives
$$\gamma^{(1)} > \frac{T_s(1/\alpha-1)(M/2-1-n)}{1+n}. \tag{35}$$
Putting (32) into (34) gives
$$\frac{M/2+1+n}{M/2-1-n}\,\gamma^{(1)} < \beta T_s. \tag{36}$$
Combining (35) and (36) gives
$$\beta > \Bigl(\frac{1}{\alpha}-1\Bigr)\frac{M/2+1+n}{1+n} \ge \Bigl(\frac{1}{\alpha}-1\Bigr)\frac{2M-2}{M-2}
\;\Leftrightarrow\; \alpha > \frac{1}{1+\beta\frac{M-2}{2M-2}}.$$
This means that in order to guarantee that $V(\Delta_t) > 0$ for $\Delta_t \ne 0$, we have to require
$$\alpha \le \frac{1}{1+\beta\frac{M-2}{2M-2}},$$
and since
$$\frac{1}{1+\beta/2} < \frac{1}{1+\beta\frac{M-2}{2M-2}},$$
it is sufficient that
$$T_s < \frac{1}{1+\beta/2}\,\frac{\pi}{\omega_c},$$
which concludes the proof.

REFERENCES

[1] W. Black and D. Hodges, "Time interleaved converter arrays," IEEE Journal of Solid-State Circuits, vol. SC-15, no. 6, pp. 1022–1029, December 1980.
[2] Y.-C. Jenq, "Digital spectra of nonuniformly sampled signals: A robust sampling time offset estimation algorithm for ultra high-speed waveform digitizers using interleaving," IEEE Transactions on Instrumentation and Measurement, vol. 39, no. 1, pp. 71–75, February 1990.
[3] N. Kurosawa, K. Maruyama, H. Kobayashi, H. Sugawara, and K. Kobayashi, "Explicit formula for channel mismatch effects in time-interleaved ADC systems," in Proc. IMTC, vol. 2, 2000, pp. 763–768.
[4] J. Corcoran, "Timing and amplitude error estimation for time-interleaved analog-to-digital converters," October 1992, US Patent no. 5,294,926.
[5] H. Jin and E. Lee, "A digital-background calibration technique for minimizing timing-error effects in time-interleaved ADC's," IEEE Transactions on Circuits and Systems, vol. 47, no. 7, pp. 603–613, July 2000.
[6] J. Elbornsson and J.-E. Eklund, "Blind estimation of timing errors in interleaved AD converters," in Proc. ICASSP 2001, vol. 6. IEEE, 2001, pp. 3913–3916.
[7] J. Elbornsson, K. Folkesson, and J.-E. Eklund, "Measurement verification of estimation method for time errors in a time-interleaved A/D converter system," in Proc. ISCAS 2002. IEEE, 2002.
[8] J.-E. Eklund and F. Gustafsson, "Digital offset compensation of time-interleaved ADC using random chopper sampling," in IEEE International Symposium on Circuits and Systems, vol. 3, 2000, pp. 447–450.
[9] J. Elbornsson, F. Gustafsson, and J.-E. Eklund, "Blind adaptive equalization of mismatch errors in time interleaved A/D converter system," 2003, submitted to IEEE Transactions on Circuits and Systems.
[10] ——, "Equalization of time errors in time interleaved ADC system – Part II: Analysis and examples," 2003, to be submitted to IEEE Transactions on Signal Processing.
[11] L. Ljung, System Identification: Theory for the User, 2nd ed. Prentice-Hall, 1999.
[12] M. Unser, "Splines – a perfect fit for signal and image processing," IEEE Signal Processing Magazine, pp. 22–38, November 1999.
[13] P. Löwenborg, "Asymmetric filter banks for mitigation of mismatch errors in high-speed analog-to-digital converters," PhD thesis 787, Department of Electrical Engineering, Linköping University, Linköping, Sweden, December 2002.
[14] A. Papoulis, Signal Analysis. McGraw-Hill, 1977.
[15] M. Hayes, Statistical Digital Signal Processing and Modeling. Wiley, 1996.

[16] J. Dennis and R. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, 1983.
