
UPPSALA UNIVERSITY
Institutionen för Geovetenskaper - Geofysik

UNIVERSIDAD SIMÓN BOLÍVAR
Earth Science Department - Geophysical Engineering

Joint inversion of Direct Current (DC) and Radiomagnetotelluric (RMT) data

María de los Ángeles García Juanatey

Uppsala, spring 2007


Contents

Acknowledgments

1 Introduction

2 Electric properties of rocks
  2.1 Conduction mechanisms
  2.2 Archie's law

3 Direct Current Resistivity (DC)
  3.1 Potentials in a homogeneous halfspace
  3.2 Array configurations
  3.3 Depth of investigation

4 Radiomagnetotellurics (RMT)
  4.1 Maxwell's equations
  4.2 Diffusion equations
  4.3 Skin depth
  4.4 Transfer functions
  4.5 TE and TM mode

5 Inversion theory
  5.1 The nonlinear problem
  5.2 Types of inverse problems
    5.2.1 Mixed determined problems
  5.3 The length method
    5.3.1 The least squares solution for overdetermined systems
    5.3.2 Solution for underdetermined systems (minimum length solution)
    5.3.3 Damped least squares solution (for mixed determined systems)
  5.4 Singular value decomposition
  5.5 Variance and resolution
    5.5.1 Resolution
    5.5.2 Variance
    5.5.3 Trade-off of resolution and variance
    5.5.4 Nonlinear variance and resolution analysis
  5.6 Note on the RMS

6 The inversion program
  6.1 Modifications to REBOCC
    6.1.1 DC forward responses
    6.1.2 DC Jacobian
    6.1.3 Wavenumber routine
  6.2 Truncated Singular Value Decomposition (TSVD)

7 Results
  7.1 The discretized synthetic model
  7.2 Forward responses
  7.3 Inversion models

8 Resolution analysis
  8.1 Singular values
  8.2 Nonlinear semi-axes
  8.3 Comparison of linear and nonlinear semi-axes
  8.4 Resolving kernels

9 Conclusions

Further work

Bibliography

Appendix


List of Figures

3.1 General array configuration with the standard notation
3.2 Electric field and equipotential lines due to current electrodes
3.3 Most common array layouts in the DC resistivity method
3.4 Pseudosection data pattern of the three array layouts considered
4.1 The TE and TM modes on a vertical discontinuity
5.1 Illustration of the pseudo-hyperellipsoidal surface as an approximation to the true confidence surface
6.1 General scheme of the inversion process
6.2 Inputs and outputs of the forward routine for the DC method
6.3 Inputs and outputs of the DC Jacobian routine
6.4 Sensitivity matrices
6.5 Inputs and outputs of the wavenumber routine
6.6 Differences in the forward responses with different numbers of wavenumbers
6.7 Simplified flow chart of the TSVD routine
7.1 Synthetic model used to generate data
7.2 Mesh used in the discrete models
7.3 DC forward responses of the synthetic model
7.4 TE forward responses of the synthetic model
7.5 RMS decay with respect to truncation level increase
7.6 Inversion model from the DC data
7.7 Inversion model of the TE mode from the RMT synthetic data
7.8 Joint inversion model from the DC and RMT data
8.1 Singular values of the true model for the three techniques
8.2 Nonlinear semi-axes of the true model for the three inversions
8.3 Linear and nonlinear semi-axes of the three inversion problems
8.4 The ten analyzed cells in the synthetic model
A.1 Resolution kernels at cell 1 (50 m away from the conductor and at 15 m depth)
A.2 Resolution kernels at cell 2 (50 m away from the conductor and at 55 m depth)
A.3 Resolution kernels at cell 3 (above the conductor)
A.4 Resolution kernels at cell 4 (top of the conductor)
A.5 Resolution kernels at cell 5 (bottom of the conductor)
A.6 Resolution kernels at cell 6 (below the conductor)
A.7 Resolution kernels at cell 7 (above the resistor)
A.8 Resolution kernels at cell 8 (top of the resistor)
A.9 Resolution kernels at cell 9 (bottom of the resistor)
A.10 Resolution kernels at cell 10 (below the resistor)


List of Tables

2.1 Electrical resistivities of some minerals, rocks and sediments
3.1 Relation between the depth of investigation (DI) and the total length of the array (L) (Edwards, 1977)
4.1 Assumptions of the RMT method (modified from Simpson and Bahr (2005))
7.1 Frequencies used to simulate data from the TE mode of the RMT method
7.2 Summary of the inversion results
8.1 Summary of the resolution properties


Acknowledgments

I would like to thank Universidad Simón Bolívar (my home university) for giving me the opportunity to study abroad to fulfill my program in geophysics, and Uppsala University (host university) for welcoming me and contributing significantly to the completion of my education.

Thanks to Laust B. Pedersen, who entrusted me with this work.

To Naser Meqbel, who provided the DC resistivity routines.

Special thanks to my mentor, Thomas Kalscheuer, who taught me many new things, had the patience to answer all my questions right away, kept up my motivation and solved all the problems I caused.

Many thanks to Claudia, Hanka, Martin and Michael, who made my days at work lighter and helped me whenever I needed it.

Finally, I must say that I would not have been able to accomplish this task without the support of my parents and siblings, who, even being so far away, were close to me all the time, giving me enough reasons and strength to go on.


Chapter 1

Introduction

Ambiguity is one of the most common phenomena in geophysics. The acquisition of data by any method, and the method itself, has limitations that make it impossible to obtain a data set capable of yielding a unique model of the subsurface (Roy, 1962). To reduce the effects of this situation, the simultaneous study of data from more than one method is highly recommended. The combination of information of a different nature, at any stage of a survey, provides more constraints and leads to a more reliable solution (Harinarayana, 1999).

The combination of different kinds of data can be done at the processing, modelling or interpretation stage of the investigation. The best way to do it depends on the characteristics of the different kinds of data involved.

The case of interest in the present work is the joint inversion of radiomagnetotelluric (RMT) and direct current resistivity (DC) data. Both methods depend on the same physical property of the ground, electrical resistivity, and both are inherently ambiguous, which makes their combination reasonable.

Many studies have shown that electric and electromagnetic methods respond in different ways despite being dependent on the same properties of rocks, which makes their combination yield better results (Vozoff and Jupp, 1975).


Chapter 2

Electric properties of rocks

One of the most important and useful physical properties of rocks is their electrical resistivity, which is closely related to rock porosity, pore water content and lithology. Electrical resistivity can be determined by many different electromagnetic methods, which makes it possible to find a suitable technique for almost every situation. Moreover, the resistivity of the ground varies within a very large range of values, from 1.6 · 10^(-8) Ωm for native silver to 10^6 Ωm for pure sulfur (Reynolds, 1997). This can ease the interpretation, as different rocks may have resistivity values far apart, although overlaps still occur (see table 2.1).

Table 2.1: Electrical resistivities of some minerals, rocks and sediments, shown as ranges on a logarithmic scale from 10^(-5) to 10^13 Ωm for alluvium and sand, basalt, clays, consolidated shales, dry sandy soil, granite, weathered granite, gravel (dry and saturated), hematite, limestones, marble, moraine, pyrrhotite, quartz, Quaternary/Recent sands, rock salt, sandy clay/clayey sand, sandstones, saturated and unsaturated landfill, and slates. It can be seen that for some of the materials there is a large overlap of resistivities while for others there is none. The presence of fluids is of great importance for the effective resistivity of the material. (Resistivity values taken from Reynolds (1997))


Resistivity (ρ) is defined as the resistance of a cylinder with a cross section of unit area and unit length (Dobrin, 1976). Its SI unit is the ohm meter (Ω·m), and its inverse, conductivity (σ), is measured in siemens per meter (S/m). It obeys the formula

ρ = R A / L,   (2.1)

where R (Ω) is the resistance of a conducting body, A (m^2) its cross section area and L (m) its length (Telford et al., 1976).

In an electrical circuit, the resistance R is given by Ohm's law:

R = V / I,   (2.2)

with V (V) the potential difference across a resistor and I (A) the current passing through it. Thus, the resistivity ρ can also be written as

ρ = V A / (I L).   (2.3)

2.1 Conduction mechanisms

There are three mechanisms allowing current to pass through a rock: electronic, dielectric and electrolytic conduction (Reynolds, 1997; Dobrin, 1976; Telford et al., 1976).

Electronic conduction occurs in metals, where electrons are free to move rapidly. It is important in studies of ore bodies.

Dielectric conduction occurs in insulators, where the atomic electrons are not free and can only be shifted with respect to their nuclei. It can be disregarded in electrical resistivity surveys at low frequencies, but becomes important in spectral induced polarization.

Electrolytic conduction occurs in fractures and pores due to the movement of ions within a fluid; it is the most important mechanism in near-surface studies made with electric and electromagnetic techniques.

2.2 Archie’s law

In many situations the resistivity of the fluid within pores and fractures is more important or determining than the resistivity of the host rock. In those situations, Archie's law describes the effective resistivity of the whole rock (including the pore and fracture filling), considering the porosity, the fluid saturation of the pores and the resistivity of the fluid, as (Reynolds, 1997):

ρ = a φ^(−m) s^(−n) ρ_w,   (2.4)

where

ρ is the effective rock resistivity (Ω·m),
ρ_w is the resistivity of the pore fluid (Ω·m),
φ is the porosity,
s is the volume fraction of the pores filled with fluid,
a is a constant between 0.5 and 2.5,
m is also a constant, between 1.3 and 2.5,
n is another constant, approximately 2.

The constants a, m and n are determined empirically and assumed known beforehand.
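As an illustration, the minimal sketch below evaluates Archie's law (equation 2.4) for a hypothetical water-saturated rock; the parameter values are illustrative assumptions only, not values used elsewhere in this work.

```python
# Illustrative evaluation of Archie's law (equation 2.4); all parameter
# values are hypothetical, chosen within the ranges quoted above.
def archie_resistivity(rho_w, phi, s, a=1.0, m=2.0, n=2.0):
    """Effective rock resistivity rho = a * phi**(-m) * s**(-n) * rho_w."""
    return a * phi**(-m) * s**(-n) * rho_w

# Example: 20 Ohm-m pore water, 25% porosity, fully saturated pores.
print(archie_resistivity(rho_w=20.0, phi=0.25, s=1.0))  # 320 Ohm-m
```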


Chapter 3

Direct Current Resistivity (DC)

Feeding a direct current into the earth and then measuring the potential difference at some points on the surface is the most common method to determine underground resistivities. It is widely used in the search for groundwater reservoirs and their monitoring, in locating cavities, faults and fractures, in archaeology and in several other applications.

To apply this technique, arrays of four electrodes are commonly used. Two of them feed the current into the ground and the other two are used to measure the potential difference between them (see figure 3.1).

Figure 3.1: General array configuration with the standard notation. Electrodes P1 and P2 (also denoted M and N) are the potential electrodes, while C1 and C2 (A and B) are the current or source electrodes; AM, AN, BM and BN denote the corresponding electrode separations. All the electrodes lie on the same line.

3.1 Potentials in a homogeneous halfspace

For a single current electrode in a homogeneous halfspace, the potential V_p at any point p located at a distance r from the source (see figure 3.2(a)) is given by:

V_p = ρI/(2πr),   (3.1)

where ρ is the resistivity of the halfspace. For two current electrodes, the potential at any point p is equal to the sum of the potentials due to the sources alone:

V_p = (ρI/2π) (1/r_1 − 1/r_2),   (3.2)


where r_1 and r_2 are the distances from p to the first (+I) and second (−I) source electrodes, respectively (see figure 3.2(b)).

Figure 3.2: Electric field and equipotential lines due to current electrodes: (a) a single current electrode, (b) two current electrodes.

Then the potentials at M and N in a general array layout (figure 3.1) are:

V_M = (ρI/2π) (1/AM − 1/BM),    V_N = (ρI/2π) (1/AN − 1/BN).

The potential difference between them is the voltage V_MN, which is easily measured in the field:

V_MN = V_M − V_N = (ρI/2π) [(1/AM − 1/BM) − (1/AN − 1/BN)].   (3.3)

In terms of the resistivity ρ (the physical property of interest), it is possible to rewrite equation 3.3 as:

ρ = (V_MN/I) 2π [(1/AM − 1/BM) − (1/AN − 1/BN)]^(−1),   or   ρ = (V_MN/I) K,   (3.4)

where K is defined as the geometrical factor and depends only on the array layout.

When the halfspace resistivity is not constant, ρ in equation 3.4 is not the true resistivity of the ground. It is instead an average of the real resistivities in a volume of the subsurface that is determined by the layout of the electrode array, called the apparent resistivity ρ_a.
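A minimal sketch of equations 3.3 and 3.4 for collinear electrodes follows; the electrode positions and the measured voltage and current in the usage example are hypothetical values chosen only for illustration.

```python
import math

# Geometrical factor K and apparent resistivity (equations 3.3 and 3.4)
# for a collinear four-electrode array; positions in metres along the line.
def geometric_factor(xA, xB, xM, xN):
    AM, BM = abs(xM - xA), abs(xM - xB)
    AN, BN = abs(xN - xA), abs(xN - xB)
    return 2.0 * math.pi / ((1.0/AM - 1.0/BM) - (1.0/AN - 1.0/BN))

def apparent_resistivity(V_MN, I, xA, xB, xM, xN):
    return (V_MN / I) * geometric_factor(xA, xB, xM, xN)

# Wenner-alpha with a = 10 m: A, M, N, B at 0, 10, 20, 30 m gives K = 2*pi*a.
print(geometric_factor(0.0, 30.0, 10.0, 20.0))                 # ~62.8 m
print(apparent_resistivity(0.5, 0.5, 0.0, 30.0, 10.0, 20.0))   # hypothetical V_MN and I
```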


3.2 Array configurations

Figure 3.3: Most common array layouts in the DC resistivity method: (a) Wenner-alpha array, (b) Schlumberger array, (c) Wenner-Schlumberger array, (d) dipole-dipole array.

There are many different ways to arrange the current and potential electrodes in the field. Some of them simplify later calculations, while others ease the acquisition process. The array to be used depends on the aim of the study, the structures involved and the depth of interest.

In figure 3.3, the most common layouts are illustrated: Wenner-alpha, Schlumberger and dipole-dipole. In all of them, the potential and current electrodes are along the same line.

In the Wenner-alpha array, all electrodes have the same separation distance a, which is increased by an integer factor n (commonly called the level) to reach greater depths. Every time the level is increased, there are 3 data points fewer than in the previous level in the horizontal direction. This can be seen in figure 3.4 (a), where the data pattern for the three different arrays is displayed. On average, the depth of investigation is an/2. The array data are sensitive to vertical changes of resistivity, but poor at resolving horizontal variations. The associated geometrical factor is K = 2πa (Loke, 1999).

Figure 3.4: Pseudosection data pattern of the three array layouts considered: (a) Wenner-alpha, (b) Wenner-Schlumberger, (c) dipole-dipole.

The Schlumberger layout is also good at resolving vertical changes. The potential electrodes are in the middle of the array, separated by a small distance a, while the current electrodes at the ends are at a greater distance L from the middle point of the array (technically, L ≥ 5a). To increase the depth of investigation, L must be enlarged. Normally this enlargement is done on a logarithmic scale. The geometrical factor is K = (πL^2/a) [1 − a^2/(4L^2)].

There is a newer variant of the Schlumberger array, where L is chosen in such a way that L − a/2 = na, with n being the level of the array (see figure 3.3(c)). This hybrid layout is called Wenner-Schlumberger, and its advantage is that it can be implemented with electrodes arranged at constant spacing. It is sensitive to both horizontal and vertical structures. As can be seen in figure 3.4 (b), it has a wider coverage in the horizontal direction than the Wenner array (there is a loss of just 2 data points per level) (Loke, 1999).

In the dipole-dipole array, the current electrodes are at one side, while the potential electrodes are at the other (see figure 3.3(d)). Each pair of electrodes has the same spacing a, while the distance between the pairs is the level n times a. It is very sensitive in the horizontal direction, but not in the vertical. It is widely used to map lateral resistivity changes.


3.3 Depth of investigation

It is known that the total length of the array determines the depth of investigation. To reach greater depths, the distance between the electrodes must be increased. Edwards (1977) found linear relationships and gave factors which relate the length of the array to an effective depth (see table 3.1). More recently, Oldenburg and Li (1999) introduced a depth of investigation index. This index tells how reliable the information at depth is in a model obtained through an inversion technique. It helps to recognize which features are given by the data and which are a product of the inversion parameters (e.g. damping parameters).

Array type            Level   DI/L
Wenner-alpha            -     0.173
Dipole-dipole           1     0.139
                        2     0.174
                        3     0.192
                        4     0.203
                        5     0.211
                        6     0.216
Wenner-Schlumberger     1     0.173
                        2     0.186
                        3     0.189
                        4     0.190
                        5     0.190
                        6     0.190

Table 3.1: Relation between the depth of investigation (DI) and the total length of the array (L) (Edwards, 1977).
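As a quick use of table 3.1 (the numbers below only illustrate how the factors are applied): for the Wenner-alpha layout used later in this work, with electrode spacing a = 10 m at level 7, the total spread is taken as 3·7·10 = 210 m, and the factor 0.173 gives DI ≈ 36 m, the value quoted in section 7.2.

```python
# Depth of investigation (DI) from the total array length L using the
# Edwards (1977) factors of table 3.1 (only two entries reproduced here).
DI_FACTOR = {"wenner-alpha": 0.173, "dipole-dipole-6": 0.216}

def depth_of_investigation(L, array="wenner-alpha"):
    return DI_FACTOR[array] * L

# Wenner-alpha, a = 10 m, level 7: outer electrodes are 3*7*10 = 210 m apart.
print(depth_of_investigation(3 * 7 * 10))  # ~36 m, as quoted in section 7.2
```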


Chapter 4

Radiomagnetotellurics (RMT)

Radiomagnetotellurics (RMT) is a frequency-domain electromagnetic technique. The sources employed in this method are remote radio transmitters, which provide electromagnetic plane waves with frequencies between 14 and 250 kHz. By measuring the components of the electric (E) and magnetic (H) fields at the surface, it is possible to obtain the frequency-dependent apparent resistivity and phase information of the subsurface. Their calculation requires the application of electromagnetic theory.

4.1 Maxwell’s equations

The basis of electromagnetic theory is Maxwell's equations, which describe and govern the behavior of electric and magnetic fields. They are:

∇ × E = −∂B/∂t,   [Faraday's law]   (4.1)

∇ × H = j + ∂D/∂t,   [Ampère's law]   (4.2)

∇ · D = q,   [Gauss's law]   (4.3)

∇ · B = 0,   (4.4)

where

E is the electric field (V/m),
H is the magnetic field (A/m),
B is the magnetic flux density (T),
D is the electric displacement (C/m^2),
j is the current density (A/m^2),
q is the charge density (C/m^3).

The induction of the electromagnetic fields is described by Faraday's law (4.1) and Ampère's law (4.2). The electric field flux is described by Gauss's law (4.3). The nonexistence of isolated magnetic monopoles is stated in equation 4.4.


The vector fields in these equations (4.1 – 4.4) are coupled by the constitutive relations:

D = εE,   (4.5)

B = µH,   (4.6)

j = σE,   (4.7)

where the following constants are properties of the medium:

ε is the electric permittivity (F/m),
µ is the magnetic permeability (H/m),
σ is the conductivity (S/m).

Assumptions of the RMT method

a) Maxwell's general electromagnetic equations are obeyed.

b) The Earth does not generate electromagnetic energy, but only dissipates or absorbs it.

c) All fields may be treated as conservative and analytic away from their sources.

d) The electromagnetic fields utilized, generated by transmitters far away from the area of interest, may be treated as plane waves.

e) No accumulation of free charges is expected to be sustained within a layered Earth. In a multi-dimensional Earth, charges can accumulate along discontinuities. This generates a non-inductive phenomenon known as static shift.

f) Charge is conserved and the Earth behaves as an ohmic conductor.

g) At sufficiently low frequencies, the electric displacement field is quasi-static and displacement currents are neglected.

h) At sufficiently low frequencies, any variations in the electrical permittivities and magnetic permeabilities of rocks are assumed negligible compared with variations in bulk rock conductivities.

Table 4.1: Assumptions of the RMT method (modified from Simpson and Bahr (2005)).

Equations 4.1 to 4.4 can be simplified by some of the assumptions listed in table 4.1.

Then using equations 4.5 to 4.7 and neglecting displacement currents (i.e., ∂D/∂t = 0), Maxwell’s equations can be rewritten as:

∇ × E = −∂B/∂t,   (4.8)

∇ × B = µσE, (4.9)

∇ · E = q/ε, (4.10)

∇ · B = 0. (4.11)


4.2 Diffusion equations

Now, it is possible to work on equations 4.8 – 4.11 to get expressions for E and B.

Using the vector identity ∇ × (∇ × F) = ∇(∇ · F) − ∇^2 F and taking the curl of equations 4.8 and 4.9, the following diffusion equations are obtained:

∇^2 E = µσ ∂E/∂t − ∇(∇ · E),   (4.12)

∇^2 B = µσ ∂B/∂t − (µ/σ) ∇σ × j.   (4.13)

Assuming a plane wave with a time dependence of the form e^(iωt), equations 4.12 and 4.13 can be written in the frequency domain as:

∇^2 E = iωµσ E − ∇(∇ · E),   (4.14)

∇^2 B = iωµσ B − (µ/σ) ∇σ × j.   (4.15)

The solutions to the second order differential equations 4.14 and 4.15 have the form:

E = E_0 e^(i(ωt − kz)),   (4.16)

B = B_0 e^(i(ωt − kz)),   (4.17)

where

k = sqrt(−iωµσ),   or   k = (1 − i) sqrt(ωµσ/2).   (4.18)

4.3 Skin depth

The electric and magnetic fields, as can be seen in equations 4.16 and 4.17, decay as z increases. The depth at which the amplitude is reduced by a factor of 1/e is defined as the skin depth and is given by:

δ = sqrt(2/(ωµσ)),   (4.19)

where ω is the angular frequency of the incident wave and σ the conductivity of the medium. Note that the skin depth depends on the frequencies used and on the conductivity distribution of the ground.

However, the skin depth is not the same as the depth of investigation or penetration depth. The latter is defined by Spies (1989) as the maximum depth at which an anomalous body can be detected with a particular technique and frequency. For the MT case, he states that a satisfactory depth of investigation estimate is 1.5 skin depths:

depth of investigation ≈ 1.5 δ = 1.5 sqrt(2/(ωµσ)).   (4.20)

Equation 4.20 is supposed to be a good guess of the depths that can be effectively reached with the method.
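A short numerical check of equations 4.19 and 4.20, assuming the free-space magnetic permeability µ0 for the ground; for the 100 Ωm halfspace and the 14 kHz lowest frequency used later (section 7.2), this gives the roughly 60 m quoted there.

```python
import math

MU0 = 4e-7 * math.pi  # free-space magnetic permeability (H/m), assumed for rocks

def skin_depth(frequency_hz, resistivity_ohm_m):
    """Skin depth delta = sqrt(2 / (omega * mu * sigma)) of equation 4.19."""
    omega = 2.0 * math.pi * frequency_hz
    sigma = 1.0 / resistivity_ohm_m
    return math.sqrt(2.0 / (omega * MU0 * sigma))

delta = skin_depth(14e3, 100.0)
print(delta, 1.5 * delta)  # ~42.5 m skin depth, ~64 m penetration depth (eq. 4.20)
```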


4.4 Transfer functions

The horizontal components of the electromagnetic field are related through the impedance tensor:

Z = | Z_xx  Z_xy |
    | Z_yx  Z_yy |,   (4.21)

by E_H = Z B_H/µ:

(E_x, E_y)^T = (1/µ) Z (B_x, B_y)^T.   (4.22)

This can be simplified to

Z = | 0     Z_xy |
    | Z_yx  0    |   (4.23)

in the 2D case, if the considered profile is perpendicular to the strike of the structures below the surface.

The vertical component of the magnetic field is related to the horizontal ones by the tipper

T = (A, B),   (4.24)

as can be seen in the following equation:

H_z = (1/µ) T (B_x, B_y)^T.   (4.25)

It can also be simplified in the 2D case, where A = 0, and is related to the TE mode (see section 4.5), since equations 4.26 and 4.28 consider the vertical component of the magnetic field.
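The apparent resistivity and phase mentioned at the start of this chapter follow from the off-diagonal impedance elements through the standard magnetotelluric relations ρ_a = |Z_xy|^2/(ωµ) and φ = arg(Z_xy); these relations are not stated explicitly above, so the sketch below is only an assumed illustration of that step (the impedance value and µ = µ0 are assumptions).

```python
import cmath, math

MU0 = 4e-7 * math.pi  # free-space permeability assumed

def rho_a_and_phase(Zxy, frequency_hz):
    """Standard MT relations (assumed, not derived in the text above):
    rho_a = |Zxy|^2 / (omega * mu0), phase = arg(Zxy)."""
    omega = 2.0 * math.pi * frequency_hz
    rho_a = abs(Zxy) ** 2 / (omega * MU0)
    phase_deg = math.degrees(cmath.phase(Zxy))
    return rho_a, phase_deg

# Hypothetical impedance value ((V/m)/(A/m)) at 14 kHz, for illustration only.
print(rho_a_and_phase(2.3 + 2.3j, 14e3))  # roughly 100 Ohm-m and 45 degrees
```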

4.5 TE and TM mode.

Now, consider the case presented in figure 4.1, where the resistivity of the medium is not constant anymore. There is a vertical discontinuity with strike in the x direction.

The current must be conserved across the boundary; thus, the perpendicular component of the electric field, E_y, must be discontinuous (j_y = σE_y). If the anomalous body is significantly longer than the skin depth in the strike direction, there are no variations of the fields along the strike (i.e. ∂/∂x = 0). Moreover, considering the ideal 2D case where the electric and magnetic fields are orthogonal, equations 4.8 and 4.9 can be decoupled into two independent modes:


• TE mode: the currents flow parallel to the strike and the induced magnetic field is perpendicular to it, lying in the vertical plane. The component of the electric field along the strike, E_x, is continuous across the boundary, but the current density j is not. The variations in the impedance, apparent resistivity and phase are smooth. The decoupled equations are:

∂E_x/∂y = ∂B_z/∂t,   (4.26)

∂E_x/∂z = ∂B_y/∂t,   (4.27)

∂B_z/∂y − ∂B_y/∂z = µσE_x.   (4.28)

• TM mode: the currents flow across the boundary and the magnetic field lies in the plane parallel to the strike. In this mode the current density j_y is continuous across the boundary and E_y is not (j_y = σE_y). The impedance, apparent resistivity and phase are not smooth and present some jumps. The decoupled equations are:

∂B_x/∂y = µσE_z,   (4.29)

−∂B_x/∂z = µσE_y,   (4.30)

∂E_z/∂y − ∂E_y/∂z = ∂B_x/∂t.   (4.31)

Figure 4.1: The TE and TM modes on a vertical discontinuity.


Chapter 5

Inversion theory

The geophysicist uses his/her knowledge of physics to estimate what is under the ground from observations at the surface. The link between the observations or measurements and the parameters of an estimated model of the ground is a physical model. In the most general case, the situation can be expressed as (Menke, 1989):

f(d, m) = 0,   (5.1)

where d is a vector with the measurements, m is a vector with the model parameters and f contains implicit equations that relate both (i.e. it contains information about the physical model).

The equations in f can be of any kind and are often very complicated. The simplest case occurs when the relationship between d and m is linear and explicit. Equation 5.1 can then be written as (Menke, 1989):

d − Gm = 0,   (5.2)

which is the most studied and used inverse problem. G is an N × M matrix, called the data kernel, with N the total number of data points and M the number of model parameters.

The goal of the inversion is to find m, having d and knowing G: m = G^(−1) d. This is the ideal inverse problem, as G is considered a forward operator which, applied to a model, produces data (in this case synthetic).

Not all problems have a unique solution, or it might be very hard to find it, but finding approximate solutions is possible or easier. The problem is then to seek the best solution, which must be a set of estimated model parameters (m̂) that gives rise to predicted data (d̂) very close to the real measured data (d). So, the aim is to get the m̂ that causes the smallest prediction error e (Menke, 1989):

d − d̂ = d − Gm̂ = e.   (5.3)

There are several ways to minimize e. Every inversion technique involves an error function dependent on e, to be minimized.


5.1 The nonlinear problem

Unfortunately, most inverse problems are not linear and cannot be written as equation 5.2, but it is possible to approximate them with linear ones. A way to do this is to expand the nonlinear equation g(m) = d about a point (m̂_n) through Taylor's theorem. Neglecting the higher order terms, the linearized problem becomes (Menke, 1989):

g(m) ≈ g(m̂_n) + ∇g [m − m̂_n] = g(m̂_n) + G_n [m − m̂_n],

where G_n is a matrix of derivatives, often known as the Jacobian or sensitivity matrix, computed at m̂_n. Choosing ∆m_(n+1) = m − m̂_n, the approximate and iterative equations are (Menke, 1989):

G_n ∆m_(n+1) = d − g(m̂_n),   (5.4)

m̂_(n+1) = m̂_n + ∆m_(n+1).   (5.5)

To solve the linearized problem, an initial estimate of m̂_n is needed. Then, small perturbations ∆m_(n+1) are applied in every iteration until a minimum of the error function is found.

The error function of a nonlinear problem might have an irregular shape. It could have many local minima or even more than one minimum at the same misfit level. The iterative equations above (5.4 and 5.5) do not take into account the behavior of the whole error function, only its behavior in the vicinity of m̂_n. If the initial guess of m̂_n is not close enough to the global minimum, it is possible to reach just a local one and get stuck there (Menke, 1989).

In order to be sure of having reached a global minimum, one must examine the error function by trying different initial guesses. But since it is not possible to examine it in the required detail, it is impossible to be sure.

5.2 Types of inverse problems

Whether an inverse problem has a solution or not provides the criterion for a classification system. According to it, inverse problems can be (Menke, 1989):

Underdetermined: The data do not provide enough information to obtain a unique solution of the problem. Thus, it is possible to find many solutions with a prediction error equal to zero (e = 0).

Evendetermined: There is exactly enough information in the data to find the model parameters. The problem can be solved exactly (unique solution with e = 0).

Overdetermined: The data contain more information than needed to solve the problem exactly. Therefore, it is possible to find many solutions, but none of them with a prediction error equal to zero. Then, a criterion is needed to pick the best one.

Mixed determined: This is a combination of underdetermined and overdetermined problems. The data provide too much information to calculate some of the model parameters and at the same time too little to calculate others.

If the problem to be inverted is evendetermined and can be written as in equation 5.2, it can be solved easily by just inverting G, since there is just one solution for m̂:

m̂ = G^(−1) d.   (5.6)

Then G must be invertible (and N = M).

When the problem is not evendetermined, additional techniques are required. The model parameters m̂ can be obtained by solving

m̂ = G^(−g) d,   (5.7)

where G^(−g) is called the generalized inverse, in analogy to G^(−1) in 5.6. The generalized inverse can be derived for different inversion methods (Menke, 1989).

In the case of over- and underdetermined problems, the length method is commonly used: the minimum solution length in the underdetermined case and least squares in the overdetermined case (see section 5.3). For mixed determined problems, the approach is to split them into two problems: one overdetermined and one underdetermined.

5.2.1 Mixed determined problems

Since mixed determined problems are the most common ones, it is worthwhile to discuss them a bit further. In this section it is shown, through the concept of vector spaces, how to treat the over- and underdeterminedness in the same problem, as explained by Menke (1989).

Consider the mixed determined problem Gm = d. Some model parameters are underdetermined and therefore no information about such components of m is contained in G. It is then possible to say that those components lie in the null space S_0(m), while the model parameters the problem can resolve, because it contains information about them, lie in the subspace S_p(m). Then m = m_p + m_0, where m_p is contained in S_p(m) and m_0 in S_0(m).

As the problem is also partly overdetermined, there must be some components of d that cannot be spanned by any choice of m. But it must be able to span those lying in the subspace S_p(d). All the other components of d will lie in the null space S_0(d). Thus, d = d_p + d_0 and the initial problem has become:

G(m_p + m_0) = d_p + d_0.   (5.8)

A way to determine the p and null subspaces introduced here is the singular value decomposition (SVD), discussed in section 5.4.


5.3 The length method

In the length method, the error function is the norm of e, defined as

||e|| = ( Σ_i |e_i|^n )^(1/n),   (5.9)

where the power n can be any integer and is chosen with regard to the desired weighting of the data. Higher values of n preferentially weight values that are not in the average trend, while smaller values do the opposite. The most used norm is the one with n = 2: the least squares method. It assumes that the data follow Gaussian statistics (Menke, 1989).

5.3.1 The least squares solution for overdetermined systems

To solve the linear inverse problem (equation 5.2) when it is overdetermined means to pick the best solution, considered to be the one with the lowest prediction error. In the least squares sense, this solution minimizes the error function

E = e^T e = (d − Gm̂)^T (d − Gm̂)   (5.10)

(i.e., equation 5.9 squared with n = 2). The derivative of E with respect to the model parameters m is

∂/∂m [(d − Gm̂)^T (d − Gm̂)] = 2 G^T G m̂ − 2 G^T d,

and setting it to zero, the solution is

m̂ = (G^T G)^(−1) G^T d,   (5.11)

where the generalized inverse is G^(−g)_ls = (G^T G)^(−1) G^T (Menke, 1989).
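A minimal numerical sketch of equation 5.11 on a hypothetical overdetermined problem (a straight-line fit), comparing the normal-equation form with NumPy's least squares solver; the data values are made up for illustration.

```python
import numpy as np

# Hypothetical overdetermined problem: fit d = m0 + m1*x (N = 5 data, M = 2).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
G = np.column_stack([np.ones_like(x), x])      # data kernel
d = np.array([1.1, 2.9, 5.2, 7.1, 8.8])        # noisy observations

m_normal = np.linalg.inv(G.T @ G) @ G.T @ d    # equation 5.11
m_lstsq, *_ = np.linalg.lstsq(G, d, rcond=None)
print(m_normal, m_lstsq)                       # both give the same estimate
```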

5.3.2 Solution for underdetermined systems (minimum length solution)

Underdetermined problems have many solutions, all of them with a prediction error equal to zero. In this case, the best solution is considered to be the "simplest" one. This would be the solution with the lowest norm L = m^T m. The error function to minimize is L, subject to the constraint that e = d − Gm = 0. Applying the method of Lagrange multipliers, it is (Menke, 1989)

E = m̂^T m̂ + e^T λ = Σ_(j=1..M) m̂_j^2 + Σ_(i=1..N) λ_i [ d_i − Σ_(k=1..M) G_ik m̂_k ].   (5.12)

Then ∂E/∂m = 0 gives 2m̂ = G^T λ. Plugging the latter into the constraint d = Gm̂, the expression d = G[G^T λ/2] is obtained. Note that GG^T is square and, if it is invertible, the Lagrange multipliers are λ = 2[GG^T]^(−1) d, so the minimum length solution is (Menke, 1989):

m̂ = G^T (GG^T)^(−1) d.   (5.13)


5.3.3 Damped least squares solution (for mixed determined systems)

For the mixed determined problem, the best solution must be a combination of the solutions for the purely overdetermined and the purely underdetermined problem. The combined error function will be (Menke, 1989)

E = e^T e + ε^2 m̂^T m̂,   (5.14)

where ε is called the damping factor and determines the influence of the underdeterminedness in the system.

Following the same steps as in section 5.3.1, the solution will be (Menke, 1989)

m̂ = [G^T G + ε^2 I]^(−1) G^T d.   (5.15)
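A sketch of equation 5.15 on a hypothetical ill-conditioned kernel (nearly collinear columns, invented for illustration); with ε = 0 it reduces to the plain least squares solution of equation 5.11, while a nonzero ε stabilizes the estimate.

```python
import numpy as np

def damped_least_squares(G, d, eps):
    """m_hat = (G^T G + eps^2 I)^-1 G^T d (equation 5.15); eps = 0 gives eq. 5.11."""
    M = G.shape[1]
    return np.linalg.solve(G.T @ G + eps**2 * np.eye(M), G.T @ d)

# Hypothetical nearly collinear columns make G^T G almost singular.
G = np.array([[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]])
d = np.array([2.0, 2.001, 1.999])
print(damped_least_squares(G, d, eps=0.0))  # undamped: large, unstable parameter values
print(damped_least_squares(G, d, eps=0.1))  # damped: stabilized estimate near [1, 1]
```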

5.4 Singular value decomposition

The singular value decomposition (SVD), sometimes called spectral decomposition, is an eigenvalue decomposition. It can be used on the data kernel to identify (as said in section 5.2.1) the null and p subspaces (Menke, 1989). In the following lines, this technique is briefly described.

Any matrix can be written as

G = U Λ V^T,   (5.16)

where G has dimension N × M (as in 5.2), U is an N × N matrix of eigenvectors that span the data space S(d), V is an M × M matrix of eigenvectors that span the model parameter space S(m) and Λ is an N × M diagonal matrix of ordered eigenvalues (singular values).

Some of the singular values in Λ might be zero. If there are just p singular values greater than zero, equation 5.16 becomes G = U_p Λ_p V_p^T, where Λ_p is the diagonal matrix with all the nonzero singular values and U_p and V_p are the first p columns of U and V, respectively. All the other vectors in the matrices are canceled by the zeros in Λ, and G does not contain information about the subspaces spanned by them, since they span the null subspaces mentioned in section 5.2.1. Then, if equation 5.16 holds, the solution of the inverse problem is

m̂ = V_p Λ_p^(−1) U_p^T d.   (5.17)

It can be demonstrated that the SVD generalized inverse of equation 5.17 (i.e., G^(−g) = V_p Λ_p^(−1) U_p^T) corresponds to the least squares generalized inverse in equation 5.11 for overdetermined problems and to the minimum solution length inverse for underdetermined problems.

Very often, the singular values in Λ decay and become very small without being equal to zero. In those cases it is convenient to set a cutoff value and consider all smaller singular values to be zero. By doing this, the solution of the problem is no longer the real one. However, it will be very close to it, as only the small singular values are neglected, and it will have a better model variance at the price of a decreased resolution (see section 5.5.3) (Menke, 1989).

Instead of choosing a sharp cutoff for the singular values, it is possible to include all of them but damp the smaller ones. This can be done by adding a small factor ε^2 to the eigenvalues in Λ. It has a negligible effect on the large values, but it prevents the small ones from leading to large variances. The resolution will inevitably be degraded (Menke, 1989). In this case, the generalized inverse corresponds to the damped least squares solution.

Both the cutoff value and the damping factor play an important role in handling the trade-off between resolution and variance (see section 5.5.3). If they are too large (i.e., few singular values are included), the model parameters become heavily biased. If they are too small, the model parameters become unstable (Pedersen, 2004). In any case, truncating the spectrum effectively discards the null space as well as some of the information in the system (Muiuane and Pedersen, 2001).
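A compact sketch of the truncated SVD solution (equation 5.17) using NumPy, with the truncation level p as a free parameter; the matrix below is a hypothetical toy kernel with decaying singular values, not one of the actual Jacobians used in this work.

```python
import numpy as np

def tsvd_solve(G, d, p):
    """m_hat = V_p Lambda_p^-1 U_p^T d (equation 5.17), keeping p singular values."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)  # G = U diag(s) Vt
    Up, sp, Vp = U[:, :p], s[:p], Vt[:p, :].T
    return Vp @ ((Up.T @ d) / sp)

# Toy kernel with rapidly decaying singular values, for illustration only.
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 10)) @ np.diag(10.0 ** -np.arange(10))
d = rng.normal(size=20)
for p in (3, 6, 10):
    print(p, np.linalg.norm(tsvd_solve(G, d, p)))  # model norm grows as small
                                                   # singular values are included
```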

5.5 Variance and resolution

In the previous sections, some techniques that can solve an inverse problem are briefly described. However, the solution so obtained is just an estimate of the real model parameters. Therefore, the precision and accuracy of this estimate must be known and are of great importance. A model without any quality estimator is an incomplete solution as it is not possible to know how reliable it is.

5.5.1 Resolution

The real model parameters (m_r) are supposed to satisfy the following relationship with the noise-free data (d_r): G m_r = d_r. Substituting this into the equation for the inverse problem 5.7, where d contains noise (i.e., d = d_r + n), gives

m̂ = G^(−g)(d_r + n) = G^(−g)(G m_r) + G^(−g) n = R m_r + G^(−g) n,   (5.18)

where R is called the model resolution matrix (Menke, 1989). It tells the deviation of the estimated model parameters from the real ones and over which scale length the parameters of the estimated model are well determined. In the particular case of the SVD, it can be seen from equations 5.16 and 5.17 that

R = G^(−g) G = V_p V_p^T,   (5.19)

and that the resolution of the model depends only on the model eigenvectors (Menke, 1989).

As M is the total number of parameters, R is an M × M matrix. If every parameter were perfectly resolved, R would be the identity matrix, and the estimated model parameters would be the same as the real ones (i.e., m̂ = I m_r ⇒ m̂ = m_r). But this only happens when V_p spans the whole model parameter space (i.e., p = M) (Menke, 1989). Thus, the off-diagonal values of R are normally not zero, indicating that each estimated parameter is a weighted average of some of the real model parameters. If one of the parameters is not resolved at all, the corresponding diagonal value will be zero.

5.5.2 Variance

The data (d) always contain some level of noise, and it is to be expected that this noise somehow influences the estimated model parameters (m̂). The model covariance matrix explains how the errors of d propagate into errors of m̂ (Menke, 1989):

[cov m̂] = G^(−g) [cov d] [G^(−g)]^T.   (5.20)

The data covariance matrix [cov d] is a diagonal matrix and contains the squares of the data errors (i.e., the variance of each data point). The off-diagonal values are zero since the data are assumed to be uncorrelated.

From equation 5.20, it can be seen that the errors in the model depend on the data errors as well as on the way the model parameters are mapped from the data. In the case of the SVD generalized inverse, and assuming uncorrelated data with uniform variance σ^2, the model covariance is

[cov m̂] = G^(−g) [cov d] [G^(−g)]^T = (V_p Λ_p^(−1) U_p^T)(σ^2 I)(V_p Λ_p^(−1) U_p^T)^T = σ^2 V_p Λ_p^(−2) V_p^T.   (5.21)

Note that the model covariance is very sensitive to the smallest singular value (Menke, 1989).
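Continuing the TSVD sketch above, the model resolution matrix (equation 5.19) and the model covariance (equation 5.21) follow directly from the truncated factors; the kernel and the data variance here are again hypothetical.

```python
import numpy as np

def tsvd_resolution_and_covariance(G, p, sigma2=1.0):
    """R = V_p V_p^T (eq. 5.19) and [cov m] = sigma^2 V_p Lambda_p^-2 V_p^T (eq. 5.21)."""
    _, s, Vt = np.linalg.svd(G, full_matrices=False)
    Vp, sp = Vt[:p, :].T, s[:p]
    R = Vp @ Vp.T
    cov = sigma2 * Vp @ np.diag(sp ** -2) @ Vp.T
    return R, cov

G = np.random.default_rng(1).normal(size=(15, 8))   # toy kernel
R, cov = tsvd_resolution_and_covariance(G, p=5)
print(np.diag(R))    # diagonal values close to 1 indicate well resolved parameters
print(np.diag(cov))  # parameter variances for unit data variance
```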

5.5.3 Trade-off of resolution and variance

The best solution to the inverse problem would be the set of estimated model parameters with the smallest variance and the largest resolution. But the model with the smallest variance is hardly the one with the largest resolution, since the resolution decreases as the variance improves. Thus, there is a trade-off between the resolution spread and the variance.

This trade-off is evident in the SVD, when a proper cutoff must be chosen for the truncation of the singular values. Kalscheuer and Pedersen (2007) showed this with the iterative expressions for the model resolution and variance:

R_(p+1) = V_p V_p^T + v_(p+1) v_(p+1)^T = R_p + v_(p+1) v_(p+1)^T,   (5.22)

var(m̂_(k,p+1)) = var(m̂_(k,p)) + (1/λ_(p+1)^2) v_(k,p+1)^2,   (5.23)

where v_(p+1) is the (p+1)th model eigenvector, var(m̂_(k,p+1)) is the variance of the kth parameter (i.e., the kth diagonal value of the covariance matrix) and v_(k,p+1) is the kth component of the (p+1)th model eigenvector. With these expressions (5.22 and 5.23), it is easy to see how adding singular values improves the resolution and at the same time increases the variance of every parameter, especially due to the fact that λ_p ≥ λ_(p+1).


5.5.4 Nonlinear variance and resolution analysis

The variance and resolution of the results obtained in this work were analysed taking into account the nonlinearity of the inversion problems. The analysis performed was developed by Kalscheuer and Pedersen (2007). It is based on a truncated singular value decomposition and computes a simplified description of the nonlinear confidence surface of the model. A brief explanation of how the confidence surface is estimated follows (see Kalscheuer and Pedersen, 2007, and references therein).

In the case of a linear problem, the confidence surface in parameter space is a hyperellipsoid centered around the optimum model. The optimum model is the model with the minimum misfit for a particular truncation level, and the confidence surface gives the distance from this model at which a specific misfit is reached. The semi-axes of the hyperellipsoid representing the confidence surface are parallel to the model eigenvectors and proportional to the inverse of the associated singular value.

In nonlinear problems, the shape of the confidence surface is unknown. An approximation to this surface can be a pseudo-hyperellipsoid with different lengths of the two semi-axes parallel to the same model eigenvector (see figure 5.1). The actual lengths of the semi-axes are then computed from the optimum model to the real confidence surface in the directions of the model eigenvectors.

Figure 5.1: Illustration of the pseudo-hyperellipsoidal surface for two parameters m_1 and m_2 as an approximation to the true confidence surface of the nonlinear problem. The thick red line depicts the true confidence surface. The pseudo-hyperellipsoid is described by the thin blue line. The axes a_1 and a_2 point in the directions of the eigenvectors v_1 and v_2, respectively. The extreme points m_1^min and m_1^max of m_1 on the pseudo-hyperellipsoid over- and underestimate the true model variability, respectively. (Kalscheuer and Pedersen, 2007)


5.6 Note on the RMS

The root mean square (RMS) is used to assess the goodness of fit of the obtained model predictions to the actual values. It is given by the following expression:

RMS = sqrt[ (d − d̂)^T [cov d]^(−1) (d − d̂) / N ],   (5.24)

where [cov d] is the diagonal matrix containing the data variances (as in equation 5.20), N is the number of data points, d is the data vector and d̂ the predicted data vector.

When the misfit of the data is on average the same as the errors in the actual data, the RMS is equal to one. Then, an RMS lower than 1 indicates that the model is also fitting noise. Considering that the introduction of structures in the model responding to the noise must be avoided, a model with an RMS of 1 is not convenient, as it to some extent is already fitting the noise. Therefore, the target RMS chosen for the inversion models computed in this work was 1.10 instead of 1.
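A minimal sketch of equation 5.24 with hypothetical data, predictions and standard errors (all values invented for illustration):

```python
import numpy as np

def rms_misfit(d, d_pred, d_err):
    """RMS of equation 5.24 for a diagonal data covariance with std errors d_err."""
    r = (d - d_pred) / d_err
    return np.sqrt(np.mean(r ** 2))

d = np.array([100.0, 95.0, 110.0])       # hypothetical observed apparent resistivities
d_pred = np.array([102.0, 94.0, 107.0])  # hypothetical model responses
d_err = 0.02 * d                         # 2% standard errors
print(rms_misfit(d, d_pred, d_err))      # ~1 means fitting to within the data errors
```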


Chapter 6

The inversion program

The inversions were carried out with a truncated SVD (TSVD) scheme. The code from Siripunvaraporn and Egbert (2000), a reduced basis Occam's inversion (REBOCC), was used and modified. Routines for the computation of the DC forward responses and the sensitivity matrix have been added, and the storage of all the DC data is done in a similar way as in the RMT case, taking into account that there is just one response type (apparent resistivity) and that the number of stations decreases with higher levels. The levels are treated as periods.

This modified version of REBOCC is coupled to a TSVD inversion, in such a way that REBOCC only provides the computation of the forward responses, sensitivities and RMS values for the TSVD routine (see figure 6.1).

Figure 6.1: General scheme of the inversion process.

6.1 Modifications to REBOCC

To introduce DC data into REBOCC, three new modes were created in addition to the original TE, TM and TP (tipper) modes and the added DET (determinant) mode. The new DC resistivity modes are: DCW for the Wenner-alpha array, DCS for Wenner-Schlumberger and DCD for dipole-dipole. As all the information is stored in the same way, it is possible to run a joint inversion of any combination of these modes. In that case, the field data and forward responses from the included modes are arranged in one data vector, the sensitivities in one sensitivity matrix, and the model parameters to be estimated are the same.


6.1.1 DC forward responses

The DC forward routine introduced in REBOCC was written in FORTRAN by Naser Meqbel (Meqbel, 2006). It computes the electric potentials at the nodes of a given discretized 2D model with the finite differences scheme developed by Dey and Morrison (1979). In this scheme, the 3-dimensionality of the source and the potentials is allowed for by solving the set of finite difference equations for the Fourier transforms of the potentials in the direction of the strike. After that, the potentials in the space domain are obtained by an inverse Fourier transform.

The subroutine needs as input the positions of the current electrodes (the sources), the discretized model with the resistivity distribution, and the wavenumbers and weighting coefficients needed to perform the inverse Fourier transform (calculated in the wavenumber routine discussed in section 6.1.3). The outputs are the electric potentials at all nodes of the mesh (see figure 6.2).

Figure 6.2: Inputs and outputs of the forward routine for the DC method.

Once the potentials are available, the apparent resistivities for a specified array layout (Wenner-alpha, Wenner-Schlumberger or dipole-dipole) and level are easily computed with equation 3.4 and the corresponding geometrical factor K (see section 3.2). An example of apparent resistivities computed with this routine can be found in figure 7.3.

6.1.2 DC Jacobian

The routine for the computation of the sensitivities in the DC modes, which was added to REBOCC, was also written by Naser Meqbel. The calculation of the sensitivities is not done with the perturbation method, as that would require a forward computation for each model parameter and consume computing time. Instead, the reciprocity principle (Meqbel, 2006; LaBrecque et al., 1999) is used, with which the derivative of the potential at node i due to a current electrode at node k is computed with respect to the conductivity of cell j. No further forward computations or matrix inversions are needed, as each potential electrode is also used as a current electrode and the system matrix of the forward problem is symmetric. The latter fact is the mathematical expression of reciprocity. The only additional computations required are the derivatives of the system matrix of the forward problem with respect to the conductivity of each cell.

For further details see Meqbel (2006) and LaBrecque et al. (1999).

The inputs needed for this subroutine are the apparent resistivities, the electrode positions, the discretized model, the electric potentials and the wavenumbers (see section 6.1.3). The output is the sensitivity matrix for a specific set of model parameter values (see figure 6.3).

Figure 6.3: Inputs and outputs of the DC Jacobian routine.

In figure 6.4, two sensitivity matrices for different levels of a Wenner-alpha array computed with the DC Jacobian routine are shown.

Figure 6.4: Sensitivity matrices for two different levels ((a) level 1, (b) level 4) of a Wenner-alpha array with a characteristic spacing of 4 cells between electrodes. The black squares mark the positions of the electrodes. The color scales are dimensionless, representing the ratio ∂log(ρ_a)/∂log(ρ).

6.1.3 Wavenumber routine

The routine implemented to provide the wavenumbers and transformation factors needed for the inverse Fourier transform of the electric potentials in the wavenumber domain is the one from Xu et al. (2000). An optimization method is used to select the wavenumbers that provide an accurate and satisfactory inverse Fourier transform. The routine only needs the maximum and minimum lengths of the cells in the model and the desired number of wavenumbers, and it gives the computed wavenumbers and weighting coefficients (see figure 6.5).

Figure 6.5: Inputs and outputs of the wavenumber routine.

The algorithm allows the selection of the number of wavenumbers, n, into which the interval (0, ∞) is discretized. The value suggested by Xu et al. (2000) for a model of 1000 m length (like the one used to perform the inversions, see chapter 7) was used (n = 7). To assess the influence of this parameter, the differences between the forward responses for a homogeneous halfspace of 100 Ωm are shown for n equal to 4 and 7 in figure 6.6. It can be seen that the differences are not bigger than ±3 Ωm.

Figure 6.6: Differences in the forward responses of a homogeneous halfspace of 100 Ωm for a Wenner-alpha array of 10 levels, with discretizations of the (0, ∞) interval in the wavenumber domain equal to 4 and 7. The differences are not bigger than ±3 Ωm.
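To illustrate what the wavenumbers and weights approximate (this is not the Xu et al. (2000) optimization itself): for a point current source in a homogeneous halfspace, the along-strike Fourier transform of the potential is proportional to the modified Bessel function K0, and the inverse cosine transform evaluated in the source plane, V = (2/π) ∫₀^∞ Ṽ(k) dk, recovers the 3D potential ρI/(2πr) of equation 3.1. The quadrature below is only a simple numerical check of that back-transform under the stated homogeneous-halfspace assumption, with hypothetical ρ, I and r.

```python
import numpy as np
from scipy.special import k0
from scipy.integrate import quad

# Point source of current I in a homogeneous halfspace of resistivity rho.
# Along-strike Fourier transform of the potential: V_tilde(k) = (rho*I/2pi) * K0(k*r).
# Inverse cosine transform in the source plane: V = (2/pi) * integral of V_tilde dk,
# which should recover the 3D potential of equation 3.1, V = rho*I/(2*pi*r).
rho, I, r = 100.0, 1.0, 25.0  # hypothetical values (Ohm-m, A, m)

integral, _ = quad(lambda k: (rho * I / (2 * np.pi)) * k0(k * r), 0.0, np.inf)
V_backtransformed = (2.0 / np.pi) * integral
V_analytic = rho * I / (2.0 * np.pi * r)
print(V_backtransformed, V_analytic)  # both ~0.6366 V
```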

6.2 Truncated Singular Value Decomposition (TSVD)

The inversion starts with an initial model as well as an initial truncation level and will proceed until a desired RMS value is achieved (specified at the beginning of the inversion). The TSVD routine consists of three loops: one that increases the truncation level, a second one that iterates within the same truncation level computing the new Jacobian, its SVD and data difference vector, and a third one that changes the damping factor, such that convergence is reached. A simplified flow chart of this TSVD routine is shown in figure 6.7.


Figure 6.7: Simplified flow chart of the TSVD routine. The values with a * are defined by the user. RMS* is the desired RMS and RMSdiff is the minimum difference to consider two RMS values different.
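A heavily simplified sketch of the loop structure described above (raise the truncation level, re-linearize within a level, stop at the target RMS). The forward and Jacobian callables are placeholders, the inner damping loop is omitted, and this is not the actual REBOCC/TSVD implementation; it is only a skeleton, tested here on a toy linear problem.

```python
import numpy as np

def tsvd_inversion(forward, jacobian, d, d_err, m0, rms_target=1.10,
                   max_level=20, max_iter_per_level=5):
    """Simplified TSVD inversion loop: a sketch of the scheme, not the real routine."""
    def rms(m):
        return np.sqrt(np.mean(((d - forward(m)) / d_err) ** 2))

    m = m0.copy()
    for p in range(1, max_level + 1):            # loop 1: raise the truncation level
        for _ in range(max_iter_per_level):      # loop 2: re-linearize at this level
            J = jacobian(m)
            U, s, Vt = np.linalg.svd(J, full_matrices=False)
            dm = Vt[:p].T @ ((U[:, :p].T @ (d - forward(m))) / s[:p])
            m = m + dm                           # equations 5.4 and 5.5
            if rms(m) <= rms_target:
                return m, p, rms(m)
    return m, max_level, rms(m)

# Toy linear test problem (forward(m) = G m), for illustration only.
G = np.random.default_rng(2).normal(size=(30, 6))
m_true = np.arange(1.0, 7.0)
d = G @ m_true + 0.01 * np.random.default_rng(3).normal(size=30)
m_est, level, final_rms = tsvd_inversion(lambda m: G @ m, lambda m: G, d,
                                          d_err=np.full(30, 0.01), m0=np.zeros(6))
print(level, final_rms)
```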


Chapter 7

Results

A synthetic model was designed and used to simulate DC and RMT data. From these synthetic data, three inversions were carried out: DC, RMT and the combination of both, which were then subjected to a resolution analysis. In this chapter, the forward responses and the inversions are discussed, while the resolution analysis can be found in the next chapter.

7.1 The discretized synthetic model

A model with both conductive and resistive structures in a host with intermediate resistivity is used in order to compare the resolution properties of both methods, DC and RMT. The model is shown in figure 7.1 and consists of a homogeneous halfspace of 100 Ωm, with a conductive and resistive box of 10 and 1000 Ωm, respectively. The boxes are buried at 10 m and reach 40 m depth. The space between them is 50 m.

Figure 7.1: Synthetic model used to generate data. The background resistivity is 100 Ωm and the resistivities of the boxes are 10 and 1000 Ωm, respectively. The black triangles indicate the positions of the DC electrodes.


The model was discretized on a grid with coarsening cells to the left, right and bottom to simulate infinite edges and fulfill the requirements of the finite differences scheme (Aprea et al., 1997; Dey and Morrison, 1979). The total length of the grid in the horizontal direction is 1015 m, divided into 126 cells. Of those cells, the 106 in the middle have a fine and constant width of 5 m. The maximum depth is 211 m, discretized into 23 nodes. In the vertical direction the size of the cells is never constant; it begins at half a meter and ends at 25.7 m. The total number of cells is then 126 × 22 = 2772 (see figure 7.2).

Figure 7.2: Mesh used in the discrete models. The widths of the coarse cells at the side edges sum up to 240 m. The middle cells have a constant horizontal width of 5 m. The height of the cells is only constant within the same row.
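For illustration, a sketch of how such a blocky synthetic model can be set up on a regular grid; it uses a uniform 5 m cell size over the central region only, the horizontal box positions are hypothetical (only the 10 – 40 m depth range and the 50 m separation are stated in the text), and it is not the actual coarsening mesh of figure 7.2.

```python
import numpy as np

# Simplified synthetic model on a regular grid (5 m cells, central part only);
# the real mesh additionally has coarsening cells towards the edges and bottom.
dx, dz = 5.0, 5.0
x = np.arange(0.0, 450.0, dx)            # profile coordinate of cell centres
z = np.arange(0.0, 100.0, dz)            # depth of cell centres
rho = np.full((z.size, x.size), 100.0)   # 100 Ohm-m background halfspace

def place_box(rho, x, z, x0, x1, z0, z1, value):
    ix = (x >= x0) & (x < x1)
    iz = (z >= z0) & (z < z1)
    rho[np.ix_(iz, ix)] = value
    return rho

# Hypothetical box positions along the profile; depths follow the text (10-40 m).
rho = place_box(rho, x, z, 150.0, 200.0, 10.0, 40.0, 10.0)     # conductive box
rho = place_box(rho, x, z, 250.0, 300.0, 10.0, 40.0, 1000.0)   # resistive box
print(rho.shape)
```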

7.2 Forward responses

In the DC case, apparent resistivities were computed for a Wenner-alpha layout with a spacing of 10 m (i.e., two cells between electrodes). To cover a profile of 400 m over the blocks, 40 electrodes were used. Seven levels were recorded, giving 196 data points. The data obtained were contaminated with 2% random Gaussian noise. With this layout, the expected depth of investigation (using table 3.1) is around 36 m, shallower than the bottom of the structures in the synthetic model (figure 7.1).

Figure 7.3: DC forward responses of the synthetic model for a Wenner-alpha array of 40 electrodes and 7 levels. 2% Gaussian noise was added. The black triangles at the top show the positions of the electrodes.
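A small check of the numbers quoted above and a sketch of the noise contamination step (the multiplicative Gaussian noise model is an assumption of this sketch): a Wenner-alpha spread of 40 electrodes loses 3 data points per level, giving 37 + 34 + ... + 19 = 196 points over 7 levels.

```python
import numpy as np

n_electrodes, n_levels = 40, 7
points_per_level = [n_electrodes - 3 * n for n in range(1, n_levels + 1)]
print(points_per_level, sum(points_per_level))   # [37, 34, ..., 19], total 196

# Contaminate noise-free responses with 2% multiplicative Gaussian noise
# (assumed noise model for this sketch; placeholder responses).
rng = np.random.default_rng(42)
rho_clean = np.full(sum(points_per_level), 100.0)
rho_noisy = rho_clean * (1.0 + 0.02 * rng.standard_normal(rho_clean.size))
```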


Figure 7.4: TE forward responses of the synthetic model with 15 stations and 10 periods in the RMT band plus 4% noise: (a) apparent resistivity, (b) phase. The black triangles at the top show the positions of the stations.

For the RMT data, just the TE mode was simulated. Fifteen stations were placed with a spacing of 20 meters (i.e., 4 cells between stations). Ten frequencies belonging to the RMT band (14 – 250 kHz) were used (see table 7.1). The total number of data points obtained was 300: 150 apparent resistivities and 150 phases, all of them contaminated with random Gaussian noise of 4% on the apparent resistivities and 1.1 degrees on the phases.

The expected depth of investigation for a homogeneous halfspace of 100 Ωm and a frequency of 14 kHz (the lowest frequency used) is ≈ 60 m. Therefore, the technique should be able to detect the bottom of the resistive block, but the same cannot be expected for the conductor, since lower resistivities diminish the depth of investigation.

The forward responses shown in figure 7.4 are in good agreement with these expectations. The bottom of the conductor does not seem to be reached, while the bottom of the resistor does, as the resistivities below it are lower again.

Frequency (Hz): 14000, 20882, 25781, 32620, 42617, 58007, 83531, 130517, 180000, 232030

Table 7.1: Frequencies used to simulate data from the TE mode of the RMT method.
