Comparison of data analysis methods KMSProTF and MsDEMPCA using Magnetotelluric data

Einar Valtersson

Natural Resources Engineering, master's

2020

Luleå University of Technology


Chapter 1

Preamble

1.1 Acknowledgements

I wish to thank everyone who has aided me during these trying times. Two individuals are worth extra mention. I would like to express my very great appreciation to my supervisor, Maxim Smirnov. This thesis would not have been possible without your seemingly unending enthusiasm and patient guidance.


1.2 Abstract

In this work, two ways of processing controlled-source magnetotelluric (MT) data were applied and compared against each other. The aim was to evaluate the differences between a multivariate processing method and a bivariate processing method. The software KMSProTF represents conventional processing using bivariate and robust statistics. MsDEMPCA utilizes a multivariate Criss-Cross regression scheme to improve the condition of the data matrix before robustly decomposing it into principal components. Data from the FENICS-19 survey in northern Scandinavia was processed to transfer functions (TF) using the respective methods. The TFs were visually interpreted in KMSProTF. No significant differences were found between the methods. In addition, a calibration between instruments was carried out, which caused the exclusion of parts of the data set.

1.3 Sammanfattning

I detta arbete testades två sätt att bearbeta magnetotellurisk (MT) data med kontrollerad källa och jämfördes mot varandra. Syftet var att utvärdera skillnaderna mellan en multivariat behandlingsmetod och en bivariat bearbetningsmetod. Programvaran KMSProTF representerar konventionell bearbetning med bivariat och robust statistik. MsDEMPCA använder en flervariat 'Criss-Cross'-regressionsanalys för att robust dela upp datat till huvudkomponenter. Data från FENICS-19-undersökningen i norra Skandinavien bearbetades till 'Transfer Functions' (TF) med respektive metod. TF:erna tolkades visuellt i KMSProTF. Det fanns inga signifikanta skillnader mellan metoderna. Dessutom genomfördes en kalibrering mellan instrument, vilket orsakade en uteslutning av delar av datamängden.


Contents

1 Preamble
  1.1 Acknowledgements
  1.2 Abstract
  1.3 Sammanfattning
  1.4 Abbreviations
  1.5 Symbols

2 Introduction
  2.1 Aim and Research questions
  2.2 Scope and Limitations

3 Theory
  3.1 Maxwell Equations
  3.2 Magnetotellurics
    3.2.1 Assumptions of the Magnetotelluric method
  3.3 Transfer functions and impedance tensors
  3.4 Noise
    3.4.1 Statistics
  3.5 Inversion and Forward Modelling
  3.6 Geology

4 Method
  4.1 Field Measurements
    4.1.1 Magnetic Field
    4.1.2 Electric Field
    4.1.3 Data Acquisition
    4.1.4 Controlled source fields
  4.2 Calibration
    4.2.1 KMS - Metronix
    4.2.2 EDL - Metronix
    4.2.3 EMIT - Metronix
    4.2.4 Switched polarity
    4.2.5 Sample offset
    4.2.6 Scaling of amplitudes
    4.2.7 Wrong frequency, phase shift
  4.3 Data processing
    4.3.1 Resampling
    4.3.2 Time series coherence test
    4.3.3 Windowed Fast Fourier Transform
    4.3.4 Conventional processing
    4.3.5 Robust Principal Component Analysis
  4.4 Interpretation and Inversion

5 Results
  5.1 Transfer functions, KMSProTF
  5.2 Transfer functions, MsDEMPCA
  5.3 Source field
  5.4 Inversion

6 Discussions
  6.1 Data Matrix
  6.2 Plane Wave Assumption
  6.3 Finnish TF Phase
  6.4 MsDEMPCA software design
  6.5 Smoothness and Bias
  6.6 Considerations for Future Surveys

7 Conclusion


1.4 Abbreviations

• EM - Electromagnetic
• MT - Magnetotelluric
• CS - Controlled Source
• 1D - One-dimensional
• 3D - Three-dimensional
• MsDEMPCA - Missing Data Electromagnetic Principal Component Analysis
• EDL - Earth Data Logger
• A/D - Analog-to-Digital
• WFFT - Windowed Fast Fourier Transform
• SNR - Signal-to-Noise Ratio
• TF - Transfer Function
• Deg - Degree

1.5 Symbols

• E - Electric field strength
• H - Magnetic field strength
• B - Magnetic induction
• J - Electric current density


Chapter 2

Introduction

By interpreting features in the earth's crust today, we can figure out important characteristics of the continents, such as their origin and what they have endured to become what they are today. The interest in knowing the origin of the ground we stand upon could be seen as part of humanity's quest to understand our place in the universe. In a more sober context, most of the interest in the subsurface is related to mineral or hydrocarbon extraction. In order to find and exploit the earth's resources in the most effective way, knowledge about the processes which formed and have affected the subsurface is crucial. The minerals, or hydrocarbons, have been emplaced in the crust by specific processes controlling the transport and deposition of the commodity, traces of which can be observed in today's crust. By interpreting the observed features in the subsurface, information such as the original source of the mineral, or which structures could trap hydrocarbons, can be inferred.

Geophysics can help describe the current condition of the crust at depth in a way no other geoscientific discipline is capable of. Instead of extrapolating features and processes seen today through time and space (uniformitarianism), geophysics is capable of indirectly estimating physical parameters at depth by utilizing well-known physical phenomena. One branch of geophysics, called electromagnetics (EM), uses the relationship between electric and magnetic fields and how they interact with their surroundings, as described by Maxwell's equations, in combination with surface measurements, to estimate the electric properties at depth. These EM methods come in many shapes and sizes in terms of the way of measuring and what they are able to resolve. One of them, magnetotellurics (MT), uses the earth-sized current system in the ionosphere to transmit low-frequency signals deep into the Earth's crust, giving the method one of the deepest investigation depths in all of geophysics and allowing for interpretation of the crust's larger scale features.

The crust under Scandinavia is part of the Baltic Shield, likely the remains of an ancient continent [1]. It grew through collision with other crustal fragments. This violent past means that the older rocks have undergone several phases of deformation. These older rock-domains host several world-class mineral deposits whose origins are hidden by 1.8 billion years of tectonic events [2] [3].


2.1 Aim and Research questions

The purpose of this thesis is to process Controlled Source Magnetotelluric (CSMT) data, evaluate the use of multivariate data analysis, and estimate any source field effects. The data treated were gathered as part of a regional survey of northern Scandinavia named FENICS.

The research questions are summarized as:

1. Evaluate the use of multivariate techniques for processing magnetotelluric data.

2. Investigate the validity of the plane-wave assumption.

2.2 Scope and Limitations

Only data from the FENICS-19 survey will be considered, despite the potential utility of external data sources or surveys done previously.

This thesis will only evaluate the differences between two processing programs. KMSPro was chosen to represent the current industry standard. MsDEMPCA is currently being developed at LTU by Maxim Smirnov, who also provided an academic license for KMSPro.


Chapter 3

Theory

The magnetotelluric method generally uses natural variations in the earth's electromagnetic field to estimate the conductivity of the subsurface [4]. In a controlled source investigation the natural variations are supplemented by a man-made source. By measuring the electric and magnetic fields over long periods, the earth's impedance tensor can be estimated. The application of this relationship between the electric and magnetic fields was first outlined by Tikhonov (1950) [5] and Cagniard (1953) [6]. One of the benefits of measuring over long time periods is a larger depth of investigation than that of other EM methods.

This section will outline the theoretical concepts underlying the methods used and the interpreted results. Topics include Maxwell's equations and their applications to MT; the theory behind CSMT and how it differs from natural-source MT; signal processing; and statistics.

3.1 Maxwell Equations

The Maxwell equations are a set of coupled equations describing the behaviors of, and the interactions between, the magnetic and the electric field. The following equations are written in differential form.

\[ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} \tag{3.1} \]

Equation 3.1, Gauss' law for electric fields, states that the electric flux out of a closed surface (the divergence of the field) is proportional to the total charge enclosed within it.

\[ \nabla \cdot \mathbf{B} = 0 \tag{3.2} \]


Equation 3.2, Gauss' law for magnetic fields, states that a magnetic field does not contain any monopoles, or, equivalently, that any field line that enters a closed surface must also leave it.

\[ \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \tag{3.3} \]

Faraday's law, eq. 3.3, states that variations in a magnetic field (B = flux density) over time will induce an electric field circulating in a closed loop around the changing magnetic field.

\[ \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \tag{3.4} \]

Ampère's law, eq. 3.4, states that any electrical current in a closed loop will induce a magnetic field proportional to the total current flow.

3.2 Magnetotellurics

The magnetotelluric (MT) method is an electromagnetic method which measures fluctuations in the electric field, E, and magnetic field, B, at the earth's surface in order to determine conductivity structures at depth. The fundamentals of the method were first published in 1950 [5] and 1953 [6]. A central principle of the method is that the fields decay exponentially with depth, so that the depth of investigation grows with the sounding period. This relation is described by the electromagnetic skin depth. As described in eq. 3.5, the skin depth p can be seen as a function of the period, T, depending upon the average conductivity of the medium, σ̄, and its magnetic permeability, µ [4].

\[ p(T) = \left( \frac{T}{\pi \mu \bar{\sigma}} \right)^{1/2} \tag{3.5} \]
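As a worked example of eq. 3.5, the sketch below evaluates the skin depth for a few sounding periods; the conductivity value is an assumed illustration, not a property of the survey area.

```matlab
% Skin depth p(T) = sqrt(T/(pi*mu*sigma)), eq. 3.5.
% The conductivity below is an assumed example value, not survey data.
mu    = 4*pi*1e-7;          % magnetic permeability of free space [H/m]
sigma = 1e-3;               % assumed average crustal conductivity [S/m]
T     = [1 10 100 1000];    % sounding periods [s]
p     = sqrt(T ./ (pi*mu*sigma));            % skin depth [m]
fprintf('T = %6g s  ->  p = %7.1f km\n', [T; p/1e3])
```

For the assumed 0.001 S/m crust, a 1 s period already probes roughly 16 km, illustrating why long-period MT reaches crustal scales.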

Most of the earth's magnetic field is generated by motion in the outer core; however, in addition to these internal fields there are superimposed external fields [7]. These external fields can be sourced by electrical discharges in the atmosphere, i.e. lightning strikes, or by interactions between solar radiation (the solar wind) and the earth's magnetic field. In general the magnetic field is assumed to be homogeneous and predictable, formalized by the plane wave assumption [4].

The earth's electric field is caused by changes in the magnetic field, i.e. induction, as seen in eq. 3.3. While the electric field is clearly linked to the magnetic field through Faraday's law and the impedance, the two do not necessarily share the same characteristics, and the plane wave assumption is not true for the electric field, which is heterogeneous.


3.2.1 Assumptions of the Magnetotelluric method

The magnetotelluric method is reliant on a collection of assumptions [4]:

• Electric and magnetic fields behave according to Maxwell's equations.

• The earth does not generate any electromagnetic energy; it only dissipates and absorbs energy.

• The earth acts as an ohmic conductor (obeys Ohm's law, eq. 3.6), and charge is conserved.

\[ \mathbf{J} = \sigma \mathbf{E} \tag{3.6} \]

• Time-varying displacement currents are negligible compared to time-varying conduction currents, as a result of the electric displacement field being assumed quasi-static. This reduces Ampère's law (eq. 3.4) to:

\[ \nabla \times \mathbf{B} = \mu_0 \mathbf{J} \tag{3.7} \]

• All fields may be treated as analytic and conservative away from their sources.

• The electromagnetic source fields are assumed to be uniform, plane polarized, and normally incident on the earth's surface. This plane-wave assumption implies that the impedance is time-invariant, which is important to the MT method, as the measured relationship between the electric and magnetic fields should be reproducible no matter when the fields are recorded.

• Variations in the electrical permittivity and magnetic permeability of rocks are negligible in comparison to variations in bulk rock conductivity.

3.3 Transfer functions and impedance tensors

Transfer functions are mathematical relationships between two signals in a dynamic system. The theory regarding these relationships has been available for many years. For a certain input signal X and a correlated output signal Y, the transfer function Z relates the two signals in the frequency domain.

\[ Y(\omega) = Z(\omega)\, X(\omega) \tag{3.8} \]

As the Maxwell equations are linear in the electric and magnetic fields, the relationship between them, the impedance, can be described with a transfer function in the form of a complex tensor Z relating all components of the fields (eq. 3.9). The tensor contains information about the amplitude, phase, and geometric relations between the fields.

\[ \mathbf{E}(\omega) = \mathbf{Z}(\omega)\, \mathbf{B}(\omega) \tag{3.9} \]


In practice, eq. 3.9 is reduced to eq. 3.10 due to a lack of electric field measurements in the z-direction (vertical component). This can be related to the plane wave assumption, which entails that the vertical components should be insignificant.

\[ \begin{pmatrix} E_x \\ E_y \end{pmatrix} = \begin{pmatrix} Z_{xx} & Z_{xy} \\ Z_{yx} & Z_{yy} \end{pmatrix} \begin{pmatrix} B_x \\ B_y \end{pmatrix} \tag{3.10} \]

A linear expansion of eq. 3.10 gives a set of equations (eq. 3.11 and eq. 3.12).

\[ E_x = Z_{xx} B_x + Z_{xy} B_y \tag{3.11} \]

\[ E_y = Z_{yx} B_x + Z_{yy} B_y \tag{3.12} \]

3.4 Noise

The systems modelled by MT methods are most often underdetermined, and the reality they try to represent is almost infinitely complex. As such, it is important to implement a statistical scheme when trying to estimate the conductivity structures of the earth. Most of these schemes try to minimize the errors and biases in the resulting model caused by noise in the measured signal or by an underdetermined system, but the methods by which they achieve this can differ greatly. All measurements are contaminated by some kind of noise. Any signal which does not carry useful or significant information can be considered noise. The source of the noise can be instrumental, natural, or artificial.

Instrumental noise is produced during the process of measuring and digitizing the signal. One example of typical instrumental noise is round-off errors in the A/D converter. Of course, the instrument manufacturers try to minimize the noise produced by their products. Natural noise sources are natural in origin and produce EM waves which interfere with the acquisition, processing, or interpretation of the data. An example of this could be auroral disruptions in the ionosphere, which make the measured EM field less homogeneous and could potentially invalidate the plane wave assumption.


3.4.1 Statistics

In order to combat the disruptive noise, statistical methods are applied. The least squares method has been shown to be the best estimator for a linear model with univariate errors of zero mean [8]. This means that noise which appears with a Gaussian, or any other 'bell-shaped', distribution will be handled well by a least squares estimate. However, least squares regression is highly sensitive to outliers which do not comply with the assumed distribution. To solve this problem the outliers can be removed, or, more realistically with large and diverse data sets, one turns to robust statistics.

Robust techniques generally rely on an iterative least squares method which is weighted to account for departures from the expected distribution [9]. By comparing the expected values with the residuals, weights are calculated and assigned. Data points which fit poorly are assigned a small weight, while well-fitting data points are assigned a high weight. There are several functions which can be used to estimate the weights, e.g. the M-estimate [10]. Common statistical regression (bivariate) uses one independent variable to estimate one dependent variable (eq. 3.13). However, this approach is limited, as it assumes the dependent variable is the result of a function of solely the independent variable. In reality this is rarely true.

\[ y = \beta_0 + \beta_1 x + \epsilon \tag{3.13} \]
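To make the reweighting idea concrete, the following is a minimal sketch of an iteratively reweighted least squares fit of the bivariate model in eq. 3.13 on synthetic data, using a Huber-type weight function; the tuning constant and iteration count are assumed choices and do not reproduce either software package.

```matlab
% Robust (IRLS) fit of y = b0 + b1*x (eq. 3.13) with Huber-type weights.
rng(1)
x = linspace(0, 10, 100)';
y = 2 + 0.5*x + 0.1*randn(100,1);
y(5) = 30;                            % inject one gross outlier
X = [ones(size(x)) x];                % design matrix
b = X \ y;                            % ordinary LS starting estimate
c = 1.345;                            % assumed Huber tuning constant
for it = 1:20
    r  = y - X*b;                     % residuals against current fit
    s  = median(abs(r)) / 0.6745;     % robust scale (MAD) estimate
    w  = min(1, c ./ max(abs(r/s), eps));  % small weight for large residuals
    sw = sqrt(w);                     % sqrt(w) so the LS solve minimizes w*r^2
    b  = (X .* sw) \ (y .* sw);       % weighted least-squares update
end
```

The outlier receives a near-zero weight after a few iterations, so the recovered slope and intercept stay close to the uncontaminated values.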

A more accurate representation of reality requires a multivariate (MV) approach, where several independent variables affect one or several dependent variables (eq. 3.14). However, this approach lacks the intuitive nature of bivariate statistics, where the relation between dependent and independent parameters is simple. MV requires more data and more complex ways of describing the relationship between the dependent and independent variables.

\[ \mathbf{Y} = \mathbf{X}\hat{\beta} + \hat{\epsilon} \tag{3.14} \]

Figure 3.1: PCA of arbitrary dataset. Source: Weigand

One of the simplest ways to extract the relationships between multiple variables is PCA (Principal Component Analysis) [11]. PCA can be defined mathematically as an orthogonal linear transformation which transforms the system in such a way that the coordinate axes are ordered by variance, i.e. the first axis (principal component) contains the greatest variance and the second principal component contains the second most variance. In a simple 2D space, this is visualized as a rotation of the coordinate system to match the 'trend' of the data points.
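A minimal sketch of this rotation on synthetic 2D data, computing the principal axes with the singular value decomposition; all values are illustrative.

```matlab
% PCA as an orthogonal rotation ordered by variance, via the SVD.
rng(0)
X  = randn(500,2) * [2 0.5; 0 1];    % correlated synthetic point cloud
Xc = X - mean(X);                    % center each column
[~, S, V] = svd(Xc, 'econ');         % columns of V are the principal axes
explained = diag(S).^2 / sum(diag(S).^2);  % fraction of variance per component
scores = Xc * V;                     % data expressed in the rotated axes
```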


3.5 Inversion and Forward Modelling

By using Maxwell's equations and other constraints it is possible to calculate the expected impedances from a conductivity model of the subsurface. This process, where the response from a theoretical model is estimated and compared to the real measured data, is called forward modelling [12]. Inversion is an extension of forward modelling where the process is reversed, or inverted: a subsurface conductivity model is estimated based on the real measured data. As comprehensive inversion and forward modelling are beyond the scope of the thesis, this description is significantly limited in extent and depth. There are several textbooks dedicated to this subject [13].

Forward modelling is based on the idea that Maxwell's equations can be used to build a function, F, which calculates an expected response (d̂) from an assumed model (m̂). In other words, a function which is capable of mapping vectors from the model-space into the data-space.

\[ F(\hat{m}) = \hat{d} \tag{3.15} \]

Inversion requires an inverse function, F⁻¹, which maps vectors from the data-space into the model-space.

\[ F^{-1}(\hat{d}) = \hat{m} \tag{3.16} \]

In systems governed by linear sets of equations, the function F can be described by a transformation matrix, A, and its inverse, A⁻¹. In these linear cases the transformation between data-space and model-space can be estimated using simple linear statistical techniques. If F is not linear it is no longer possible to form the inverse function F⁻¹ directly; instead the inversion must depend on other, more complicated, techniques.

The goal of any inversion is to minimize an objective function, which describes the characteristics of a solution. One example of an objective function is the least-squares solution (eq. 3.17), which produces the solution with the smallest residuals, or several solutions with zero residuals.

\[ \min_m \| F(m) - d \|^2 \tag{3.17} \]
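When F is linear, F(m) = Am, the objective in eq. 3.17 has a direct least-squares solution. A minimal sketch with an assumed toy operator (not a real MT forward kernel):

```matlab
% Linear inverse problem: minimize ||A*m - d||^2 (eq. 3.17).
% A is an assumed toy forward operator for illustration only.
A      = [1 2; 3 4; 5 6];             % maps model-space into data-space
m_true = [1; -0.5];                   % synthetic 'true' model
d      = A*m_true + 0.01*randn(3,1);  % synthetic noisy data
m_est  = A \ d;                       % least-squares model estimate
misfit = norm(A*m_est - d);           % residual of the recovered model
```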

3.6 Geology

The survey was conducted in northern Scandinavia, an area located in the Fennoscandian (or Baltic) craton. The geology in the area is usually divided into two domains: older underlying Archean provinces, and superimposed younger Proterozoic provinces, named the Svecofennian Provinces [1]. Both of these provinces host a diverse set of metallic mineral deposits, examples being the world-class deposits of Kiruna and Suurikuusikko [2].


Figure 3.2: Geological map of the center of the Fennoscandian shield. From [14]


In general, the signs indicate a multi-stage compressional environment with accretion of crustal fragments occurring [3].


Chapter 4

Method

This chapter describes the methods and procedures used for measuring and processing the magnetotelluric data. The choice of methods was made in consideration of the underlying theory and the aims of the thesis. The chapter begins with the practical field measurements, continues with the processing steps, and ends with a brief description of the inversion and interpretation steps.

4.1 Field Measurements

Figure 4.1: MT-station layout. From [16]

The data used in this thesis were measured in the field over several years during different field campaigns. The area is located in northern Scandinavia, on the land border of Sweden and Finland, see map fig.4.2. A station setup


Figure 4.2: Site locations of FENICS-19 in both Sweden and Finland

similar to fig.4.1 was used for each station, however the exact layout depended on the location and which set of instruments were used. Three Metronix loggers and three EDL loggers were used with magnetic coils. Two KMS systems without any magnetic coils were used in close proximity to an additional Metronix measuring station with coils, with the intent of using the magnetic field data from the one station for all three measurements.


4.1.1 Magnetic Field

The magnetic fields (or rather the magnetic inductions) at each site were measured using induction coils, which utilize Faraday's law, eq. 3.3: changes in the magnetic field induce a current in the copper coil. The three coils at each site were installed perpendicular to each other in the cardinal directions (N-S, W-E, and vertically). As induction coils are only sensitive to flux changes along the coil axis, three coils are required to fully capture the magnetic field [17]. Alternatively, fluxgate sensors could have been used instead of induction coils; however, they are better suited for longer period measurements [4].

\[ V = -nA \frac{dB}{dt} \tag{4.1} \]

Eq. 4.1 describes the transfer function of an induction coil [17], where n is the number of turns in the coil and A is the cross-sectional area of the coil. The magnetic flux density, B, represents the strength of the magnetic field in one direction, and V is the corresponding voltage output by the coil. As the output voltage depends on changes in the magnetic field (the derivative of B over time), higher frequency variations will produce a stronger signal. To adjust for this frequency dependency, the coil manufacturer provides a calibration procedure which corrects the data in the frequency domain by down-weighting the higher frequencies.
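The frequency dependence in eq. 4.1 can be illustrated for a sinusoidal field B = B0 sin(2πft), where the output amplitude is |V| = nA·2πf·B0. In the sketch below the coil parameters are assumed values, not Metronix specifications.

```matlab
% Induction-coil amplitude response |V| = n*A*2*pi*f*B0 (from eq. 4.1).
% Coil parameters below are assumed illustrative values.
n  = 5e4;                      % number of turns
A  = 0.01;                     % cross-sectional area [m^2]
B0 = 1e-9;                     % field amplitude [T]
f  = logspace(-2, 3, 200);     % frequency axis [Hz]
V0 = n * A * 2*pi*f * B0;      % output amplitude rises linearly with f
loglog(f, V0), xlabel('f [Hz]'), ylabel('|V| [V]')
% Calibration divides the recorded spectrum by this response,
% down-weighting the higher frequencies as described above.
```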

4.1.2 Electric Field

The electric fields (or rather potential differences) were measured by 5 non-polarizing Cu-CuSO4 electrodes placed in an 'X' shape. The distance between the paired electrodes was 100 m, and they were placed in the N-S and E-W directions. Using a multimeter, the resistance in each dipole was checked to be in a reasonable range. If an electrode was not installed correctly, or if the ground was very dry, the galvanic connection would be poor, resulting in an unreasonably high resistance.

The fluctuations in the electric field are determined by measuring the potential differences over a certain measurement path. In this case the paths are two straight and perpendicular lines, which simplifies the physics from eq. 4.2 to eq. 4.3, where each horizontal electric field component, E_i, is only dependent on the potential difference, φ_i, and the distance between the electrodes, l_i [12].

\[ \varphi = \int \mathbf{E} \cdot d\mathbf{s} \tag{4.2} \]

\[ E_i = \frac{\varphi_i}{l_i} \tag{4.3} \]
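A short worked example of eq. 4.3 with assumed values: a 12 mV potential difference over the 100 m dipole corresponds to a field of 120 mV/km.

```matlab
% Dipole voltage to electric-field conversion (eq. 4.3); values assumed.
phi = 12e-3;              % measured potential difference [V]
len = 100;                % electrode separation [m]
E   = phi / len;          % field strength [V/m] = 1.2e-4 V/m
E_mV_per_km = E * 1e6;    % = 120 mV/km
```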

The purpose of the electrodes is to form a contact between the ground and the metallic wire. Dry soil is not particularly conductive in itself; however, pore-water in wet soils can carry a charge due to its content of ions. At the interface between a metallic and an ionic conductor an ion-electron exchange will occur: metal ions will enter the solution and the electrolyte ions will combine with the metal of the electrode. This redistribution of ions creates a potential difference associated with the electrode, which would be perceived as noise in the resulting measured data. In order to reduce this potential difference, the metallic conductor is encased in a solution containing a salt of the same metal, in this case CuSO4, as the wire is made of copper [12]. Other electrode designs use Ag-AgCl or Pb-PbSO4 to achieve the same potential-reducing effect. For higher frequency surveys (f > 10 Hz) the non-polarizing electrodes do not perform significantly better than steel electrodes [18].

4.1.3 Data Acquisition

For two hours during the 'quiet' nights, all active instruments did a burst of higher frequency measurements: 1000 Hz for EDL/KMS loggers, and 4096 Hz for Metronix loggers. It was during these two hours the artificial source was active. During the rest of the day all instruments were sampling at a lower rate, from 20 Hz to 128 Hz. When uninstalling a site, a cursory investigation into the quality of the data produced was performed, in order to learn if any complications had arisen during the measurements. Examples of complications could be an electrode losing its galvanic contact, or wildlife disturbing wires or instruments, causing them to disconnect from the logger. One of the key requirements of MT measurements is the accurate timing needed for the simultaneous nature of the data. Modern MT systems use GPS signals to synchronize the internal clocks between instruments. This allows for interpretation of time-dependent data between sites. Techniques such as remote reference utilize the timing to reduce noise or increase certainties, and are only viable if the timing is accurate [12].

The 'loggers' digitize the analog voltage from the instruments. Before digitizing, the instrument applies several filter and gain functions in order to amplify the signal strength and reduce the signal noise, improving the signal-to-noise ratio (SNR) [19]. The specific pre-digitization procedure is proprietary to each instrument and poorly disclosed; however, the general processes used are well understood.

When digitizing an analog signal an A/D converter is used. They operate on the simple premise that a certain input voltage results in a certain digital output. This number, like all digital data, is described using bits (ones and zeros). The signal resolution is dependent on the number of bits used to describe the signal [20]. For example, Metronix uses a 24-bit A/D converter for low frequency measurements, meaning that each data point is described using 24 bits. This means that the Metronix logger can encode an analog input to one of 2^24 different levels. In order for the resulting digital output to be meaningful, the gain and offset must be known.

The gain function controls how much the analog input signal is amplified to match the range of the input signal to the range of the A/D converter. If the range of the input signal is significantly lower than the A/D converter's, the resolution of the amplitude is lowered, but if the range of the input signal exceeds that of the A/D converter an overrange event occurs. Simply put, the input is higher than expected and the A/D converter can't encode it normally. In older converters this caused rollover, resulting in a switched polarity. In modern converters, however, it instead causes clipping, as all the bits are assigned ones (111111...).

The offset describes the baseline of the A/D converter; in other words, it describes the relationship between a digital zero output and the corresponding analog signal. If the input signal is expected to vary around 0 V the offset should be zero, while if the signal is expected to vary around 10 V the offset should reflect this.


The appropriate gain and offset are highly dependent on the input signal, less so on the instrument itself, which is why it is important to calibrate the logger at each site and check the digitized signal after installing each site. It should be apparent if the gain or offset is poor, as the resulting data would be clipping or not varying significantly. When exporting the data it is usually converted from a count unit to a physical unit, e.g. mV.
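A sketch of the count-to-voltage bookkeeping described above; the range, gain, and offset are assumed illustrative settings rather than any specific logger's.

```matlab
% Converting raw A/D counts to a physical unit using known gain/offset.
nbits      = 24;                       % e.g. a 24-bit converter
full_scale = 10;                       % assumed input range: +/-10 V
gain       = 4;                        % assumed analog gain before the A/D
offset     = 0;                        % assumed baseline [V]
lsb    = 2*full_scale / 2^nbits;       % volts per count at the converter
counts = 123456;                       % one raw sample from the logger
v_in   = counts*lsb/gain + offset;     % voltage at the sensor [V]
mV     = v_in * 1e3;                   % exported physical unit [mV]
```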

4.1.4 Controlled source fields

'Ordinary' MT surveys measure the magnetic and electric fields which vary naturally due to interactions between the solar wind and the earth's ionosphere, and due to global lightning activity. However, this survey complemented the broadband natural sources with an artificial source in the form of an active power line at an adequate distance away to be considered far-field.

4.2 Calibration

Due to the field data being measured by several different instruments, instrument-specific calibration was required. A signal generator was used to generate a sweep signal ranging from 1 Hz to 1 kHz. The signal was recorded simultaneously by all instruments with settings similar to those used in the field. Transfer functions calculated from the differing signals were used to quantify differences between instruments. Metronix was chosen as the reference as it has robust internal calibration.

Figure 4.3: Schematic sweep signal.

Initially, the transfer functions between two instruments were calculated using a simultaneous 'cut out' of the signals, comprised of several sweeps. MATLAB's function tfestimate was used to calculate the transfer functions [21]. The signal with the higher sampling rate was re-sampled to match the other signal's lower frequency. The start time of each signal was given by a header file associated with the signals. The end time of each measurement was either given by the header, or calculated using the sample frequency. A synchronous sample interval was extracted using a chosen start and end time. The transfer function was estimated based on these two sample intervals. Any potential time shift was calculated using eq. 4.4. Common issues, such as switched polarities or samples offset by one, were also visualized.

\[ \Delta t = \frac{\omega^{\circ}}{360^{\circ} \cdot f} \tag{4.4} \]

Due to the interesting initial findings, a more rigorous investigation was done to quantify the behavior and character of the differences between the instruments. Instead of 'arbitrarily' selecting a certain time span, the time-series was divided into equal-length segments, and for each segment transfer functions between each instrument were calculated and evaluated using specified constraints. An example of such a constraint would be that the absolute value of the transfer functions should remain near 1.
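A self-contained sketch of this comparison using MATLAB's tfestimate (Signal Processing Toolbox) on a synthetic pair, where a scaled, delayed copy stands in for the second instrument; the window and rate values are assumed.

```matlab
% Instrument comparison via transfer-function estimation (cf. eq. 4.4).
fs = 1000;                                 % common rate after resampling [Hz]
x  = randn(1e5, 1);                        % stand-in reference recording
y  = 0.8 * circshift(x, 3);                % 'second instrument': scaled, delayed 3 samples
[T, f] = tfestimate(x, y, hann(4096), 2048, 4096, fs);
% |T| recovers the 0.8 scaling; a linear phase trend reveals the delay.
ph_deg = rad2deg(unwrap(angle(T)));
dt = ph_deg(2:end) ./ (360 * f(2:end));    % per-frequency time shift (eq. 4.4) [s]
```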

4.2.1 KMS - Metronix

The transfer function was calculated on the low-frequency signals of a KMS instrument (80 Hz) and a Metronix (128 Hz). The sample intervals were significantly time-shifted (2.3 s), likely due to an incorrect start-time from one of the instruments. The time shift was 'corrected' by shifting one of the time-series forward. The resulting signals after re-sampling, fig. 4.4, are visually different, especially the amplitudes at higher frequencies. The resulting transfer function, fig. 4.5a and fig. 4.5b, is not what is expected from two instruments measuring the same signal, i.e. an absolute value of 1 and a constantly low phase shift. The frequency-dependent phase shift and scaling make these instruments difficult to use together, in addition to the incorrect timing. This could be a result of different filtering processes or a different instrument design for digitizing and timing measurements.

Due to the large time discrepancy, and the poor characteristics of the ’synced’ transfer function, the data from the KMS instruments will not be used for any further interpretations. Further investigations beyond the scope of this thesis are warranted.

4.2.2 EDL - Metronix

The transfer function was calculated based on the EDL instrument's 1000 Hz signal and the signal from the Metronix instrument's 4096 Hz signal, down-sampled to 1000 Hz, fig. 4.6. The resulting transfer function is reasonable in terms of absolute value and phase shift. The absolute value, fig. 4.7a, illustrates the difference in filtering and scaling between the instruments. As an example, the oscillating amplitude at frequencies lower than 400 Hz could indicate a filter similar to a Type I Chebyshev filter. The amplitude of the absolute value indicates a significant difference in scaling. The difference of about 10^6 is likely due to different gain functions and/or output physical units.


Figure 4.4: Part of the extracted signals.

(a) Absolute value of TF between KMS and Metronix. (b) Phase shift of TF between KMS and Metronix.

Figure 4.5: Transfer function between KMS and Metronix.


Figure 4.6: Part of the extracted signals.

(a) Absolute value of TF between EDL and Metronix. (b) Phase shift of TF between EDL and Metronix.


4.2.3 EMIT - Metronix

The transfer function was calculated from the EMIT instrument’s 12 kHz signal and the signal from the Metronix instrument’s 4096 Hz signal. The EMIT signal was down-sampled to 4096 Hz. The resulting transfer function is similar to the transfer function between EDL and Metronix. The time-shift is slightly larger at 0.061 ms, which is equal to 0.73 samples at 12 kHz.

Figure 4.8: Part of the extracted signals.

(a) Absolute value of TF between EMIT and Metronix. (b) Phase shift of TF between EMIT and Metronix.

Figure 4.9: Transfer function between EMIT and Metronix.


4.2.4 Switched polarity

The following graphs describe the expected response if the measured signals had opposing polarities, e.g. due to installing the instruments incorrectly.

Figure 4.10: Sinusoidal signals with different polarities


4.2.5 Sample offset

The following graphs describe the expected response if the measured signals were offset by 5 samples, e.g. due to synchronizing the datasets incorrectly.

Figure 4.12: Sinusoidal signals offset by 5 samples

(a) The real part of the TF from an offset signal (b) The imaginary part of the TF from an offset signal


4.2.6 Scaling of amplitudes

The following graphs describe the expected response if the measured signals are scaled differently, e.g. due to not compensating for gain functions or unit conversions.

Figure 4.14: Sinusoidal signals scaled by factor of 5


4.2.7 Wrong frequency, phase shift

The following graphs describe the expected response if the measured signals are sampled at different sampling rates, e.g. due to not properly re-sampling during data processing.

Figure 4.16: Sinusoidal signals sampled at different rates

(a) The real part of the TF from signals with different sample rates

(b) The imaginary part of the TF from signals with different sample rates


4.3 Data processing

The purpose of this section is to describe the steps conducted to go from time-series directly from the instruments to principal components and transfer functions ready for interpretation. These steps include: resampling, time-synchronizing, Fourier transform, frequency band selection, robust PCA, and transfer function estimation.

4.3.1 Resampling

The data from the field measurements were exported from the instruments and organized based on site location. The files were then converted from the instrument-specific file format to a common file format. The time-series were resampled to specific frequencies using MATLAB's resample function, which uses a Finite Impulse Response (FIR) filter [21]. EDL instruments measured using sampling rates of 20 Hz and 1000 Hz for the low and high frequency measurements, respectively. In order to avoid extrapolating data to a higher sampling rate, those frequencies were chosen for the investigation. The higher sampling rate data from the Metronix instruments were down-sampled to match the rate of the EDL data.

Since the different instruments output the signal with different units, it is required to scale the outputs to a single physical unit, such as mV. Metronix instruments output with a unit of mV; EDL instruments output the data with a unit of µV. The relation between these instruments can be seen in fig. 4.7a. The timing of the first data point for each acquisition session is synchronized using the GPS-clock start time given by the instrument.
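A sketch of this common-rate, common-unit preparation using MATLAB's resample; the rates follow the text, while the signals themselves are stand-ins.

```matlab
% Bring two instruments to a common rate and unit before merging.
fs_metronix = 4096;  fs_target = 1000;
x_metronix  = randn(fs_metronix*60, 1);     % stand-in 1-minute recording [mV]
[p, q]   = rat(fs_target / fs_metronix);    % rational rate ratio (125/512)
x_down   = resample(x_metronix, p, q);      % FIR-filtered resampling
x_edl_uV = randn(fs_target*60, 1) * 1e3;    % stand-in EDL recording [uV]
x_edl_mV = x_edl_uV * 1e-3;                 % scale EDL microvolts to millivolts
```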

4.3.2 Time series coherence test

The coherence of the time series channels is tested by checking the residuals between the real data and a least-squares linear regression estimate. The model, eq. 4.5, describes the assumed relationship between the input channels, X = [B̂x B̂y], and the output channels, Y = [Êx Êy]. The regression parameter b̄ is estimated by solving the linear system of equations.

\[ Y = X \bar{b} \tag{4.5} \]


If the coherence, c̄ (eq. 4.6), of a certain segment is lower than a specified threshold, the segment is removed from the time series.

\[ \bar{c} = \frac{\| Y_{est}' Y \|_2^2}{\| Y' Y \|_2 \cdot \| Y_{est}' Y_{est} \|_2} \tag{4.6} \]
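A sketch of the segment-wise test on synthetic channels: predict the output channels with least squares (eq. 4.5), score each segment with a coherence of the form of eq. 4.6, and flag segments below a threshold; the threshold and segment length are assumed values.

```matlab
% Segment coherence test: keep only segments where E is well
% predicted from B. Threshold and segment length are assumed.
rng(2)
X = randn(8192, 2);                              % input channels [Bx By]
Y = X*[1 0.2; -0.3 0.8] + 0.1*randn(8192, 2);    % output channels [Ex Ey]
seg = 1024;  thresh = 0.7;
nseg = floor(size(X,1)/seg);  keep = true(nseg,1);
for k = 1:nseg
    i    = (k-1)*seg + (1:seg);
    b    = X(i,:) \ Y(i,:);                      % LS regression (eq. 4.5)
    Yest = X(i,:) * b;                           % predicted outputs
    c = norm(Yest'*Y(i,:))^2 / ...               % coherence in the form of eq. 4.6
        (norm(Y(i,:)'*Y(i,:)) * norm(Yest'*Yest));
    keep(k) = c >= thresh;                       % flag low-coherence segments
end
```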

4.3.3 Windowed Fast Fourier Transform

All of the re-sampled signals were joined into a large matrix, after being scaled, coherence tested and time-synchronized. The time-series from all channels and sites are converted into the frequency domain using a Windowed (or Truncated) Fast Fourier Transform (WFFT). The algorithms for the WFFT are described in detail by [22]. The program divided the time-series into segments. A Fast Fourier Transform function was then applied to each segment separately, producing a set of Fourier coefficients for each channel and segment.

The Fourier transforms between the time and frequency domains are commonly described by eq. 4.7 and eq. 4.8, respectively.

\[ f(\omega) = \int f(t)\, e^{-i\omega t}\, dt \tag{4.7} \]

\[ f(t) = \frac{1}{2\pi} \int f(\omega)\, e^{i\omega t}\, d\omega \tag{4.8} \]

[22] gives eq. 4.9 for a Windowed Fourier Transform, where the Fourier coefficient f_t(ω) for a certain time segment, t, and frequency, ω, depends on the window function, g, and the signal's frequency distribution, f, over the time interval, u.

\[ f_t(\omega) = \int_{-\infty}^{\infty} e^{2\pi i \omega u}\, g(u)\, f(u)\, du \tag{4.9} \]

In theory f(u) should describe the entire time-series from start to finish; however, in practice the time series is divided into segments before applying the transform. g(u) is a window function which localizes the signal in time; mathematically it is defined as a smooth function with boundaries (-t, t) outside of which it evaluates to zero.

The length of the window (2t) is an important parameter during the transform, as it decides how many data points each segment includes. A longer window length allows for a higher spectral resolution, meaning the ability to discern between two near-lying frequencies. This is important for signals containing the omnipresent 50 Hz signal and its harmonics originating from the power grid; if the spectral resolution is insufficient it is not possible to discern the 'natural' signal between the spikes of the harmonics. A longer window length, however, also increases the potential influence of any outliers or other incoherent noise, which can dominate the spectra. A narrow window, on the other hand, has a lower spectral resolution, but the influence of 'bad data' is limited. Additionally, the narrow window has a higher temporal resolution: the timing of the data is better preserved.

This issue of selecting one window length is solved by doing the transformation in iterations, and decimating the signal between each iteration. The first iteration contains all frequencies and is transformed using a relatively narrow window, giving a low spectral resolution and a high temporal resolution which cannot estimate longer period coefficients. In the next iteration the time-series has been down-sampled by some factor (e.g. 4, 8, 16, ...), but the segment length is the same, giving the scheme the ability to estimate longer periods with a higher spectral resolution.

Another important parameter for the transform is the segment offset/overlap: how far the window moves in time between segments. 0% overlap entails that none of the windows overlap in time; they appear one after the other. 100% overlap, on the other hand, means all windows overlap completely, so the segments never move forward through time, which is obviously not practical. Due to statistical concerns regarding independent variables, an overlap of more than 50% is not recommended [23], in order to ensure that the segments can be assumed to be independent. Overlapping is used because it decreases the influence of 'bad data': all data points are considered twice, limiting the influence of a bad point to half a window length.
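A minimal sketch of the windowed transform with 50% overlap on a stand-in series; the window length and sampling rate are assumed values, and the decimation cascade is omitted.

```matlab
% Windowed FFT: Hann-windowed segments with 50% overlap.
fs = 1000;  N = 2^15;
x  = randn(N,1);                        % stand-in time series
L  = 1024;  step = L/2;                 % window length and 50% overlap
w  = hann(L);
starts = 1:step:(N-L+1);
C = zeros(L/2+1, numel(starts));        % coefficients: frequency x segment
for k = 1:numel(starts)
    s = x(starts(k):starts(k)+L-1) .* w;  % localize the signal in time
    F = fft(s);
    C(:,k) = F(1:L/2+1);                % keep non-negative frequencies
end
f = (0:L/2)' * fs/L;                    % frequency axis [Hz]
```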

4.3.4 Conventional processing

The conventional processing was performed using the KMSProMT software. It uses techniques similar to the MsDEMPCA workflow up to this point (e.g. WFFT and coherence tests). The goal of the software is to estimate the impedance using eq. 3.11 and eq. 3.12; however, the measurements of the electric and magnetic fields are imprecise, so the addition of a remainder term, δZ, which represents uncorrelated noise, is prudent. The least squares solution to eq. 4.10 and eq. 4.11 which minimizes the remainder term is the best estimate for Z.

\[ E_x = Z_{xx} B_x + Z_{xy} B_y + \delta Z \tag{4.10} \]

\[ E_y = Z_{yx} B_x + Z_{yy} B_y + \delta Z \tag{4.11} \]

One of the problems related to the LS approach described in eq. 4.11 is auto-power. In most techniques the LS estimate is obtained by solving the system of equations as a bivariate linear regression problem, which involves isolating the Z components using cross-power spectral densities, resulting in a system of equations similar to eq. 4.12. The problem arises with the occurrences of auto-powers, e.g. ⟨E_x E_x*⟩, where any noise in these components will be amplified, as a component is obviously coherent with itself.

\[ \langle E_x E_x^* \rangle = Z_{xx} \langle H_x E_x^* \rangle + Z_{xy} \langle H_y E_x^* \rangle \tag{4.12} \]

The KMSProMT workflow allows for the usage of remote reference processing, which aims to relieve the problem of auto-powers. As described in [4], it utilises the distance between sites to negate incoherent noise related to each site. By replacing one of the components of the problematic auto-power with a component from another, 'remote', site, any noise local to one station is not amplified. Magnetic components are used as remote references, as the magnetic field is assumed to be more homogeneous and is generally less contaminated than the electric field. This replacement turns eq. 4.12 into eq. 4.13, thus removing the auto-power.

\[ \langle E_x R_x^* \rangle = Z_{xx} \langle H_x R_x^* \rangle + Z_{xy} \langle H_y R_x^* \rangle \tag{4.13} \]

where R_x denotes the magnetic component measured at the remote site.


Another issue is contamination by strong signals from non-plane-wave sources, such as the power grid. In order to reduce the influence of these sources, which are limited to certain frequencies (e.g. 50 Hz), KMSProMT utilizes notch filters. Notch filters, or band-stop filters, should ideally be able to remove all direct influence of any one frequency. This is done by applying a notch window function, W, to the spectrum of the signal, X.

\[ \tilde{X} = X \cdot W \tag{4.14} \]
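A sketch of eq. 4.14 as a frequency-domain notch around 50 Hz and its harmonics; the ±1 Hz notch width is an assumed parameter.

```matlab
% Frequency-domain notch (eq. 4.14): zero bins near 50 Hz harmonics.
fs = 1000;  N = 2^14;
t  = (0:N-1)'/fs;
x  = randn(N,1) + sin(2*pi*50*t);          % signal with 50 Hz contamination
f  = (0:N-1)' * fs/N;                      % FFT bin frequencies
W  = ones(N,1);
for h = 50:50:fs/2                         % 50 Hz and its harmonics
    W(abs(f-h) < 1 | abs(f-(fs-h)) < 1) = 0;  % notch both symmetric halves
end
xf = real(ifft(fft(x) .* W));              % X~ = X*W, back to the time domain
```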

4.3.5 Robust Principal Component Analysis

The MsDEMPCA software, described in [24], uses a robust criss-cross regression scheme to estimate the polarization parameters and principal components in order to reduce bias and improve the transfer function (impedance tensor) estimate. The input for the software is an N by J matrix containing the resulting segments from the WFFT, described in section 4.3.3. The program uses the estimation scheme described below to estimate spatial modes (PCs) and polarization parameters. The considered statistical model is a multivariate errors-in-variables (MEV) model, eq. 4.15; compare with eq. 3.14.

\[ X = UA + E \tag{4.15} \]

Here X is an N x J matrix containing the frequency domain data from the WFFT (described in section 4.3.3) for N channels and J time-windows. U is an N x K matrix containing the principal component vectors, U_k, and A contains the K x J polarization parameters. As the name suggests, the model explicitly allows for noise/errors in each channel and time window, which are represented by E, an N x J matrix. The errors are assumed to have a Gaussian distribution.

The scheme uses alternating linear regressions ('criss-cross regression'), where the model is assumed to be linear based on alternating dependent and independent variables. For example, in eq. 4.15, if there is a preliminary estimate of U then A can be estimated using linear regression, and vice versa. The robustness of the scheme is improved by using robust regression estimates, such as an M-estimate [24]. In practice the algorithm is divided into an inner loop and an outer loop. The inner loop applies an affinely invariant robust covariance estimate [10] to a fixed sub-array in order to find a covariance matrix before estimating the spatial modes, using an M-estimate (maximum likelihood) [10], and the polarization parameters, using a regularised least-squares approach. The outer loop controls which parts of the data matrix are inputs for the inner loop. Initially a 'core' of the data matrix is set and used as a starting point for the algorithm, before the outer loop adds additional channels or segments.
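A stripped-down sketch of the alternating ('criss-cross') regression idea on a synthetic low-rank matrix, using plain least squares in each direction; the robust M-estimation, core selection, and outer-loop logic of MsDEMPCA are deliberately omitted.

```matlab
% Alternating least squares for X = U*A + E (eq. 4.15), K = 2 modes.
rng(3)
N = 10;  J = 500;  K = 2;
X = randn(N,K)*randn(K,J) + 0.05*randn(N,J);  % synthetic rank-K data + noise
U = randn(N,K);                                % random starting estimate
for it = 1:50
    A = U \ X;                                 % fix U, regress for A
    U = (A' \ X')';                            % fix A, regress for U
end
resid = norm(X - U*A, 'fro');                  % residual of the rank-K fit
```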

The incoherent noise variances, C_N, are estimated by fitting the residuals between each channel and a predicted set of data. The predictor is a variant of the regression M-estimate outlined by [10]. It is similar to an iteratively weighted LS estimate, where the weights for each segment are determined from the residuals, w_nj = w(r_nj), between the predicted data and the observed data. The residuals r are given by eq. 4.16. However, this approach may fail if single sites are contaminated by heavy noise, due to the nature of the PCA finding the spatial modes which describe the highest variance, essentially letting the noise predict itself. To resolve this, the data from the currently estimated site is excluded from the covariance calculation.

Figure 4.18: Core Selection from [24]

\[ r_{nj} = |X_{nj} - \hat{X}_{nj}| = \Big| X_{nj} - \sum_{k=1}^{K} U_{nk} a_{kj} \Big| \tag{4.16} \]

During initialization of the scheme a 'core' sub-array is chosen based on three parameters (P_Ch1, P_Ch2, P_Seg). These parameters are limits on the percentage of channels or segments required for the 'core'. This procedure is shown in fig. 4.18. The first stage is controlled by P_Ch1, which states a percentage of the channels which must be active in each segment. The second stage removes channels based on P_Seg, which controls how many active segments each active channel must contain. The final core selection stage is managed by P_Ch2, which states how many segments each active channel must contain. Selecting appropriate parameters, and by extension an appropriate core-array, is crucial for producing the best possible estimate. In order to fill any gaps in the initial array, U and A are estimated and used to fill in the array with predicted values.

After selecting a core-array and obtaining initial estimates of U and A, the scheme adds more segments from the data matrix and estimates the variables again; these are then used to populate the empty parts of the data matrix. After this, the number of channels is increased and the variables and the missing data are estimated again. This process is repeated iteratively until there are no more channels/segments to add. The scheme does not add channels/segments arbitrarily, but requires a certain overlap with the array of the previous iteration. Fig. 4.19 provides a schematic of this process.


Figure 4.19: Array building from [24]

\[ \begin{pmatrix} U_{E_x} \\ U_{E_y} \end{pmatrix} = \begin{pmatrix} Z_{xx} & Z_{xy} \\ Z_{yx} & Z_{yy} \end{pmatrix} \begin{pmatrix} U_{B_x} \\ U_{B_y} \end{pmatrix} \tag{4.17} \]

Unfortunately, the scheme of selecting a core and building back up to a near-full data matrix failed, due to a lack of overlap in the data. Instead, the data matrix was divided into parts where the overlap was sufficient. This resulted in a limited number of sites being processed at one time.

4.4 Interpretation and Inversion

In principle, impedances should decrease with increasing period, as the crust is expected to increase in conductivity with depth. Any impedance which did not comply with this condition was investigated and/or disregarded. The primary goal of the thesis was to compare the processing techniques of MsDEMPCA and KMSProMT; in order to investigate this, the transfer functions of a set of sites for the respective methods were compared. To complement the visual inspection, an inversion using KMSProMT's 1D-inversion tools was performed on the determinant of the transfer functions. The proprietary software's specific algorithms, or even its objective function, are not known. The one-dimensional nature of the inversion makes it very vulnerable to lateral distortion effects.


Chapter 5

Results

5.1 Transfer functions, KMSProTF

The Swedish data processed with the conventional processing using the KMSProTF software produced transfer functions (fig.5.1a) which align well with the constraints outlined in section 4.4. However the periods in the dead band are shifted down, likely due to a low SNR.

The Finnish data processed in the same way as the Swedish produced similar transfer functions (fig.5.1b).

(a) Swedish TFs (b) Finnish TFs


5.2 Transfer functions, MsDEMPCA

After dividing the data matrix into chunks with at least two sites of overlap, the MsDEMPCA software was successful in estimating TFs for all of the sites. Figures 5.2a - 5.4b show the differences between the high frequency data processed with KMSProTF and MsDEMPCA for the sites P01, P12, and P13. The two TFs are quite similar; however, the MsDEMPCA TF is not as smooth as the conventional TF. The smooth nature of the KMSProTF TF is likely due to the several instances of averaging it has been exposed to. The spike in the KMSProTF TFs at 0.05 s for site P12 (fig. 5.3a and fig. 5.3b) is due to a lack of data for those frequencies, as they were filtered out during the conventional processing.

5.3 Source field

As seen in fig. 5.5, the vector fields from the PCA do not seem particularly homogeneous. However, when considering the divided nature of the data sets (different runs of the software) the interpretation improves, as the rotation of these vectors is quite arbitrary. The requirement of orthogonality is still preserved though, which is where some true issues become apparent. Certain sites, such as p23 and p24 (south-eastern corner), which were part of the same run, show clear orthogonality in the two first modes. Other sites, such as p19 and p17 (north-western corner), do not show any significant orthogonality, although they remain sub-parallel in most modes.

Figure 5.6: SNR spectrum from PCA of sites P01, P12 and P13

The SNR spectrum, fig. 5.6, visualizes the amount of variance captured by each mode in the PC-decomposition. If the signal only consisted of a perfect plane wave, without significant noise, there would only be two modes with eigenvalues above 1. In this case the large discrepancy between the two primary modes and the other, later, modes can be used to support the validity of the plane wave assumption.

As expected, the SNR is significantly lower for all modes in the bands around 50 Hz due to anthropogenic noise. The peak at around 8 Hz can be explained by the Schumann resonance, which produces a prominent signal in that band [25].


5.4 Inversion

In addition to the visual inspection of the differences between the MsDEMPCA and KMSProMT processed data, an inversion was performed. An outtake from site P01 in Finland is shown in fig. 5.7. The main features of the resulting models are the same; as there is no error or bias estimate, it is not possible to make any further conclusions regarding the precision/accuracy of the models.


(a) Apparent Resistivity of P01 (b) Phase of P01

Figure 5.2: P01, Red = MsDEMPCA, Blue = KMSProMT

(a) Apparent Resistivity of P12 (b) Phase of P12

Figure 5.3: P12, Red = MsDEMPCA, Blue = KMSProMT

(a) Apparent Resistivity of P13 (b) Phase of P13


Chapter 6

Discussions

This chapter will explore the implications of the results.

6.1 Data Matrix

The MsDEMPCA scheme of criss-cross regression failed to build the data-matrix from the initial core, due to limited temporal overlap between sites. This could be addressed with a wider rolling array, or possibly by the addition of external measurements, such as magnetic field data from the IMAGE observatories. The current situation, with 5 channels, or 1 site, overlapping, limits the regression algorithm, as it requires the system of equations to be over-determined. Using 5 channels to estimate 5 modes does not constitute a well-determined system, as the number of variables is equal to the number of channels.

Unfortunately, the data from the KMS instruments were not included in the processing as a result of the calibration done between instruments. The addition of this data would not alleviate the lack of data, as the KMS sites did not include any magnetic field measurements, requiring extrapolation from other sites to calculate the transfer functions. As such, it would not serve the purposes of this thesis; however, the next step, inversion and modeling, could make use of the E-channels from the KMS data. For longer periods, data from external sources such as observatories designed to measure geomagnetic activity (e.g. Intermagnet) could be added to include more magnetic field data. However, these measurements are sampled at a low frequency (about 1 Hz) and would require further downsampling.

6.2 Plane Wave Assumption

The plane wave assumption appears valid for the MsDEMPCA data, as there are in most cases two primary modes which dominate in terms of SNR and can be considered near perpendicular to each other. As such, they could be interpreted as representing the main part of the source field. Sadly, the ability to further investigate the source field ends here, due to the limited number of sites in each run of the MsDEMPCA algorithm. The ambition of this thesis included being able to detect a higher complexity in the source field, such as a third mode describing a gradient deviation from the plane wave assumption. This is not possible with only three or fewer sites in each run. The idea of simply appending each set of primary mode vectors to each other is not a valid strategy, as the rotation of the vectors is based on the characteristics of the data/model-space. It would be equivalent to gluing a map torn into pieces back together without considering the north direction of each piece.

6.3 Finnish TF Phase

Many of the phase-determinants of the Finnish transfer functions have the same trend: they start around 0 deg and decrease towards -90 deg. This behavior indicates conductive parts of the subsurface, likely graphites related to greenstones. A homogeneous half-space would produce a constant phase.

6.4 MsDEMPCA software design

The MsDEMPCA software explicitly requires an overlap of segments/channels based on the number of modes it is instructed to estimate. The limit is set by multiplying the number of modes with the minimum number of channels per mode. As such, the required overlap in time and space is explicitly based on the wanted outcome (the number of modes). This seems intuitive, as the required overlap increases with the complexity of the result: a 100-mode analysis poses higher demands on the data than a 2-mode analysis. However, this is only the explicit limitation stated by the software. The optimal amount of overlap is not necessarily the same, and likely differs between data sets.

One possible solution to this problem would be a pre-processing stage where the data-matrix is examined and evaluated based on its extent and characteristics. The parameters for the algorithm could be set auto-magically in order to optimize the end result, while also improving the run time or memory usage, or giving the user additional information about the condition of the data-matrix before finishing the PCA.

Parameters, such as the core-selection criteria, could be tested to ensure that the algorithm will run to completion or, if that’s not possible, give the user the ability to rectify the situation by adding more data or decreasing their demands of the PCA. The conditioning of the statistics when building the matrix back up from the core could also be estimated to ensure a well-conditioned linear system of equations. The statistical performance of the rebuilding stage should be investigated, to answer the question of whether or not the explicit demand regarding overlap is sufficient, or if more/less is required for accurate estimations of the modes and/or polarizing parameters.


6.5 Smoothness and Bias

When comparing the TFs in figures 5.2a - 5.4b the smooth appearance of the blue (KMSProTF) impedance stands out in comparison to the red (MsDEMPCA) impedance. This is likely due to the extensive smoothing done during the conventional processing (Repeated median Siegel estimation and robust averaging). The act of smoothing the impedance causes a bias to occur, due to the non-linear nature of the algorithms. However, the impedances are expected to be smooth as they should represent bulk properties of the sub-surface. The presence of the bias reduces the accuracy of the estimate.

MsDEMPCA is also biased towards smoothness, but in a different way. Truncating the PCA matrix, i.e. only considering the first two modes when producing the impedances, removes part of the signal. In theory the discarded modes should contain the data with the least variance, which in a high-SNR case should mostly be incoherent noise. As such, the impedances estimated using only the first two modes are in a sense smoothed by the removal of noise; see the truncation sketch below.
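The truncation step amounts to a rank-2 reconstruction of the spectral data matrix. A generic SVD sketch of this idea on synthetic data (not the MsDEMPCA code itself):

```python
import numpy as np

def truncate_to_two_modes(data: np.ndarray) -> np.ndarray:
    """Keep the two dominant modes of a (channels x segments) matrix."""
    u, s, vh = np.linalg.svd(data, full_matrices=False)
    return u[:, :2] @ np.diag(s[:2]) @ vh[:2, :]

rng = np.random.default_rng(3)
clean = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 500))
noisy = clean + 0.2 * rng.standard_normal((6, 500))
denoised = truncate_to_two_modes(noisy)

# The rank-2 reconstruction lies closer to the clean signal: the noise
# carried by the discarded modes is gone, i.e. the data are "smoothed".
print(np.std(noisy - clean), np.std(denoised - clean))
```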

In both cases it can be argued that the smoothing is warranted, as it produces transfer functions with higher utility. This does not, however, change the fact that the estimates are biased, and their true accuracy should therefore be questioned.

6.6

Considerations for Future Surveys

In order to fully utilize the MsDEMPCA software, the temporal overlap between sites has to be significant. The specific time required depends on the number of modes to be estimated, the minimum channels per mode, and the Fourier segment length/offset. Similarly, the number of sites which should overlap varies with the demands of the PCA. In a case like the Finnish survey, where high-frequency recordings are done during the night, the required overlap raises the demands on the survey: a sufficient number of instruments must record at the same time, and some of those instruments must also have been recording the night before. For unknown reasons, two of the instruments in this survey were programmed to record only one night; had they recorded longer, the overlap in the data would have improved. A rough overlap-time calculation is sketched below.
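As a back-of-envelope aid for survey planning, the simultaneous recording time needed for a given number of Fourier segments follows from the segment length and hop. A hypothetical helper under these assumptions (the figures are illustrative, not requirements taken from MsDEMPCA):

```python
def required_overlap_seconds(n_segments: int, seg_len: int,
                             hop: int, fs: float) -> float:
    """Minimum simultaneous recording time (s) for n_segments Fourier
    segments of seg_len samples with a hop of `hop` samples at fs Hz."""
    return ((n_segments - 1) * hop + seg_len) / fs

# Example: 500 segments of 1024 samples with 50 % overlap at 1000 Hz
# requires roughly 257 s of simultaneous recording.
print(required_overlap_seconds(500, 1024, 512, 1000.0))
```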

The ’need’ for overlap in surveys intending to use the MsDEMPCA processing places higher demands on survey design as well as on instrumentation and data quality. If the required overlap corresponds to two sites’ worth of data, the survey requires at least three sites’ worth of instrumentation, and only one of the sites can be moved at a time. In this bare-minimum situation, a ’bad’ site (a site with unusable data) would require one of the other sites to be remeasured as well. Compare this to a survey intending to utilize a remote reference technique, where a ’bad’ site only creates a need to remeasure itself.

However, the survey design decisions which lead to adequate overlap align well with those that improve data quality. Longer measurement durations, or more sites installed in parallel, improve the quality of most surveys, as segment stacking and/or remote reference methods rely on exactly these redundancies.


Chapter 7

Conclusion

In the end, both the KMSProTF and MsDEMPCA processing tools produced significant, realistic transfer functions. The purpose of the thesis was to evaluate the use of MV techniques in CSMT processing. The findings could not conclusively show that the MV method is strictly better, as the data set did not have the characteristics required to fully utilize the MsDEMPCA software. However, the method shows potential beyond that of the conventional processing in terms of producing unbiased and robust impedance estimates. As of today, the conventional KMSProTF software is considerably easier to use and requires less trial-and-error to produce significant results.

The surveys in both Sweden and Finland are likely sufficient for investigating subsurface conductivity structures with more complex 3D inversion techniques, which are outside the scope of this thesis.


