ASSESSMENT OF THE MASS OF POLLUTANT IN A SOIL CONTAMINATED WITH CHLORINATED SOLVENTS

Jeanne Gautier

August 2014


© Jeanne Gautier 2014 Groundwater Chemistry

Done in association with the company ICF Environnement Division of Land and Water Resources Engineering Royal Institute of Technology (KTH)

SE-100 44 STOCKHOLM, Sweden

Reference should be written as: Gautier, J (2014) “Assessment of the mass of pollutant in a soil contaminated with chlorinated solvents” TRITA-LWR Degree Project 14:10 pp. 33

SUMMARY IN SWEDISH

The housing shortage has made it increasingly attractive to convert former industrial areas into residential areas. Converting former industrial land, however, requires remediation (clean-up of contaminants) to prevent various types of health and environmental risks. To carry out remediation measures efficiently, the concentrations of contaminants in the soil must be determined.

This involves sampling and analysis to establish contaminant concentrations at a number of measurement points in the soil. Because the number of measurement points is so limited, it is necessary to estimate contaminant concentrations over the rest of the area in order to make a complete assessment of the remediation needs. The methods currently used by remediation actors often lead to underestimation or overestimation of contaminant concentrations, but can also lead to other kinds of problems (e.g. environmental, economic and legal problems).

The purpose of this study is to compare different methods for estimating contaminant concentrations from a limited number of observations. This is done using data from a site contaminated with chlorinated solvents.

The methods used include those commonly applied in remediation projects today (Mean 1, Mean 2), simple interpolation methods (Thiessen polygons, Natural Neighbor, Inverse Distance Weighting) and a method based entirely on geostatistical principles (Conditional Simulations).

They are compared to study how the outcome varies depending on the method for a given set of input data. Two important conclusions are that deterministic methods, which are the easiest to apply, often underestimate the soil's contaminant content, whereas geostatistically based methods can give more realistic results. The latter are, however, more difficult to apply.


SUMMARY IN FRENCH

Real-estate pressure in most large French cities has led property developers to turn to the redevelopment of industrial brownfields.

This implies setting up a site remediation process in order to eliminate potential environmental or health risks.

To remediate a site efficiently, the quantity of pollutant present must be known as precisely as possible. One of the important steps of this process is the collection and analysis of soil samples at a finite number of points. It is therefore necessary to estimate the quantity of pollutant at unsampled locations. The methods used by remediation companies very often lead to under- or overestimation of this contamination, creating possible environmental, economic and legal problems.

This study, carried out within ICF Environnement, a consulting firm specialized in polluted sites and soils, aims to compare different methods of calculating the mass of pollutant using data from a site impacted by chlorinated solvents. The methods, deterministic and geostatistical, are compared in order to determine the differences in the results obtained with the different approaches for the same dataset.

The main conclusion is that it is essential to choose the mass calculation tool carefully, because the differences between the methods are large. Deterministic methods tend to underestimate the quantity, whereas the geostatistical approach preserves the heterogeneity present in the raw data, giving a realistic view and an interval within which the mass of pollutant in the soil lies.

ACKNOWLEDGMENTS

First of all I would like to thank Véronique Croze for giving me the opportunity to carry out this work at ICF Environnement. I would also like to thank her for her guidance throughout this study and for her constructive and useful recommendations.

I would like to thank my supervisor Jon Petter Gustafsson who inspired me to do my thesis in the site remediation sector with his course on Environmental Risk Assessment. I am really grateful for his availability and his support.

I wish to express my very great appreciation to all the professionals within the company who helped me in many ways during six months. I am particularly grateful to Elodie Michel and Franck Renoux for their valuable advice and their collaboration.

Thank you to Jean-Jacques Péreaudin from the company Geovariances for making Isatis software available.

Finally, I wish to express my deep gratitude to Dr Hélène Demougeot-Renard for her great help and for sharing her wide knowledge of geostatistics. Her input and opinion were a great help.

TABLE OF CONTENTS

Summary in Swedish

Summary in French v

Acknowledgments vii

Abstract 1

1. Introduction 1

1.1. Investigation and Evaluation 2

1.1.1. Site characterization 2

1.1.2. Samples acquisition 3

1.1.3. Interpretation 4

1.1.4. Sizing the treatment 4

1.2. Stating the problem 4

1.3. State of the art 5

1.3.1. Empiric reasoning 5

1.3.2. Deterministic Interpolation Methods 5

1.3.3. Manual delimitation 5

1.3.4. Consequences of the uncertainties 5

1.3.5. Geostatistical approach 6

1.4. Purpose and Objectives 6

2. Theoretical Background 6

2.1. Interpolation methods 6

2.1.1. Thiessen Polygons 7

2.1.2. Natural Neighbor 8

2.1.3. Inverse Distance Weighting 8

2.2. Geostatistical approach 10

2.2.1. Exploratory Data Analysis 10

2.2.2. Modeling the variogram 13

2.2.3. Kriging 13

2.2.4. Stochastic conditional simulations 13

3. Materials and Methods 15

3.1. Study case 15

3.1.1. Chlorinated solvents 16

3.1.2. Available data 17

3.2. Methodology 19

3.2.1. General assumptions 19

3.2.2. Deterministic methods 19

3.2.3. Stochastic conditional simulations: the geostatistical approach 20

3.3. Computer resources 25

4. Results 25

5. Discussion 29

5.1.1. Limits of the study 29

5.1.2. Towards a general geostatistical approach 30

5.1.3. Validation 30

6. Conclusion 30

References 32

Other references 33

ABSTRACT

The scarcity of housing has led more and more developers to turn to the conversion of former industrial areas into residential areas. Brownfield redevelopment involves the cleanup of contaminated soil to eliminate any health or environmental risk.

The quantification of the amount of pollutant in soil is essential to carry out an efficient remediation. It involves sampling and analyzing the soil to determine the concentration of pollutant at a finite number of locations. It is therefore necessary to assess the pollutant amount at unsampled locations to estimate the pollution for the whole site. The existing methods used by remediation actors often lead to underestimation or overestimation of the contamination, possibly creating environmental, economic and legal issues.

This study aims to compare different methods to assess the mass of pollutant using data from a site contaminated with chlorinated solvents. The methods comprise currently used methods (Mean 1, Mean 2), simple interpolation methods (Thiessen Polygons, Natural Neighbor, Inverse Distance Weighting) and a method based on a complete geostatistical approach (Conditional Simulations). They are compared to determine the variability of the results obtained with a specific set of data depending on the chosen method.

The deterministic methods, although easy to apply, will often underestimate the mass of pollutants contained in soil whereas the geostatistical approach can give a more realistic result, but is complex to implement.

Key Words: Soil Remediation; Mass of pollutant; Interpolation; Geostatistics.

1. INTRODUCTION

The increasing demand for housing has led to new land management policies that promote soil redevelopment of former industrial areas.

Polluted areas from past industrial activities are numerous in all the industrialized countries. In France, more than 300 000 sites are suspected to be contaminated (BRGM, 2005) and 5573 are identified in a government database (BASOL), which catalogues "contaminated, or potentially contaminated, sites and soils needing an action of authorities as a precautionary or curative measure" (BASIAS, n.d.).

A site is considered to be polluted when it contains substances or waste in solid, liquid or gas form and if this contamination induces a risk for health or threat to the environment (Suez Environnement, 2006).

According to the French law of July 30, 2003, there is a health threat when there is:

• a source of pollution,

• pathways for it to travel, i.e. the possibility for the pollutant to be transmitted through air, surface water, groundwater and soil,

• the presence of people.

According to the Polluter Pays principle, it is the entity responsible for the pollution that should bear the cost of remediation (OECD, 2008). However, due to housing and land scarcity in urban areas, it is common that developers buy a polluted site and take care of the cleanup cost as part of building projects. The remediation goal will depend on the intended use of the land, i.e. it will be more demanding if the land is intended to be redeveloped as housing or schools.

The remediation plan aims to assess the contamination and find the best technique to reach the remediation goals, which are usually set by France's Regional Environment, Development and Housing Department (DREAL). In order to clean up a polluted site, an investigation and evaluation stage is carried out since the soil redevelopment cost will depend primarily on the extent and the amount of pollution. Site-specific data is gathered and analyzed to identify the nature, the intensity and the location of the contamination (Demougeot-Renard & De Fouquet, 2004). The results of this first step will be used to choose the most suitable remediation technique and size the treatment.

The amount of pollution represents a significant input in the decision-making process. Miscalculations can lead to both environmental and financial risks: polluted soils might not be efficiently remediated and might need further treatment resulting in additional costs.

Under the assumption that soil-specific parameters (density, dry matter ratio) are constant over the whole site, the mass of pollutant can be directly derived from the concentration of the pollutant in the soil (see section 1.2). However, the pollutant concentration is only known at a few locations because soil sampling is often limited by cost and time. Thus, assessing the mass of pollutant within a polluted site requires estimating the pollutant intensity throughout the whole site. The mass estimation problem is therefore a problem of interpolating the soil concentration over the entire site. This study focuses on the estimation of the mass of pollutant in a site contaminated with chlorinated solvents. The aim is to compare the results when different interpolation methods are applied to derive concentration values at unsampled locations, leading to the calculation of the mass of pollutant. The first goal was to show that the choice of method used for the mass estimation can lead to real differences in the results, and the second was to enable the engineers within the company to choose knowingly.

The present report is organized as follows:

• The introduction part states the general remediation process and how the mass assessment is an essential part of it. The current methods used by remediation sector professionals are presented showing that it is important to define the right tools to estimate the pollution.

• The background part gives a theoretical overview of the different methods that were chosen.

• Part three focuses on the material and methods used for this study: the available set of data and the rationale behind the choices and hypothesis.

• Part four presents the results, which are then examined in the discussion section.

• The discussion part draws conclusions from the results and states the limits of this study.

1.1. Investigation and Evaluation

An efficient remediation procedure starts with an assessment plan aiming at the quantification of the amount of pollution present in a given location in order to design the treatment (Mirsal, 2008). This involves the site characterization, data sampling, and interpretation, which will lead to the choice and the sizing of the treatment.

1.1.1. Site characterization

Site characterization focuses on the collection of all available "soft" data (non-numeric data) about the site: geographic, geologic and hydrogeological information, as well as any available data on past industrial activities.

Knowing the activities previously carried out on site can provide the possible types and locations of the pollution. Other information helps to identify possible contamination pathways and, together with concentration values, leads to the setting of remediation goals.

1.1.2. Samples acquisition

Before data collection, a sampling plan is designed taking into account the purpose of the study, the information collected when characterizing the site and the cost allocated to sampling. The geographical breakdown of the sampling can be regular or preferential.

The most common regular spatial pattern of sampling is the grid pattern. The site is discretized in blocks and one or several samples are taken in each block, either systematically (Figure 1) or randomly (Figure 2).

A preferential pattern is used when the location of the source of pollution is suspected given the previously collected information (Figure 3). When a potentially polluting activity has taken place for several years at a location, this location is usually investigated more thoroughly.

The soil samples can be collected using various types of procedures: excavators, split spoon samplers and simple soil augers, among others. The choice of the sampling device is important since these different methods will lead to different sampled volumes and cause more or less pollutant volatilization during the process.

Figure 1. Systematic grid pattern.

Figure 2. Random grid pattern.


Figure 3. Preferential pattern, the contoured zone is suspected to be contaminated.

Then the soil samples are sent to a lab to be analyzed and pollutant concentrations are obtained for the chosen points. The values are visualized and checked to detect any sampling or analysis error. Visualization on maps gives a rough overview of the extent and depth of the polluted zone.

1.1.3. Interpretation

The data collected based on a relevant pattern is then used as an input to evaluate the intensity and the location of the contamination. Since the pollutant concentration is only known for some specific points (cf. the chosen sampling pattern), quantifying the amount of pollutant in the soil implies the estimation of the pollutant concentration at unsampled locations. This process is inevitably associated with uncertainties.

This step is important in order to identify the most relevant remediation technique. This study will focus on improving the methods used in the interpretation process.

1.1.4. Sizing the treatment

Once the planners know how the site has been used in the past, the extent and nature of the pollution, together with the use of the land in the future, they can decide how to treat the site. Different methods are sized and compared according to their expected efficiency, their cost and their implementing time.

Sizing the treatment takes different aspects into account depending on the chosen remediation technique. The aim is to determine the functional requirements for the technique to be efficiently implemented.

For in situ treatments, knowledge of the amount of pollutant present in the soil is important to predict the amount to be retrieved or destroyed. For instance, for in situ chemical treatments (oxidation or reduction), it consists in quantifying the amount of chemicals to be injected into the soil and the injection locations. For a simple excavation, however, it implies estimating the amount of polluted soil and categorizing it into classes depending on the concentration. The different classes are sent to different landfills with different associated costs.

The remediation costs are calculated for the different alternatives and the client selects a treatment plan among them.

1.2. Stating the problem

The mass 𝑀𝑖 of pollutant in a known volume 𝑉𝑖 can be directly calculated from the concentration value 𝐶𝑖 (1).

$M_i = \frac{C_i \, \alpha \, V_i \, \rho}{100}$ (1)

Where $\alpha$ is the dry matter percentage in soil (%), $\rho$ is the density of the considered soil (kg/m3), $C_i$ is the pollutant concentration (mg/kg of dried soil), $V_i$ is the considered volume of soil (m3) and $M_i$ is the mass of pollutant (mg).

However, the pollutant concentration, the dry matter ratio and the density are only known at discrete points: these data are available only for the sampled volumes. To be able to calculate the mass contained within the whole site, knowledge of these variables is also needed at locations where the soil was not sampled. Hence, estimating the mass of a chemical in soil for a whole site represents an interpolation problem.

To simplify the problem, the dry matter ratio and the density were considered constant. Therefore, the problem is interpolating the concentrations of pollutant.
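As an illustration of equation (1) under these simplifying assumptions, the short Python sketch below sums the mass over a set of blocks. The concentration values and the block volume are placeholders, not data from the studied site; the dry matter ratio of 83 % and density of 1800 kg/m3 are the constants used later in this report for the top layer.

```python
# Minimal sketch of equation (1): mass of pollutant per soil block.
# All numerical sample values below are illustrative placeholders.

def pollutant_mass_mg(c_mg_per_kg, volume_m3, dry_matter_pct=83.0, density_kg_m3=1800.0):
    """M_i = C_i * alpha * V_i * rho / 100, result in mg."""
    return c_mg_per_kg * dry_matter_pct * volume_m3 * density_kg_m3 / 100.0

# Example: three 5 x 5 x 3 m blocks with interpolated concentrations (mg/kg).
concentrations = [1.2, 35.0, 210.0]
block_volume_m3 = 5 * 5 * 3
total_mg = sum(pollutant_mass_mg(c, block_volume_m3) for c in concentrations)
print(f"Total mass: {total_mg / 1e6:.1f} kg")  # 1 kg = 1e6 mg
```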

1.3. State of the art

There is no established methodology for handling the sampled data, which can lead to misinterpretation of the information. The most common practices are empiric reasoning, interpolation with GIS software and manual delimitation (GeoSiPol, 2005).

1.3.1. Empiric reasoning

Empiric rules are applied taking into account “soft” data collected during the site characterization stage, usually the lithology and the contamination origin. Measured concentrations at sample points are assigned to areas in the light of those empirical rules (GeoSiPol, 2005).

For instance, if a location is contaminated and the cause is suspected to be a particular activity, the whole area hosting this activity is estimated to be polluted with the same intensity. The results will depend primarily on the person carrying out the reasoning and on whether the quantity of pollution needs to be maximized or minimized.

1.3.2. Deterministic Interpolation Methods

Classical interpolation methods (e.g. inverse distance weighting or kriging which are described later in this paper) are carried out in some cases.

These approaches are usually implemented without proper data analysis to define the interpolation settings. The parameters can be arbitrarily adjusted to obtain the expected results, and the priority is usually put on the visual appearance of these interpolations (GeoSiPol, 2005).

1.3.3. Manual delimitation

The contaminated areas are manually delimited. The entire area is considered to have the mean concentration of all the sample points. This technique is the most imprecise and its results are hardly justifiable.

1.3.4. Consequences of the uncertainties

Miscalculation of remediation volumes or pollutant mass can lead to several issues (GeoSiPol, 2005):

• The discovery of an unexpected amount of polluted soil, which will raise the remediation costs and time.

• An overestimation of the remediation cost that will cause the project to be declared unfeasible.


• Incorrect sizing when designing the remediation technique, which can threaten the efficiency and cost effectiveness of the proposed solution.

• Legal issues between the remediation company and the client.

1.3.5. Geostatistical approach

Geostatistics provides a set of tools to assess the precision of the estimation, or at least the degree of uncertainty associated with the calculations. Geostatistics has been applied to polluted industrial sites for nearly 30 years, usually to assess metal contamination (Flatman, 1984).

The geostatistical approach relies on the understanding and the quantification of the spatial arrangement of the pollutant concentration, which can be obtained with a thorough analysis of data. This method is used in the research domain but rarely by the remediation companies.

1.4. Purpose and Objectives

There is a lack of a methodological framework to efficiently interpret the sampled data. A wrong assessment of the mass of pollution can lead to environmental, legal or economic issues. Remediation planners often report that underestimation of the amount of pollution led to unexpected clean-up costs for the client (source: ICF Environnement). From this observation, it is interesting to study and compare the different methods that are used to calculate the mass of pollutant and to understand the uncertainties associated with the interpretation. This work was performed within the French remediation company ICF Environnement.

The goal is to compare simple and more advanced methods so that the company can assess the benefit of applying time-consuming methods.

The methods were tested using the extensive data from a site polluted with chlorinated solvents near Paris.

2. THEORETICAL BACKGROUND

This section introduces the theoretical background and framework on which this study is based.

2.1. Interpolation methods

The concentration values are only known at a finite number of locations. To assess the extent and quantity of pollution, it is required to estimate the concentration of the pollutant throughout the whole site. Spatial interpolation tools are used to estimate unknown values at unsampled locations based on a limited number of measured values (Burrough, 1986). Interpolated results only represent estimates and any analysis relying on spatial interpolation leads to a degree of uncertainty.

An interpolation method can be classified as:

Global or local: global methods use all of the known values to predict the value at a location, whereas local methods only use a subset of the available sample points that surround the query location. This subset is called the "neighborhood".

Exact or inexact: with an exact interpolator, the predicted values at the original sample points are constrained to equal the observed values, whereas with an inexact interpolator the predicted and observed values at those locations can differ.

Stochastic or deterministic, depending on whether they use probability theory or not. Stochastic methods postulate that the concentration value is a random variable, so the interpolated values are only one of an infinite number of possibilities coherent with the data points. Deterministic methods provide a unique estimated value and do not give the error associated with the prediction.

This part presents some common interpolation methods that will be applied to the studied site. They were chosen because they are the most commonly used for interpolating continuous variables.

2.1.1. Thiessen Polygons

Thiessen polygons, also referred to as the Voronoï diagram, define the areas of influence of each sample point of a dataset. The polygons are created so that every location within a polygon is closer to the sample point linked with that polygon than to any other sample point. Each location is given the same value as the nearest sample point (Figure 4). This method creates areas of constant value, leading to discontinuities at the edges of the polygons.

Figure 4. Thiessen polygons – Example with few randomly sampled points.

The Thiessen polygon method is local, exact and deterministic.

The density of the points determines the size of the polygons and their spatial distribution determines the shape (Figure 5).

Figure 5. Thiessen polygons – Example with a regular dense grid pattern.


This method has the advantage of being conceptually simple to understand and easy to perform. However, it does not take into account how the pollutant actually distributes itself in the soil.
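For illustration, the sketch below discretizes a synthetic site onto a regular grid and assigns each cell the value of its nearest sample point, which is exactly the Thiessen-polygon rule. The sample coordinates, concentrations and grid size are arbitrary assumptions, and scipy is assumed to be available.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
samples_xy = rng.uniform(0.0, 100.0, size=(30, 2))        # sample locations (m)
samples_c = rng.lognormal(mean=1.0, sigma=1.5, size=30)   # concentrations (mg/kg)

# Regular 1 x 1 m grid covering the synthetic site.
gx, gy = np.meshgrid(np.arange(0.5, 100.0, 1.0), np.arange(0.5, 100.0, 1.0))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])

# Each grid cell takes the value of its nearest sample point,
# i.e. the Thiessen-polygon assignment discretized on the grid.
_, nearest = cKDTree(samples_xy).query(grid_xy)
grid_c = samples_c[nearest]

# Mass per 1 m2 cell over a 3 m layer, reusing equation (1) with constant parameters.
cell_volume_m3 = 1.0 * 1.0 * 3.0
mass_kg = np.sum(grid_c * 0.83 * cell_volume_m3 * 1800.0) / 1e6
print(f"Estimated mass for the synthetic example: {mass_kg:.1f} kg")
```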

2.1.2. Natural Neighbor

This method, also known as Sibson interpolation, is local, exact and deterministic. The first step in implementing this method is the construction of Thiessen polygons for all the sample points. Then, a new Thiessen polygon is created around the query point (Figure 6), as if it were a new sample point.

Figure 6. Illustration of the principle of Natural Neighbor Interpolation.

The proportions of overlap between the new polygon and the original polygons are used as weights for estimating the value at the query point (2) (Sibson, 1981).

$C_q = \sum_i \frac{S_i}{S_{tot}} C_i$ (2)

Where $S_i$ is the area of overlap between the new polygon and the original polygon of index i (m²), $S_{tot}$ is the total area of the new polygon (m²), $C_i$ is the pollutant concentration at the known point of index i (mg/kg of dried soil) and $C_q$ is the estimated pollutant concentration at the query point (mg/kg of dried soil).

2.1.3. Inverse Distance Weighting

The inverse distance weighting (IDW) predictor is the most common member of the weighted moving average family of methods; it predicts the value at each query point using a linearly weighted combination of the values of the neighboring sample points (3).

$C_q = \sum_i w_i C_i$ (3)

Where $w_i$ is the (dimensionless) weight associated with the point of index i, $C_i$ is the pollutant concentration at the known point of index i (mg/kg of dried soil) and $C_q$ is the estimated pollutant concentration at the query point (mg/kg of dried soil).

In the case of IDW, the weights are a function of the inverse distance (Figure 7). The concentration at the query point is calculated with the following equation (4).

$C_q = \frac{\sum_i C_i / d_i^p}{\sum_i 1 / d_i^p}$ (4)

Where $d_i$ is the distance between the query point and the point of index i (m), $C_i$ is the pollutant concentration at the known point of index i (mg/kg of dried soil), $C_q$ is the estimated pollutant concentration at the query point (mg/kg of dried soil) and $p$ is the power value.

Figure 7. Principle of Inverse Distance Weighting.


This is a local, exact and deterministic method of interpolation.

The parameters for this process are the power value p and the search radius. A higher value of p translates into a bigger influence of the nearest points. The usual p value is 2.

The search radius defining the neighborhood can be:

• Variable: the number of input sample points is specified and the radius will therefore vary.

• Fixed: the radius within which points will be included in the calculation is specified.

The assumption behind this method is that the further a location is from the query point, the less it influences the value at that point. It is not linked to any real physical process governing the spreading of pollutants in soil.
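A minimal sketch of the IDW estimator of equation (4) is given below. The sample points, the power value and the optional fixed radius are illustrative choices, not settings used in this study.

```python
import numpy as np

def idw(query_xy, samples_xy, samples_c, power=2.0, radius=None):
    """Inverse distance weighting estimate at one query point (equation 4)."""
    d = np.linalg.norm(samples_xy - query_xy, axis=1)
    if radius is not None:                 # fixed search radius
        mask = d <= radius
        d, c = d[mask], samples_c[mask]
    else:                                  # use all samples
        c = samples_c
    if d.size == 0:
        return np.nan                      # no neighbour inside the radius
    if np.any(d == 0):                     # exact interpolator: honour the data
        return c[d == 0][0]
    w = 1.0 / d ** power
    return np.sum(w * c) / np.sum(w)

# Example with three arbitrary sample points (coordinates in m, concentrations in mg/kg).
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
c = np.array([2.0, 50.0, 8.0])
print(idw(np.array([3.0, 3.0]), xy, c, power=2.0))
```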

2.2. Geostatistical approach

The geostatistical approach is a stochastic approach based on a thorough analysis of the data. The goal is to model the spatial structure of the pollution phenomenon in order to apply the right kind of estimation while assessing the uncertainty associated with the modeling.

Geostatistics is developed within a probabilistic framework, using the concept of regionalized random variables.

2.2.1. Exploratory Data Analysis

A statistical summary of the dataset, comprising elementary statistics (mean value, percentiles, variance, ...), is the starting point of the Exploratory Data Analysis (EDA).

The crucial step of EDA is plotting the experimental variogram and modeling it. The resulting model is used to represent the spatial variability and serves as an input for the estimation or simulation step.

Elementary statistics

A statistical summary of the dataset can be obtained calculating:

• The mean value, usually referring to the arithmetic mean of the values (5).

• Maximum, minimum of the values and the total number of available values

• Percentiles: A percentile is the value of an observation below which a certain percent of observations fall. For example the 10th percentile is the value of an observation below which 10 percent of observations can be found and where 90 percent of observations have higher values.

The 25th, 50th and 75th percentiles are also named the first quartile, the median and the third quartile respectively (Zhang, 2013).

• Variance and standard deviation: these variables indicate how the data are distributed around the mean value. The variance is the average of the squared differences from the mean (6), and the standard deviation s is its square root.

• Frequency distribution: the histogram visually represents the distribution of the data. It displays how often values fall within certain intervals (ESRI, n.d.). The cumulative frequency distribution is the plot representing the number of points that are lower than each value.

$\bar{C} = \frac{1}{n} \sum_{i=1}^{n} C_i$ (5)

$\sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (C_i - \bar{C})^2$ (6)

Where $\bar{C}$ is the mean value, $C_i$ is the pollutant concentration at the known point of index i (mg/kg of dried soil) and $n$ is the number of sample points.
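The elementary statistics listed above can be obtained in a few lines; the concentration values in this sketch are arbitrary illustrative numbers, not the site data.

```python
import numpy as np

c = np.array([0.5, 1.2, 2.0, 4.5, 8.0, 15.0, 40.0, 120.0, 540.0])  # mg/kg, illustrative

summary = {
    "n": c.size,
    "mean": c.mean(),                      # equation (5)
    "variance": c.var(),                   # equation (6), divided by n
    "std": c.std(),
    "min": c.min(),
    "max": c.max(),
    "Q1, median, Q3": np.percentile(c, [25, 50, 75]),
}
for name, value in summary.items():
    print(name, value)
```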

Experimental variogram

The spatial variability of the concentration values can be represented with the variographic cloud and the semivariogram (Figure 8).

Figure 8. Example of a variographic cloud and the corresponding variogram.

The variographic cloud displays half of the squared difference for every pair of points (x_i, x_j) (7) as a function of the distance between these two points (GeoSiPol, 2005). The goal is to compare pairs of data values at regular distances from each other, called "lags".

$\gamma_{ij} = \frac{1}{2} (C_i - C_j)^2$ (7)

Where $C_i$ and $C_j$ are the pollutant concentrations at the sample points of index i and j (mg/kg of dried soil).

The results for all pairs at a given lag are averaged and the variogram is obtained: the variogram, or semivariogram, is a visualization of the degree of spatial correlation of the data, i.e. it quantifies the spatial variability.

The semivariogram function γ(h) (the variogram function being 2γ(h)) is half the average squared difference between points separated by a distance h (8) (Matheron, 1963).

$\gamma(h) = \frac{1}{2 N(h)} \sum_{N(h)} (C_i - C_j)^2$ (8)

Where $C_i$ and $C_j$ are the pollutant concentrations at the two sample points of a pair separated by the distance h (mg/kg of dried soil) and $N(h)$ is the number of pairs of sample points $(x_i, x_j)$ separated by the distance h.

In this equation there is no preferential orientation; it can be necessary to consider direction in addition to distance, in which case h becomes a vector.
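A simple omnidirectional version of equations (7) and (8) can be computed as sketched below; the lag classes, the tolerance and the synthetic data are illustrative assumptions.

```python
import numpy as np

def experimental_variogram(xy, c, lags, tol):
    """Omnidirectional experimental semivariogram (equation 8).

    xy: (n, 2) sample coordinates, c: concentrations,
    lags: lag-class centres, tol: half-width of each class.
    """
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    g = 0.5 * (c[:, None] - c[None, :]) ** 2       # equation (7) for every pair
    iu = np.triu_indices(len(c), k=1)              # count each pair once
    d, g = d[iu], g[iu]
    gamma, npairs = [], []
    for h in lags:
        mask = np.abs(d - h) <= tol
        gamma.append(g[mask].mean() if mask.any() else np.nan)
        npairs.append(int(mask.sum()))
    return np.array(gamma), np.array(npairs)

# Illustrative use with random points; real input would be the sampling campaign data.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 200, size=(60, 2))
c = rng.lognormal(1.0, 1.0, size=60)
gamma, n_pairs = experimental_variogram(xy, c, lags=np.arange(10, 110, 10), tol=5.0)
```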

The parameters to describe a variogram (Figure 9) are:

• The nugget effect, which represents spatial variability at very short distances or measurement errors.

• The sill: the plateau value reached by the variogram, at distances where points are no longer spatially correlated.

• The range, defined as the distance at which there is no longer any correlation (Yarus & Chambers, 2006).

These parameters are the variables to set while modeling the variogram.

Figure 9. Example of an experimental variogram with the associated parameters to consider for the modeling (Yarus & Chambers, 2006).

2.2.2. Modeling the variogram

The characteristics of the experimental variogram are captured in a variogram model, which is a function that describes the variance at all possible distances.

The model is a combination of known functions chosen to fit the experimental variogram as closely as possible. Among the authorized models that ensure stability, the spherical, exponential, Gaussian, linear and power functions can be mentioned (Appendix II). The goal of the modeling is to find the set of functions, characterized by their sill, range and nugget effect, that best represents the experimental variogram (Yarus & Chambers, 2006).

This model is used as an input for stochastic interpolators because the values are estimated by reproducing the variability defined by the model.

2.2.3. Kriging

Kriging is a local, exact and stochastic interpolation method used in the geostatistical approach. It is defined as the best linear unbiased estimation method. Like the IDW interpolator, this method uses a linearly weighted combination of the known values (3). In this case, the weights are chosen according to the spatial structure of the concentration values modeled by the variogram. There are many types of kriging, depending on the main features of the phenomenon to be modeled.

This method assumes the presence of three components in the data: a general trend, spatially auto-correlated variations, and random noise.

This is the only interpolator that calculates estimation errors for the produced values, which can be shown as a map of standard deviations (Henkel & Grünfeld, 2004). Hence, the estimation of the concentrations by kriging can be associated with an uncertainty.

This indicates how well the estimated values fit the collected data. However, this study requires multiplying and summing the estimates obtained for each block, and the precision of that final result cannot be deduced from the local kriging errors.
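As a hedged illustration of the idea, rather than of any specific kriging variant used by a particular software package, the sketch below solves the ordinary kriging system for one query point from a variogram model. The spherical parameters and sample values are arbitrary.

```python
import numpy as np

def spherical(h, nugget, sill, range_):
    """Spherical variogram model with nugget; total sill = nugget + sill."""
    h = np.asarray(h, dtype=float)
    s = np.where(h < range_, 1.5 * h / range_ - 0.5 * (h / range_) ** 3, 1.0)
    return np.where(h > 0, nugget + sill * s, 0.0)

def ordinary_kriging(samples_xy, samples_c, query_xy, gamma):
    """Ordinary kriging estimate and variance at one query point."""
    n = len(samples_c)
    d = np.linalg.norm(samples_xy[:, None, :] - samples_xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)          # variogram between data points
    A[n, n] = 0.0                 # unbiasedness constraint
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(samples_xy - query_xy, axis=1))
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = w @ samples_c
    variance = w @ b[:n] + mu     # ordinary kriging variance: sum_i w_i*gamma_i0 + mu
    return estimate, variance

# Illustrative call with arbitrary variogram parameters (not the fitted site model).
xy = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
c = np.array([3.0, 12.0, 7.0, 30.0])
est, var = ordinary_kriging(xy, c, np.array([10.0, 10.0]),
                            lambda h: spherical(h, nugget=0.1, sill=1.0, range_=50.0))
```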

2.2.4. Stochastic conditional simulations

A method of simulation aims at producing values showing the same spatial structure (variogram) and the same histogram as the observed data. In addition, the algorithm must ensure that the simulated values are equal to the observed values for the sampled locations.

The advantages of using the stochastic conditional simulations approach are to capture and preserve heterogeneity and to be able to assess the uncertainty of the process. Unlike the most common kriging methods, stochastic simulations do not have a smoothing effect.

The basic input parameters of this method are the spatial model (variogram) and the distribution of the sample values (histogram, cumulative distribution function). The simulation algorithm will create a number of realizations reproducing the spatial model and the distribution. Each realization is different from the others because each process begins with a different number (called the "random seed") and follows a different simulation path, i.e. the order in which the values are simulated (Isaaks & Srivastava, 1989; SPE International, 2013). The process goes through the following steps (Leuangthong, 2012).


There are many different algorithms that can be used to generate stochastic simulations. One of the simplest is the sequential Gaussian simulation algorithm, which:

• finds nearby data and previously simulated values;

• performs kriging to determine the conditional mean and conditional variance;

• draws a simulated value randomly from the conditional distribution defined by the kriging mean and variance (this step depends on the chosen simulation method);

• checks that the simulated value honors the data, histogram and variogram.

The concentration of a pollutant in soil can be considered as a realization of a continuous random function. The conditional simulation methods used in this case are pixel-based, i.e. they simulate one pixel at a time, as opposed to object-based methods that operate simultaneously on groups of pixels. The most popular pixel-based methods are turning bands simulation, sequential Gaussian simulation, truncated Gaussian simulation, probability-field simulation (Yarus & Chambers, 2006) and sequential indicator simulation (GeoSiPol, 2005). These techniques have been developed in a multigaussian framework and thus require at least that the data have a Gaussian distribution. A Gaussian transformation of the variable is then a prerequisite, since distributions of concentration values are often positively skewed.

Gaussian transformation of variables

The simulation algorithms described above require at least that the pollutant concentration follow a Gaussian distribution. Usually, it is not the case and the histogram is skewed.

A pre-processing step is therefore necessary to transform a non-Gaussian variable into a Gaussian variable. One of the well-known transformations is the Gaussian anamorphosis.

On the cumulative distribution functions, it associates each original value x with the value y corresponding to the same cumulative frequency value (Figure 10). The transformation function is called the anamorphosis function f.

It will also involve a post-processing step because the simulated realizations need to be transformed back in order to represent correctly the original variable.

Figure 10. Illustration of the Gaussian anamorphosis process.


Post-processing of simulations

The simulation algorithm will result in a defined number of realizations. Then the simulated values are back-transformed to obtain the original distribution (non-Gaussian).

Knowing how different one realization is from another gives a measure of uncertainty. Summarizing the results by computing a frequency distribution gives the mean (or median) value and the standard deviation for each pixel. This makes it possible to estimate the value of this pixel and to know the associated confidence interval (Yarus & Chambers, 2006).

Simulated values for each realization are multiplied by the volume (1) of each mesh of the modeling grid and summed over all the pixels to obtain the "accumulation" for each realization. The frequency distribution of the accumulation over all the realizations makes it possible to derive an estimate of the accumulation value with the associated uncertainty.
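In practice, once the total mass has been computed for every realization, the median and a confidence interval follow directly from percentiles of that set of values. The sketch below uses placeholder numbers in place of real simulation output.

```python
import numpy as np

# `masses` stands for the total mass (kg) obtained for each realization,
# i.e. the sum of (simulated content x block volume) over all blocks.
rng = np.random.default_rng(2)
masses = rng.lognormal(mean=np.log(2500), sigma=0.15, size=500)  # placeholder values

median = np.median(masses)
low, high = np.percentile(masses, [5, 95])   # central 90 % confidence interval
print(f"Median mass: {median:.0f} kg, 90 % interval: [{low:.0f}; {high:.0f}] kg")
```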

3. MATERIALS AND METHODS

This work was performed using data from a site contaminated with chlorinated solvents. The remediation of this site has been carried out by ICF Environnement since 2010.

The study case is presented to understand the framework of the study.

3.1. Study case

The different approaches to estimate the mass of pollutant in soil have been assessed and compared using the data from a former auto parts factory located near Paris (Figure 11).

Figure 11. View of the study site.

The site was acquired by a developer; the project involves the construction of housing, offices and shops. The buildings were designed to include two basement levels (down to a depth of 6 meters).

An investigation and evaluation of the suspected pollution was carried out in 2010. It pointed out a contamination of the soil with chlorinated solvents. The maximum concentration of chlorinated solvents is 540 mg/kg in soil and 9.5 g/m3 in soil gas. The mass of pollutant was estimated to be around 2282 kg of chlorinated solvents (ICF Environnement, internal confidential report).

The soil was treated using the Soil Vapor Extraction technique, which is very efficient when dealing with volatile compounds (Appendix I). At the end of the process, it was concluded that the mass of pollutant in the soil had been underestimated, leaving residual pollution after the expected treatment time.

3.1.1. Chlorinated solvents

Halogenated volatile organic compounds (VOCs), including chlorinated aliphatic hydrocarbons (CAHs), are frequently found in soil and groundwater on polluted sites (CityChlor, 2010). CAHs are used as solvents and are referred to in this study as "chlorinated solvents". The most common of these CAHs are perchloroethylene (PCE) and trichloroethylene (TCE) (Table 1). The studied site is mainly contaminated with TCE. The variable considered to evaluate the pollution extent is the sum of the concentrations of all the chlorinated solvents.

Table 1. Most common chlorinated solvents (source).

Name | Common name | Abbreviation | Molecular formula

Chlorinated Methanes
Tetrachloromethane | Carbon tetrachloride | CT | CCl4
Trichloromethane | Chloroform | CF | CHCl3
Dichloromethane | Methylene chloride | DCM | CH2Cl2
Chloromethane | Methyl chloride | CM | CH3Cl

Chlorinated Ethanes
Hexachloroethane | Perchloroethane | HCA | C2Cl6
Pentachloroethane | - | PCA | C2HCl5
1,1,1,2-tetrachloroethane | - | 1,1,1,2-TeCA | C2H2Cl4
1,1,2,2-tetrachloroethane | - | 1,1,2,2-TeCA | C2H2Cl4
1,1,2-trichloroethane | - | 1,1,2-TCA | C2H3Cl3
1,1,1-trichloroethane | Methyl chloroform | 1,1,1-TCA | C2H3Cl3
1,2-dichloroethane | - | 1,2-DCA | C2H4Cl2
1,1-dichloroethane | - | 1,1-DCA | C2H4Cl2
Chloroethane | - | CA | C2H5Cl

Chlorinated Ethenes
Tetrachloroethene | Perchloroethene | PCE | C2Cl4
Trichloroethene | - | TCE | C2HCl3
Cis-1,2-dichloroethene | Cis-dichloroethene | Cis-DCE | C2H2Cl2
Trans-1,2-dichloroethene | Trans-dichloroethene | Trans-DCE | C2H2Cl2
1,1-dichloroethene | Vinylidene chloride | 1,1-DCE | C2H2Cl2
Chloroethene | Vinyl chloride | VC | C2H3Cl

These solvents have a variety of possible uses, but they have mainly been used as degreasers in the dry cleaning or manufacturing industries.

They have a low miscibility and a relatively moderate solubility, and they are denser than water (HCR, 2003). They belong to the Dense Non-Aqueous Phase Liquids (DNAPLs), which are characterized by their particular behavior in soil.

Once they are spilled, usually from a storage tank, these pollutants are driven by gravity and flow through the unsaturated zone down to the water table (Figure 12). Since they are denser than water, they do not float but flow through the saturated zone until they reach the bottom of the aquifer. They adsorb to the soil during their descent and form one or more pools of pure phase on top of less permeable layers and at the bottom of the aquifer.

Figure 12. DNAPLs' behavior in soil (Cwiertny & Scherer, 2010).

There are several simultaneous processes that occur:

• The volatilization of the compound in the gaseous part of the unsaturated zone.

• The adsorption of the pollutant to the soil particles when flowing through the unsaturated and saturated zone.

• The dissolution of the contaminant in the saturated zone.

The pollutant will be present in gaseous, dissolved, adsorbed and pure phases in the soil.

This particular behavior makes it difficult to locate the source of the contamination. In the saturated zone, the collected water samples can contain only dissolved contaminant and do not provide information on the amount of pure phase pollutant in the “pools”.

In the same way, in the unsaturated zone, which is the focus for this study, soil sampling can miss the localized pollution and lead to errors.

These transport processes lead to heterogeneity in the pollutant concentration.

3.1.2. Available data

The study case is appropriate for this kind of work since a lot of data are available. A first diagnosis was made to assess whether or not remediation needed to be done and then a more thorough diagnosis was performed specifically in the identified polluted areas (Figure 13).


Figure 13. Location of the samples for the two data acquisitions, for the top layer and for the bottom layer.

The data are divided between two layers in agreement with the lithology (Table 2):

• The top layer: between 0 and 3 meters, composed of backfill coming from outside.

• The bottom layer: between 3 and 6 meters, composed of the original soil: clayey and sandy alluvia.


Table 2. Number of sample points.

Layer | First sampling | Second sampling
Top layer | 461 points | 212 points
Bottom layer | 244 points | 21 points

3.2. Methodology

From the study site dataset, different methods of calculation of the mass of pollutant in soil are used and compared. The goal was to compare rapid deterministic methods with a geostatistical approach.

3.2.1. General assumptions

The mass of pollutant within a known volume of soil can be calculated from the concentration, the dry matter ratio, the bulk density and the volume (1). In this part, the general assumptions needed to perform such computation are detailed.

The thickness of each considered layer is assumed to be constant and equal to 3 meters. The concentration estimated in two dimensions is allocated over the whole depth of a given layer. The surface of the site is discretized differently depending on the method of interpolation.

The density is assumed to be constant and equal to 1800 kg/m3 for the top layer (between 0 and 3 meters deep) and 2000 kg/m3 for the bottom layer (between 3 and 6 meters deep); these values are derived from the company's experience.

The dry matter ratio was averaged and considered constant, equal to 83 %, to simplify the problem. The concentration is multiplied by the dry matter ratio divided by 100 and by the density; the new variable to be modeled is called the content and is denoted P (9), (10).

The new mass calculation is now:

$P_i = \frac{C_i \, \alpha \, \rho}{100}$ (9)

$M_i = P_i V_i$ (10)

Where $\alpha$ is the dry matter percentage in soil (%), $\rho$ is the density of the considered soil (kg/m3), $C_i$ is the pollutant concentration (mg/kg of dried soil), $P_i$ is the content (mg/m3), $V_i$ is the considered volume of soil (m3) and $M_i$ is the mass of pollutant (mg).

3.2.2. Deterministic methods

Different interpolation methods, or simple allocations of the data, described in the theoretical background part, were carried out to estimate the concentration of pollutant at unsampled points. The mass was then calculated with this input. The methods were applied separately to the top layer and to the bottom layer, leading to two series of results.

Mean 1

The mean value of the content is allocated to the entire site.

Mean 2

This method consists in averaging the content values, but only over the polluted zones. The polluted areas are first delimited using the natural neighbor interpolator on a 1 x 1 m grid. Zones are then created according to the interpolated concentration values.


Figure 14. Choice of the threshold value.

The legal limit of the concentration of chlorinated solvents for the soil to be considered inert is 2 mg/kg of dried soil.

A second threshold is found using the cumulative frequency distribution plot, which generally has two parts: a stationary part and a ramp part. The break between these two parts can be used as a threshold to differentiate source pollution from residual pollution (Figure 14). The value is 20 mg/kg of dried soil.

The chosen categories are then:

• from 0 to 2 mg/kg: areas considered not polluted,

• from 2 to 20 mg/kg: pollution level 1,

• from 20 to 200 mg/kg: pollution level 2,

• from 200 to 2000 mg/kg: pollution level 3.

Then, the mean value of the content of each zone is allocated to the entire zone. Those values are used in the calculation (10).

This method is tested because it is a very natural way of reasoning but it does not have a mathematical rationale behind it.
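A possible sketch of the Mean 2 reasoning on an already interpolated grid is shown below. The interpolated concentrations are synthetic, and the choice of letting the below-threshold zone also receive its own mean value is an interpretation of the description above rather than the company's exact procedure.

```python
import numpy as np

# Group cells of an interpolated 1 x 1 m grid into pollution zones by
# thresholds, assign the zone mean to every cell of the zone, then compute
# the mass. `grid_c` is a placeholder array of concentrations (mg/kg).
rng = np.random.default_rng(3)
grid_c = rng.lognormal(1.0, 1.5, size=(100, 100))

bins = [2.0, 20.0, 200.0]                # zone limits in mg/kg
zone = np.digitize(grid_c, bins)         # 0 = below 2 mg/kg, 1..3 = pollution levels 1..3
zone_mean = np.array([grid_c[zone == k].mean() if np.any(zone == k) else 0.0
                      for k in range(4)])
c_allocated = zone_mean[zone]            # each cell takes the mean of its zone

cell_volume_m3 = 1.0 * 1.0 * 3.0         # 1 m2 cells over a 3 m thick layer
mass_kg = np.sum(c_allocated * 0.83 * cell_volume_m3 * 1800.0) / 1e6
print(f"Mean 2 mass for the synthetic example: {mass_kg:.1f} kg")
```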

Thiessen polygons

The polygons are generated. Each polygon is associated with a content value. The calculation is carried out for each polygon (10) and the masses are summed.

Natural neighbor

The values of the content are estimated for squares of 5x5 m with the natural neighbor interpolator. For each block, the calculation is performed. The values for the block are then added up.

IDW

The values of the content are estimated for squares of 5x5 m with the inverse distance weighting interpolator. For each block, the calculation is performed. The values for the block are then added up.

3.2.3. Stochastic conditional simulations: the geostatistical approach

The geostatistical approach consists in performing conditional simulations. Indeed, a simple kriging will give the pollutant mass and the uncertainty for each mesh of the modeling grid, but it will not give the uncertainty for the total mass of pollutant contained in the soil, since these local uncertainties cannot be summed.

The simulation method was carried out with the following steps. The methodology is detailed only for the top layer; the same principles were applied to the bottom layer.

Exploratory Data Analysis

The EDA is a key step of the stochastic conditional simulations method. It is essential because it is the basis of the spatial modeling of data that will be the input of the simulation (Figure 15a, 15b).

Figure 15. – Histogram of the concentration value in normal scale (15a) and in logarithmic scale (15b).

This analysis also enables the performer to spot possible errors or inconsistencies in the data. In geostatistics, it is generally recommended not to mix data coming from different campaigns (GeoSiPol, 2005).

A first step is to verify if the two sets of data can be mixed.

The variographic cloud, visualized together with the basemap of the data, shows that the large variogram values come from mixing the first sampling and second sampling data (Figure 16).

The data from the second diagnosis were therefore masked, because they were acquired with a different sampling method and at a different time and showed inconsistencies with the first sampling data. The large variogram values coming from this mixing (Figure 16) have to be removed because they do not express the real spatial variability, but a false variability resulting from the fact that the two datasets were acquired at different times (Geovariances, 2013).

In the remainder of the geostatistical approach, only the first sampling campaign (which was more regular and covered the whole site area) is considered.


Figure 16. Localization of the data responsible for the large variogram values.

Gaussian Transformation of data

The turning bands simulation method was chosen, and since this method was developed in a multigaussian framework, it was necessary to transform the data distribution into a Gaussian distribution. A transformation function is applied to the data and is used later to back-transform the simulated values. The chosen method to perform the Gaussian transformation is the Normal Score Transform (NST).

NST ranks the values in the dataset and matches these ranks to the equivalent ranks in a standard normal distribution (Figure 17).

Figure 17. Histogram of the original data (left) and of the transformed data (right).
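A minimal rank-based version of the NST idea is sketched below; tie handling and the back-transformation table needed later are deliberately omitted, and scipy is assumed to be available.

```python
import numpy as np
from scipy import stats

def normal_score_transform(values):
    """Rank-based normal score transform (a sketch of the NST idea).

    Each value is mapped to the standard normal quantile of its rank.
    """
    n = len(values)
    ranks = stats.rankdata(values)      # 1 .. n
    p = (ranks - 0.5) / n               # plotting positions in (0, 1)
    return stats.norm.ppf(p)

# Illustrative positively skewed concentrations (mg/kg).
c = np.array([0.5, 0.8, 1.2, 2.5, 6.0, 14.0, 40.0, 120.0, 540.0])
y = normal_score_transform(c)           # approximately N(0, 1) scores
```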

Fitting the variogram

The variographic cloud and variogram are plotted with the new Gaussian variable (Figure 18).

The variogram was fitted using three basic structures (Figure 19):

• Nugget effect: sill of 0.496

• Spherical function: sill of 0.336 and range of 167.72 m

• Spherical function: sill of 0.297 and range of 97.54 m

This function is used to model the spatial variability. The continuous model resulting from this fitting is used in the simulation algorithm.
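The nested model above (a nugget plus two spherical structures) can be evaluated at any distance as sketched below; only the three reported structures are used, and the code is an illustration rather than the Isatis implementation.

```python
import numpy as np

def spherical(h, sill, range_):
    """Spherical structure reaching `sill` at distance `range_`."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / range_ - 0.5 * (h / range_) ** 3)
    return np.where(h < range_, g, sill)

def fitted_model(h):
    """Nested model reported in the text: nugget + two spherical structures."""
    h = np.asarray(h, dtype=float)
    nugget = np.where(h > 0, 0.496, 0.0)
    return nugget + spherical(h, 0.336, 167.72) + spherical(h, 0.297, 97.54)

lags = np.array([0.0, 10.0, 50.0, 100.0, 200.0])
print(fitted_model(lags))   # model semivariance at a few distances
```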


Figure 18. Experimental variogram of the Gaussian variable.

Figure 19. Variogram model of the Gaussian variable.


Performing the simulations

The simulation goes through nodes from a grid that has to be defined in advance.

The modeling grid is set to squares of 10 x 10 meters with a rotation of 50° to cover all the data points (Figure 20). The mesh of 10 meters is chosen because the data points are generally spaced about 10 meters apart. The grid is then intersected with the limit of the site to obtain a representative modeling grid (Figure 21).

Figure 20. Modeling grid.

A simulation algorithm involves the use of a search neighborhood, i.e. the samples that are to be used when simulating the value at a target location. The chosen neighborhood is unique, meaning that all the samples are used for the simulation at each location.

The algorithm was set up as a "block" estimation: the average value over a cell centered at the grid node is calculated. In practice, the average is calculated from a number of discretization points in each block; this number was set to 25. The discretization was performed along the X and Y axes but not along Z, since the problem was considered as a 2D problem.

The chosen algorithm is the Turning Bands method, with the number of turning bands set to 400. The quality of the simulations depends on this number. A compromise has to be found between the number of simulations and the number of turning bands because both consume computing resources.

The number of simulations was set to 500.

Post-processing

The simulated result is back-transformed to obtain the original variable. The simulation results are stored in a macro variable and are indexed from 1 to 500.

For each simulation, the accumulation, i.e. the estimated variable multiplied by the volume, hence here the mass of each block, is calculated and summed over the entire site to obtain the total estimated mass of pollutant for that simulation.


A distribution of the results is computed, together with elementary statistics. This distribution makes it possible to estimate the value with an interval of possible error.

Figure 21. Limit of the site.

3.3. Computer resources

The ArcGIS Software was used to perform the deterministic methods and for the first visualization of data. The data are displayed using QGIS.

The probabilistic simulations were computed and processed with ISATIS, which is a comprehensive geostatistical software package developed by the company Geovariances.

4. RESULTS

The estimated masses of pollutant calculated with the different methods described above are displayed in this part for the top layer and for the bottom layer (Table 3; Figure 22; Figure 23). The units are kilograms of chlorinated solvents. As stated above, the concentrations were interpolated on a grid (different for each method) and they were used as input for the calculation of the mass of pollutant contained within the soil of the site.

Some conclusions, such as confidence intervals, can be derived directly from the mass distribution produced by the conditional simulations. A confidence interval [a; b] at x % is the interval within which the estimated mass lies with a certainty of x %.

It enables the performer to size his remediation methods accordingly and to calculate an interval for the cost of the remediation.

The different confidence intervals for the estimated mass of pollutant within the two layers are displayed in Table 4.


Figure 22. Estimations of the mass of pollutant in the top layer of the site calculated with different methods.


Figure 23. Estimations of the mass of pollutant in the bottom layer of the site calculated with different methods.


Table 3. Estimated mass (kg) with the different methods.

Method | Top layer (0-3 m) | Bottom layer (3-6 m)
Mean 1 | 4800 | 490
Mean 2 | 1675 | 330
Thiessen Polygons | 1864 | 336
Natural Neighbor | 1827 | 292
IDW | 2147 | 333
Conditional Simulations | median: 2545, interval: [1739; 3573] | median: 514, interval: [360; 814]

The clear conclusion is that the deterministic methods have a tendency to underestimate the quantity of pollution compared to the geostatistical approach. The advantages and drawbacks of the different tools tested are outlined in Table 5.

The Mean 1 and Mean 2 approaches were tested because this kind of reasoning is very common and does not require specific software packages. They are very quick to implement and, depending on the raw data, they can give an idea of the quantity of pollutant. It is clear that averaging the concentration values over the whole site is not suitable for this site, since the histogram shows a very wide range of concentration values: when averaging, the very high values influence the whole site and the results are unrealistic.

However, when very few sample points are available (under 20), these methods can be used, because it is not relevant to use more complex methods in that case.

Table 4. Confidence intervals at 94, 90 and 80 % for the estimated mass (kg).

Confidence level | Top layer (0-3 m) | Bottom layer (3-6 m)
94 % | [2005; 3204] | [411; 674]
90 % | [2058; 3126] | [423; 656]
80 % | [2142; 2991] | [434; 624]

As expected, the results from the Thiessen Polygons approach and from the Natural Neighbor interpolator are very close since they rely on the same hypothesis on spatial variability (i.e. that points have areas of influence). Thiessen Polygons have the advantage of being “visually truthful”, meaning that the client can directly see that it is an interpolation based on hypothesis and not a representation of the truth.

Inverse Distance Weighting, Thiessen Polygons and the Natural Neighbor interpolator all rely on a model of an implied spatial variability that is not the true one. Their implementation implies that the user assumes that the closer two points are, the more similar their concentration values will be. However, in a very heterogeneous soil this assumption is wrong, and understanding the real spatial variability requires studying it with EDA.


Table 5. Advantages and drawbacks of each method.

Method | Advantages | Drawbacks
Mean 1 | Very easy to implement | Lack of scientific rationale; very imprecise; does not take into account the spatial variability of the raw data
Mean 2 | Very easy to implement; concept easy to understand | Lack of scientific rationale; does not take into account the spatial variability of the raw data
Thiessen Polygons | Easy to implement; concept easy to understand | Accuracy depends strongly on sampling density; not fit to be presented (odd shapes); imposes a "fake" spatial variability
Natural Neighbor | Easy to implement; concept easy to understand; visually convincing | Accuracy depends strongly on sampling density; imposes a "fake" spatial variability
IDW | Easy to implement | Strongly depends on the power and radius values set by the planner; imposes a "fake" spatial variability
Conditional Simulations | Gives the associated uncertainty; respects the raw data and the "true" spatial variability; heterogeneity of the data is taken into account | Conceptually complicated; a fairly large amount of data needs to be available; deep knowledge of geostatistics necessary; requires pre- and post-processing steps

The geostatistical approach with conditional simulations seems the best suited for this kind of estimation since it takes into account the real spatial variability of the data and gives an estimate with an interval. The possible uncertainty can be determined and the possible associated cost can be assessed. It enables the remediation company to calculate the possible additional costs and present them to the client beforehand.

5. DISCUSSION

In light of the results above, it is interesting to point out the limits of the study regarding the unresolved problems and the further research that needs to be done to fully comprehend this issue.

5.1. Limits of the study

This study does not provide a general conclusion on estimation methods, since the results are strongly dependent on the studied dataset. The behavior of a pollutant in soil can be very different depending on the considered compound. Some follow more homogeneous processes in soil, making the deterministic methods valid. It also depends on the porosity and permeability of the soil, which influence processes that may be more or less compatible with a given interpolation method. Chlorinated solvents have a very particular behavior in soil and rarely spread homogeneously. Unfortunately, geostatistics does not take into account the physical processes that resulted in this particular distribution of the data. Hence, geostatistics gives an interpretation: it does not reproduce reality but provides realistic possibilities.

The hypothesis on the density has an important influence on the results; however, it was set to be the same for all the different methods. Therefore, it does not influence the conclusion of the comparison.

