Simulation and parameter estimation of spectrophotometric instruments

Academic year: 2021


DEGREE PROJECT

AT CSC, KTH

SIMULATION AND PARAMETER ESTIMATION OF

SPECTROPHOTOMETRIC INSTRUMENTS

SIMULERING OCH PARAMETERESTIMERING AV

SPEKTROFOTOMETRISKA INSTRUMENT

Avramidis Stefanos (sav@kth.se)
Degree Project in Scientific Computing, Level: Master

Supervisor: Michael Hanke
Examiner: Michael Hanke


Simulation and Parameter Estimation of

Spectrophotometric Instruments

Abstract

The paper and the graphics industries use two instruments with different optical geometry (d/0 and 45/0) to measure the quality of paper prints. The instruments have been reported to yield incompatible measurements and even rank samples differently in some cases, causing communication problems between these sectors of industry.

A preliminary investigation concluded that the inter-instrument difference could be significantly influenced by external factors (background, calibration, heterogeneity of the medium). A simple methodology for eliminating these external factors, and thereby minimizing the instrument differences, has been derived. The measurements showed that, when the external factors are eliminated and there is no fluorescence or gloss influence, the inter-instrument difference becomes small, depends on the instrument geometry, and varies systematically with the scattering, absorption, and transmittance properties of the sample. A detailed description of the impact of the geometry on the results has been presented for a large range of samples. Simulations with the radiative transfer model DORT2002 showed that the instruments' measurements follow the physical radiative transfer model except in cases of samples with extreme properties. The conclusion is that the physical explanation of the geometrical inter-instrument differences is based on the different degree of light permeation of the two geometries, which eventually results in a different degree of influence from near-surface bulk scattering. It was also shown that the d/0 instrument fulfils the assumption of a diffuse field of reflected light from the medium only for samples that resemble the perfect diffuser, but yields an anisotropic field of reflected light when there is significant absorption or transmittance. In the latter case, the 45/0 proves to be less anisotropic than the d/0.

In the process, the computational performance of DORT2002 has been significantly improved. After the modification of DORT2002 to include the 45/0 geometry, different optimization methods for the solution of the inverse problem were tested for performance, stability, and accuracy, and the Gauss-Newton optimization algorithm was qualified as the most appropriate one. Finally, a new homotopic initial-value algorithm for routine tasks (spectral calculations) was introduced, which resulted in a further three-fold speedup of the whole algorithm.


Simulering och Parameterestimering av

Spektrofotometriska Instrument

Sammanfattning

The paper industry and the graphics industry use two instruments with different optical geometries (d/0 and 45/0) to measure the quality of paper. The instruments have been reported to yield incompatible measurements, and in some cases the same samples may even be ranked differently. This causes communication problems between these sectors of industry.

A preliminary investigation concluded that the inter-instrument difference could be significantly influenced by external factors (background, calibration, heterogeneity of the medium). A simple method for eliminating these external factors, and thereby minimizing the instrument differences, has been developed. The measurements showed that, when the external factors are eliminated and there is no influence from fluorescence or gloss, the inter-instrument difference becomes small, depends on the instrument geometry, and varies systematically with the scattering, absorption, and transmittance of the sample.

A detailed description of the effects of the geometry on the results has been presented for a large selection of samples. Simulations with the radiative transfer model DORT2002 showed that the instrument measurements follow the physical radiative transfer model except in cases of samples with extreme properties. The conclusion is that the physical explanation of the geometrical inter-instrument differences is based on the different degrees of light permeation of the two geometries, which eventually result in different degrees of near-surface bulk scattering. It was also shown that the d/0 instrument fulfils the assumption of a diffuse field of reflected light from the paper surface only for samples that resemble the perfect diffuse reflector, but yields an anisotropic field of reflected light when there is significant absorption or transmittance. In the latter case, the 45/0 proves to be less anisotropic than the d/0.

In the process, the performance of DORT2002 has been improved considerably. After the adaptation of DORT2002 to include the 45/0 geometry, different optimization methods for the solution of the inverse problem were tested for performance, stability, and accuracy, and the Gauss-Newton optimization algorithm was qualified as the most appropriate one. Finally, a new homotopic initial-value algorithm for routine tasks (spectral calculations) was introduced, which resulted in a three-fold speedup of the whole algorithm.


CONTENTS

1. Introduction
2. Background
2.1. Radiative transfer
2.2. Kubelka-Munk theory
2.3. The DORT2002 model
2.4. The BRDF and the anisotropy index
2.5. Optimization methods for the parameter estimation problem
2.5.1 Kubelka-Munk vs. DORT2002
2.5.2 The role of the asymmetry factor g in the problem
2.5.3 Methods for solving the optimization problem
2.6. Color spaces and color models
2.7. Instruments for reflectance measurements
2.7.1 The d/0° instrument
2.7.2 The 45°/0° instrument
3. Methods
3.1. Materials
3.1.1 Model-based case studies in the choice of samples
3.2. Measurements
3.3. External factors that influence the measurements
3.3.1 Background
3.3.2 Calibration
3.3.3 Heterogeneity and spot measurements
3.3.4 Other factors
3.3.5 Derived methodology
3.4. Computations
4. Analysis of the parameter estimation algorithms
4.1. Problem characterization
4.2. Stability and conditioning
4.3. Performance and accuracy
4.3.1 Further performance improvement for spectral calculations using a homotopic method
5. Results
5.1. Model-based case studies representing various samples
5.2. Explaining geometry-related instrument differences
5.3. Measuring the differences
5.4. Model against measurements part I: Scattering and absorption coefficients, inverse problem
5.5. Model against measurements part II: Reflectance value reconstruction, forward problem
5.6. Influence of the asymmetry factor g in the model simulations
6. Discussion
6.1. Future work
References
Appendix I: Nomenclature
Appendix II: Data figures


1. Introduction

The paper/pulp industry and the printing/graphics industry use two different spectrophotometric instruments (d/0 and 45/0) to measure the quality of paper prints. Both methods use reflectivity measurements as a basis, but differ significantly in geometrical characteristics and in the means of illumination (d/0 uses diffuse illumination and 45/0 uses collimated illumination from 45°). They have therefore been reported to yield incomparable measurements, and occasionally the same set of samples may even be ranked differently by the two methods [22, 24, 25]. This implies that a better understanding of the interaction of light with the paper medium is required to diminish cases of lower production efficiency and production waste.

The light scattering and absorbing properties of the paper medium can be inferred from the reflectivity measurements by the use of a model that describes the light-paper interaction. The model currently used in the industry, namely the Kubelka-Munk (KM) model [15, 16], does not take angle-resolved reflectance into account, and it thus gives a limited picture of the interaction between light and paper. KM has therefore been reported to have several accuracy limitations [7]. KM has also been shown to be an approximate, simplified case of the radiative transfer problem [6]. A problem formulation and solution of the latter has been implemented by Edström [8], and a computational tool, DORT2002, has been developed [8, 10]. DORT2002 is in excellent agreement with KM in cases where the assumptions of KM are satisfied. In the cases where KM has decreased accuracy, DORT2002, taking the angular resolution of the reflectance into account, outperforms it.

Solving the inverse problem with DORT2002, namely calculating the optical properties of the paper from the reflectance measurements, provides a more accurate description of the paper medium. Presently, KM is used in the industry to infer the paper properties from the measurements of a variety of significantly different instruments. KM, which assumes perfectly diffuse illumination and scattering, calculates the paper properties accurately only for instruments and samples that fulfill these assumptions. DORT2002, thanks to its angle-resolved model, is suitable for instruments with any illumination and scattering conditions, and can therefore be used to model various instruments accurately. It follows that, by using DORT2002, the optical properties of paper can be accurately measured, compared, and ranked as physical quantities, independently of the instrument type used [7]. DORT2002 has already been successfully used to model the d/0 instrument case [20].

The objective of this work is to extend the same method and enhance the DORT2002 tool with the inclusion of the 45/0 instrument case, which is inadequately standardized; to validate it with measurements from different instrument types; to construct and test different optimization routines for the inverse problem (parameter estimation of the paper medium); and to perform an analysis of the optimization methods in order to improve the computational performance of the algorithm. An evaluation of the influence of external factors (e.g. instrument calibration or construction) that can be responsible for the different results will take place, and the conditions for their elimination will be examined in order to isolate the effect of the geometry. Finally, using the measurements and the model, the physical interaction of light with paper under different illumination and geometry conditions will be explored and explained.


2. Background

To theoretically describe the interaction of light with turbid media, such as paper, a model has to be adopted. The study of the interaction of light with turbid media has developed into the area known as radiative transfer [6]. This field has mainly been developed by astrophysicists and within atmospheric physics, but it is also applied in the paper industry. While the paper industry has adapted and refined solution methods for paper applications, more comprehensive models have been developed for the solution of the problem.

All the models presented below describe the paper as a continuum; therefore, they do not take into account irregularities in the paper.

2.1. Radiative transfer

The radiative transfer theory [6, 8] describes the interaction of radiation with scattering and absorbing media and leads to the calculation of the radiation field in a turbid medium that absorbs, scatters, and emits radiation. The theory is applied in various fields, such as neutron diffusion, optical tomography, light-atmosphere interaction, and scattering in turbid media such as paper and film.

The analysis of a radiation field is based on the amount of radiant energy dE, in a specified frequency interval (ν, ν + dν), which is transported across an area dA in directions within a solid angle dω. The energy flow is thought of as non-interacting beams of radiation in all directions. The intensity of the radiation I is defined through the flow of energy dE by

I = dE / (cos θ dA dω dt),

where θ is the polar angle between the normal of dA and the direction dω. This is illustrated in Figure 1.

Figure 1. The surfaces and angles used to define the intensity in the radiative transfer theory

The field is said to be isotropic (diffuse) at a point when the intensity is independent of the direction of the radiation at that point. If the intensity varies with direction, the radiation field is anisotropic. A radiation field that is invariant at all positions is called homogeneous and isotropic. By integration over a solid angle ψ, the flux of energy F within ψ per unit area is

F = ∫ψ dE / (dA dt) = ∫ψ I cos θ dω.

The intensity unit is energy per area, per solid angle, per time, and the intensity describes how the energy flow varies with direction and position. If radiation with intensity I travels a distance dx through the medium, a fraction of it will be extinguished due to absorption and scattering, and the intensity becomes I + dI, where

dI = −σe I dx, (2.1.1)

with σe denoting the extinction coefficient. The extinction consists of two components: the absorption component, when the radiation is transformed into other forms of energy, and the scattering component, when the radiation is scattered into other directions. Thus, the extinction coefficient can be divided into two parts:

σe = σa + σs,

where σa and σs are the absorption and scattering coefficients, respectively. At this point, a dimensionless parameter, referred to as the albedo a, is defined as

a = σs / (σa + σs),

with 0≤ a≤ 1. The physical meaning of the albedo is that if a = 1, then all the radiation is scattered and none is absorbed. A value for the albedo close to zero corresponds to most radiation being absorbed. Practically in paper applications, the ratio can be close to 1 in cases of very low absorption, but it can very rarely be close to zero since there is always scattering in paper media.
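As a minimal numerical sketch of these definitions (the coefficient values are illustrative and not taken from the thesis), note that integrating (2.1.1) over a homogeneous path of length x gives exponential attenuation:

```python
import math

# Hypothetical medium properties (illustrative values, not from the thesis):
sigma_s = 50.0   # scattering coefficient [1/m]
sigma_a = 5.0    # absorption coefficient [1/m]

sigma_e = sigma_a + sigma_s             # extinction coefficient
albedo = sigma_s / (sigma_a + sigma_s)  # single-scattering albedo, 0 <= a <= 1

# Integrating dI = -sigma_e * I * dx over a homogeneous path gives
# exponential attenuation of the unscattered beam:
I0 = 1.0
x = 0.01  # path length [m]
I = I0 * math.exp(-sigma_e * x)

print(albedo)  # 50/55 ≈ 0.909, i.e. weak absorption
print(I)       # exp(-0.55) ≈ 0.577
```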

For a plane-parallel geometry, the distance x is conveniently described in the direction perpendicular to the plane. The extinction optical path measured from the top surface downwards is defined as

τ = ∫0^x σe dx′,

that is also referred to as optical thickness. The primed arguments used hereby denote quantities corresponding to incident radiation. Eq. (2.1.1) can then be written

cos θ dI/dτ = I, (2.1.2)

with the intensity I = f(τ, cos θ, φ) a function of the optical thickness and the angles. The direction of scattering of incident radiation coming from dω′ to be scattered into dω (or from polar angle θ′ and azimuthal angle φ′ to angles θ and φ, respectively)1 is described by the phase function p(cos Ξ), where Ξ is the angle between the central directions of these solid angles. A phase function is a probabilistic description of the scattering direction. After normalization, the phase function satisfies

(1/4π) ∫4π p(cos Ξ) dω = 1,

and the radiation scattered into the solid angle dω is accounted for by the source function

S = (a/4π) ∫4π p(cos Ξ) I dω′. (2.1.3)

1 Primed arguments correspond to incident radiation; non-primed arguments correspond to reflected radiation.

By adding this source term to (2.1.2), the equation to be solved is derived:

cos θ dI/dτ = I − S, (2.1.4)

or, equivalently,

cos θ dI/dτ = I − (a/4π) ∫4π p(cos Ξ) I dω′. (2.1.5)

This equation therefore describes how the intensity varies with direction and position. The first term on the right represents extinction, which can be caused by absorption or scattering. The second term on the right represents the radiation scattered from and to other directions. In order to consider other sources, e.g. re-emission of absorbed radiation with reduced energy as in the case of fluorescence, more terms have to be added to the right-hand side of (2.1.3).

For the case of paper media, the radiative transfer model is studied in plane-parallel geometry, where the horizontal extension of the medium is assumed large enough to give no boundary effects on the sides. The boundary conditions at the top and bottom surfaces are assumed to be time and space independent.

All medium parameters, i.e. σs, σa and a, are wavelength dependent. Thus, when studying for example the visible spectrum, the wavelength is discretized into intervals of 10 nm and each wavelength is associated with its own parameter setup. It is desirable to express all distances and the scattering and absorbing properties as functions of the dimensionless quantities albedo and optical thickness, since this eliminates the dependence on length scaling in the particular case studied. These quantities thereby allow for a higher level of generality.

2.2. Kubelka-Munk theory

The model described here, known as the Kubelka-Munk model (KM) of light scattering and absorption in turbid media, was first introduced in 1931 [15] and was further developed by Kubelka [16]. Its application in the paper industry is described by Pauler [21]. The Kubelka-Munk model reduces the problem of radiative transfer to light travelling in the two vertical, antiparallel directions in a homogeneous medium. The medium is characterized by its Kubelka-Munk scattering and absorption coefficients s and k. If light of intensity I travels a distance dx, the intensity will be reduced by (s + k)I dx. The scattered amount of light will reappear as intensity in the opposite direction, J, for which the same conditions hold. This gives the following differential equations for the intensities:

dI = (s + k)I dx − sJ dx,
dJ = −(s + k)J dx + sI dx, (2.2.1)

The difference in sign is due to the definition of x, as can be seen in Figure 2.

Figure 2. The coordinate system and intensity definitions used in the Kubelka-Munk model. KM is constrained only to the upward and downward directions.


If r(x) = J/I defines the reflectance factor2, an infinitesimal change in r can be written

dr = (s r² − 2(s + k)r + s) dx, (2.2.2)

using (2.2.1). In the case of an opaque medium, the reflectance is theoretically constant with dx, i.e. dr = 0, and the reflectance is denoted3 by r = R∞. In this case, R∞ is derived from (2.2.2) as a function of s and k:

R∞ = 1 + k/s − √((k/s)² + 2k/s), (2.2.3)

Eq.(2.2.2) can be also integrated from x = 0 to x = d to give

(1

)(

1

ln

1

(1

)(

)

g g

RR

R

R

sd

R R

R

R

R

R

     

, (2.2.4)

where R = r(d) and Rg = r(0). It has been shown that the medium thickness d can be replaced by the density or, as is common in the paper industry, by the grammage w of the medium [27], if the density of the medium is constant. The unit of the scattering and absorption coefficients then becomes m²/kg instead of 1/m if the grammage is measured in kg/m², i.e. changing the measure of extension implies the corresponding change in s and k. In practice, eqs. (2.2.3) and (2.2.4) are used to calculate s and k. This is done by measuring R∞ as the reflectance from a thick pad of paper sheets. The reflectance R, usually denoted4 R0, is measured for a single sheet over a background with reflectance Rg. Normally a black cavity is used as background, and then Rg = 0. By combining these values and the grammage, eqs. (2.2.3) and (2.2.4) can be solved for s and k. The expressions for s and k are

s = R∞ / (w(1 − R∞²)) · ln[ R∞(1 − R0 R∞) / (R∞ − R0) ],

k = s(1 − R∞)² / (2R∞),

It has been shown by Neuman [20] and Edström [7] that s increases rapidly with R0 but decreases slightly with R∞, and that the rate of change is larger for strongly absorbing samples. This means that a small error in R0 can cause large deviations in s, and that the relative size of the deviation is larger for highly absorbing samples. The Kubelka-Munk scattering coefficient is also highly sensitive to, for example, measurement errors in regions close to the line R0≈ R∞. This illustrates the unreliability of the Kubelka-Munk model in the case of high opacity (opacity=R0/R∞∙100%).
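These expressions can be coded directly. A minimal sketch, assuming illustrative values for R0, R∞, and the grammage w (the function name is ours, not from the thesis); as a consistency check, R∞ is reconstructed from the resulting s and k via eq. (2.2.3):

```python
import math

def km_coefficients(R0, Rinf, w):
    """Kubelka-Munk s and k from single-sheet reflectance R0 (black backing),
    opaque-pad reflectance Rinf, and grammage w [kg/m^2]; s, k in m^2/kg."""
    s = Rinf / (w * (1.0 - Rinf**2)) * math.log(Rinf * (1.0 - R0 * Rinf) / (Rinf - R0))
    k = s * (1.0 - Rinf)**2 / (2.0 * Rinf)
    return s, k

# Illustrative measurement values (not from the thesis):
s, k = km_coefficients(R0=0.70, Rinf=0.85, w=0.080)

# Consistency check: reconstruct R_inf from s and k via eq. (2.2.3)
ratio = k / s
Rinf_check = 1.0 + ratio - math.sqrt(ratio**2 + 2.0 * ratio)
print(s, k)
print(Rinf_check)  # ≈ 0.85, recovering the input
```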

2.3. The DORT2002 model

The solution methods for radiative transfer theory have been studied during the last century, and they pose several numerical challenges. Initially, radiative transfer problems were considered intractable because of the numerical issues, and coarse approximations were used instead. The development of mathematical tools and the increase in computational power in the last decades opened the field for more efficient and faster solution methods. The methods available today are discrete ordinates (approximating integrals with a numerical quadrature), Monte Carlo methods, and finite element methods.

2 The terms "reflectance" and "reflectance factor" are used interchangeably throughout the text to denote the ratio of reflected to incident intensity.

3 The value of R∞ is acquired by measuring the reflectance of an opaque pad of the medium.

4 The value of R0 is acquired by measuring the reflectance of one single sheet of the medium over a black background.

The DORT2002 model developed by Edström [8, 10] solves (2.1.4) numerically and is an implementation of a solution method using discrete ordinates. Discrete ordinates are a discretization of the solid angle domain and can be described as cones, or channels, for different polar angles. The number of channels N affects the accuracy of the model and using N = 2 gives the same result as the Kubelka-Munk model under certain assumptions of diffuse illumination and isotropic field of reflected light from the medium [7]. The model uses the objective scattering and absorption coefficients σs and σa and is adapted to an application in the paper industry. Only scattering (no fluorescence) contributions to the source function are considered, therefore no extra term in (2.1.3) is added for fluorescence. The scattering and absorbing medium is considered plane-parallel and homogeneous, and no edge effects are considered in the plane. The model can handle two types of incident intensity: a diffuse component and a beam component, i.e. collimated light.

A commonly used phase function is the Henyey-Greenstein [14] phase function,

p(cos Ξ) = (1 − g²) / (1 + g² − 2g cos Ξ)^(3/2),

where the parameter g is the asymmetry factor: g = −1 gives backward scattering, g = 0 gives isotropic scattering, and g = 1 gives forward scattering5. An illustration of this is shown in Figure 3.

Figure 3. The probability of scattering directions using the Henyey-Greenstein phase function and different values of the asymmetry factor g. Light is incident from the left and the scattering event occurs at the point marked with a black dot. Strong forward scattering, e.g. g = 0.8, gives a forward lobe, while g = 0 gives isotropic single scattering.
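The Henyey-Greenstein function can be checked numerically against the normalization condition of section 2.1. A sketch assuming azimuthal symmetry, so that (1/4π) ∫4π p dω = 1 reduces to (1/2) ∫ p(μ) dμ = 1 over μ = cos Ξ:

```python
def henyey_greenstein(cos_xi, g):
    """Henyey-Greenstein phase function p(cos Xi) with asymmetry factor g."""
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * cos_xi) ** 1.5

def normalization(g, n=200000):
    """(1/2) * integral_{-1}^{1} p(mu) dmu, evaluated with a midpoint rule;
    should equal 1 for any -1 < g < 1."""
    h = 2.0 / n
    total = sum(henyey_greenstein(-1.0 + (j + 0.5) * h, g) for j in range(n))
    return 0.5 * total * h

for g in (-0.5, 0.0, 0.8):
    print(g, normalization(g))  # all ≈ 1.0
```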

The angular variables can be related by the cosine relation:

cos Ξ = cos θ cos θ′ + sin θ sin θ′ cos(φ − φ′).

The solution method expands the phase function in a series of Legendre polynomials, dividing the hemisphere into 2N discretization points:

p(cos Ξ) = Σ_{l=0}^{2N−1} (2l + 1) χl Pl(cos Ξ),

where Pl are the Legendre polynomials and χl the expansion coefficients.

The DORT2002 model can handle the Henyey-Greenstein phase function, but also any desired phase function by allowing the definition of the coefficients in the Legendre expansion. The intensity can be expanded in a similar way using the Fourier transformation.
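For the Henyey-Greenstein phase function the expansion coefficients have the well-known closed form χl = g^l, which makes a truncated expansion easy to verify numerically. A sketch (the helper names are ours, and the truncation at 2N terms mirrors the discretization above):

```python
def legendre_P(l, x):
    """Legendre polynomial P_l(x) via the three-term recurrence."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    if l == 1:
        return p1
    for k in range(2, l + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    return p1

def hg(mu, g):
    """Closed-form Henyey-Greenstein phase function."""
    return (1 - g * g) / (1 + g * g - 2 * g * mu) ** 1.5

def hg_expanded(mu, g, N):
    # For Henyey-Greenstein the Legendre coefficients are chi_l = g**l,
    # so the truncated expansion with 2N terms is:
    return sum((2 * l + 1) * g**l * legendre_P(l, mu) for l in range(2 * N))

g, N = 0.5, 16
for mu in (-0.9, 0.0, 0.7):
    print(hg(mu, g), hg_expanded(mu, g, N))  # pairs agree closely
```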

5 This is not to be confused with the global isotropy of the field of reflected light. The asymmetry factor describes the anisotropy of the single scattering event.

This procedure eliminates the dependence on the azimuthal angle and gives an equation of the same form as (2.1.4) for every term in the expansion. These equations can be solved separately and assembled to give the total solution. The output of the model is thus intensity resolved in azimuthal and polar angle:

I(τ, cos θ, φ) = Σ_{m=0}^{2N−1} I^m(τ, cos θ) cos m(φ0 − φ),

where I^m are the Fourier components of the intensity and φ0 is some chosen reference azimuth. Previous implementations of this solution method have not been able to resolve the intensity azimuthally, so this is a new feature in the DORT2002 model.

Inserting the expanded phase function and intensity, (2.1.5) becomes:

cos θ dI^m(τ, cos θ)/dτ = I^m(τ, cos θ) − (a/2) ∫−1^1 p^m(cos θ, cos θ′) I^m(τ, cos θ′) d cos θ′,
m = 0, 1, …, 2N − 1. (2.3.1)

The integral in (2.3.1) is approximated by the finite sum

∫−1^1 f^m(cos θ′) d cos θ′ ≈ Σj ωj f^m(cos θj),

where the weights ωj are calculated by some quadrature formula.

This is the general mathematical description of the DORT2002 algorithm. The specialized version takes advantage of the quadrangular symmetry of the problem and designates two quadrants, 0 ≤ θ ≤ π/2 and π/2 ≤ θ ≤ π, halving the amount of calculations. This requires a double-Gauss quadrature for designating the 2N discretization points, so that the quadrature formula will not give oscillating results. The weights resulting from this treatment are always positive, which is a requirement for unconditional stability [8].
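The double-Gauss construction can be sketched by applying an ordinary N-point Gauss-Legendre rule to each quadrant separately; the mapped weights are manifestly positive. A sketch assuming NumPy is available (not the DORT2002 implementation itself):

```python
import numpy as np

def double_gauss(N):
    """2N double-Gauss points and weights: an N-point Gauss-Legendre rule
    mapped onto each of the two quadrants [-1, 0] and [0, 1] of mu = cos(theta)."""
    x, w = np.polynomial.legendre.leggauss(N)  # nodes/weights on [-1, 1]
    mu_up, w_up = 0.5 * (x + 1.0), 0.5 * w     # mapped to [0, 1]
    mu_dn, w_dn = 0.5 * (x - 1.0), 0.5 * w     # mapped to [-1, 0]
    return np.concatenate([mu_dn, mu_up]), np.concatenate([w_dn, w_up])

mu, wts = double_gauss(8)
print(bool(np.all(wts > 0)))   # weights are always positive
print(wts.sum())               # ≈ 2, the length of [-1, 1]
print((wts * mu**2).sum())     # ≈ 2/3, exact for this polynomial integrand
```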

In the final stage, the integral boundaries are also expanded by Fourier analysis, and the system of equations is transformed into matrix form following the analysis by Stamnes and Swanson [26]. The discretized equation is solved as an eigenvalue problem. The general solution of the discrete problem gives the intensity at any depth, but only at the quadrature points; interpolation formulas are used to calculate the solution at any point or angle.

DORT2002 can accordingly be modified to model different kinds of instrument geometries and illuminations by altering the corresponding choice of the Lagrange components, which describe the illumination direction, and the interpolation equations, which correspond to the position of the sensor.

2.4. The BRDF and the anisotropy index

A commonly used way of representing angle-resolved data, which will be used in the results section, is the bi-directional reflectance distribution function (BRDF) [20]. The BRDF is defined as:

BRDF = Rf / π, (2.4.1)

where Rf is a special reflectance factor, defined as the flux of energy reflected into a solid angle divided by the flux reflected from a perfect diffuser into the same solid angle when both media are under the same illumination:

Rf = ∫ψ Ir cos θ dω / ∫ψ Ir,d cos θ dω,

where Ir is the intensity reflected from a general surface, Ir,d is the intensity reflected from a perfect diffuser and ψ is the solid angle. For a perfect diffuser, the incident flux is identical to the reflected flux if the integration is performed over the whole hemisphere. The diffuse intensity is also independent of direction and can be moved outside the integral:

Fin = Fr,d = ∫2π Ir,d cos θ dω = π Ir,d,

and therefore:

Rf = π Iψ / Fin, (2.4.2)

where Iψ is the reflected intensity propagating within the solid angle ψ. Substituting (2.4.2) into (2.4.1) gives:

BRDF = Iψ / Fin.

The terms BTDF and BSDF can be used respectively for the transmittance and scattering bi-directional functions.

Figure 4. Typical BRDF of the 45/0, the d/0 and the perfectly diffuse instruments (for unity albedo). The BRDF of the perfectly diffuse instrument is a horizontal line. For this case of no transmittance and no absorption (all radiation is reflected), the reflectance is 100% or 1 and the BRDF of the perfectly diffuse instrument is therefore 1/π.

The BRDF of an isotropic system is a perfectly horizontal line for all angles of reflection (Figure 4). In an anisotropic system, however, the BRDF becomes a curve. In order to compare the anisotropic behavior of different systems, a numerical measure called the anisotropy index was introduced by Neuman [20]. If the BRDF of a perfect diffuser is transformed into polar coordinates, the horizontal line over the angles of reflection becomes a perfect circle centered at the origin of the coordinate system (Figure 5). The anisotropy index of the field of reflected light from the medium of an instrument-sample system is then defined as the area of the polar BRDF produced by the system divided by the BRDF area of the perfect diffuser (the area of the unit circle). In these terms, a unity anisotropy index describes perfect isotropy, and the anisotropy is larger the further the index is from unity.

Figure 5. Typical light distribution of the examined instruments (for unity albedo) in polar coordinates. The perfectly isotropic reflectance is the unit circle in these coordinates. The area of any instrument's curve (red or black dashed) divided by the area of the isotropic circle (solid black) gives the anisotropy index.
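The area-ratio definition translates directly into code. A sketch for a BRDF tabulated over reflection angles, assuming NumPy and hypothetical illustrative data; the perfect diffuser, BRDF = 1/π, must give an index of exactly 1:

```python
import numpy as np

def polar_area(r, theta):
    # Trapezoidal rule for the polar area (1/2) * integral r(theta)^2 dtheta
    y = 0.5 * r**2
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(theta)))

def anisotropy_index(theta_deg, brdf):
    """Area of the polar BRDF curve divided by the area traced by a perfect
    diffuser (BRDF = 1/pi, radius 1) over the same angular range."""
    theta = np.radians(np.asarray(theta_deg))
    r = np.pi * np.asarray(brdf)  # radius; equals 1 for a perfect diffuser
    return polar_area(r, theta) / polar_area(np.ones_like(r), theta)

theta = np.linspace(0.0, 80.0, 81)
perfect = np.full_like(theta, 1.0 / np.pi)                 # perfect diffuser
tilted = (1.0 + 0.1 * np.cos(np.radians(theta))) / np.pi   # hypothetical anisotropic BRDF

print(anisotropy_index(theta, perfect))  # 1.0
print(anisotropy_index(theta, tilted))   # > 1, i.e. anisotropic
```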

2.5. Optimization methods for the parameter estimation problem

The forward problem that is described by the light-medium interaction and solved by DORT2002 or KM consists of the calculation of the R0 and R∞ reflectance values from known scattering and absorption coefficients σs and σa. Since the coefficients are wavelength dependent, one separate problem is solved for every wavelength. It follows that the inverse problem is the calculation of σs and σa, the optical properties of the medium, from the reflectance values. For a paper medium the optical properties are not directly measurable quantities, while the reflectance values for every wavelength can be obtained by spectrophotometric measurements; the inverse problem therefore represents the practical application of the radiative transfer theory and DORT2002 in paper modeling.

2.5.1 Kubelka-Munk vs. DORT2002

The Kubelka-Munk model has been shown by Edström [7] to be a special case of the radiative transfer model with certain extra assumptions. Therefore, KM and DORT2002 give the same results when the resolution of DORT2002 is set to the minimal two channels and the assumptions of KM are met. KM utilizes the assumptions that the illumination of the medium is diffuse, that the scattering phase function is diffuse, and that the field of reflected light is diffuse as well. The assumption of diffuse illumination is not perfectly met by any instrument: d/0 instruments provide an almost diffuse illumination, while 45/0 instruments are far from such behavior. The diffuse scattering phase function is controversial, since the only available data in paper applications, from Granberg and Beland [12], report a non-diffuse, forward-scattering phase function, and organic turbid materials are generally characterized by forward scattering. The diffuse field of reflected light from the medium is generally a dubious assumption: only an ideal medium (a "perfect diffuser") would produce a diffuse field of reflected light, and furthermore, the single scattering direction and the fact that some light escapes from the lower boundary surface of the finite-thickness medium both change the distribution of the reflected light significantly [7].

Moreover, the KM scattering and absorption coefficients cannot be given an interpretation outside the Kubelka-Munk model; they do not represent anything physically objective. This is contrary to the general formulation of the radiative transfer problem and its DORT2002 implementation, where the scattering and absorption coefficients are related to the mean path in a medium. They can thus be given a physical interpretation, which is a desirable feature for any model.

KM consists of two algebraic equations, which can be solved directly for both the forward and the inverse problem. This makes KM a very fast means of solving both problems. Compared to DORT2002, KM is considerably faster, but as discussed in 2.2, it has serious accuracy issues in some important cases of opaque media. Furthermore, it is not angle-resolved and cannot be used to model measurements from instruments with non-diffuse illumination conditions, such as the 45/0 instruments. DORT2002, however, being angle-resolved, can be used with any kind of instrument or illumination type. On the other hand, DORT2002 takes significantly longer than KM, since it consists of the solution of O(N) equations (where N is the number of polar discretization channels as defined in ch. 2.3) emanating from the numerical treatment of the radiative transfer integro-differential equation. Moreover, the solution of DORT2002 corresponds to the forward problem, and the solution of the inverse problem is not directly available; it is obtained by an inverse algorithm, also known as a parameter estimation algorithm, which can be very time consuming and in some cases may not converge to a solution.
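The structure of such a parameter estimation loop can be sketched with a Gauss-Newton iteration. Since DORT2002 itself is not available here, the KM closed-form equations stand in for the forward model; the setup (finite-difference Jacobian, synthetic measurement, starting point, and all names) is ours and purely illustrative:

```python
import math

def km_forward(s, k, w):
    """Stand-in forward model: (R0, Rinf) from KM coefficients s, k [m^2/kg]
    and grammage w [kg/m^2], with a black backing (Rg = 0)."""
    ratio = k / s
    Rinf = 1.0 + ratio - math.sqrt(ratio**2 + 2.0 * ratio)
    L = math.exp(s * w * (1.0 - Rinf**2) / Rinf)
    R0 = Rinf * (L - 1.0) / (L - Rinf**2)
    return R0, Rinf

def gauss_newton(meas, w, x0, tol=1e-10, max_iter=50):
    """Gauss-Newton on the residual r(x) = model(x) - meas, x = (s, k),
    with a finite-difference 2x2 Jacobian and a positivity clamp."""
    s, k = x0
    h = 1e-6
    for _ in range(max_iter):
        f = km_forward(s, k, w)
        r = [fi - mi for fi, mi in zip(f, meas)]
        Js = [(a - b) / h for a, b in zip(km_forward(s + h, k, w), f)]
        Jk = [(a - b) / h for a, b in zip(km_forward(s, k + h, w), f)]
        # Solve the 2x2 normal equations J^T J dx = -J^T r by Cramer's rule
        a11 = Js[0]**2 + Js[1]**2
        a12 = Js[0] * Jk[0] + Js[1] * Jk[1]
        a22 = Jk[0]**2 + Jk[1]**2
        b1 = -(Js[0] * r[0] + Js[1] * r[1])
        b2 = -(Jk[0] * r[0] + Jk[1] * r[1])
        det = a11 * a22 - a12 * a12
        ds, dk = (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det
        s, k = max(s + ds, 1e-6), max(k + dk, 1e-6)  # stay in the physical domain
        if abs(ds) + abs(dk) < tol:
            break
    return s, k

# Synthetic "measurement" from known coefficients, then recover them:
w = 0.080
meas = km_forward(30.0, 2.0, w)
est = gauss_newton(meas, w, x0=(20.0, 1.5))
print(est)  # ≈ (30.0, 2.0)
```

As in the thesis, the speed and convergence of such an iteration depend strongly on the starting point x0.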

The convergence issue has been treated by Edström using a two-phase parameter estimation algorithm [11]. In optimization theory, the expression "two-phase" denotes a preparative first phase of treatment before the problem is solved in the second phase. As discussed below, the convergence (and the speed) of the solution of a non-linear parameter estimation problem depends strongly on the initial values used. In the DORT2002 case, the pretreatment concerns the efficient calculation of an initial point, which can be obtained simply by executing the KM method as a first step and using its results to derive initial values. The transformation of the KM coefficients into the physical radiative transfer coefficients is done with the formulas suggested by Mudgett and Richards [18], complemented with van der Hulst's6 [28] compensation for anisotropic single scattering, which hold since KM can be considered as the 2-channel case of the radiative transfer model [7]:

σa = kKM / 2 ,   σs = 4 sKM / (3(1 − g)) ,   (2.5.1)

Despite their low accuracy, the KM results can still provide better initial points than arbitrary values.

6 The van der Hulst formulas as defined in (2.5.1) will be used to modify the coefficients s and k of KM whenever a comparison between them and the objective physical coefficients used in the radiative transfer theory is needed.
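As an illustration (a hypothetical Python sketch, not the thesis code; the function name and interface are assumptions), the KM-to-radiative-transfer coefficient conversion used to obtain initial values could look like:

```python
def km_to_rt(s_km, k_km, g=0.0):
    """Convert Kubelka-Munk coefficients (s, k) into radiative-transfer
    scattering/absorption coefficients (sigma_s, sigma_a), following the
    Mudgett-Richards relations with van der Hulst's (1 - g) similarity
    correction for anisotropic single scattering."""
    sigma_a = k_km / 2.0                      # k_KM corresponds to 2 * sigma_a
    sigma_s = 4.0 * s_km / (3.0 * (1.0 - g))  # (1 - g) similarity correction
    return sigma_s, sigma_a

# Example: isotropic single scattering (g = 0) leaves only the 4/3 channel factor.
sigma_s, sigma_a = km_to_rt(s_km=3.0, k_km=2.0, g=0.0)
```

These values would then serve only as the starting point of the second-phase parameter estimation, not as final estimates.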


Edström has also significantly improved the speed of the DORT2002 algorithm [9] by taking advantage of: 1) the polar symmetry of the problem, and 2) the fact that the simulation of the measurement instruments is accurately described using only the necessary components of the Fourier polar analysis of the Henyey-Greenstein phase function. A specialized version of the DORT2002 code that performs only the calculations necessary for the paper-light physical model has been developed and adopted. Using fast parameter estimation algorithms, the performance of this specialized version of DORT2002 has been measured to be around 10-20 times slower than the simple KM [9].

2.5.2 The role of the asymmetry factor g in the problem

DORT2002, compared to KM, assumes non-diffuse scattering and requires a third input parameter, the asymmetry factor g of the Henyey-Greenstein phase function. In the forward problem, g, which is also a wavelength-dependent property of the medium, is a given value in the same fashion as the coefficients. In the inverse problem, g can be estimated if reflection measurements are available from different sensor angles, as in goniophotometric measurements. If the reflection is measured only at a fixed angle, as in spectrophotometric measurements, then the asymmetry factor can be an arbitrary given value describing forward, backward or isotropic scattering, or it can be taken from the scarce references available in the literature [12]. Whenever the results from DORT are compared with those of KM in this report, isotropic scattering will be used in the former for reasons of comparability of the two methods (KM cannot account for anisotropic single scattering).

2.5.3 Methods for solving the optimization problem

The inverse problem, or parameter estimation problem, is an optimization problem. Namely, it can be described as "find the parameters σs and σa (and g) such that the difference between measured and modeled reflectance values is minimized". Due to the structure of the radiative transfer problem, the optimization problem is classified as non-linear, with positivity constraints on the absorption and scattering parameters.

Non-linear optimization problems are solved by iterative procedures. Starting from an initial point for the optical parameters, the corresponding intermediate reflectance values are calculated, and a feasible direction towards a local minimum is obtained by methods that use information from the derivatives of the objective function.

Figure 6. Simple qualitative schematic of iterative optimization methods for one variable. The iterations begin from the initial point and move towards the local minimum L1 (when the algorithm converges). However, when the initial point is located on the other side of the local maximum (objective function peak), the algorithm will move towards L2.


The procedure is repeated until some convergence or divergence criteria are met. If the algorithm of the method converges, then the final values of the optical properties are considered optimal since the residuals of the reflectance values are minimal.

Non-linear, non-quadratic optimization problems can be solved by various methods [19], such as the Newton method, quasi-Newton methods (BFGS, SR1, Broyden's method), or less refined methods (gradient descent). Problems of minimizing a residual can be trivially transformed into least-squares problems, where the Gauss-Newton method and its improvements (the Levenberg-Marquardt method) or less efficient methods (the non-linear conjugate gradient method) can be used. Other methods based on statistical regression instead of derivative information, like the maximum likelihood method, are suitable when a large amount of measurement data needs to be fitted to a model; such statistical methods are substantially biased when the sample data are scarce and are therefore not suitable for the problem in question.

Newton method

The Newton method [19] (NM) is an algorithm for finding a zero of a non-linear function. At each iteration, the necessary condition for a local minimum, ∇f(x) = 0, where x is the solution and f the objective function containing the residuals, results in x being updated by

x_{k+1} = x_k − [∇²f(x_k)]⁻¹ ∇f(x_k).

The method requires the second derivative, or Hessian matrix, of the objective function, which is a time-consuming calculation of order O(n³), where n is the dimension of the Hessian and the number of optimization parameters. If first- and second-derivative information is not available (as is the case for the problem in question), it can be obtained by finite difference approximations. The Newton method uses a quadratic approximation to a non-linear function. It can therefore achieve a quadratic rate of convergence "sufficiently" close to the solution, in the ideal case where the Hessian is positive definite. If these conditions do not hold, the direction acquired by the method is not guaranteed to be a descent direction and the method may diverge. Moreover, the finite difference approximation of the second derivative is prone to numerical rounding errors. Line-search or trust-region methods enhance the Newton method with criteria guaranteeing descent, such as the Armijo condition. The Armijo rule iteratively finds a reducing factor a (0 < a ≤ 1) on the direction p_k = x_{k+1} − x_k such that

f(x_k + a·p_k) ≤ f(x_k) + μ·a·p_kᵀ ∇f(x_k),

with μ ≈ 10⁻⁴ an adjustment parameter.
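A minimal backtracking line search enforcing the Armijo sufficient-decrease condition might look like the following Python sketch (the names and default constants are illustrative, not taken from the thesis implementation):

```python
import numpy as np

def armijo_backtracking(f, grad_f, x, p, mu=1e-4, shrink=0.5, max_halvings=30):
    """Shrink the step factor a until the Armijo condition
    f(x + a*p) <= f(x) + mu * a * p^T grad_f(x) holds."""
    a = 1.0
    fx = f(x)
    slope = p @ grad_f(x)  # directional derivative; negative for a descent direction
    for _ in range(max_halvings):
        if f(x + a * p) <= fx + mu * a * slope:
            break
        a *= shrink
    return a

# Example on f(x) = ||x||^2 with the steepest-descent direction:
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
x = np.array([1.0, 1.0])
a = armijo_backtracking(f, grad, x, -grad(x))
```

The halving factor and the cap on the number of halvings are conventional safeguards against an endless loop when no acceptable step exists numerically.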

Quasi-Newton methods

The expensive calculation of the Hessian in the Newton method led to the development of the quasi-Newton [19] (QN) methods, such as the Symmetric Rank 1 (SR1) method or the older DFP update, which use a substitute for, or an approximation to, the Hessian. The most commonly used update formula is that of the BFGS method, where the Hessian is calculated only at the initial step; thereafter B, the Hessian approximation, is used and updated by

B_{k+1} = B_k − (B_k p_k p_kᵀ B_k) / (p_kᵀ B_k p_k) + (y_k y_kᵀ) / (y_kᵀ p_k),

where y_k = ∇f(x_{k+1}) − ∇f(x_k), and the direction p can be obtained with or without the Armijo criterion. In a simplified version, the Hessian calculation of the initial step can be replaced by initially setting B to the identity matrix, but this can result in divergence or very slow convergence during the first iterations. The quasi-Newton methods require O(n²) calculations, but they converge only super-linearly (not quadratically) and require a large storage space for the intermediate variables of the calculation.
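The BFGS update itself is compact; the following Python sketch (illustrative, not the thesis code) applies one update and demonstrates that, for a quadratic objective with Hessian H, the updated approximation satisfies the secant condition B_new p = y exactly:

```python
import numpy as np

def bfgs_update(B, p, y):
    """One BFGS update of the Hessian approximation B, with
    p = x_{k+1} - x_k and y = grad f(x_{k+1}) - grad f(x_k)."""
    Bp = B @ p
    return B - np.outer(Bp, Bp) / (p @ Bp) + np.outer(y, y) / (y @ p)

# Example: for a quadratic objective with Hessian H, the gradient change
# along a step p is y = H @ p, and B_new reproduces it on that direction.
H = np.array([[2.0, 0.0], [0.0, 4.0]])
p = np.array([1.0, 1.0])
y = H @ p
B_new = bfgs_update(np.eye(2), p, y)
```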


Gauss-Newton method

When the problem is formulated as a non-linear least-squares problem,

f(x) = ½ Σᵢ Fᵢ(x)² = ½ F(x)ᵀ F(x),

the Gauss-Newton [19] (GN) method can be used. The Gauss-Newton method uses the property of the Hessian of a least-squares problem that, near a zero-residual solution,

∇²f(x) ≈ ∇F(x)ᵀ ∇F(x),

and calculates the direction of every iteration from

∇F(x)ᵀ ∇F(x) p = −∇F(x)ᵀ F(x).

The Gauss-Newton method behaves like the Newton method near the solution (i.e. near zero residual) without the costs associated with computing second derivatives. Gauss-Newton does not perform well when ∇F is a singular matrix or when the residuals are large, i.e. when the model does not fit the data well. The latter is not a concern in this case, where two reflection measurements per wavelength are fitted with two medium parameters. GN is therefore expected to perform well at a small computational cost for this problem.
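A bare-bones Gauss-Newton loop for a small residual vector F with Jacobian J could be sketched in Python as follows (illustrative only; the actual parameter estimation code in this work differs):

```python
import numpy as np

def gauss_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Minimize 0.5*||F(x)||^2: each step solves J^T J p = -J^T F."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, Jx = F(x), J(x)
        p = np.linalg.solve(Jx.T @ Jx, -Jx.T @ r)  # Gauss-Newton direction
        x = x + p
        if np.linalg.norm(p) < tol:
            break
    return x

# Example with a two-parameter, two-residual problem (zero residual at the
# solution, so GN behaves like Newton there):
F = lambda x: np.array([x[0] ** 2 - 2.0, x[1] - 1.0])
J = lambda x: np.array([[2.0 * x[0], 0.0], [0.0, 1.0]])
x_opt = gauss_newton(F, J, [1.5, 0.0])
```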

Levenberg-Marquardt method

With the Gauss–Newton method, the sum of squares f may not decrease during the iterative process. However, since p is a descent direction, unless x_k is a stationary point it holds that f(x_k + a·p_k) < f(x_k) for all sufficiently small a > 0. Thus, if divergence occurs, one solution is to employ a fraction a of the increment vector p in the updating formula x_{k+1} = x_k + a·p_k (in a similar way as in the Armijo condition). An optimal value for a can be found using any line-search algorithm. In cases where the direction is such that the optimal fraction a is close to zero, an alternative method for handling divergence is the Levenberg–Marquardt [17] (LM) algorithm, which is a "trust region method". The normal equations are modified in such a way that the increment vector is rotated towards the direction of steepest descent:

(∇F(x)ᵀ ∇F(x) + λI) p = −∇F(x)ᵀ F(x),

where I is the identity matrix. When λ → +∞, then p → −(1/λ) ∇F(x)ᵀ F(x), and the direction p therefore approaches the direction of the negative gradient.

The so-called Marquardt parameter λ is optimized by an efficient strategy [17]: when divergence occurs, increase the Marquardt parameter until there is a decrease in f. Then retain the value from one iteration to the next, but decrease it when possible, until a cut-off value is reached at which the Marquardt parameter can be set to zero; the minimization of f then becomes a standard Gauss–Newton minimization.

Owing to this strategy, LM can be more stable than GN, but at the additional computational cost of finding the optimal parameter λ. It follows that LM provides no advantage if GN converges in the first place.
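The damping strategy can be sketched as follows in Python (an illustrative toy version; the increase/decrease factors 10 and 0.1 are conventional choices, not taken from the thesis):

```python
import numpy as np

def levenberg_marquardt(F, J, x0, lam=1e-3, tol=1e-10, max_iter=100):
    """Minimize 0.5*||F(x)||^2 with the damped normal equations
    (J^T J + lam*I) p = -J^T F, increasing lam when a step fails and
    decreasing it after each successful step."""
    x = np.asarray(x0, dtype=float)
    cost = 0.5 * F(x) @ F(x)
    for _ in range(max_iter):
        r, Jx = F(x), J(x)
        A = Jx.T @ Jx + lam * np.eye(len(x))
        p = np.linalg.solve(A, -Jx.T @ r)
        new_cost = 0.5 * F(x + p) @ F(x + p)
        if new_cost < cost:          # accept the step, relax the damping
            x, cost = x + p, new_cost
            lam *= 0.1
            if np.linalg.norm(p) < tol:
                break
        else:                        # reject the step, increase the damping
            lam *= 10.0
    return x

# Same toy residual as used for Gauss-Newton above would converge here too:
F = lambda x: np.array([x[0] ** 2 - 2.0, x[1] - 1.0])
J = lambda x: np.array([[2.0 * x[0], 0.0], [0.0, 1.0]])
x_opt = levenberg_marquardt(F, J, [1.5, 0.0])
```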

Other methods

Two other methods that would theoretically be suitable for this problem are the non-linear conjugate gradient method and the gradient descent method [19]. The former does not use curvature information and relies only on the first derivative. It works well only when the function is approximately quadratic near the minimum, while on a non-quadratic function it makes very slow progress. Subsequent search directions lose conjugacy, requiring the search direction to be reset to the steepest descent direction at least every n iterations, or sooner if progress stops; resetting every iteration, however, turns the method into gradient descent. To find a local minimum with the gradient descent method, one takes steps proportional to the negative of the gradient (or an approximation of it) at the current point. Gradient descent has problems with functions whose minimum lies in a narrow curved valley: when the bottom of the valley is very flat, the optimization tends to zigzag slowly with small step sizes towards the minimum, because the derivative is too small inside the valley. These methods converge only linearly and are not suitable for functions that are flat around the solution, such as those of this problem.

Figure 7. The objective function of the two-variable inverse problem for a typical set of reflectances and grammage, indicating low convexity around the solution point (red triangular mark). Such a low convexity is undesirable and can slow down some optimization algorithms.

Termination criterion

All these methods, being iterative, require a termination criterion, which terminates the algorithm and returns the current solution iterate when the required accuracy is achieved. Nash & Sofer [19] propose the inequality

‖∇f(x_k)‖ ≤ ε (1 + |f(x_k)|)

as a suitable condition for terminating the algorithm. When the gradient is smaller than some accuracy ε, the method cannot improve the solution further, and the algorithm should therefore terminate at that point. The term 1 + |f(x)| is multiplied by the accuracy ε so that, when |f(x)| ≫ 1, the criterion is equivalent to ‖∇f(x_k)‖ ≤ ε |f(x_k)|, which alleviates the influence of the large magnitude of f in the calculations, and when |f(x)| ≪ 1, the criterion is equivalent to ‖∇f(x_k)‖ ≤ ε. The desired accuracy for termination can be set to ε = √ε_mac, where ε_mac is the machine accuracy.
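In code, this relative criterion reduces to a one-line test; a Python sketch (illustrative naming) follows:

```python
import numpy as np

def should_terminate(grad_fx, fx, eps=None):
    """Nash & Sofer relative termination test:
       ||grad f(x_k)|| <= eps * (1 + |f(x_k)|),
    with eps defaulting to the square root of the machine accuracy."""
    if eps is None:
        eps = np.sqrt(np.finfo(float).eps)
    return np.linalg.norm(grad_fx) <= eps * (1.0 + abs(fx))

# Example: a vanishing gradient triggers termination regardless of |f|.
done = should_terminate(np.zeros(2), 42.0)
```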

2.6. Color spaces and color models

A color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components. RGB and CMYK are well-known color models. Color models are abstractions and cannot describe a specific color without first defining a scale or reference. Without an associated mapping function to an absolute color space, they are more or less arbitrary color systems with little connection to the requirements of any given application [LACIE, Color Spaces & Color Translation]. Adding a certain mapping function between the color model and a certain reference color space results in a definite "footprint" within the reference color space. This "footprint" is known as a gamut and, in combination with the color model, defines a new color space.

RGB Color Model


Color spaces derive from color models and provide additional necessary scale or reference information. For example, the sRGB color space or Adobe RGB color space (1998) both define a scale that makes color representation possible. They both derive from the RGB color model and offer a quantitatively measured three-dimensional geometric representation of the colors that can be seen, or generated, using the RGB color model.

In order to provide a better understanding of colors, the CIE (International Commission on Illumination), the international authority on light, illumination, color, and color spaces, established standards in the 1930s for several color spaces representing the visible spectrum. This made comparisons possible among the varying color spaces of different viewers and devices. The CIE conducted a series of tests on a large number of people in order to define the response to color of a hypothetical average human viewer, called the "standard observer". Since the human eye has three types of color sensors that respond to different ranges of wavelengths, a full plot of all visible colors is a three-dimensional figure. The CIE developed the "XYZ color system", also known as the "standard color system". It is still used as a standard reference for defining colors perceived by the human eye, and as a reference for other color spaces. Like the RGB color model with its additive primaries, CIE-XYZ uses three spectrally defined imaginary primaries, X, Y, and Z, which are representations of color (electromagnetic waves) that may be combined to describe all colors visible to the "standard observer".

The Lab color model was developed by the CIE in 1976 in order to improve color representation. It is the most complete color model conventionally used to describe all the colors visible to the human eye. It is a three-dimensional color space in which each color can be precisely designated by its specific "a" and "b" values and its brightness "L". The three parameters of the model represent the luminance of the color, "L" (the smallest L yields black), its position between red and green, "a" (the smallest a yields green), and its position between yellow and blue, "b" (the smallest b yields blue), scaled to a reference white point.

The color differences perceived to be equally large also have equal distances between them. These differences can be expressed in ΔEab, a metric of color difference in the Lab color space. It is the Euclidean distance between two points/colors [L₁, a₁, b₁] and [L₂, a₂, b₂] in Lab, and it spans the interval 0–100. The measure is given by

ΔEab = √( (L₂ − L₁)² + (a₂ − a₁)² + (b₂ − b₁)² ).

This formula has been refined by CIEDE2000. It provides a measurement of both hue and density changes. It is important to note that an average viewer will only notice differences above 5-6 ΔEab; only a trained eye would notice differences of 3-4 ΔEab. The human eye, however, is much more sensitive to changes in gray levels and mid-tones; a difference of 0.5 ΔEab may then be noticeable. The advantage of this color space, however, is its device independence and its resultant objectivity: the same combination of L, a and b always describes exactly the same color. For these reasons, CIELAB is commonly used as a reference for the color translation process in ICC systems.
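The Euclidean ΔEab distance is straightforward to compute; a minimal Python sketch (illustrative function name, assuming Lab triples as input):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two Lab triples."""
    return math.sqrt(sum((c2 - c1) ** 2 for c1, c2 in zip(lab1, lab2)))

# Example: two mid-gray colors differing by (dL, da, db) = (0, 3, 4).
d = delta_e_ab((50.0, 0.0, 0.0), (50.0, 3.0, 4.0))
```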

Figures: the CIE Lab color model, and the CIE color triangle.


2.7. Instruments for reflectance measurements

There are different instruments available for measuring reflectance from paper samples. The most common ones, used in the paper and the printing industries, are the d/0 and the 45/0 instruments, respectively. The d/0 geometry illuminates the sample diffusely and measures reflectance at 0º. The 45/0 geometry illuminates the sample with collimated light at 45º and measures reflectance at 0º.

2.7.1 The d/0° instrument

The instrument of most widespread use in the paper industry for measuring reflectance is the instrument having the d/0 geometry (diffuse illumination, sensor at 0º), as defined in ISO 2469 [2]. This instrument geometry is illustrated schematically in Figure 8. The paper sample is placed at the bottom of a sphere and is illuminated diffusely. The light source can be filtered from UV-light to avoid fluorescence. The inside of the sphere is covered with a highly scattering material to generate the diffuse illumination [21].

The detector is located at 0° (perpendicular to the sample) and subtends a cone whose half-angle is no more than 4°. Both the sample and the detector are screened from direct illumination. Directed reflection is avoided by using a gloss trap that eliminates light with angles of incidence smaller than 15.5°. The instrument can resolve the detected reflectance spectrally. This geometry will hereafter be referred to as the d/0 geometry, an instrument having this geometry as a d/0 instrument, and measurements made with such an instrument as d/0 measurements.

Rydefalk and Wedin [23] give a more thorough treatment of the instrument and derive equations for the dependence of the instrument signals on the paper reflectance and the instrument illumination.

They also describe the function of the instrument’s reference detector. From this, it can be seen that the reference detector measures the incident illumination on a reference area on the inside of the sphere, and it can thus compensate for fluctuations in the incident light.

Figure 8. The d/0 instrument geometry (Courtesy of Lorentzen & Wettre AB)

In the everyday work of the paper industry, reflectance factors measured with this instrument are used to calculate the color of a paper according to the CIE color system, but also to calculate the Kubelka-Munk scattering and absorption coefficients, as described in section 2.2 and in ISO 9416 [5]. These are used as measures of a paper's scattering and absorbing capabilities. The instrument is incapable of detecting any deviations from a perfectly isotropic intensity distribution, since it measures the reflectance only in the normal direction. The output of the d/0 instrument is thus the reflectance factor in the solid angle subtended by the d/0 detector, and this contains no information about the reflectance in other directions. The d/0 instrument reflectance factor is denoted here by Rd/0. Errors may thus be introduced if Rd/0 is interpreted as total reflectance and angular variations of the reflectance are not taken into account. On the other hand, the instrument is calibrated with strictly standardized routines and measures the samples through a method standardized by ISO; therefore, with the considerations above in mind, the output values of the instrument can be regarded as "true values" whenever such values are needed.

2.7.2 The 45°/0° instrument

Reflectance measurements are also performed in many other industries, e.g. for color measurement within the graphic arts and printing industries. Instruments for color measurement often have a 45/0 geometry [21] (collimated illumination at 45º, sensor at 0º), where the sample is illuminated at an angle of 45° with a maximum aperture of 5° (45±5°) and the measurement takes place at 0° (perpendicular to the sample), as seen in Figure 9. The sensor subtends a cone whose half-angle is no more than 5°.

Figure 9. The 45/0 instrument geometry (Courtesy of Lorentzen & Wettre AB)

The advantage of the 45/0 geometry is that gloss reflection can be screened more efficiently, but one of the disadvantages is that the result is more dependent on the structure of the sample surface because of the directed illumination. The reflectance can thus be different in the machine and the cross directions depending on the fiber direction. The method and instrument are less standardized.

ISO 13655 [4] provides a general methodology for reflection and transmission spectral measurements and parameter computations, but it does not specify the instrument geometry, the illuminant, etc. It should be emphasized that the correlations of ISO 13655, provided for improving inter-instrument agreement, are not absolute. They are parameter-fitting equations, which perform a linear regression of the readings of the second instrument onto the readings of the first. The standard recognizes that many instruments do not have a standard illumination source or standard wavelength intervals, and that most instruments do not conform to ISO 5-4 [3], so it should be expected that two 45/0 instruments may differ in their measurement of the same optical property of a material. All the error considerations mentioned in 2.7.1 for the d/0 also apply to the 45/0, since the detector is likewise located in the normal direction and no information about other directions is available. Furthermore, it should be noted that the illumination from a 45/0 instrument is not expected to yield an isotropic field even when the sample resembles a perfect diffuser; therefore, calculating the Kubelka-Munk scattering and absorption coefficients from the measurements of a 45/0 instrument should be done with reservation, or avoided.

(22)

18

3. Methods

The main purpose of this work was to investigate and model the geometrical differences between various spectrophotometric instruments. Large differences have been reported in the literature [22]. In order to isolate the geometrical differences, all other external factors had to be identified and eliminated from the measurements; therefore a small-scale analysis of measurements under different conditions was performed. The inter-instrument differences depend on the nature of the samples, and the dependence of the difference on the properties of the samples was examined through model-based case studies. Through this analysis, samples of interest were identified for use in the measurements. The DORT2002 model was also compared to the prevailing but outdated KM model, and their strong and weak points were demonstrated. The DORT2002 model was initially set up to simulate the d/0 geometry; the quadrature discretization and the interpolation formulas were then modified to accommodate the 45/0 geometry. As discussed, DORT2002 is a forward model that calculates reflectance values from known parameters. In practice, the reflectances are known and the medium parameters are unknown, which corresponds to the inverse problem. Therefore, different optimization methods were built and fitted to the model, and their performance and stability were explored in order to find the most suitable one in those terms. Since each measurement consists of numerous sub-measurements, one for each wavelength, the modeling of each sample involves the solution of consecutive inverse problems. The inverse model was therefore optimized for these routine consecutive processes.

3.1. Materials

The available paper samples are characterized by grammage [kg/m2], filler content [%w], dye content [%w], fluorescence agent content (FWA) [%w], calendering intensity, gloss thickness [m] etc.

Pauler [21] describes the qualitative relation between these characteristics and the scattering and absorption coefficients. In general:

- higher grammage increases the opacity and therefore reduces the transmitting ability of a medium;
- filler content, which is used for strengthening, radically increases the opacity and the scattering ability; moreover, it provides a smoother medium surface, which theoretically results in a more diffuse field of reflected light;
- dye content of a specific shade increases the absorption ability of the medium in the spectral area of the opposite shade; higher dye content means higher absorption;
- the addition of FWA to paper is very common (all office papers contain FWA) and is a photochemical means of increasing the reflectance in specific spectral areas (by converting UV radiation to reflected light in the lilac spectral area);
- calendering smoothens the surface of the medium, resulting in a different distribution of the field of reflected light;
- gloss or varnish, which is used for waterproofing and enhancing color saturation, depending on its composition also increases the opacity and surface scattering, and can interfere with the FWA function.

As discussed in 2.3, DORT2002 does not include a source term describing fluorescence, and therefore non-fluorescent samples were chosen. Also, the surface morphology of the samples requires a stochastic (Monte-Carlo type) model, and therefore calendered samples were not used. Non-gloss samples were preferred for the same reasons. On the other hand, grammage, filler content, and dye content can be directly related to the scattering and absorption coefficients of the radiative transfer theory and DORT2002, and they were chosen as the free variables in the samples. This means that samples with different grammages, filler contents, and dye contents can be used to demonstrate the performance of the instruments under different circumstances and can provide a variety of cases to validate the model.

3.1.1 Model-based case studies in the choice of samples

It is generally accepted that the inter-instrument differences depend on the nature of the samples and therefore on the parameters of the samples. That is the motivation behind other studies [22] that measured the inter-instrument difference using series of different samples. Thick samples behave differently from thin ones, dyed samples behave differently from white ones, etc. The results of those studies confirm this proposition, since the differences vary from small to quite large depending on the sample characteristics.

DORT2002 can be used to generate the instrument difference as a function of the medium properties, and thereby confirm, mathematically and physically, the aforementioned dependence and provide an explanation and a full picture of it. Such an investigation was performed as part of this work. Its results are presented in section 5.1 of the Results section because of the particular interest of the findings, but the conclusions were also used in the choice of samples. Samples with average characteristics were chosen for the average paper case, and samples that, according to model simulations, were expected to produce interesting behavior were chosen as well.

The model-based analysis in section 5.1 indicated that thick, very white samples with very high scattering ability were expected to produce large differences, and that highly dyed samples with high absorbing ability should also give large differences.

From the available laboratory samples, a series of dyed samples with low grammage were chosen as Sample Series 1, the samples with high absorption abilities. Two different filler contents were available:

Sample Series 1: eight samples, 30 g/m2, no FWA, no gloss (dye content in % on fiber weight)

    Four samples, no fillers    Four samples, 22% fillers
    0.00% dye content           0.00% dye content
    0.33% blue dye content      0.33% blue dye content
    0.67% blue dye content      0.67% blue dye content
    1.00% blue dye content      1.00% blue dye content

A second series of samples demonstrating high scattering ability was also chosen. Although the absorption and scattering phenomena act independently, non-dyed samples were chosen in order to isolate the scattering. Three different filler contents were available, as well as three different grammages:

Sample Series 2: nine samples, no dye, no FWA, no gloss

    Three samples, no fillers    Three samples, 15% fillers    Three samples, 30% fillers
    80 g/m2                      80 g/m2                       80 g/m2
    100 g/m2                     100 g/m2                      100 g/m2
