UPTEC IT 18 008

Degree project, 30 credits, June 2018

A Metric for Perceptual Distance between Bidirectional Reflectance Distribution Functions

David Ryman

Department of Information Technology


Abstract

A Metric for Perceptual Distance between Bidirectional Reflectance Distribution Functions

David Ryman

Bidirectional reflectance distribution functions (BRDFs) are used in the rendering equation to simulate light reflections in a physically realistic way. A reflectance metric defines distances between all possible pairs of BRDFs. Deriving a perceptually based reflectance metric which accurately predicts how humans perceive differences in the reflective properties of surfaces has been explicitly stated as an open research problem for over a decade. This work builds upon previous insights on the problem and combines them with new ideas, defining the new Projective Area Weighted CIELAB (PAWCIELAB) metric. To evaluate the performance of the PAWCIELAB metric, it was experimentally tested against an existing state-of-the-art metric, and the results indicate that the PAWCIELAB metric is the better reflectance metric with respect to human perception. The PAWCIELAB metric is useful in any application involving humans and light reflections, for example 3D graphics applications and quality assurance of reflectance properties in a product. There is also room for improvement and extensions of the PAWCIELAB metric, which is described in the future work section at the end of this report.


Examiner: Lars-Åke Nordén
Subject reviewer: Filip Malmberg
Supervisor: Jacob Selg


Populärvetenskaplig Sammanfattning (Popular Science Summary)

Reflectance functions are used in the rendering equation to simulate light reflections at surfaces in a realistic way. The reflectance function of a real material can be measured to any desired resolution. Using a raw sampled reflectance function for rendering is impractical because of the large amount of data required, and one therefore wants to approximate it with an analytical reflectance function. To understand how approximations in a reflectance function affect visual accuracy, a reflectance metric is needed.

A reflectance metric can be defined as a function that takes two reflectance functions as input and outputs the distance between them. Note that the meaning of distance here is abstract, but it is carefully chosen to provide intuition.

Deriving a reflectance metric that accurately predicts how humans perceive differences in the reflectance behaviour of surfaces has been explicitly stated as an open research problem for over a decade [22, 13], but the problem has likely existed for much longer. Solving it is a fundamental step towards, among other things: (1) creating tools for fitting reflectance models to measured materials, (2) understanding illumination-invariant 3D scanning, and (3) realising many mixed reality applications. This work builds on insights from previous work in the field and combines them with new ideas to derive a perceptually based reflectance metric.

In this work, a mathematical framework for perceptually based reflectance metrics was conceptualised. By plugging the CIELAB colour metric and a new weight function into this framework, the PAWCIELAB metric was created. A scale factor was derived to handle changes of variables. Implementing the PAWCIELAB metric and evaluating it against the best available reflectance metric (the CWCbrt metric) showed that the two metrics mostly agree, but that there is also a significant difference between them. By letting five objective human judges decide in the configurations where the metrics disagreed the most, the results indicate that the PAWCIELAB metric aligns better with human perception in 98 of the 100 tested configurations.

The framework for perceptually based reflectance metrics and the PAWCIELAB metric may have a positive impact on the development of analytical reflectance functions for computer graphics applications. Moreover, the framework and the PAWCIELAB metric may prove useful for developing algorithms that tune the parameters of analytical reflectance functions to match measured materials. On a larger scale, this work may contribute to realising illumination-invariant 3D scanning and thereby enable certain mixed reality applications. This work may also be useful in the development of compression algorithms for reflectance functions.


Contents

Populärvetenskaplig Sammanfattning
1 Introduction
2 Background
  2.1 Human Eye
  2.2 Colour Spaces
  2.3 Bidirectional Reflectance Distribution Function
  2.4 The Rendering Equation
  2.5 Half-diff Domain
  2.6 BRDF Symmetry
  2.7 Physically Based BRDFs
  2.8 BRDF Acquisition
  2.9 BRDF Modelling
  2.10 Reflectance Metrics
3 Theory
  3.1 Sampled BRDF
  3.2 Spectral BRDF
  3.3 Formalism of Reflectance Metric
  3.4 A Framework for Perceptual Reflectance Metrics
  3.5 Projective Area Weighted CIELAB Metric
  3.6 Hypothesis
4 Implementation
  4.1 Libraries
  4.2 Hardware
  4.3 Half-diff Conversion Formulas
  4.4 MERL Domain
  4.5 BRDF Texture
  4.6 MERL Scale Factor
  4.7 Variable Change
  4.8 Discretisation
  4.9 Framework Implementation
  4.10 Framework Test
  4.11 Visualising Projective Area Weight and MERL Scale Factor
  4.12 RGB to CIELAB Conversion
  4.13 Cosine Weighted Cube Root Metric Implementation
5 Judge Experiment
  5.1 Search for Disagreement in BRDF Similarity Metrics
  5.2 Selecting Configurations for Human Judgement
  5.3 Rendering Images for Human Judgement
  5.4 Judge Instructions
  5.5 Judge Experiment Setup
  5.6 Judge Experiment Results
6 Post Experiment Survey
  6.1 Post Experiment Survey Responses
7 Discussion
  7.1 Importance of Background Colour
  7.2 Bias of Sphere Images
  7.3 Bias of Illumination
  7.4 No Side Randomisation
  7.5 The Extension to Multiple Channels in the Cube Root Metric
  7.6 Model for Statistical Hypothesis Analysis
  7.7 Judgements as Observations of a Discrete Probability Distribution
  7.8 Formalisation of Hypothesis
  7.9 Null Hypothesis
  7.10 Sample Mean
  7.11 P-Values
    7.11.1 P-Value when Sample Mean is 3
    7.11.2 P-Value when Sample Mean is 2.8
    7.11.3 P-Value when Sample Mean is 2.6
  7.12 Statistical Significance
8 Conclusion
9 Future Work
  9.1 Improved Judge Experiment
  9.2 Perceptually Uniform Metric
  9.3 View and Illumination Distribution in Common Environments
  9.4 Extension to More General Forms of BRDFs
  9.5 Effects of Multiple Bounce Reflections
  9.6 Tools for Fitting Analytic BRDF Models to Measured Materials
  9.7 3D Scanners and Mixed Reality Applications
  9.8 Lossy BRDF Compression
Acknowledgements
References
Appendix A: Deriving an Analytical Half-diff Scale Factor
Appendix B: Judged Configurations

Abbreviations

BRDF Bidirectional Reflectance Distribution Function
CWCbrt Cosine Weighted Cube Root
PAWCIELAB Projective Area Weighted CIELAB
PBR Physically Based Rendering
RMSE Root Mean Square Error


Nomenclature

λ Light wavelength.

h Half-way direction between incident direction and outgoing direction.

n Surface normal.

Ω Unit hemisphere centered around surface normal n.

ωi Direction of incident light.

ωo Direction of outgoing light.

θd, φd Spherical angle of incident direction in half-way space.

θh, φh Spherical angle of half-way direction in tangent space.

θi, φi Spherical angle of incident direction in tangent space.

θo, φo Spherical angle of outgoing direction in tangent space.

f BRDF.

Le Emitted light signal.

Li Incident light signal.

Lo Outgoing light signal.

Lr Reflected light signal.

t Time.


1. Introduction

Bidirectional Reflectance Distribution Functions (BRDFs) are used in the rendering equation to simulate light reflectance at surfaces in a realistic way. The BRDF of a material can be accurately measured to some desired resolution. However, using a raw sampled BRDF when rendering is impractical due to the large amount of data it requires and, therefore, one seeks to approximate it with an analytical BRDF. To understand the impact of BRDF approximations on visual accuracy, a reflectance metric is needed.

A reflectance metric can be defined as a function that takes two BRDFs as input and outputs the distance between them. Note that the meaning of distance is abstract, but it is carefully chosen to provide intuition.

Deriving a reflectance metric which accurately predicts how humans perceive differences in the reflectance behaviour of surfaces has been explicitly stated as an open research problem for over a decade [22, 13], but this problem has likely been a concern for much longer. Solving the problem is a fundamental step in creating tools for fitting reflectance models with measured materials, understanding illumination-invariant 3D scanning, realising many mixed reality applications and much more. This work builds upon insights from previous work in the field and combines them with new ideas to derive a perceptual reflectance metric.


2. Background

To understand the reasoning behind the ideas introduced in this work, some background information is needed. Therefore, this section provides a basic description of light as electromagnetic waves and how the human eye picks these up to form colour perception. Furthermore, many fundamental aspects of BRDFs are introduced, such as their definition, how they are used to model light reflections, parametrisation, symmetry and acquisition. Also, previous work on reflectance metrics is summarised.

2.1 Human Eye

Light can be understood as electromagnetic waves of varying wavelength λ. The average human can roughly perceive light between wavelengths of 380 nm and 780 nm [18]. Figure 2.1 illustrates a portion of the electromagnetic spectrum.

Figure 2.1. Image from [20]. Electromagnetic spectrum with magnification on the humanly visible spectrum.

Humans have two types of photoreceptor cells: rods and cones. These can be found in the retina of a human eye. On one hand, cone cells dominate in a small region of the retina known as the fovea, where they are densely packed. The cones can further be categorized into three types: L cones, M cones and S cones. The names of these cones stand for long (L), medium (M) and short (S), referring to the range of wavelengths that excites them. On the other hand, rod cells dominate in the peripheral vision, but also in the focal area in low light conditions.

2.2 Colour Spaces

Cone cells are essential to human colour vision. Because most humans have 3 types of cone cells, the space of human colours is 3-dimensional. Several colour spaces have been designed since the discovery of this fact, each with its own benefits and drawbacks.


Given that a steady-state light beam with spectral distribution S(λ) strikes a human eye, we can compute the LMS tristimulus by integrating the spectral power distribution with the colour matching function of each cone cell. Let the colour matching functions of the L, M and S cones be $\bar{l}$, $\bar{m}$ and $\bar{s}$ respectively. Then the LMS tristimulus is given by equation 2.1.

$$\begin{pmatrix} L \\ M \\ S \end{pmatrix} = \begin{pmatrix} \int_{\lambda} S(\lambda)\,\bar{l}(\lambda)\,d\lambda \\ \int_{\lambda} S(\lambda)\,\bar{m}(\lambda)\,d\lambda \\ \int_{\lambda} S(\lambda)\,\bar{s}(\lambda)\,d\lambda \end{pmatrix} \quad (2.1)$$

The resulting LMS tristimulus vector in equation 2.1 can be used as a coordinate in LMS colour space.
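As a concrete illustration of equation 2.1, the tristimulus integrals can be approximated numerically with a Riemann sum. The Gaussian matching functions below are illustrative stand-ins with assumed peaks and widths; they are not the CIE-tabulated cone fundamentals.

```python
import math

def gaussian(x, mu, sigma):
    """Unnormalised Gaussian bump used as a stand-in matching function."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Hypothetical cone matching functions; peaks placed roughly at the
# L/M/S sensitivity maxima. The real tabulated curves differ.
def l_bar(lam): return gaussian(lam, 560.0, 50.0)
def m_bar(lam): return gaussian(lam, 530.0, 45.0)
def s_bar(lam): return gaussian(lam, 420.0, 30.0)

def lms_tristimulus(spectrum, lo=380.0, hi=780.0, steps=400):
    """Mid-point Riemann-sum approximation of equation 2.1."""
    dlam = (hi - lo) / steps
    L = M = S = 0.0
    for k in range(steps):
        lam = lo + (k + 0.5) * dlam
        s_val = spectrum(lam)
        L += s_val * l_bar(lam) * dlam
        M += s_val * m_bar(lam) * dlam
        S += s_val * s_bar(lam) * dlam
    return (L, M, S)

# A flat (equal-energy) spectrum excites all three cone types.
L, M, S = lms_tristimulus(lambda lam: 1.0)
```

With these stand-in curves, the wider L response integrates to the largest value; only the positivity of the tristimulus is a general property.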

One problem with LMS colour space is that the relative Euclidean distance between colours does not match the relative perceptual distance. Therefore, LMS colour space is not very intuitive to work with. Other colour spaces have been designed to resolve this issue, among them CIELAB. In CIELAB colour space, the relative Euclidean distance matches well with the relative perceptual distance between the colours as seen by a human with normal colour vision. There are actually multiple versions of CIELAB, based on two parameters: the field of view and the spectral power distribution of the illuminant. Typically the field of view is either 2 degrees or 10 degrees. The spectral power distribution is often taken to match daylight, which is standardized by the D series of illuminants.

2.3 Bidirectional Reflectance Distribution Function

Light reflection at an opaque surface can be modelled with a BRDF. The BRDF was defined by Nicodemus 1965 [16] (in the paper named "reflectance-distribution function"), where it was formulated similarly to equation 2.2.

$$f_r(\omega_i, \omega_o) = \frac{dL_r(\omega_o)}{L_i(\omega_i)\cos(\theta_i)\,d\omega_i} \quad (2.2)$$

In this sense, a BRDF is a physical property of a material. The unit of a BRDF is inverse steradians (sr⁻¹). A BRDF indicates how much of the incident light signal from a particular direction is redistributed to an outgoing direction, for every possible pair of directions.

2.4 The Rendering Equation

BRDFs are an integral part of The Rendering Equation, which was simultaneously introduced by Kajiya et al. 1986 [11] and Immel et al. 1986 [10]. This equation is particularly important within Physically Based Rendering (PBR), where it is used to simulate light reflections in a physically realistic way [19]. Given a surface point with some BRDF, the rendering equation expresses the relationship between the incident light signal, reflected light signal, emitted light signal and outgoing light signal. The formalism of the rendering equation can be expressed with equation 2.3 and the auxiliary reflectance equation 2.4.

$$L_o(x, \omega_o, \lambda, t) = L_e(x, \omega_o, \lambda, t) + L_r(x, \omega_o, \lambda, t) \quad (2.3)$$

$$L_r(x, \omega_o, \lambda, t) = \int_\Omega f(x, \omega_i, \omega_o, \lambda, t)\, L_i(x, \omega_i, \lambda, t) \cos(\theta_i)\, d\omega_i \quad (2.4)$$

See figure 2.2 for a perhaps more intuitive illustration of the rendering equation as a schematic. Position, wavelength and time have been omitted in this schematic in favour of simplicity.


Figure 2.2. Simplified schematic of The Rendering Equation. Consider a point on a surface with BRDF f. Given the incident light signal Li, the BRDF acts as a signal filter that gives the reflected light signal Lr, and by adding the emitted light signal Le one gets the outgoing light signal Lo.

2.5 Half-diff Domain

Rusinkiewicz 1998 [23] proposed a change of variables for efficient BRDF parametrization. Before the paper, the incident and outgoing directions were often parametrized with spherical coordinates.

$$f_r = f_r(\omega_i, \omega_o), \quad \omega_i = \omega_i(\theta_i, \phi_i), \quad \omega_o = \omega_o(\theta_o, \phi_o) \quad (2.5)$$

When studying a BRDF more closely, it becomes apparent that it has a substantial amount of symmetry around the half-way direction h.

$$h = \frac{\omega_i + \omega_o}{\|\omega_i + \omega_o\|} \quad (2.6)$$

Therefore, it is convenient to parametrize a BRDF with the half-way direction and the difference angles θd and φd of the incident direction relative to the half-way direction.

$$f_r = f_r(h, \theta_d, \phi_d), \quad h = h(\theta_h, \phi_h) \quad (2.7)$$

See figure 2.3 for a comparison of the coordinate systems. From here on, Rusinkiewicz's coordinate system for BRDF parametrisation is referred to as the half-diff domain.
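The half-way construction in equation 2.6 translates directly into code. The helper below (function names are my own) computes h and the angles θh and θd for a direction pair; it is a minimal Python sketch rather than the thesis's GLSL implementation, and assumes the surface normal is n = (0, 0, 1).

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def half_vector(wi, wo):
    """Equation 2.6: h = (wi + wo) / ||wi + wo||."""
    return normalize(tuple(a + b for a, b in zip(wi, wo)))

def half_diff_angles(wi, wo):
    """theta_h: inclination of h relative to the normal n = (0, 0, 1);
    theta_d: the 'difference' angle between wi and h."""
    h = half_vector(wi, wo)
    theta_h = math.acos(max(-1.0, min(1.0, h[2])))
    cos_d = sum(a * b for a, b in zip(wi, h))
    theta_d = math.acos(max(-1.0, min(1.0, cos_d)))
    return theta_h, theta_d

# Mirror reflection about the normal: h coincides with n, so theta_h = 0
# and theta_d equals the common inclination of wi and wo.
wi = normalize((math.sin(0.3), 0.0, math.cos(0.3)))
wo = normalize((-math.sin(0.3), 0.0, math.cos(0.3)))
theta_h, theta_d = half_diff_angles(wi, wo)
```

The mirror-reflection case illustrates why the parametrization is efficient: specular highlights concentrate near θh = 0 regardless of the overall inclination, which ends up in θd.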

2.6 BRDF Symmetry

In 3-dimensional space, a BRDF can be understood as a 4-dimensional function (2 dimensions for the incident direction and 2 dimensions for the outgoing direction). However, the dimensionality of a BRDF can often be reduced through symmetry. A BRDF that depends on φh is called anisotropic. Conversely, a BRDF that does not depend on φh is called isotropic. Furthermore, if the BRDF also does not depend on φd, then it is called bivariate.

2.7 Physically Based BRDFs

A physically realistic BRDF has the following properties [13]:


Figure 2.3. Figure from Rusinkiewicz 1998 [23] illustrating spherical coordinate system (left) and Rusinkiewicz’s coordinate system (right).

• Non-negativity

$$f_r(\omega_i, \omega_o) \geq 0 \quad (2.8)$$

• Energy conservation

$$\forall \omega_o \in \Omega : \int_\Omega f_r(\omega_i, \omega_o)\cos(\theta_i)\,d\omega_i \leq 1 \quad (2.9)$$

• Helmholtz reciprocity

$$f_r(\omega_i, \omega_o) = f_r(\omega_o, \omega_i) \quad (2.10)$$
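These properties can be checked numerically for a concrete BRDF. The sketch below verifies energy conservation (equation 2.9) for a Lambertian BRDF f_r = albedo/π by Riemann-summing the hemisphere integral; it is an illustrative Python sketch (names are my own), not the thesis code.

```python
import math

def lambertian(albedo):
    """Constant (Lambertian) BRDF f_r = albedo / pi, one channel.
    It is trivially non-negative and reciprocal."""
    return lambda wi, wo: albedo / math.pi

def hemisphere_integral(f, wo, n_theta=200, n_phi=200):
    """Riemann sum of f(wi, wo) * cos(theta_i) over the hemisphere
    Omega (the left-hand side of equation 2.9), using spherical
    coordinates where dOmega = sin(theta) dtheta dphi."""
    total = 0.0
    dt = (math.pi / 2) / n_theta
    dp = (2 * math.pi) / n_phi
    for i in range(n_theta):
        theta = (i + 0.5) * dt
        weight = math.cos(theta) * math.sin(theta) * dt * dp
        for j in range(n_phi):
            phi = (j + 0.5) * dp
            wi = (math.sin(theta) * math.cos(phi),
                  math.sin(theta) * math.sin(phi),
                  math.cos(theta))
            total += f(wi, wo) * weight
    return total

f = lambertian(0.8)
wo = (0.0, 0.0, 1.0)
reflected = hemisphere_integral(f, wo)  # analytically equals the albedo, 0.8
```

For the Lambertian case the integral evaluates to the albedo itself, so any albedo at most 1 satisfies equation 2.9.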

2.8 BRDF Acquisition

Traditionally BRDF measurements were gathered with a specialized measurement device known as a gonio-reflectometer [13]. More efficient and accurate material scanning devices have been built since then, but these are often custom rigs. Constructing an accurate, efficient and portable material scanner is a problem on its own, but it closely relates to perceptual reflectance metrics.

Matusik et al. 2003 [13, 14] presents BRDF measurements of 100 isotropic materials, which are publicly available in the MERL database. This database has been used extensively in other research, in particular to guide the design of analytical BRDF models. Ngan et al. 2005 [15] builds upon this work and presents a device for capturing anisotropic materials.

2.9 BRDF Modelling

Previous work on perceptual reflectance metrics almost exclusively focuses on fitting analytical BRDF models to measured materials. It is therefore important to have an understanding of the concept of a BRDF model to grasp the formulation of previous metrics. Essentially, a BRDF model is a function of BRDF domain variables (e.g. the half-diff domain) and an arbitrary number of additional parameters that determine the value of the BRDF model at any point of the domain. Some common parameter names are: specular, diffuse and roughness. A wide variety of BRDF models have been designed [6] in attempts to accurately describe, understand and simulate the reflectance behaviours of real world surfaces.


2.10 Reflectance Metrics

Fores et al. 2012 [7] compares three BRDF error metrics from the literature by how well they fit with human perception. In all of these metrics, an analytical BRDF A with parameters p is evaluated against a measured BRDF M with n samples. All three metrics operate on one colour channel. These three metrics are:

• Root Mean Square Error (RMSE)

$$E = \sqrt{\frac{\sum (M(\omega_i, \omega_o) - A(\omega_i, \omega_o, p))^2}{n}} \quad (2.11)$$

• Cosine weighted RMSE

$$E = \sqrt{\frac{\sum \cos^2(\theta_i)\,(M(\omega_i, \omega_o) - A(\omega_i, \omega_o, p))^2}{n}} \quad (2.12)$$

• Cosine Weighted Cube Root (CWCbrt)

$$E = \sqrt{\frac{\sum \left(\cos^2(\theta_i)\,(M(\omega_i, \omega_o) - A(\omega_i, \omega_o, p))^2\right)^{1/3}}{n}} \quad (2.13)$$

Fores et al. 2012 [7] found that the CWCbrt metric is better than the RMSE metrics, but the internal order of the RMSE metrics is not stated. Ngan et al. 2005 [15] describes informal experimentation with metrics similar to those compared by Fores et al. 2012 and found that the cosine weighted RMSE is better than the plain RMSE. Contrary to the observations of Fores et al. 2012, Ngan et al. 2005 states

"For the metric based on the cubic-root, the best-fit BRDF often result in renderings with highlights that are too blurry compared to the measured data"

but they provide no experimental data to back up this claim.

Havran et al. 2016 [9] takes a very different approach. They render images of a custom, very specific geometry of homogeneous material and utilise knowledge of perceptual image metrics to compare the BRDFs of the materials. How this technique performs relative to metrics which operate in the BRDF domain is not known.


3. Theory

Empowered by understanding of light, human eyes and reflection models, we can formulate a theory about perceptually based reflectance metrics.

3.1 Sampled BRDF

To the best of my knowledge, all previous work compares a measured BRDF with an analytical BRDF. To keep things more generic, it seems like a good idea to abstract away the notions of measured and analytical behind a common concept, which I refer to as a sampled BRDF. Naturally, a measured BRDF is already sampled. An analytical BRDF in some parameter configuration can always be turned into a sampled BRDF by sampling it to some desired resolution.

3.2 Spectral BRDF

From what I have found, previous work on reflectance metrics considers only one colour channel of BRDFs or a combined grey-scale reflectance value. Therefore, they fail to capture chroma in their metrics. To keep things general in the theory of this work, consider the full spectral reflectance of BRDFs. For the purpose of expressive convenience, let Λ denote the set of visible light wavelengths, S denote the set of spectral reflectance functions and F denote the set of all physically plausible BRDFs. Then:

• A spectral reflectance function maps from a wavelength to a non-negative reflectance value.

$$S = \{ s : \Lambda \to \mathbb{R}^+ \}$$

• A spectral BRDF maps from a direction pair to a spectral reflectance function.

$$F = \{ f : \Omega^2 \to S \}$$

Furthermore, assume that all spectral BRDFs in F honour energy conservation and Helmholtz reciprocity.

3.3 Formalism of Reflectance Metric

As far as I am aware, no previous work provides a formalism for reflectance metrics. Therefore, let us define one. A reflectance metric can be defined as a function which takes two sampled BRDFs as input and outputs the distance between them. In this sense, a sampled BRDF can be thought of as a point in a high-dimensional space, and the reflectance metric defines the distance between any two points in the sampled BRDF space. Intuitively, there are some constraints on the distance. Let D denote a reflectance metric. Then:

1. A reflectance metric maps from a pair of BRDFs to a non-negative distance value.

$$D : F^2 \to \mathbb{R}^+ \quad (3.1)$$

2. The distance between two identical BRDFs is 0.

$$\forall f \in F : D(f, f) = 0 \quad (3.2)$$

3. A reflectance metric is symmetric.

$$\forall f_1, f_2 \in F : D(f_1, f_2) = D(f_2, f_1) \quad (3.3)$$

4. A reflectance metric honours the triangle inequality.

$$\forall f_1, f_2, f_3 \in F : D(f_1, f_2) + D(f_2, f_3) \geq D(f_1, f_3) \quad (3.4)$$

Taken together, this means that a reflectance metric is a pseudometric over BRDFs.

3.4 A Framework for Perceptual Reflectance Metrics

Previous successful metrics can all be formulated as the root of a weighted square sum over BRDF sample differences (see section 2.10). Therefore, it seems reasonable to compose a framework for perceptual BRDF metrics as in equation 3.5,

$$D(f_1, f_2) = \sqrt{\int_{\Omega^2} w(\omega_i, \omega_o)\, d^2(f_1(\omega_i, \omega_o), f_2(\omega_i, \omega_o))\, d\omega_i\, d\omega_o} \quad (3.5)$$

where

• w : Ω² → ℝ⁺ denotes a bidirectional weight function.

• d : S² → ℝ⁺ denotes a colour metric.

Notable here is that the sum over observations has been replaced with an integral over the incident and outgoing directions, which is more generic. When dealing with the special case of fitting an analytical model against a sparse set of observations, one can utilize the weight function to control the importance of different directions based on some confidence in the observed values. Phrasing the metric as an integral also enables a direct way of handling changes of variables through the Jacobian, which comes in handy when mapping from the standard domain to the half-diff domain.

Both the weight function w and the colour metric d in equation 3.5 are application dependent: w should capture the illumination distribution and view distribution, while d should capture illumination intensity and viewer sensitivity.
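A minimal discretised sketch of equation 3.5 replaces the integral by a sum over sampled direction pairs. The weight, colour metric and scalar "BRDFs" below are toy stand-ins chosen for illustration; all names are my own.

```python
import math

def framework_distance(f1, f2, weight, colour_dist2, dirs, d_omega):
    """Discretised version of equation 3.5: a weighted sum over sampled
    direction pairs, approximating the double hemisphere integral with
    a fixed solid-angle element d_omega per pair."""
    acc = 0.0
    for (wi, wo) in dirs:
        acc += weight(wi, wo) * colour_dist2(f1(wi, wo), f2(wi, wo)) * d_omega
    return math.sqrt(acc)

# Toy setup: a handful of direction pairs, the cosine-product weight of
# the PAWCIELAB metric, and a squared scalar difference as colour metric.
dirs = []
for ti in (0.2, 0.6, 1.0):
    for to in (0.3, 0.9):
        wi = (math.sin(ti), 0.0, math.cos(ti))
        wo = (0.0, math.sin(to), math.cos(to))
        dirs.append((wi, wo))

w = lambda wi, wo: wi[2] * wo[2]          # cos(theta_i) * cos(theta_o)
d2 = lambda c1, c2: (c1 - c2) ** 2
f1 = lambda wi, wo: 0.5 / math.pi         # constant toy BRDFs
f2 = lambda wi, wo: 0.7 / math.pi

d12 = framework_distance(f1, f2, w, d2, dirs, d_omega=0.1)
```

The pseudometric properties of section 3.3 follow mechanically: identical inputs give distance 0, and the sum is symmetric in f1 and f2.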

3.5 Projective Area Weighted CIELAB Metric

The Projective Area Weighted CIELAB (PAWCIELAB) metric is a new, previously untested metric. The weight function is given by the projective area scale of the surface when projected along the view direction and the illumination direction. The projective area scale along one direction can be calculated by taking the cosine of the angle between the surface normal and the projection plane normal. This yields equation 3.6.

$$w_{PAWCIELAB}(\theta_i, \theta_o) = \cos(\theta_i) \cdot \cos(\theta_o) \quad (3.6)$$

The colour metric of the PAWCIELAB metric can be understood as the Euclidean distance between the two BRDF samples in CIELAB colour space, which should accurately model the perceptual colour distance of humans. This can be mathematically modelled with equation 3.7,

$$d^2_{PAWCIELAB}(f_1(\omega_i, \omega_o), f_2(\omega_i, \omega_o)) = \| CIELAB(f_1(\omega_i, \omega_o)) - CIELAB(f_2(\omega_i, \omega_o)) \|^2 \quad (3.7)$$

where CIELAB(f) maps from the colour space of the BRDF samples to the CIELAB colour space.
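The CIELAB(f) mapping can be sketched as follows, assuming the BRDF samples are given as sRGB triplets and a D65 white point; the thesis's actual RGB-to-CIELAB conversion (section 4.12) is implemented in GLSL and may rest on different assumptions. Function names here are my own.

```python
def srgb_to_linear(c):
    """Invert the sRGB transfer function."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_cielab(rgb):
    """sRGB triplet in [0, 1] -> CIELAB, assuming a D65 white point."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    # Linear sRGB -> CIE XYZ (D65 primaries)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.95047, 1.0, 1.08883   # D65 reference white
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 \
            else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def pawcielab_sample_term(rgb1, rgb2, cos_ti, cos_to):
    """One integrand sample of the PAWCIELAB metric: projective area
    weight (equation 3.6) times squared CIELAB distance (equation 3.7)."""
    lab1, lab2 = rgb_to_cielab(rgb1), rgb_to_cielab(rgb2)
    d2 = sum((a - b) ** 2 for a, b in zip(lab1, lab2))
    return cos_ti * cos_to * d2
```

As a sanity check, white maps to L ≈ 100 with near-zero a and b, black maps to L = 0, and identical samples contribute nothing to the metric.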

3.6 Hypothesis

The hypothesis in this work is that the PAWCIELAB metric (see section 3.5) is a better percep- tual reflectance metric than the CWCbrt metric (see section 4.13).


4. Implementation

To test the hypothesis, both the CWCbrt metric and the PAWCIELAB metric have to be implemented. There are several ways to do this. In this work, the implementation of the metrics is written in GLSL and wrapped with a C++ interface. There are several reasons for using GLSL: (1) An isotropic BRDF can conveniently be stored in a 3D texture with built-in interpolation and wrapping. (2) Many of the calculations in the metrics can be naturally expressed with built-in GLSL functions. (3) The parallelism of GPUs can efficiently be used for rapid evaluation of the metrics. (4) Custom rendering is close at hand.

4.1 Libraries

To work with OpenGL, a few libraries come in handy. The following libraries were used in this work.

• OpenGL [1]

• GLEW [2]
The OpenGL Extension Wrangler Library (GLEW) is a cross-platform open-source C/C++ extension loading library. GLEW provides efficient run-time mechanisms for determining which OpenGL extensions are supported on the target platform. OpenGL core and extension functionality is exposed in a single header file.

• GLFW [3]
GLFW is an open-source, multi-platform library for OpenGL, OpenGL ES and Vulkan development on the desktop. It provides a simple API for creating windows, contexts and surfaces, and for receiving input and events.

• GLM [4]
OpenGL Mathematics (GLM) is a header-only C++ mathematics library for graphics software based on the OpenGL Shading Language (GLSL) specification, released under the MIT license. The library provides classes and functions designed and implemented to follow the GLSL conventions and functionality as strictly as possible, so that a programmer who knows GLSL also knows GLM. GLM is not limited to GLSL features: an extension system provides extended capabilities such as matrix transformations, quaternions, half-based types, random number generation and procedural noise functions.

4.2 Hardware

All development and experiments were performed on a machine with the specification given in table 4.1.


Table 4.1. Specification of the machine used for development and experiments.

GPU: NVIDIA GeForce GTX 1080
Monitor: Dell U2412M

4.3 Half-diff Conversion Formulas

BRDFs are conveniently expressed in the half-diff domain or some similar domain (see section 2.5). To implement the PAWCIELAB metric with BRDFs expressed in the half-diff domain, some conversion formulas are needed. A definition of a tangent space and an associated coordinate system provides formalism and expressive power, and aids reasoning when deriving such formulas. The tangent space is spanned by three orthogonal basis vectors: the tangent vector t, the bitangent vector b and the normal vector n. These vectors are assigned coordinates t = (1, 0, 0), b = (0, 1, 0) and n = (0, 0, 1). Hence, these vectors form an orthonormal basis. Furthermore, the coordinate system is right-handed so that, for example, when looking down upon the surface with the normal pointing towards you, the bitangent is oriented 90 degrees counter-clockwise to the tangent. See figure 4.1 for an illustration.

Figure 4.1. Illustration of the tangent space coordinate system.

Continuing on to derive conversion formulas, two rotation matrix formulas are handy. The first rotates around the bitangent b by an angle θ, see equation 4.1. The second rotates around the surface normal n by an angle φ, see equation 4.2.

$$rot_b(\theta) = \begin{pmatrix} \cos(\theta) & 0 & \sin(\theta) \\ 0 & 1 & 0 \\ -\sin(\theta) & 0 & \cos(\theta) \end{pmatrix} \quad (4.1)$$

$$rot_n(\phi) = \begin{pmatrix} \cos(\phi) & -\sin(\phi) & 0 \\ \sin(\phi) & \cos(\phi) & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad (4.2)$$

With these rotation matrix formulas, one can derive the tangent space coordinates of the incident and outgoing directions in terms of half-diff domain variables.

$$\vec{\omega}_i = rot_n(\phi_h)\, rot_b(\theta_h)\, rot_n(\phi_d)\, rot_b(\theta_d)\, n = rot_n(\phi_h) \begin{pmatrix} \sin(\theta_h)\cos(\theta_d) + \cos(\phi_d)\cos(\theta_h)\sin(\theta_d) \\ \sin(\phi_d)\sin(\theta_d) \\ \cos(\theta_h)\cos(\theta_d) - \cos(\phi_d)\sin(\theta_h)\sin(\theta_d) \end{pmatrix} \quad (4.3)$$

$$\vec{\omega}_o = rot_n(\phi_h)\, rot_b(\theta_h)\, rot_n(\phi_d)\, rot_b(-\theta_d)\, n = rot_n(\phi_h) \begin{pmatrix} \sin(\theta_h)\cos(\theta_d) - \cos(\phi_d)\cos(\theta_h)\sin(\theta_d) \\ -\sin(\phi_d)\sin(\theta_d) \\ \cos(\theta_h)\cos(\theta_d) + \cos(\phi_d)\sin(\theta_h)\sin(\theta_d) \end{pmatrix} \quad (4.4)$$

Through geometric insight (see figure 4.2) we have ω⃗i · n = cos(θi), which combined with equation 4.3 yields equation 4.5. Correspondingly, for the outgoing direction we get equation 4.6.


Figure 4.2. Diagram of the incident direction orthogonally projected onto the surface normal, giving ωi · n = cos(θi).

Figure 4.3. Diagram of the incident direction orthogonally projected into the tangent plane.

$$\cos(\theta_i) = \cos(\theta_h)\cos(\theta_d) - \cos(\phi_d)\sin(\theta_h)\sin(\theta_d) \quad (4.5)$$

$$\cos(\theta_o) = \cos(\theta_h)\cos(\theta_d) + \cos(\phi_d)\sin(\theta_h)\sin(\theta_d) \quad (4.6)$$

These two formulas are highly useful. For example, they are used to compute the projective area weight function (see equation 3.6) and also to check whether the incident or outgoing direction is below the horizon. Formulas for φi and φo are not as useful as those for θi and θo but, nevertheless, they have some applications, e.g. when deriving an analytical scale factor for the half-diff domain (see section 4.6). So, for completeness, consider the orthogonal projection of the incident direction vector in the tangent plane, see figure 4.3. One can read off the tb-plane coordinates before rotation by φh in equation 4.3, which gives equation 4.7 for φi, and correspondingly equation 4.8 for φo.

$$\phi_i = \arctan\left(\frac{(rot_n(-\phi_h)\vec{\omega}_i) \cdot b}{(rot_n(-\phi_h)\vec{\omega}_i) \cdot t}\right) + \phi_h = \arctan\left(\frac{\sin(\phi_d)\sin(\theta_d)}{\sin(\theta_h)\cos(\theta_d) + \cos(\phi_d)\cos(\theta_h)\sin(\theta_d)}\right) + \phi_h \quad (4.7)$$

$$\phi_o = \arctan\left(\frac{(rot_n(-\phi_h)\vec{\omega}_o) \cdot b}{(rot_n(-\phi_h)\vec{\omega}_o) \cdot t}\right) + \phi_h = \arctan\left(\frac{-\sin(\phi_d)\sin(\theta_d)}{\sin(\theta_h)\cos(\theta_d) - \cos(\phi_d)\cos(\theta_h)\sin(\theta_d)}\right) + \phi_h \quad (4.8)$$

4.4 MERL Domain

To test the hypothesis on realistic data, the MERL database [13] was used. The BRDF samples in the MERL database are structured in a 3-dimensional domain acquired through symmetry-reduction of the half-diff domain. From here on, this domain is referred to as the MERL domain. The materials are assumed to be isotropic, allowing complete reduction of φh, and furthermore Helmholtz reciprocity (see equation 2.10) makes it possible to reduce φd to the range [0, π). The MERL domain also employs stretching along θh. For convenience, three new variables 0 ≤ Mr, Ms, Mt < 1 are introduced to parametrise the MERL domain. These are defined through the inverse relationships of equations 4.9, 4.10 and 4.11.

$$\phi_d = \pi M_r \quad (4.9)$$

$$\theta_d = \frac{\pi}{2} M_s \quad (4.10)$$

$$\theta_h = \frac{\pi}{2} M_t^2 \quad (4.11)$$
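Equations 4.9-4.11 and their inverses translate directly into code; a small sketch (function names are my own):

```python
import math

def merl_to_halfdiff(mr, ms, mt):
    """Equations 4.9-4.11: map MERL coordinates in [0, 1) to half-diff
    angles. Note the quadratic stretch along theta_h, which concentrates
    samples near the specular peak at theta_h = 0."""
    phi_d = math.pi * mr
    theta_d = (math.pi / 2) * ms
    theta_h = (math.pi / 2) * mt ** 2
    return theta_h, theta_d, phi_d

def halfdiff_to_merl(theta_h, theta_d, phi_d):
    """Inverse mapping (assumes the angles are already symmetry-reduced
    to theta_h, theta_d in [0, pi/2) and phi_d in [0, pi))."""
    return (phi_d / math.pi,
            theta_d / (math.pi / 2),
            math.sqrt(theta_h / (math.pi / 2)))
```

The two functions round-trip exactly, and the corner (Mr, Ms, Mt) = (1, 1, 1) maps to (θh, θd, φd) = (π/2, π/2, π) as expected.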

4.5 BRDF Texture

The materials of the MERL database are sampled in a regular grid over the MERL domain, with 180 layers along Mr, 90 layers along Ms and 90 layers along Mt. Hence, each material has 180 · 90 · 90 = 1458000 samples in total. Each sample has 3 colour channels.

Isotropic BRDFs like those of the MERL database can be stored in a 3D texture. The MERL domain coordinates were constructed to work directly as texture coordinates by ranging from 0 to 1 and distributing the samples regularly. By storing the BRDFs in a texture, one can utilise built-in trilinear interpolation, which is useful when rendering. Furthermore, since the MERL domain is symmetry-reduced in the coordinate corresponding to φd (i.e. Mr), one can set the texture wrapping method on that texture coordinate to repeat. The effect is that reading the texture at e.g. Mr = −0.3 results in a lookup at Mr = 0.7, which is useful when rendering.

4.6 MERL Scale Factor

To compute the perceptual BRDF metric framework in equation 3.5 on BRDFs in the MERL domain, one must perform a change of variables which gives rise to a scale factor. The variable change from the standard domain to the MERL domain can be performed in a two-step procedure, going via the half-diff domain. The scale factor of the first variable change can be seen in equation 4.12, but for clarity it is written out explicitly in equation 4.13.

dωi dωo = sin(θi) sin(θo) dθi dφi dθo dφo
        = sin(θi) sin(θo) |det( ∂(θi, φi, θo, φo) / ∂(θh, φh, θd, φd) )| dθh dφh dθd dφd   (4.12)


shalfdiff(θh, φh, θd, φd) = sin(θi) sin(θo) |det( ∂(θi, φi, θo, φo) / ∂(θh, φh, θd, φd) )|   (4.13)

A more thorough analytic expression of the half-diff scale factor can be found in appendix A.

The scale factor of the second variable change can be found in a similar way to the first, which gives the MERL scale factor in equation 4.14.

sMERL = (π³/2) Mt · shalfdiff   (4.14)

4.7 Variable Change

With scale factors readily available it is straightforward to perform the variable changes. Let f be a function independent of φh and symmetric around φd = π. Also assume f = 0 if cos(θi) < 0 or cos(θo) < 0 (i.e. if the incident or outgoing direction is below the horizon) to simplify the integral variable ranges. Note that both cos(θi) and cos(θo) are independent of φh and symmetric around φd = π (see equations 4.5 and 4.6). Then equation 4.15 gives the variable-changed integrals over f.

∫_{Ω²} f dωi dωo
  = ∫₀^{2π} ∫₀^{2π} ∫₀^{π/2} ∫₀^{π/2} f · shalfdiff dθh dθd dφh dφd
  = ∫₀^{2π} dφh · 2 ∫₀^{π} ∫₀^{π/2} ∫₀^{π/2} f · shalfdiff dθh dθd dφd
  = 4π ∫₀^{π} ∫₀^{π/2} ∫₀^{π/2} f · shalfdiff dθh dθd dφd
  = 4π ∫₀^{1} ∫₀^{1} ∫₀^{1} f · sMERL dmr dms dmt
  = 4π ∫_{m∈M} f · sMERL dm   (4.15)

By applying the variable change in equation 4.15 to the framework equation 3.5 we get equation 4.16.

D(f1, f2) = √( 4π ∫_{m∈M} w(m) d²(f1(m), f2(m)) sMERL(m) dm )   (4.16)

4.8 Discretisation

In order to implement the perceptual BRDF distance framework it must be discretised. To keep things general, let Nr, Ns, Nt be the number of layers along mr, ms, mt respectively. In the special case of MERL materials it would be that (Nr, Ns, Nt) = (180, 90, 90).

There are several ways to discretise the integral with different accuracy and complexity. In this work, the simple midpoint rule is deemed to be accurate enough while being relatively easy to implement. By letting the integral resolution be the same as that of the BRDF texture, the midpoints align perfectly with the voxel centers, since OpenGL textures are used (this is not true with DirectX). Let K denote the set of midpoints, more formally given by 4.17.

K := { ((nr + 0.5)/Nr, (ns + 0.5)/Ns, (nt + 0.5)/Nt) : (nr, ns, nt) ∈ [0 : Nr − 1] × [0 : Ns − 1] × [0 : Nt − 1] }   (4.17)


Furthermore, the volume element dm is given by 4.18.

dm = dmr dms dmt = (1/Nr)(1/Ns)(1/Nt) = 1/|K|   (4.18)

Then the midpoint discretisation of equation 4.16 is given by equation 4.19.

D(f1, f2) ≈ √( (4π / |K|) Σ_{k∈K} w(k) d²(f1(k), f2(k)) sMERL(k) )   (4.19)
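A direct, unoptimised sketch of this midpoint-rule evaluation, with the weight, colour distance, scale factor and BRDFs passed in as callables (all names here are illustrative, not the thesis implementation):

```python
import math

def discrete_distance(w, d2, f1, f2, s_merl, N_r, N_s, N_t):
    """Midpoint-rule discretisation of the metric, cf. equation 4.19.
    Each midpoint k lies at the centre of a grid cell, matching the
    voxel centres of the BRDF texture."""
    total = 0.0
    for n_r in range(N_r):
        for n_s in range(N_s):
            for n_t in range(N_t):
                k = ((n_r + 0.5) / N_r, (n_s + 0.5) / N_s, (n_t + 0.5) / N_t)
                total += w(k) * d2(f1(k), f2(k)) * s_merl(k)
    return math.sqrt(4 * math.pi * total / (N_r * N_s * N_t))
```

With all callables constant at 1 this reduces to √(4π), which is a cheap way to check the summation logic.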

4.9 Framework Implementation

The discrete framework equation contains a factor √(4π) which can be discarded because the important part of the metric is the relative perceptual distance. Following the same argument, the denominator |K| could also be discarded, but it actually fills a role in preserving numerical stability by preventing overflow. However, this denominator could also cause underflow if applied per BRDF sample before summing. To circumvent both overflow and underflow, the denominator can be applied on partial sums, which can be directly tied to work groups of compute shaders.

The sum and denominator of the discrete framework are implemented as a compute shader with two abstract functions: one for the weight and scale factor and one for the colour metric. The reason for not applying the scale factor at the framework level is to keep the framework more general and thereby allow the CWCbrt metric to be implemented with the same framework.

After the compute shader is done, the result is read back by its C++ wrapper and the square root is performed by the host application on CPU.

Equation 4.20 summarizes what the framework implementation actually computes.

E(g, h, f1, f2) = √( (1 / |K|) Σ_{k∈K} g(k) h(f1(k), f2(k)) )   (4.20)

where

• g : R³ → R is an arbitrary function defined by the application, intended to be the product of a weight function and the MERL scale factor.

• h : R³ × R³ → R is an arbitrary function defined by the application, intended to be the square of the perceptual colour distance of the observer.
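A CPU-side sketch of this computation (hypothetical names; the actual implementation is a compute shader), where the division by |K| is applied per partial sum rather than per sample:

```python
import math

def framework_E(g, h, f1, f2, K, group_size=64):
    """Compute equation 4.20 with the denominator |K| applied on partial
    sums, mirroring how compute-shader work groups can divide early to
    prevent overflow without introducing per-sample underflow."""
    n = len(K)
    acc = 0.0
    for start in range(0, n, group_size):
        partial = sum(g(k) * h(f1(k), f2(k)) for k in K[start:start + group_size])
        acc += partial / n  # divide once per work group, not per sample
    return math.sqrt(acc)
```

With g = h = 1 this returns 1, matching the first framework test below.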

4.10 Framework Test

To check that the compute shader and MERL scale factor are correctly implemented, let h = 1 independent of its arguments while g is one of the following functions:

gone(m) = 1   (4.21)

guniform(m) = { 0          if cos(θi(m)) ≤ 0 or cos(θo(m)) ≤ 0
              { sMERL(m)   otherwise   (4.22)

gproj.area(m) = { 0                          if cos(θi(m)) ≤ 0 or cos(θo(m)) ≤ 0
                { wPAWCIELAB(m) sMERL(m)     otherwise   (4.23)

One reason for using these functions is that the framework is easy to evaluate analytically with them, and therefore each test has an exact expected value.


Firstly, let us study gone. Note that this function breaks the assumption of being zero below the horizon, but its purpose is just to test the framework accuracy, not to be used in a metric. The analytical expected value is given by equation 4.24.

E(g = 1, h = 1, f1, f2) = √( Σ_{k∈K} 1 / |K| ) = √( |K| / |K| ) = 1   (4.24)

Testing with g = 1 might seem completely uninteresting at first sight but it actually informs us about the numerical stability of the compute shader and verifies that the work groups synchronize properly.

The second function guniform is used to validate the MERL scale factor implementation. To find the analytical expression with this function it helps to go back to the standard domain, where the problem can be interpreted geometrically as the hypervolume given by the product of two hemisphere areas, see equation 4.25.

E(g = guniform, h = 1, f1, f2)
  = √( Σ_{k∈K} guniform(k) / |K| )
  ≈ √( ∫_{m∈M} guniform(m) dm )
  = √( (1/4π) ∫_{Ω²} dωi dωo )
  = √( (1/4π) (2π)² )
  = √π   (4.25)

Lastly, gproj.area validates the projective area weight function and also serves to double-check the scale factor implementation, so that a correct result in the previous test is not just a coincidence.

Given equation 4.26

∫_{ω∈Ω} cos(θ) dω
  = ∫₀^{π/2} ∫₀^{2π} cos(θ) sin(θ) dφ dθ
  = ∫₀^{2π} dφ · (1/2) ∫₀^{π/2} 2 cos(θ) sin(θ) dθ
  = 2π · (1/2) ∫₀^{π/2} sin(2θ) dθ
  = π [ −cos(2θ)/2 ]₀^{π/2}
  = π ( −cos(π)/2 − (−cos(0)/2) )
  = π ( 1/2 + 1/2 )
  = π   (4.26)


Table 4.2. Numeric accuracy in the framework test. The squared perceptual colour distance function was forced to h = 1 in all points.

                         g = gone    g = guniform     g = gproj.area
Expected                 1           √π ≈ 1.77245     √π/2 ≈ 0.88622
Actual (180 × 90 × 90)   0.999997    1.77258          0.88625

it is straightforward to show equation 4.27.

E(g = gproj.area, h = 1, f1, f2)
  ≈ √( ∫_{m∈M} gproj.area(m) dm )
  = √( (1/4π) ∫_{Ω²} cos(θi) cos(θo) dωi dωo )
  = √( (1/4π) π² )
  = √π / 2   (4.27)

One insight that can be drawn from equation 4.27 is that the value computed by the PAWCIELAB metric implementation will be close to 1 when the perceptual colour distance is close to 1 in all points.

To conclude this section, the expected and actual values in the framework test can be found in table 4.2, where we can see that there were at least 4 digits of accuracy in all tests. This numerical error is small enough to be negligible in this work.

4.11 Visualising Projective Area Weight and MERL Scale Factor

To aid intuition of the projective area weight factor and the MERL scale factor, and to sanity-check their implementation, it helps to visualise them. Several previous works visualise BRDFs with colour-graphs of so-called half-diff slices. Particularly for those used to seeing these types of graphs, but also for others, figure 4.4 provides a perhaps more intuitive way of understanding some of the core ideas introduced in this work.

Looking at figure 4.4, there is one outstanding artifact: a black line resembling a square root function in the top center and top right colour-graphs. These lines appear due to numerical instability near Mr = 0 in the implementation of the half-diff scale factor, on which the MERL scale factor depends. Looking at the colour-graph of the MERL scale factor at Mr = 0.5, it can be seen that it is identical to the one at Mr = 0 except without the artifact. This insight suggests that the MERL scale factor is independent of Mr and, thus, it is possible that the expression and implementation of the half-diff scale factor can be simplified further to get rid of the artifact.

However, the artifact is a negligible error in this work and is therefore ignored.

4.12 RGB to CIELAB Conversion

To implement the PAWCIELAB metric, an RGB to CIELAB conversion algorithm is needed.

There are multiple versions to choose from, which depend on two factors: the colour matching functions of the observer and the white point chosen. The conversion algorithm used in this work is taken from Vishnevsky et al. 2000 [25], which assumes a 2-degree standard observer and a D65 white point. The conversion algorithm takes a linear sRGB tuple (R, G, B) as input and converts to CIELAB colour space (L, a, b), going via CIE XYZ colour space (X, Y, Z). The conversion


Figure 4.4. Visualisation of the projective area weight function and the MERL scale factor as well as their product, with slices through the MERL domain (orthogonal to Mr, at Mr = 0 and Mr = 0.5) illustrated with intensity images. In all of the images, Ms grows from 0 to 1 when moving from left to right and Mt grows from 0 to 1 when moving from bottom to top. The colour of each pixel was assigned with a perceptually linear greyscale of the value of the function specified at the top of the column (wproj, 0.05 · sMERL, 0.4 · wproj · sMERL), where 0 is black and 1 is white. The purpose of the constants 0.05 and 0.4 applied on the two rightmost columns is to keep the pixel intensity in the range 0 to 1.

formula for linear sRGB to CIE XYZ is given by equation 4.28, and the conversion formulas for CIE XYZ to CIELAB are given by equations 4.29, 4.30 and 4.31 with the auxiliary function in equation 4.32. Also, the D65 white point normalisation values in CIE XYZ colour space are given by equation 4.33.

[X]   [0.412453 0.357580 0.180423] [R]
[Y] = [0.212671 0.715160 0.072169] [G]   (4.28)
[Z]   [0.019334 0.119193 0.950227] [B]

L = 116 f(Y / YD65) − 16   (4.29)

a = 500 ( f(X / XD65) − f(Y / YD65) )   (4.30)

b = 200 ( f(Y / YD65) − f(Z / ZD65) )   (4.31)

f(t) = { t^(1/3)                 if t > (24/116)³
       { (841/108) t + 16/116    otherwise   (4.32)

(XD65, YD65, ZD65) = (95.047, 100.00, 108.883)   (4.33)

The CIE XYZ to CIELAB conversion formulas (equations 4.29, 4.30, 4.31 and 4.32) can also be found in the official technical report from CIE (2004) [18].
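A self-contained sketch of the conversion chain (function name is my own; one assumption is made explicit here: XYZ is scaled by 100 so that white matches the normalisation values of equation 4.33):

```python
def rgb_to_cielab(R, G, B):
    """Linear sRGB -> CIELAB via CIE XYZ, using the D65 white point and
    the 2-degree standard observer (equations 4.28-4.33)."""
    # eq. 4.28, scaled by 100 to match the white point range of eq. 4.33
    X = 100 * (0.412453 * R + 0.357580 * G + 0.180423 * B)
    Y = 100 * (0.212671 * R + 0.715160 * G + 0.072169 * B)
    Z = 100 * (0.019334 * R + 0.119193 * G + 0.950227 * B)

    # eq. 4.32: auxiliary function with a linear toe for small t
    def f(t):
        return t ** (1 / 3) if t > (24 / 116) ** 3 else (841 / 108) * t + 16 / 116

    fx, fy, fz = f(X / 95.047), f(Y / 100.0), f(Z / 108.883)
    L = 116 * fy - 16          # eq. 4.29
    a = 500 * (fx - fy)        # eq. 4.30
    b = 200 * (fy - fz)        # eq. 4.31
    return L, a, b
```

As a sanity check, white (1, 1, 1) maps to L ≈ 100 with a and b near 0, and black (0, 0, 0) maps to (0, 0, 0).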

4.13 Cosine Weighted Cube Root Metric Implementation

To implement the CWCbrt metric (see equation 2.13) one can utilise the fact that it can be expressed in the generalised discrete framework of equation 4.20. Expressed in this framework, the CWCbrt metric is modelled with equations 4.34 and 4.35.

gCWCbrt(θi) = (cos²(θi))^(1/3)   (4.34)

hCWCbrt(f1, f2) = ‖ ((f1 − f2)²)^(1/3) ‖₁   (4.35)

Important to notice here is that Fores et al. 2012 [7] only consider monochrome BRDFs (i.e. f1 and f2 are 1-component vectors) and therefore an interpretation had to be made on how to extend the metric to handle multichrome BRDFs (e.g. 3-channel RGB). The square and cube root in equation 4.35 are performed element-wise before summing up the channels (in the equation written as an L1-norm).
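This multichrome interpretation can be sketched as follows (names are mine, and the per-channel treatment is the interpretation described above rather than anything prescribed by Fores et al.):

```python
def g_cwcbrt(cos_theta_i):
    """Cosine weighting, cube-rooted as in equation 4.34."""
    return (cos_theta_i ** 2) ** (1 / 3)

def h_cwcbrt(f1, f2):
    """Square and cube root applied element-wise per colour channel,
    then summed as an L1-norm (the multichrome reading of eq. 4.35)."""
    return sum(((c1 - c2) ** 2) ** (1 / 3) for c1, c2 in zip(f1, f2))
```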


5. Judge Experiment

With the PAWCIELAB metric and the CWCbrt metric implemented, the next step is to figure out which one is better at predicting the perceptual distance between BRDFs with respect to humans.

The idea is to find pairs of BRDF pairs for which the two metrics disagree on which pair is more similar and then bring in objective human judges to assess the situation.

5.1 Search for Disagreement in BRDF Similarity Metrics

Two BRDF metrics D1 and D2 agree on the similarity order of two pairs of BRDFs (fa.1, fa.2) and (fb.1, fb.2) iff

sgn(D1(fa.1, fa.2) − D1(fb.1, fb.2)) = sgn(D2(fa.1, fa.2) − D2(fb.1, fb.2))

and conversely, they disagree iff

sgn(D1(fa.1, fa.2) − D1(fb.1, fb.2)) ≠ sgn(D2(fa.1, fa.2) − D2(fb.1, fb.2))

where sgn(x) denotes the sign function of a real number x.

sgn(x) := { −1 if x < 0
          {  0 if x = 0
          {  1 if x > 0   (5.1)
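The agreement test can be written down directly (a sketch with toy scalar "BRDFs" for illustration; any pair of distance functions works):

```python
def sgn(x):
    """Sign function, equation 5.1."""
    return (x > 0) - (x < 0)

def agree(D1, D2, pair_a, pair_b):
    """True iff metrics D1 and D2 agree on the similarity order of the
    two BRDF pairs in a configuration."""
    (fa1, fa2), (fb1, fb2) = pair_a, pair_b
    return sgn(D1(fa1, fa2) - D1(fb1, fb2)) == sgn(D2(fa1, fa2) - D2(fb1, fb2))
```

Note that any monotone transformation of a metric (e.g. squaring a non-negative distance) preserves the similarity order and therefore always agrees with the original.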

To improve textual flow, a pair of BRDF pairs is from here on referred to as a configuration. In other words, a configuration is in this work defined as a pair of BRDF pairs.

Configurations for which the CWCbrt metric and the PAWCIELAB metric disagree are particularly relevant in answering questions about which metric is better. In the best of worlds we could determine and analyse all such configurations, but considering the continuous nature of the BRDFs and the structure of the metrics this is hard if not impossible. Of course, it could be done to some sufficient resolution, but it would require substantial computational effort. Another approach is to scan all possible configurations of some database, e.g. the MERL database.

The MERL database consists of 100 unique BRDFs, which means that there are C(100, 2) = (100 · 99)/2 = 4950 unique pairs of BRDFs when not counting pairs with the same BRDF repeated twice. Consequently, there are C(4950, 2) = (4950 · 4949)/2 = 12 248 775 unique configurations in the MERL database.

Scanning all 12 248 775 configurations for disagreements between the metrics, it was found that there are 3 247 730 order disagreements. Hence, the disagreement rate between the metrics is 3 247 730 / 12 248 775 ≈ 0.265 over the MERL database. This informs us that the metrics agree in the majority of cases over the MERL database, but there is also a substantial amount of disagreement.
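Using Python's `math.comb`, these counts can be verified directly (a small check script, not part of the thesis implementation):

```python
from math import comb

pairs = comb(100, 2)             # unique unordered BRDF pairs
configurations = comb(pairs, 2)  # unique pairs of pairs ("configurations")
assert pairs == 4950
assert configurations == 12_248_775

disagreements = 3_247_730        # count found by the scan described above
rate = disagreements / configurations
assert round(rate, 3) == 0.265
```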

5.2 Selecting Configurations for Human Judgement

With a known set of configurations for which the CWCbrt metric and PAWCIELAB metric disagree, the next step is to find out which of the two metrics best matches human perception. To do so, human judges are brought in to make one judgement per configuration. One problem in this case is that there are 3 247 730 configurations and asking the judges to do the


same number of judgements is not only wasteful but also infeasible. Therefore, the set to judge has to be trimmed down.

Selecting a subset for human judgement from the complete disagreement set should be done such that the subset has a manageable size while discarding as little information as possible. An upper bound of 100 configurations on the subset size was selected so as not to overload the judges.

To select 100 configurations from the 3 247 730 available, two strategies were put in place. The first strategy was to sort the configurations by an ad hoc agreement value defined by

agreement = min(D1(fa.1, fa.2) · D2(fb.1, fb.2), D1(fb.1, fb.2) · D2(fa.1, fa.2)) / max(D1(fa.1, fa.2) · D2(fb.1, fb.2), D1(fb.1, fb.2) · D2(fa.1, fa.2))

This ensured that the configurations in which the metrics disagreed the most would be selected. The second strategy was a rule that limits the number of times that a BRDF can participate in a configuration of the subset, with the limit set to 10. The reason for this rule was to prevent abuse of a single or a few points of failure in either of the two metrics, which otherwise could have led to misleading results. The 100 configurations selected from this procedure, along with computed metric values and agreement, can be seen in appendix B.
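One way the two strategies could be combined is sketched below (names and the exact interaction between sorting and the participation cap are my own reading; the thesis does not spell out this detail):

```python
def agreement(d1_a, d1_b, d2_a, d2_b):
    """Ad hoc agreement value for one configuration, where d1_a = D1 of
    pair a, d1_b = D1 of pair b, and likewise for D2. Values near 1 mean
    the metrics nearly agree; small values mean strong disagreement."""
    x, y = d1_a * d2_b, d1_b * d2_a
    return min(x, y) / max(x, y)

def select_subset(configs, limit=10, size=100):
    """Sort by agreement ascending (most disagreeing first) and cap how
    often any single BRDF may participate. Each entry of configs is
    (agreement_value, brdf_ids) with brdf_ids the four participants."""
    counts = {}
    chosen = []
    for value, brdfs in sorted(configs):
        if all(counts.get(b, 0) < limit for b in brdfs):
            for b in brdfs:
                counts[b] = counts.get(b, 0) + 1
            chosen.append((value, brdfs))
            if len(chosen) == size:
                break
    return chosen
```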

5.3 Rendering Images for Human Judgement

I find that it is incredibly hard and inefficient to compare two BRDFs by staring at their numbers and functions. A much more intuitive and effective way can be achieved by visualizing them.

Colour-graphs are one step in this direction, but they can be hard to understand for someone who is not an expert in the field. A rendered image of an object of homogeneous material, where the BRDF determines how light is reflected, offers a more intuitive way of understanding a BRDF.

Following previous work e.g. [24, 21, 17, 27], I render a "sphere image" for perceptual BRDF comparison. [17] and [27] use two point lights — one front light at (1, 1, 1) and one back light at (−1, −1, −3) for grazing angle reflections. However, I use a single directional front light with rays along direction (−1, −1, −1). The camera setup I use is a perspective projection camera with 60 degrees field of view. The light source is white i.e. it has RGB intensity (1, 1, 1). Lastly, I apply a gamma correction with exponent 1/2.2.

One drawback to sphere images is that the BRDF is not visualized in every point in which it is defined. This can be addressed by altering the direction of the light source.

A major benefit of rendered object images is that existing knowledge on human perception can be used to reason about BRDF similarity with respect to human perception by comparing the rendered images of two BRDFs (for example, see Havran et al. 2016 [9]). In turn, this can be used to judge how well an energy function aligns with human perception.

5.4 Judge Instructions

To make the judge experiment repeatable it is important that the judges are given the same instructions. The instructions given in this work can be found in table 5.1. It happened at times that the judges would ask what differences they should look for more specifically, and when this happened they were told to strictly follow the instructions in order to not interfere with the results. Furthermore, the judges were encouraged to read their judgement choice out loud for the first few configurations to ensure that their intent aligned with their actions. The instructions were also available in Swedish.

5.5 Judge Experiment Setup

See figure 5.1 for a photo of the experiment setup as seen by the judges. The monitor is a Dell U2412M with standard settings. Instructions are readily available in the background. In the center


Table 5.1. Judge Instructions

You will be shown 100 different configurations. Each configuration shows two spheres - one on the left and one on the right. Once every second the spheres on each side will change, alternating between two spheres on both sides. Your task is to choose one of the following three statements for each configuration:

1. I perceive a smaller change in the spheres on the left side than on the right side.

2. I perceive an equal change in the spheres on each side.

3. I perceive a smaller change in the spheres on the right side than on the left side.

Press the corresponding number key on the keyboard to make your judgement. Use the up arrow key and down arrow key to navigate between configurations.

of the screen is the window displaying a configuration to make a judgement on. The command window at the bottom displays progress when a judgement is made.

5.6 Judge Experiment Results

Summarising the judgements made by the five judges in the 100 configurations, there was 1 vote for option 1 ("I perceive a smaller change in the spheres on the left side than on the right side"), 15 votes for option 2 ("I perceive an equal change in the spheres on each side") and 484 votes for option 3 ("I perceive a smaller change in the spheres on the right side than on the left side"). Votes for option 1 are in favour of the CWCbrt metric, votes for option 2 are undecided and votes for option 3 are in favour of the PAWCIELAB metric. This is because the experiment is set up such that the CWCbrt metric distance is smaller on the left side and the PAWCIELAB metric distance is smaller on the right side. See figure 5.2 for an illustration of the judgement summary.

Grouping the judgements by configuration, we see that in 85 of the 100 configurations, all five judges voted for option 3. In 13 of the 100 configurations, one judge voted for option 2 while the other four judges voted for option 3. In 1 of the 100 configurations, two judges voted for option 2 while the other three judges voted for option 3. Lastly, only once did a judge vote for option 1, and for this configuration the rest of the judges voted for option 3.


Figure 5.1. Photo of experiment setup.


Figure 5.2. Judgement summary (1 vote for option 1, 15 votes for option 2, 484 votes for option 3), illustrated with a cloud area chart.


6. Post Experiment Survey

For the purpose of detecting flaws in the judge experiment and possibly improving it in the future, all judges were asked to fill in a survey directly after they were done making their judgements.

6.1 Post Experiment Survey Responses

None of the judges reported vision impairment. Also, none of them added comments in the free text field at the bottom of the survey.

Only two of the five judges found the instructions clear. One judge wanted more precise instructions on what changes to look for, but this was intentionally left out from the instructions to force the judges into making the decision themselves. Two judges stated that they would have preferred to select the side with the largest change, rather than the smallest change. See figure 6.1.

Figure 6.1.Post experiment survey responses on instructions questions, illustrated with a pie chart.


Figure 6.2.Post experiment survey responses on frequency question, illustrated with a pie chart.

Four out of the five judges found that the sphere alternation frequency was just about perfect, while one found it to be too fast. See figure 6.2.

All judges found that the sphere size was just about perfect. See figure 6.3.

Figure 6.3.Post experiment survey responses on sphere size question, illustrated with a pie chart.

Three judges strongly agreed that the hotkeys were easy to use, and the other two agreed without emphasis. See figure 6.4.


Figure 6.4. Post experiment survey responses on hotkeys question, illustrated with a bar chart. The scale goes from (1) Strongly disagree to (5) Strongly agree.
