Two-Dimensional Convex Hull Algorithm Using the Iterative Orthant Scan

DAVIS FREIMANIS
JONAS HONGISTO

KTH ROYAL INSTITUTE OF TECHNOLOGY

SCHOOL OF COMPUTER SCIENCE AND COMMUNICATION


Bachelor's Thesis in Computer Science
Date: June 5, 2017
Supervisor: Mårten Björkman
Examiner: Örjan Ekeberg

Swedish title: Tvådimensionell Convex Hull Algoritm Med Iterativ Orthant Scan

School of Computer Science and Communication


Abstract

Finding the minimal convex polygon on a set of points in space is of great interest in computer graphics and data science. Furthermore, doing this efficiently is financially preferable. This thesis explores how a novel variant of bucketing, called the Iterative orthant scan, can be applied to this problem. An algorithm implementing the Iterative orthant scan was created and implemented in C++, and its real-time and asymptotic performance was compared to the industry standard algorithm along with the traditional textbook convex hull algorithm. The worst-case time complexity was shown to be a logarithmic term worse than the best theoretical algorithm. The real-time performance was 47.3% better than the traditional algorithm and 66.7% worse than the industry standard for a uniform square distribution. The real-time performance was 61.1% better than the traditional algorithm and 73.4% worse than the industry standard for a uniform triangular distribution. For a circular distribution, the algorithm performed 89.6% worse than the traditional algorithm and 98.7% worse than the industry standard algorithm. The asymptotic performance improved for some of the distributions studied. Parallelization of the algorithm led to an average performance improvement of 35.9% across the tested distributions. Although the created algorithm exhibited worse performance than the industry standard algorithm, it performed better than the traditional algorithm in most cases and shows promise for parallelization.


Sammanfattning

Finding the smallest convex polygon for a set of points is of interest in several areas of computer science. Finding this polygon efficiently is of economic interest. This report explores how the Iterative orthant scan method can be applied to the problem of finding this convex hull. An algorithm utilizing this method was implemented in C++, and its performance was compared, both in real time and asymptotically, against the traditional and the most widely used algorithm. The asymptotic worst case of the new algorithm was shown to be a logarithmic degree worse than the best theoretical algorithm. The real-time performance of the new algorithm was 47.3% better than the traditional algorithm and 66.7% worse than the most widely used algorithm for a square distribution of the points. The real-time performance of the new algorithm was 61.1% better than the traditional algorithm and 73.4% worse than the most widely used algorithm for a triangular distribution of the points. For a circular distribution, our algorithm was 89.6% worse than the traditional algorithm and 98.7% worse than the most common algorithm. Parallelization of our algorithm led on average to an improvement of 35.9%. For some types of distributions, this performance improved further. Even though the algorithm had worse performance than the most common algorithm, it is a promising step that its performance was better than the traditional algorithm.


Contents

1 Introduction
  1.1 Problem Statement
  1.2 Outline
  1.3 Scope
2 Background
  2.1 The Convex Hull Problem
    2.1.1 Graham's Scan
    2.1.2 Quickhull
    2.1.3 Kirkpatrick–Seidel
  2.2 The Iterative Orthant Scan
  2.3 Algorithm Analysis
    2.3.1 Time Complexity
    2.3.2 Space Complexity
  2.4 Parallelization
    2.4.1 Shared-Memory Parallel Computers
    2.4.2 OpenMP
  2.5 Statistical Concepts
    2.5.1 Uniform Random Distribution
    2.5.2 Normal Distribution
    2.5.3 Error Function
3 Algorithm
4 Method
  4.1 Testing Environment
  4.2 Test Data
  4.3 Testing Process
  4.4 Implementation
5 Results
  5.1 Square Interval
  5.2 Circular Interval
  5.3 Triangle Interval
  5.4 Normal Distribution
6 Discussion
  6.1 Discussion of Results
  6.2 Limitations
7 Conclusion
References
Appendix A: Code and Test Data
Appendix B: Result Graphs
Appendix C: Graham's Scan


1 Introduction

Geometric algorithms are a sparsely studied area, often relegated to the dustbin of computational history as a not so enticing field of study.

Nonetheless, there are a number of crucial geometric algorithms that serve as the backdrop to much of modern computer graphics, in particular algorithms such as determining if a point in two or three dimensions is inside a polygon (the crossing test [1]), finding the minimal spherical container for a set of points in three dimensions (the bounding ball [2]) or finding the smallest possible polygon containing a set of points in N dimensions [3].

With the advent of multiprocessor computers and faster GPUs, it has also become increasingly important to be able to work with algorithms in parallel, both financially and time-wise. In recent years, steps have been made to create efficient parallelized versions of old sequential geometric algorithms. A recent result in parallelizing the bounding ball algorithm using a novel bucketing approach, called the Iterative orthant scan, has shown great promise, and it thus becomes an interesting problem to evaluate whether the same method could be used for other geometric algorithms [4].

In this thesis, we present a sequential and a parallel version of an algorithm that computes the convex hull using a variation of the Iterative orthant scan presented in [4], and perform a time complexity comparison of the sequential version with the traditional algorithm for computing the convex hull along with the current state of the art. We also perform a performance comparison between the sequential and parallel versions of our algorithm, the traditional sequential algorithm and the current state-of-the-art sequential algorithm.


1.1 Problem Statement

This thesis will investigate the following:

• Can the Iterative orthant scan be applied to the problem of finding the convex hull?

• What is the worst-case time complexity of a convex hull algorithm implemented using the Iterative orthant scan?

• How does the performance of a convex hull algorithm implemented using the Iterative orthant scan compare to other convex hull algorithms?

• How does the performance of a parallelized convex hull algorithm implemented using the Iterative orthant scan compare to other convex hull algorithms?

1.2 Outline

This report will follow the following structure:

• Chapter 2 introduces the problem of finding the convex hull and details the state of the art, along with an introduction to the theory of algorithm analysis. Also presented is a background on parallelization and the statistical concepts used.

• Chapter 3 presents our algorithm for finding the convex hull, proof of correctness, proof of worst-case time complexity and a discussion of performance relating to different distributions of input.

• Chapter 4 outlines how the performance test is conducted and details the datasets used.

• Chapter 5 contains the result of our testing.

• Chapter 6 discusses the results.

• Chapter 7 includes the conclusions along with a discussion of possible further work.


1.3 Scope

This thesis will restrict itself as follows. We limit the comparison of our algorithm to the following three historic algorithms: Graham's scan, Quickhull and the Kirkpatrick-Seidel algorithm. The parallelized algorithm is limited to CPU parallelization only. Furthermore, the datasets used for performance analysis are randomly generated and as such are not exhaustive and will not extensively cover edge-case data. Finally, the asymptotic analysis and comparison will not include a space complexity analysis and will only constitute an analysis of time complexity.


2 Background

In this chapter, the relevant theoretical information is presented, along with the necessary tools that will be used.

2.1 The Convex Hull Problem

The focus of this thesis is on the problem of finding the convex hull of a set of points. The following definition is used in the literature and we will use it for our investigation.

Definition 1 (Convex hull). The convex hull CS of a set of points S is the smallest subplane containing S such that for any two points s1, s2 ∈ S the line from s1 to s2 does not intersect the boundary of the subplane [5].

A convex hull is visualized in Figure 2.1a. In Figure 2.1b the line between the points s1 and s2 crosses the boundary of the polygon, and as such the hull in 2.1b is not convex.


Figure 2.1: Convex and non-convex hulls. (a) A convex hull; (b) a non-convex hull.

The convex hull of a set A is a type of hull operator on A, which is a closure operator. As such, convex hulls possess the properties of closure operators [6]. The proof of our algorithm requires the following axioms.

Axiom 1. Closure Operator Axioms

Given sets X and Y, the closure operator cl applied to a set A is written cl(A). The closure operator has the following properties:

X ⊆ cl(X)   (A1.1)

X ⊆ Y → cl(X) ⊆ cl(Y)   (A1.2)

cl(cl(X)) = cl(X)   (A1.3)

The following result of these axioms is of particular interest to us.

Corollary 1.

X ⊆ cl(cl(X) ∪ Y )

Proof. Corollary 1 follows directly from the Axiom A1.1:

X ⊆ cl(X) ⊆ cl(X) ∪ Y ⊆ cl(cl(X) ∪ Y )

Finding the convex hull is fundamental in computational geometry and has applications in computer visualization, pattern recognition, collision detection and more. As such, it is of interest to investigate efficient, computationally tractable algorithms for producing the convex hull of a set of points [7][8].

Next follows a presentation of historic algorithms for finding the convex hull, along with the current state of the art.


2.1.1 Graham’s Scan

Graham's scan is the traditional algorithm for finding the convex hull and is the standard algorithm for comparison of new convex hull algorithms. The Graham's scan algorithm first selects the point y1 with the smallest y coordinate. The remaining points are sorted based on the angle they form with y1 relative to the x-axis. The algorithm continues by selecting three points in sequence and checking whether they make a clockwise rotation, as depicted in Figure 2.2 between points C − D − E. If this is the case, the second point is not on the convex hull and it is discarded.

Figure 2.2: Graham’s scan clockwise rotation

This procedure is repeated until the algorithm returns to the starting point y1. Together the remaining points now form the minimal convex hull. The time complexity of the Graham's scan algorithm is bounded above by the sorting step and as such has a worst-case time complexity of O(N · log(N)) [5].
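To make the rotation test concrete, here is a minimal C++ sketch of Graham's scan built around the cross-product orientation check described above. It is an illustration under simplified assumptions (no special handling of duplicate or degenerate collinear inputs), not the thesis implementation, which is listed in Appendix A and as pseudocode in Appendix C.

#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

struct Point { double x, y; };

// Cross product of (b - a) and (c - a): positive for a counter-clockwise
// turn, negative for clockwise, zero for collinear points.
double cross(const Point& a, const Point& b, const Point& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Returns the hull vertices in counter-clockwise order.
std::vector<Point> grahamScan(std::vector<Point> pts) {
    if (pts.size() < 3) return pts;
    // Pivot: lowest y coordinate (leftmost on ties).
    std::swap(pts[0], *std::min_element(pts.begin(), pts.end(),
        [](const Point& a, const Point& b) {
            return a.y < b.y || (a.y == b.y && a.x < b.x);
        }));
    const Point pivot = pts[0];
    // Sort the remaining points by polar angle around the pivot.
    std::sort(pts.begin() + 1, pts.end(),
        [&pivot](const Point& a, const Point& b) {
            return std::atan2(a.y - pivot.y, a.x - pivot.x)
                 < std::atan2(b.y - pivot.y, b.x - pivot.x);
        });
    std::vector<Point> hull;
    for (const Point& p : pts) {
        // Pop points that would make a clockwise (or collinear) turn.
        while (hull.size() >= 2 &&
               cross(hull[hull.size() - 2], hull.back(), p) <= 0)
            hull.pop_back();
        hull.push_back(p);
    }
    return hull;
}

The sign of cross() is exactly the clockwise test of Figure 2.2: a non-positive value means the middle point cannot lie on the hull and is discarded.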

2.1.2 Quickhull

Quickhull is the most widely used convex hull algorithm and is the current industry standard. The Quickhull algorithm selects the two points x1 and x2 with the smallest and largest x coordinates. These points will be in the convex hull. A line is drawn between x1 and x2 that divides the set into two subsets of points. The algorithm then finds the point x3 that is furthest away from the line. A triangle is drawn between x1, x2 and x3, eliminating the points inside the triangle. This procedure is repeated recursively in each subset until there are no points left. The Quickhull algorithm has a worst-case time complexity of O(N²) and an average-case time complexity of O(N · log(N)) [9].
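The recursive elimination step can be sketched compactly in C++. Note one simplification in this illustration: each call rescans the whole point set with a side test instead of carrying the partitioned subsets, trading the bookkeeping that gives Quickhull its average-case bound for brevity. The names are illustrative; this is not the Qhull implementation used later in the thesis.

#include <algorithm>
#include <vector>

struct Point { double x, y; };

// Signed area of triangle (a, b, p): > 0 when p lies left of the line a->b.
double side(const Point& a, const Point& b, const Point& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Recursively adds hull vertices that lie strictly left of the chord a->b.
void quickHullRec(const std::vector<Point>& pts, const Point& a,
                  const Point& b, std::vector<Point>& hull) {
    const Point* farthest = nullptr;
    double maxDist = 0.0;
    for (const Point& p : pts) {
        double d = side(a, b, p);   // distance is proportional to the area
        if (d > maxDist) { maxDist = d; farthest = &p; }
    }
    if (!farthest) return;          // nothing left of a->b: it is a hull edge
    quickHullRec(pts, a, *farthest, hull);
    hull.push_back(*farthest);
    quickHullRec(pts, *farthest, b, hull);
}

std::vector<Point> quickHull(const std::vector<Point>& pts) {
    if (pts.size() < 3) return pts;
    auto byX = [](const Point& a, const Point& b) { return a.x < b.x; };
    Point lo = *std::min_element(pts.begin(), pts.end(), byX);
    Point hi = *std::max_element(pts.begin(), pts.end(), byX);
    std::vector<Point> hull{lo};
    quickHullRec(pts, lo, hi, hull);    // chain above the lo->hi line
    hull.push_back(hi);
    quickHullRec(pts, hi, lo, hull);    // chain below the lo->hi line
    return hull;
}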


2.1.3 Kirkpatrick–Seidel

The theoretically best algorithm is the Kirkpatrick–Seidel algorithm [10].

The Kirkpatrick-Seidel algorithm first finds the vertical line that divides the set of points into two almost equal parts (that is, the median divisor), then finds the upper and lower edges that cross this vertical line (the so-called "bridges") and removes the points between these two edges.

Next, the edges on the lower hull and the upper hull are computed recursively as above until all points have been discarded. This is a reversal of the classical divide-and-conquer steps, and it allows the Kirkpatrick-Seidel algorithm to discard many points that cannot be part of the convex hull. This turns out to be highly fortuitous for the time complexity.

Kirkpatrick and Seidel showed that the time complexity of their namesake algorithm is O(N · log(E)), where E is the number of edges in the final hull. Note that E ≤ N, and as such this is in the average case better than the O(N · log(N)) of the previous algorithms.

Although this algorithm sets the standard in regard to time complexity, it has been shown to be slow in practice [11] due to its large constant term. Because of this, it will not be implemented and compared against in the performance analysis.

2.2 The Iterative Orthant Scan

The Iterative orthant scan, introduced by Larsson, Capannini, and Källberg [4], is a bucketing approach, as it aims to improve the performance of algorithms by clever partitioning of the input data. The Iterative orthant scan uses the fact that the input space can be divided into 2^N orthants around the centroid of the input space. Each orthant is then scanned to produce a set of 2^N or fewer points, where each point has the greatest Euclidean distance to the centroid in its respective orthant.

These extreme points are then used to update the input space, after which the centroid is recalculated. This process is repeated until an exit condition is reached. Larsson, Capannini, and Källberg [4] used this method to find the minimal enclosing N-sphere and observed an increase in performance and convergence rate over conventional methods. The iterative discovery of the extreme points was shown to be efficiently parallelizable on both the CPU and the GPU.
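For intuition, the 2^N bucketing can be implemented by reading the sign of each coordinate of a centered point as one bit of an orthant index. The short sketch below is our illustration of that idea, not code from [4].

#include <array>
#include <cstddef>

// Maps a point, translated so the centroid lies at the origin, to one of
// the 2^N orthants by packing the sign bit of each coordinate into an index.
template <std::size_t N>
std::size_t orthantIndex(const std::array<double, N>& p) {
    std::size_t idx = 0;
    for (std::size_t d = 0; d < N; ++d)
        if (p[d] < 0.0) idx |= (std::size_t(1) << d);
    return idx;
}

// In two dimensions this yields the four quadrants:
// (+, +) -> 0, (-, +) -> 1, (+, -) -> 2, (-, -) -> 3.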


2.3 Algorithm Analysis

This thesis will consider both the theoretical and the practical comparison of algorithms. There are two principal measures by which an algorithm's theoretical limits can be quantified: time complexity and space complexity. This thesis will limit itself to analysis and comparison of time complexity. Space complexity is nonetheless presented here for completeness.

2.3.1 Time Complexity

The time complexity of an algorithm is a function of the size of the input. To compare algorithms, we compare the asymptotic behavior of this function, in particular its upper and lower bounds. How the function of a particular algorithm behaves depends on the model of computation that is used. The two most prolific such models are the uniform cost measure and the logarithmic cost measure [12]. Under the former, each elementary machine operation contributes equally to the function (a constant term of one), while the latter assigns weights to the contributions based on the number of bits involved in the machine operation. This thesis will only consider the uniform model of time complexity. Thus, to generate the function for a given algorithm, we find how many elementary machine operations the algorithm makes for any given input.

2.3.2 Space Complexity

Analogous to time complexity, space complexity is a function of the size of the input. In space complexity, this function measures the memory (most commonly in bits) used during the execution of the algorithm. We are again interested in the upper and lower bounds of the asymptotic behavior of this function as the input size grows. A space complexity analysis will not be performed in this thesis.

2.4 Parallelization

In the 1980s, faster computers were made by increasing the clock frequency of the CPU. As the cores became faster and faster, the power consumption skyrocketed and power management became the biggest challenge for processor manufacturers. Intel's Pentium 4 processor had up to 30 pipeline stages at one point, and its power consumption was exponential in relation to its performance [13]. As a solution to the problematic increase in power consumption, manufacturers started producing processors with multiple cores rather than a higher clock frequency.

In the past, compilers were responsible for making programs run fast on a processor, but with the invention of multi-core processors, the need for detailed input from the programmer became greater. Therefore, many shared-memory APIs were developed for parallelization, including OpenMP and CUDA.

2.4.1 Shared-Memory Parallel Computers

Shared-memory parallel computers (SMPs) are a model of parallel computing where all processors can access the same memory. This is a very effective model used by many parallel programming libraries, including OpenMP. However, it can lead to race conditions, where multiple threads write to the same memory at the same time.

Although SMPs were designed for uniform memory access computers, modern computers almost always use cache-coherent non-uniform memory access.

2.4.2 OpenMP

OpenMP (Open Multi-Processing) is a shared-memory API for parallelization on the CPU and is the current standard [14]. The first version of OpenMP was released for Fortran in 1997 and was later extended to C and C++. OpenMP is not a whole new programming language, but rather a set of compiler directives that allows the programmer to tell the compiler how it should parallelize the program, which is very difficult for the compiler to do without additional information.

OpenMP uses the fork-join programming model, which means that at the beginning of the program there is only one master thread. Later, the program can enter a parallel block where the master thread spawns a number of threads that execute in parallel. Each thread executes the code in the block separately.

(17)

One of the most crucial properties of a shared-memory API is its synchronization tools and how race conditions are treated. In OpenMP, the programmer can choose from several different methods. The barrier method is the most basic and simply tells each thread to wait for all other threads before moving on. Another method is to mark critical sections in the code, telling the threads that only one thread at a time is allowed to run that code segment.
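As a small illustration of the fork-join model and the critical directive (a sketch, not the thesis code), the following program computes a global maximum: each thread scans part of the data in a work-sharing loop, and only one thread at a time folds its local result into the shared one.

#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> data(1000000, 1.0);
    data[12345] = 42.0;
    double globalMax = data[0];

    // Fork: the master thread spawns a team; the loop below is split
    // across the team by the work-sharing directive.
    #pragma omp parallel
    {
        double localMax = data[0];
        #pragma omp for nowait
        for (long i = 0; i < (long)data.size(); ++i)
            if (data[i] > localMax) localMax = data[i];

        // Critical section: one thread at a time updates the shared result.
        #pragma omp critical
        if (localMax > globalMax) globalMax = localMax;
    }   // Join: implicit barrier; all threads finish before execution continues.

    std::printf("max = %f\n", globalMax);
    return 0;
}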


2.5 Statistical Concepts

The central statistical concepts referred to in the thesis are presented next.

2.5.1 Uniform Random Distribution

Data exhibits a uniform random distribution over an interval if a generated point has equal probability of appearing at any point in the interval.

2.5.2 Normal Distribution

The normal distribution is a ubiquitous distribution used to model a wide range of phenomena. In D dimensions, it is defined by a D × 1 mean vector µ, which defines the centroid of the largest point mass of the data, and a D × D covariance matrix Σ, which defines the spread of the data around the mean vector. In the case of the two-dimensional normal distribution, the diagonal elements of Σ define the magnitude of the spread along the axes and the off-diagonal elements define the rotation of the spread relative to the axes. In this thesis, we consider only the two-dimensional normal distribution with a spherical covariance matrix, that is: the covariance matrix has equal diagonal elements σ² and off-diagonal elements equal to 0. For such distributions, the square root of the diagonal elements is the standard deviation σ.
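In the experiments (Chapter 4), the uniform data was produced with rbox and the normal data with MATLAB. Purely as an illustration, equivalent point sets could be drawn with the C++ <random> facilities; independent per-axis normal draws with equal σ realize exactly a spherical covariance matrix.

#include <cstddef>
#include <random>
#include <vector>

struct Point { double x, y; };

// Uniform random points over the square interval |x| <= a, |y| <= b.
std::vector<Point> uniformSquare(std::size_t n, double a, double b,
                                 unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> ux(-a, a), uy(-b, b);
    std::vector<Point> pts(n);
    for (Point& p : pts) p = Point{ux(gen), uy(gen)};
    return pts;
}

// 2D normal points with mean (mx, my) and spherical covariance sigma^2 * I:
// independent per-axis draws give exactly this covariance structure.
std::vector<Point> normalSpherical(std::size_t n, double mx, double my,
                                   double sigma, unsigned seed) {
    std::mt19937 gen(seed);
    std::normal_distribution<double> nx(mx, sigma), ny(my, sigma);
    std::vector<Point> pts(n);
    for (Point& p : pts) p = Point{nx(gen), ny(gen)};
    return pts;
}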

2.5.3 Error Function

The error function erf(x) is defined in Figure 2.3 [15].

erf(x) = (1 / √π) · ∫_{−x}^{x} e^{−t²} dt

Figure 2.3: Error function

The numerical behavior of the error function for positive x is shown in Figure 2.4.


Figure 2.4: Behavior of the error function

The error function relates to the normal distribution in that it allows us to approximate the probability that a given data point appears within the interval [−x, x] around the mean value µ. For the two-dimensional normal distribution with mean vector µ and spherical covariance matrix with diagonal elements σ², the probability that a given data point appears within the circle with radius x centered at µ is given as [15]:

erf(x / (√2 · σ))

Figure 2.5: Fraction of points inside a circle of radius x around µ
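As a usage sketch, std::erf from <cmath> (available since C++11) evaluates this expression directly; the helper name below is illustrative.

#include <cmath>
#include <cstdio>
#include <initializer_list>

// Expected fraction of points within distance x of the mean, per the
// formula of Figure 2.5.
double fractionInsideCircle(double x, double sigma) {
    return std::erf(x / (std::sqrt(2.0) * sigma));
}

int main() {
    // One, two and three standard deviations from the mean.
    for (double k : {1.0, 2.0, 3.0})
        std::printf("radius %.0f*sigma: %.4f\n", k, fractionInsideCircle(k, 1.0));
    return 0;
}

For σ = 1 this prints approximately 0.6827, 0.9545 and 0.9973 for radii of one, two and three standard deviations.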


3 Algorithm

This chapter presents our algorithms, the proof of correctness, the worst-case time complexity and considerations for various kinds of input distributions.

Our algorithm is presented in Algorithm 1.

Algorithm 1: Orthant Scan CH

1: Array of points P {(x1, y1), (x2, y2), ..., (xn, yn)}
2: Array of convex hull points BP {}
3: while P is NOT empty do
4:     INVARIANT: all discarded points from P are either on or inside the hull BP, BP is convex and BP is minimal.
5:     Find centroid C of {P ∪ BP}                                ▷ O(N)
6:     Adjust coordinates of {P ∪ BP} with C as the new origin    ▷ O(N)
7:     Partition P into {P1, P2, P3, P4} based on quadrants       ▷ O(N)
8:     EP ← maxOrthantPoints({P1, P2, P3, P4})                    ▷ O(N)
9:     BP ← grahamScan({EP ∪ BP})                                 ▷ O(N · log(N))
10:    P ← P − BP                                                 ▷ O(N)
11:    OP ← findOuter(BP, P)                                      ▷ O(N · E)
12:    P ← OP                                                     ▷ O(N)
13: end while
14: Return BP

We start with an input of an array of N points as a list of Cartesian coordinate pairs, along with an empty result array. These points are not necessarily distributed around the (0,0) origin, and hence we start by finding the mean origin of the given point space (Figure 3.1).


c = (1 / |P|) · Σ_{p ∈ P} (p.x, p.y)

Figure 3.1: New origin

On line 6 we adjust each point with respect to this new origin, using the translation formula in Figure 3.2.

P = {(p.x − c.x, p.y − c.y) | p ∈ P }

Figure 3.2: Translation of points

Next, we partition the translated set of points on the basis of quadrant (see Figure 3.3).

P1 = {p | p.x ≥ 0 ∧ p.y ≥ 0, p ∈ P}
P2 = {p | p.x < 0 ∧ p.y > 0, p ∈ P}
P3 = {p | p.x ≤ 0 ∧ p.y ≤ 0, p ∈ P}
P4 = {p | p.x > 0 ∧ p.y < 0, p ∈ P}

Figure 3.3: Quadrant partition

This array of arrays of points is then analyzed for extreme points using the maxOrthantPoints procedure (Algorithm 2, Figure 3.4b). This step is the titular orthant scan. Next, we make use of the grahamScan procedure (Section 2.1.1, Appendix C and Figure 3.4c) to produce a convex hull of these extreme points along with the points currently in the result array. Then we remove all the boundary points from the set of points P (Figure 3.4d). Lines 11 and 12 find all of the outer points of the convex hull BP and update the original list of points with the outer points. These steps are iterated until the input array P is empty. At this point, we output BP, which now contains the final hull. The progression of the algorithm is illustrated in Figure 3.4.


Figure 3.4: Progression of the Orthant scan CH algorithm. (a) Points; (b) extreme points; (c) convex hull on the extreme points; (d) removing interior points; (e) final hull; (f) final hull with all points.


The application of the Iterative orthant scan approach is through the maxOrthantPoints procedure, presented in Algorithm 2.

Algorithm 2: maxOrthantPoints

1: Array of arrays of points Q {{(x1, y1), ..., (xk, yk)}, {(xk+1, yk+1), ...}}
2: EP ← {}
3: for P ∈ Q do
4:     if P is empty then
5:         continue
6:     end if
7:     curX ← (0, 0)
8:     curY ← (0, 0)
9:     curEucl ← {(0, 0)}
10:    for p ∈ P do
11:        if p.x² > curX.x² ∨ (p.x² = curX.x² ∧ p.y² > curX.y²) then
12:            curX ← p
13:        end if
14:        if p.y² > curY.y² ∨ (p.y² = curY.y² ∧ p.x² > curY.x²) then
15:            curY ← p
16:        end if
17:        if p.x² + p.y² = curEucl[0].x² + curEucl[0].y² then
18:            curEucl ← curEucl ∪ p
19:        end if
20:        if p.x² + p.y² > curEucl[0].x² + curEucl[0].y² then
21:            curEucl ← {p}
22:        end if
23:    end for
24:    EP ← EP ∪ curX ∪ curY ∪ curEucl
25: end for
26: Return EP

For each list of points, maxOrthantPoints finds the point with the largest absolute x coordinate (lines 11-13) and the point with the largest absolute y coordinate (lines 14-16), along with the set of points with the largest Euclidean distance from the origin (lines 17-22). In case of multiple points with the largest x or y coordinate, we always pick the one with the largest y or x coordinate respectively. Finally, we return the complete set of these extreme points. The procedure on lines 3-25 is easily parallelized. We note that our approach is a modified version of the orthant scan used by Larsson, Capannini, and Källberg [4], as we find not only the points with the largest Euclidean distance but also the extreme points along each axis.
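For illustration, a direct C++ transcription of the inner loop of Algorithm 2 for a single quadrant could look as follows (illustrative names, not the Appendix A code). The outer loop over quadrants (lines 3-25) is the part that parallelizes naturally, for example with an OpenMP work-sharing construct.

#include <vector>

struct Point { double x, y; };

// Extreme points of one quadrant, per Algorithm 2: the point with the
// largest absolute x, the point with the largest absolute y, and all
// points tied for the largest squared Euclidean distance from the origin.
std::vector<Point> orthantExtremes(const std::vector<Point>& P) {
    if (P.empty()) return {};
    auto sq = [](double v) { return v * v; };
    Point curX{0, 0}, curY{0, 0};
    std::vector<Point> curEucl{{0, 0}};
    for (const Point& p : P) {
        // Lines 11-13: largest x^2, ties broken by larger y^2.
        if (sq(p.x) > sq(curX.x) ||
            (sq(p.x) == sq(curX.x) && sq(p.y) > sq(curX.y)))
            curX = p;
        // Lines 14-16: largest y^2, ties broken by larger x^2.
        if (sq(p.y) > sq(curY.y) ||
            (sq(p.y) == sq(curY.y) && sq(p.x) > sq(curY.x)))
            curY = p;
        // Lines 17-22: keep the set of points at maximal distance.
        double d = sq(p.x) + sq(p.y);
        double best = sq(curEucl[0].x) + sq(curEucl[0].y);
        if (d == best) curEucl.push_back(p);   // exact tie, as in line 17
        else if (d > best) curEucl = {p};
    }
    // Line 24: the pseudocode takes a set union, which also deduplicates;
    // this sketch may return the same point more than once.
    std::vector<Point> ep{curX, curY};
    ep.insert(ep.end(), curEucl.begin(), curEucl.end());
    return ep;
}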

The findOuter procedure, presented in Algorithm 3, iterates over the points and removes the ones that are interior to the convex hull (BP) returned by the grahamScan procedure. Lines 3-7 check whether a given point is inside the hull. This is done by splitting the convex polygon into |BP| = E triangles and checking whether the point p is inside one of the triangles; the time complexity is therefore O(N · E).

Algorithm 3: findOuter

1: Border points BP, array of points P
2: OP ← {}                                  ▷ Outer points
3: for p ∈ P do
4:     if p is not inside the hull BP then
5:         OP ← OP ∪ p
6:     end if
7: end for
8: Return OP
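A hedged sketch of the triangle test behind Algorithm 3: assuming the hull BP is given in boundary order and the centroid has been translated to the origin (line 6 of Algorithm 1), the hull is fanned into E triangles around the origin, which is what yields the O(N · E) bound.

#include <cstddef>
#include <vector>

struct Point { double x, y; };

double cross(const Point& a, const Point& b, const Point& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if p lies inside or on triangle (a, b, c): the three cross
// products may not have strictly mixed signs.
bool inTriangle(const Point& p, const Point& a, const Point& b, const Point& c) {
    double d1 = cross(a, b, p), d2 = cross(b, c, p), d3 = cross(c, a, p);
    bool hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
    bool hasPos = d1 > 0 || d2 > 0 || d3 > 0;
    return !(hasNeg && hasPos);
}

// Keep only the points outside the hull BP (vertices in boundary order).
// For each of the N points, up to E triangles are tested: O(N * E).
std::vector<Point> findOuter(const std::vector<Point>& BP,
                             const std::vector<Point>& P) {
    const Point origin{0.0, 0.0};  // the centroid after the translation step
    std::vector<Point> OP;
    for (const Point& p : P) {
        bool inside = false;
        for (std::size_t i = 0; i < BP.size() && !inside; ++i)
            inside = inTriangle(p, origin, BP[i], BP[(i + 1) % BP.size()]);
        if (!inside) OP.push_back(p);
    }
    return OP;
}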


Theorem 1. Given a set P of n points as (x, y) coordinate pairs, the Orthant scan CH algorithm will terminate and output a minimal convex hull.

Proof. We consider the cases when n = 0 and n > 0.

For n = 0, P is empty at initialization. The algorithm will not enter the loop on lines 3-13 and will return the empty set BP. This is congruent with Theorem 1, since an empty set is a minimal hull for an empty set of points.

For n > 0, we consider the correctness of the invariant of the loop on lines 3-13. As per the earlier argument, BP will contain a minimal convex hull for the discarded points from P before the first iteration, as both sets are empty. Thus, the invariant is true before the first iteration.

During the first iteration, we find a convex hull made up of the extreme points of the initial P. The correctness of the grahamScan procedure ensures that the hull found (BP) on the extreme points is minimal and convex [5], hence the second and third parts of the invariant are true. Next, we remove only the points that are on or inside the hull BP from P. Note that BP is also the minimal convex hull of the union of the extreme points and the interior points. Hence, the first part of the invariant is true after the first iteration. Thus, the invariant is true after the first iteration.

On subsequent iterations, only points outside BP are considered for extreme points. These points are combined with BP and a new hull is produced using the grahamScan procedure. Similarly, this new hull will be minimal and convex. We note that the previous hull BP must be a subset of the new hull. This is the content of Corollary 1. Thus, the interior points of the previous hull must also be interior points of the current hull. Hence, the first part of the invariant is fulfilled.

Therefore, by the laws of induction and the proof for the first iteration above, the invariant is true during all iterations of the loop.

Every iteration of the loop considers a minimum of one new extreme point, and discards it on line 11. Thus P will be empty after at most n iterations. Hence the algorithm will always terminate. Since P will be empty at the termination of the algorithm, the set of discarded points will be equal to P, and thus, as per the correctness of the invariant from above, we will have produced a minimal convex hull for the points in P. This completes the proof.


Theorem 2. The worst-case time complexity of the Orthant scan CH algorithm is O(N² · (E + log(N))), where E is the number of edges in the hull.

Proof. We consider each subroutine in turn.

Finding the centroid on line 5 requires us to sum each point in P. For each point, there are two arithmetic addition operations, and a single division operation once the sum is calculated. Thus, this step is bounded above by N. Adjusting the coordinates on line 6 requires us to translate each point. For each point, two arithmetic operations are done. Thus, this step is bounded above by 2N. Partitioning the points into sets based on quadrant (line 7) requires us to examine the x and y coordinates of each point. Thus this step is bounded above by N. The maxOrthantPoints procedure iterates once over all the points, making a constant number of operations on each. Thus, its time complexity is K · N, where K is a constant. The grahamScan procedure has a known time complexity of O(N · log(N)).

Discarding the hull points from P requires one iteration over all the points and is thus bounded by N. The findOuter procedure considers each triangle formed by an edge in the hull and the center, and iterates over all points in P to find the ones that lie outside the hull. Thus, its time complexity is E · N, where E is the number of edges in the hull. Discarding points requires one iteration over all the points. Thus, its time complexity is N.

The loop body on lines 3-13 thus has time complexity 2N + 2N + 2N + K · N + E · N + log(N) · N + N. The loop on lines 3-13 will run at most N times, and thus the total time complexity of the algorithm is N · (2N + 2N + 2N + K · N + E · N + log(N) · N + N). That is, it is bounded above by O(N² · (E + log(N))).

Thus, the proof is complete.

It should be noted that this is the worst-case time complexity. In actuality, the number of points considered in each step of the algorithm is, save for a few edge cases, less than n, since we are able to discard all the interior points in each step and thus not consider them in subsequent iterations of the loop. Additionally, the number of points considered by the grahamScan procedure is at most n (in the situation where all n points are on the hull) and this can only occur once (in the final iteration). Similarly, the number of points checked on line 10 is at most n during the first iteration and, with the exception of edge cases, will decrease substantially as we are able to discard points.


Depending on the nature of the distribution of points, the asymptotic time complexity of the algorithm can improve. Next, we present two examples of distributions with better than worst-case asymptotic time complexity.

If points are distributed uniformly over a square interval ({(x, y) : |x| ≤ a, |y| ≤ b}), the convex hull will trivially tend to the set of extreme points {(a, b), (−a, b), (−a, −b), (a, −b)} as the number of points grows, and as such the loop will only iterate once; thus the worst-case performance becomes linear. This phenomenon is illustrated in Figure 3.5, where the convex hull of 5000 points is made up of only a few points.

Figure 3.5: Hull on square interval

If the points are distributed on a perfect circle, the time complexity reduces to O(N · log(N)). This is because the first iteration will find all N points as extreme points, then find the hull on these points using the grahamScan procedure and discard them. P will now be empty, and the findOuter procedure will return an empty set OP in constant time. Thus P will be empty after the first iteration and the algorithm will terminate. Since the loop only ran once and the findOuter procedure returned in constant time, the time complexity of the Orthant scan CH on a set of points forming a perfect circle is bounded above by Graham's scan, and thus the worst-case time complexity reduces to O(N · log(N)).

Our theoretical analysis of the Orthant scan CH algorithm is thus complete.


Although the constant term is not of interest when considering the asymptotic time complexity, it is of interest when analyzing real-time performance. Next, we consider how the constant term varies over an important distribution which does not lead to a reduction in worst-case time complexity, namely the normal distribution. If the points exhibit a normal distribution with mean vector (x, y) and spherical covariance matrix Σ with diagonal values σ², the algorithm will also have improved performance. First the origin is shifted to (x, y), after which we know that the circle with radius equal to the smallest distance from (x, y) to an edge of the hull will be completely inside the hull. Thus, we can compare the radius of this smallest circle to σ and, using the error function (as in Section 2.5.3), calculate the probability that a point is inside this circle, which is equivalent to finding the expected fraction of points within this circle. Figure 3.6 demonstrates this calculation.

P ∼ N((x, y), Σ)

H1 = {((x0, y0), (x1, y1)), ((x1, y1), (x2, y2)), ..., ((xt, yt), (x0, y0))}

dist((xa, ya), (xb, yb)) = |(yb − ya)·x − (xb − xa)·y + xb·ya − yb·xa| / √((yb − ya)² + (xb − xa)²)

rmax = min{dist((xk, yk), (xk+1, yk+1)) | 0 ≤ k ≤ t, with (xt+1, yt+1) = (x0, y0)}

ξ = erf(rmax / (√2 · σ))

Figure 3.6: A lower bound for the fraction of points inside hull H1

Figure 3.7 illustrates the above calculation.


Figure 3.7: Largest circle around the mean

The qualitative content of the calculation, due to the nature of the error function, is that if rmax ≥ 2·√2·σ then ξ ≥ 0.99, that is, 99% of the points should statistically be within the hull H1. Since the extreme points are evenly spread over the quadrants, the value of rmax should be large, and thus we expect that a large fraction of the input points is discarded in the first iteration. As such the performance improves, since the number of iterations decreases as multiple points can be discarded in a single iteration, and subsequent iterations only need to consider a small fraction of the original point set. The constant term is reduced as both the number of iterations of the loop and the number of points considered in each iteration are reduced. Thus, the algorithm seems well suited for normally distributed data.


4 Method

This chapter presents the datasets used, the method used for measuring performance, and the technologies used in conducting the experiments.

4.1 Testing Environment

All programs were written in C++11, compiled with g++ 5.4.0, and used OpenMP 4.0. The results were gathered on a computer with an Intel i7-4500U CPU clocked at 1.8 GHz, 8 GB of 1600 MHz DDR3L RAM and an SSD.

4.2 Test Data

The datasets used for testing can be found in Appendix A. The testing data consists of two-dimensional uniformly distributed points over square, circular and triangular intervals, and a set of two-dimensional normally distributed points with mean (0, 0) and spherical covariance with diagonal elements equal to 1. The sizes of the datasets were 5 000, 50 000, 500 000 and 5 000 000. These sizes were chosen because datasets with fewer than 5 000 points had a very low running time, and sets larger than 5 000 000 did not yield any discernible additional information. Additionally, the square and circle distributions are used in other contemporary papers for benchmarking convex hull algorithms [16] and other geometric algorithms [4], so it seems reasonable to use them for our purposes as well. The triangular distribution was chosen because of its distinct shape compared to the square distribution. Finally, the normally distributed data was picked due to its ubiquitous nature. The testing data for the square, triangular and circular distributions was generated using the rbox program included in the Qhull package [17]. The normally distributed data was generated in MATLAB.

4.3 Testing Process

For each distribution, the average time of 10 runs on each dataset was measured. The tests were performed with three different sequential algorithms: Quickhull, Graham's scan and Orthant scan CH. The parallel version of Orthant scan CH was also tested. For the normally distributed data, the calculations of Figure 3.6 were carried out to find the average lower bound on the fraction of points expected to be within the hull after the first iteration.
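As an illustration of the measurement procedure, an average over 10 runs can be taken with std::chrono; the harness below is a sketch, not the one used for the reported numbers (see Appendix A).

#include <chrono>
#include <functional>
#include <vector>

struct Point { double x, y; };

// Mean wall-clock time, in seconds, of `runs` executions of `algo`.
double meanRuntime(const std::function<void(const std::vector<Point>&)>& algo,
                   const std::vector<Point>& dataset, int runs = 10) {
    using clock = std::chrono::steady_clock;
    double total = 0.0;
    for (int i = 0; i < runs; ++i) {
        clock::time_point t0 = clock::now();
        algo(dataset);            // run the convex hull algorithm under test
        clock::time_point t1 = clock::now();
        total += std::chrono::duration<double>(t1 - t0).count();
    }
    return total / runs;
}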

4.4 Implementation

Both the sequential and parallel versions of the Orthant scan CH algorithm were implemented in C++. These implementations are found in Appendix A. The parallel version was implemented using the OpenMP parallelization API described in Section 2.4.2. OpenMP is one of the most widely used parallelization libraries and was therefore chosen over other libraries for the experiments. The easy setup and the flexibility of hardware it supports were also key factors in choosing OpenMP. The implementations are identical except that the parallel version runs the maxOrthantPoints and findOuter procedures using multiple threads.

The Quickhull algorithm used is the one implemented in the open-source program Qhull [17].

The Graham's scan algorithm is simply the procedure used in our algorithm, applied to all points. The implementation is found in Appendix A.


5 Results

The running times of the three sequential convex hull algorithms and the parallelized version for different distributions of data are presented in this chapter. For each distribution and number of points, the time presented is the average over 10 runs. Note that the running time for 500 points is less than 10⁻³ seconds, and as such is not included in the tables. The figures use the 500-point versions due to their succinct visualization of the nature of these distributions. For each distribution, a table of the number of iterations and the average running time of the maxOrthantPoints and findOuter procedures is displayed for the 5 000 000-point set. The grahamScan procedure had a negligible running time and as such is not presented. All times are in seconds, rounded to the nearest thousandth. Note that in the tables, the Orthant scan CH is denoted OSCH and the Graham's scan is denoted GS. The parallelized version is prefixed with Para. In addition to the tables, Appendix B contains loglog plots of the results. In these figures, the y-axis denotes the execution time in seconds while the x-axis denotes the number of points.


5.1 Square Interval

For the square interval, visualized in Figure 5.1, the Orthant scan CH algorithm performed better than Graham's scan but worse than the Quickhull algorithm. For the case of 5 000 000 points, the Orthant scan CH was 47.3% faster than Graham's scan, while the Quickhull algorithm was 66.7% faster than our algorithm. The parallel version of the Orthant scan CH was 50.8% faster than the sequential version and 74.1% faster than Graham's scan, while the Quickhull was 47.7% faster than the parallel version. The results are presented in Tables 5.1, 5.2 and Figure B.1.

Figure 5.1: 500 points in a square interval

Points      Quickhull   GS      OSCH    Para OSCH
5 000       0.001       0.002   0.002   0.010
50 000      0.005       0.039   0.023   0.036
500 000     0.062       0.457   0.240   0.160
5 000 000   0.923       5.262   2.773   1.363

Table 5.1: Uniform random distribution in a square interval


Algorithm   Iterations   maxOrthantPoints   findOuter
OSCH        8            0.109              0.128
Para OSCH   8            0.065              0.055

Table 5.2: Subroutine performance on 5 000 000 points over a square interval

Number of points/edges in final hull: 24

5.2 Circular Interval

For the circular interval, visualized in Figure 5.2, the Orthant scan CH algorithm performed worse than both Graham's scan and the Quickhull algorithm. For the case of 5 000 000 points, Graham's scan was 89.6% faster than the Orthant scan CH algorithm and the Quickhull algorithm was 98.7% faster than the Orthant scan CH algorithm. The parallel version of the Orthant scan CH was 38.1% faster than the sequential version, while Graham's scan was 83.2% faster than the parallel version and the Quickhull was 98.0% faster than the parallel version. The results are presented in Tables 5.3, 5.4 and Figure B.2.

Figure 5.2: 500 points in a circular interval


Points      Quickhull   GS      OSCH     Para OSCH
5 000       0.001       0.003   0.006    0.011
50 000      0.006       0.039   0.081    0.165
500 000     0.075       0.478   0.998    0.625
5 000 000   0.708       5.810   55.985   34.631

Table 5.3: Uniform random distribution in a circular interval

Algorithm   Iterations   maxOrthantPoints   findOuter
OSCH        106          0.013              0.503
Para OSCH   106          0.007              0.202

Table 5.4: Subroutine performance on 5 000 000 points over a circular interval

Number of points/edges in final hull: 459

5.3 Triangle Interval

For the triangular interval, visualized in Figure 5.3, the Orthant scan CH algorithm performed better than Graham's scan but worse than the Quickhull algorithm. For the case of 5 000 000 points, the Orthant scan CH algorithm was 61.1% faster than Graham's scan, while the Quickhull algorithm was 73.4% faster than our algorithm. The parallel version of the Orthant scan CH was 26.6% faster than the sequential version and 71.5% faster than Graham's scan, while the Quickhull was 63.9% faster than the parallel version. The results are presented in Tables 5.5, 5.6 and Figure B.3.


Figure 5.3: 500 points in a triangular interval

Points      Quickhull   GS      OSCH    Para OSCH
5 000       0.001       0.003   0.004   0.008
50 000      0.005       0.045   0.040   0.095
500 000     0.055       0.486   0.234   0.159
5 000 000   0.565       5.486   2.132   1.565

Table 5.5: Uniform random distribution in a triangular interval

Algorithm   Iterations   maxOrthantPoints   findOuter
OSCH        12           0.070              0.061
Para OSCH   12           0.057              0.041

Table 5.6: Subroutine performance on 5 000 000 points over a triangular interval

Number of points/edges in final hull: 19

5.4 Normal Distribution

The normally distributed data was generated with mean (0, 0) and a spherical covariance matrix with diagonal values equal to 1. The distribution is visualized in Figure 5.4. For this distribution, the Orthant scan CH algorithm performed better than Graham's scan but worse than the Quickhull algorithm. For the case of 5 000 000 points, the Orthant scan CH algorithm was 60.8% faster than Graham's scan, while the Quickhull algorithm was 77.7% faster than our algorithm. The parallel version of the Orthant scan CH was 28.3% faster than the sequential version and 71.9% faster than Graham's scan, while the Quickhull was 68.9% faster than the parallel version. In addition to the performance comparison below, the calculations in Figure 3.6 were done on each of the generated distributions, and the average percentage of points discarded in the first iteration was 99.97%. The results are presented in Tables 5.7, 5.8 and Figure B.4.

Figure 5.4: 500 points in a normal distribution

Points      Quickhull   GS      OSCH    Para OSCH
5 000       0.002       0.003   0.003   0.005
50 000      0.005       0.039   0.022   0.016
500 000     0.050       0.457   0.244   0.172
5 000 000   0.465       5.318   2.083   1.494

Table 5.7: Normal distribution


Algorithm   Iterations   maxOrthantPoints   findOuter
OSCH        7            0.211              0.185
Para OSCH   7            0.148              0.097

Table 5.8: Subroutine performance on 5 000 000 points over a normal distribution

Number of points/edges in final hull: 23


6 Discussion

This chapter contains the discussion and analysis of the results produced in Chapter 5 and of the time complexity discussed in Chapter 3. It also contains a subsection on the limitations of our analysis.

6.1 Discussion of Results

The Orthant scan CH algorithm has a worse worst-case time complexity than the conventional algorithms. Compared to the Kirkpatrick-Seidel algorithm, the worst-case time complexity is one degree higher, along with having a worse logarithmic term. Compared to the Quickhull algorithm, the worst-case time complexity is a log(N) term worse.

As presented in the analysis, it is clear that for many types of distributions the Orthant scan CH algorithm reduces to a linear bound or is bounded above by the grahamScan procedure. As such, the asymptotic behavior is very dependent upon the nature of the data. The constant term could be argued to be small in the case of the normal distribution, which lends credence to the notion that the algorithm is useful, given the ubiquity of normally distributed data. However, it is generally clear that the Orthant scan CH is not an improvement over state-of-the-art algorithms in terms of asymptotic performance.

For the tested distributions, it was clear that the maxOrthantPoints and findOuter procedures were the most expensive in terms of time. Both procedures were on average several magnitudes more costly than any other procedure. This was expected, since the grahamScan procedure only in very rare circumstances runs over the complete set of data, and in all of the distributions presented only ran over a set of points approximately equal in size to the number of points in the final hull. The other procedures were linear with a small constant term and as such were also expected to cost less than maxOrthantPoints and findOuter. These parts can benefit most from parallelization.

On the square interval, the Orthant scan CH algorithm ran faster than the Graham's scan algorithm. The number of iterations was also low compared to the number of points in the input. This shows promise, in that the number of exterior points added to the hull decreased substantially in each iteration. The state-of-the-art Quickhull algorithm, on the other hand, performed faster than the Orthant scan CH algorithm. Of note is that the Orthant scan CH algorithm did not exhibit linear time even for 5 000 000 points. This implies that to achieve the theoretical advantage of the square interval the data must be even larger.

On the triangular interval, the Orthant scan CH performed faster than Graham's scan and slower than the Quickhull algorithm. The performance was similar to the square interval, which was to be expected due to the similarity of the distributions.

On the circular interval, the Orthant scan CH algorithm degenerated and its performance was an order of magnitude slower than both Graham's scan and the Quickhull algorithm. This was expected, because the nature of the worst case for the Orthant scan CH algorithm is exactly an almost circular hull. The almost circular hull requires many iterations, and only a few points are added to the hull and discarded in each iteration, while each iteration requires one execution of the expensive findOuter procedure. This problem is compounded by the fact that the disc had many points in the final hull compared to the triangular and square distributions, and as such the E term in the findOuter procedure was large in comparison. The maxOrthantPoints procedure had running times similar to those on the square and triangular intervals, and it can thus be concluded that for the circular interval it was the findOuter procedure which was the major contributor to the degenerate running time. It is reasonable to conclude that the theoretical advantage of the perfect circle hull is very hard to realize with real data due to the floating-point limitations of a computer system. Thus, it is clear that for data distributed uniformly over a circular disc the Orthant scan CH algorithm is not useful.

Our algorithm's performance improved on normally distributed data compared to its performance on the triangular, square and circular intervals. This was expected, as per the analysis of the constant term in Chapter 3. The experimental results pertaining to the lower bound of the fraction of points inside the first hull show that on average more than 99.9% were statistically expected to be discarded in the first iteration. In addition to reinforcing the notion of the small constant term, this result also implies that were the algorithm to terminate prematurely after the first iteration, the hull produced would be a good approximation of the final hull. As such, the algorithm seems well positioned in environments where the running time has an upper limit. It seems reasonable to conclude that our algorithm shows great promise in fields where the generation of the hull of normally distributed points with spherical covariance is desired.

Parallelization of the maxOrthantPoints and findOuter procedures led to a clear improvement in performance for larger input sizes. On average, the percentage improvement in the case of 5 000 000 points across all tests was 35.95%. For the smaller input sizes, the parallelized version performed worse due to the added overhead; however, already at 500 000 points the advantage of parallelization was clear. The improvement was not substantial enough to match the sequential Quickhull algorithm. This implies that although the Orthant scan CH can be improved using parallelization, it is not capable of competing with the Quickhull algorithm, even when parallelized. It is clear that the findOuter and maxOrthantPoints procedures (as presented) are too costly to be successfully used in an algorithm that improves on the state of the art, even when parallelized.

6.2 Limitations

The datasets tested in this thesis are limited to the ones presented. This means that there is a possibility that several important edge cases, where the algorithm's performance suffers or improves, have not been considered.

Additionally, no type of covariance other than the spherical was investigated, and as such the analysis should be taken as pertaining only to normal distributions with spherical covariance.


7 Conclusion

In this thesis, we have presented an application of the Iterative orthant scan to the problem of finding the convex hull. It is clear that our algorithm performs better than the conventional Graham's scan algorithm on point distributions over a square or triangular interval. The algorithm is, however, significantly slower than the current state of the art, both in performance and in the theoretical asymptotic case. This is true also for the parallelized version of the Iterative orthant scan, which, although exhibiting substantial performance improvements compared to the sequential version, did not match the performance of the Quickhull algorithm.

In conclusion: we have shown that the Iterative orthant scan can be used in an algorithm to find the convex hull, that the worst-case time complexity of such an algorithm is worse than the state of the art, that the performance of such an algorithm compares well to the traditional algorithm for finding the convex hull, and that the performance of a parallelized version of such an algorithm also compares well to the traditional algorithm. We thus consider the problem statements answered.

Although the results show that our application of the Iterative orthant scan does not improve on the state of the art, it is arguable that the method shows promise, in particular if the distribution of the data is known. A possible area of further study is evaluating the presented algorithm on other datasets and distributions.


References

[1] P.S. Heckbert. Graphics Gems IV. Graphics Gems Series. AP Professional, 1994. ISBN: 9780123361554. URL: https://books.google.se/books?id=CCqzMm%5C_-WucC.

[2] Bernd Gärtner. "Fast and Robust Smallest Enclosing Balls". In: Proceedings of the 7th Annual European Symposium on Algorithms. ESA '99. London, UK: Springer-Verlag, 1999, pp. 325–338. ISBN: 3-540-66251-0. URL: http://dl.acm.org/citation.cfm?id=647909.740295.

[3] R. Miller and Q. F. Stout. "Efficient Parallel Convex Hull Algorithms". In: IEEE Trans. Comput. 37.12 (1988), pp. 1605–1618. ISSN: 0018-9340. DOI: 10.1109/12.9737.

[4] Thomas Larsson, Gabriele Capannini, and Linus Källberg. "Parallel computation of optimal enclosing balls by iterative orthant scan". In: Computers & Graphics 56 (2016), pp. 1–10. ISSN: 0097-8493. DOI: 10.1016/j.cag.2016.01.003. URL: http://www.sciencedirect.com/science/article/pii/S0097849316300024.

[5] Mark de Berg et al. "Computational geometry". In: Computational Geometry. Springer, 2000, pp. 1–17.

[6] M.L.J. van de Vel. Theory of Convex Structures. North-Holland Mathematical Library. Elsevier Science, 1993. ISBN: 9780080933108. URL: https://books.google.se/books?id=oDUmytVzSPMC.

[7] Godfried T. Toussaint and David Avis. "On a convex hull algorithm for polygons and its application to triangulation problems". In: Pattern Recognition 15.1 (1982), pp. 23–29. ISSN: 0031-3203. DOI: 10.1016/0031-3203(82)90057-7. URL: http://www.sciencedirect.com/science/article/pii/0031320382900577.

[8] Ming C. Lin, Dinesh Manocha, and Jon Cohen. Collision Detection: Algorithms and Applications. 1996.

[9] C. Bradford Barber, David P. Dobkin, and Hannu Huhdanpaa. "The quickhull algorithm for convex hulls". In: ACM Transactions on Mathematical Software (TOMS) 22.4 (1996), pp. 469–483.

[10] David G. Kirkpatrick and Raimund Seidel. "The ultimate planar convex hull algorithm?" In: SIAM Journal on Computing 15.1 (1986), pp. 287–299.

[11] Mary M. McQueen and Godfried T. Toussaint. "On the ultimate convex hull algorithm in practice". In: Pattern Recognition Letters 3.1 (1985), pp. 29–34.

[12] Alfred V. Aho and John E. Hopcroft. The Design and Analysis of Computer Algorithms. 1st ed. Boston, MA: Addison-Wesley, 1974. ISBN: 0201000296.

[13] Ed Grochowski and Murali Annavaram. "Energy per instruction trends in Intel microprocessors". In: Technology@Intel Magazine 4.3 (2006), pp. 1–8.

[14] Rohit Chandra. Parallel Programming in OpenMP. Morgan Kaufmann, 2001.

[15] Larry C. Andrews. Special Functions of Mathematics for Engineers. 2nd ed. PM49. Bellingham: SPIE, 2005. ISBN: 0-8194-2616-4.

[16] G. Mei, J. C. Tipper, and N. Xu. "An algorithm for finding convex hulls of planar point sets". In: Proceedings of the 2012 2nd International Conference on Computer Science and Network Technology. 2012, pp. 888–891. DOI: 10.1109/ICCSNT.2012.6526070.

[17] Qhull. Qhull code for Convex Hull, Delaunay Triangulation, Voronoi Diagram, and Halfspace Intersection about a Point. 1995. URL: http://www.qhull.org/.


Appendix A: Code and Test Data

Source code is uploaded to a git repository found at:

https://github.com/davisfreimanis/KEX2017


Appendix B: Result Graphs

Figure B.1: Performance on uniform random distribution in a square interval


Figure B.2: Performance on uniform random distribution in a circular interval

Figure B.3: Performance on uniform random distribution in a triangular interval


Figure B.4: Performance on normal distribution


Appendix C: Graham's Scan

Algorithm 4: Graham's Scan

1: procedure GRAHAMSSCAN(EP)
2:     let N = number of points
3:     let points[N] = array of points EP
4:     swap points[0] with the point with the lowest y-coordinate
5:     sort points by polar angle with points[0]
6:     let M = 1
7:     for i ← 2 to N do
8:         ▷ Find next valid point on convex hull.
9:         while ccw(points[M−1], points[M], points[i]) ≤ 0 do
10:            if M > 1 then
11:                M ← M − 1
12:            else if i = N then
13:                break
14:            else
15:                i ← i + 1
16:            end if
17:        end while
18:        M ← M + 1
19:        swap points[M] with points[i]
20:    end for
21: end procedure

22: procedure CCW(p1, p2, p3)
23:     return (p2.x − p1.x) · (p3.y − p1.y) − (p2.y − p1.y) · (p3.x − p1.x)
24: end procedure
