DEGREE PROJECT IN MATHEMATICS,
SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2018

Shift Design and Driver Scheduling Problem

CRISS ALVIANTO PRIYANTO

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF ENGINEERING SCIENCES


Shift Design and Driver Scheduling Problem

CRISS ALVIANTO PRIYANTO

Degree Project in Optimization and Systems Theory (30 ECTS credits)
Degree Programme in Applied and Computational Mathematics (120 credits)
KTH Royal Institute of Technology, year 2018

Supervisor at Bzzt AB: Johan Lindberg
Supervisor at KTH: Xiaoming Hu
Examiner at KTH: Xiaoming Hu


TRITA-SCI-GRU 2018:051
MAT-E 2018:17

Royal Institute of Technology
School of Engineering Sciences
KTH SCI
SE-100 44 Stockholm, Sweden
URL: www.kth.se/sci


Abstract

Scheduling problems and shift design problems are well-known NP-hard problems within the optimization area. Often, the two problems are studied individually. In this thesis, however, we look at the combination of both problems. More specifically, the aim of this thesis is to suggest an optimal scheduling policy given that there are no predefined shifts to begin with. The duration of a shift, along with its start and end time, may vary. We have therefore proposed to split the problem into two sub-problems: a weekly scheduling problem and a daily scheduling problem. As no exact solution methods are feasible, two meta-heuristic methods have been employed to solve the sub-problems: Simulated Annealing (SA) and Genetic Algorithm (GA). We have provided proofs of concept for both methods and explored their scalability. This is especially important as the number of employees is expected to grow significantly throughout the year. The results obtained have shown to be promising and can be built upon for further capabilities.

Keywords: Integer Linear Programming, Scheduling Problem, Shift Design Problem, Genetic Algorithm, Simulated Annealing.


Sammanfattning (Swedish Abstract)

Scheduling and shift design problems are well-known and well-studied NP-hard decision problems within the optimization area. Usually these problems are studied separately, but in this work a combination of both problems is studied. More specifically, the goal of this work is to suggest a sensible approach to creating a weekly schedule where shifts are not predefined for all weeks. The start time, end time and duration of a shift may change from week to week. Therefore, the problem has been split into two parts: a weekly scheduling problem and a daily scheduling problem. Despite the split, both sub-problems are too complex to be solved exactly. Therefore, two meta-heuristic methods have been used as solution methods: Simulated Annealing and Genetic Algorithm. In this work both solution methods are shown to be good enough, and the scalability of the model is also studied. The latter is particularly important since the number of employees to be scheduled is expected to grow over the years. The results obtained have proven promising, and the model can demonstrably be extended with additional constraints.

Nyckelord (Keywords): Integer Linear Programming, Scheduling Problem, Shift Design Problem, Genetic Algorithm, Simulated Annealing.


Acknowledgements

First and foremost, I would like to thank God for giving me the opportunity and the capabilities to grow throughout this work. Truly, I would not have been able to accomplish this work without the guidance and strength from God.

Secondly, I would like to give my thanks to Bzzt AB and Johan Lindberg for the opportunity to conduct this project for them. My discussions with Johan have given me a lot of valuable insights, and he has been very supportive throughout this work.

Thirdly, I would like to thank Professor Xiaoming Hu for all his feedback and input during the modelling process. I would also like to thank San-San Ma for early discussions and developments of this project.

Lastly, I would like to thank my family and my significant other, Elin, for their understanding and support throughout my time with this project. They were truly my source of motivation and I wished to make them proud. I would also like to thank my good friends: Charisse for helping me with the figures in this thesis, Jonathan for the feedback and input, and Dani for his insights on the structure of this thesis.


Contents

List of figures
List of tables
1 Introduction
  1.1 Background
  1.2 Problem Formulation
  1.3 Related Works
2 Modelling
  2.1 Approach
  2.2 Problem Structure
    2.2.1 Weekly Scheduling
    2.2.2 Daily Scheduling
  2.3 Overview
3 Methods
  3.1 Simulated Annealing
    3.1.1 Concept & Definitions
    3.1.2 Implementation
  3.2 Genetic Algorithm
    3.2.1 Concept & Definitions
    3.2.2 Implementation
4 Experimental Results
  4.1 Weekly Scheduling Problem
    4.1.1 Scalability
    4.1.2 Parameter Analysis
  4.2 Daily Scheduling Problem
    4.2.1 Scalability
    4.2.2 Parameter Analysis
5 Discussion
  5.1 Proposed Model
    5.1.1 Improvements
    5.1.2 Alternative Methods
  5.2 Further Work
    5.2.1 Stochasticity
    5.2.2 Feedback-System Integration
6 Conclusion
Bibliography


List of figures

1.1 A Pod-taxi [1]
3.1 Simulated Annealing Algorithm
3.2 Solution Encoding
3.3 Crossover Process
3.4 Mutation Process
3.5 Genetic Algorithm


List of tables

2.1 Decision Variables
2.2 Indices and sets
2.3 Parameters
2.4 Variables for driver k
3.1 Acceptance criteria
3.2 Basic Terminology
3.3 Variables for driver k
4.1 Parameters for test case 1 (SA)
4.2 Obtained solution
4.3 Reference Solution
4.4 Scalability test (SA)
4.5 Initial Temperature Variation
4.6 Minimum Temperature Variation
4.7 Variation of α
4.8 Variation of Σ
4.9 Parameters for test case 1 (GA)
4.10 Scalability test (GA)
4.11 Variation of Population size
4.12 Crossover Rate Variation


Chapter 1

Introduction

Bzzt AB is a start-up company that offers a taxi service for short trips within inner-city Stockholm. The company aims to provide a cheaper and more eco-friendly way to traverse the inner city of Stockholm. As opposed to a standard taxi service, Bzzt only charges its customers for the distance they travel; there are no other additional fees. Furthermore, Bzzt AB operates its service with electric three-wheeled vehicles called Pod-taxis. Some features of a Pod-taxi are reminiscent of a traditional auto rickshaw, except that a Pod-taxi is a zero-emission vehicle and produces significantly less noise.

Figure 1.1: A Pod-taxi [1]

Three people (including the driver) can fit in a Pod-taxi. The electricity that is used to charge all the Pod-taxis is obtained from renewable resources. All Pod-taxis are manufactured by the Swedish company Clean Motion, which is well known for its development of efficient and clean vehicles.

The service can be booked through a mobile app, provided that the booking occurs within the proximity of inner-city Stockholm. Through the app, a customer is given an overview of how many vehicles are available, an estimated travel time and an estimated cost for the ride. Again, the customer only pays for the meters travelled, regardless of traffic. The service costs about 30 SEK per kilometer. A minimal fare can be guaranteed with an efficient system that manages the fleet of available vehicles, which in turn would maximize the occupancy rate. Therefore, the majority of the profit comes from the sheer volume of successful short trips, rather than costly long individual trips. Additionally, this is done without compromising the drivers' salaries. All contracts given to the drivers are subject to the Swedish Transport Workers' Union, which ensures that all drivers receive a fair salary.


However, Bzzt AB is still a relatively new company and is very keen on optimizing its performance. The system that is currently in use is constantly subject to troubleshooting and upgrades. One aspect of the system that is not quite established yet is the scheduling system. Therefore, the aim of this thesis is to propose an optimal scheduling policy, which may further optimize the occupancy rate.

1.1 Background

At the present time, the scheduling of all drivers is handled by the chief of operations. She is responsible for ensuring that there are enough shifts every week and that the overall demand is fulfilled. All drivers are given the liberty to compose their own weekly schedule from the shifts that are given out. Thus, it is up to each driver to make sure that he/she meets the weekly working-hour quota.

The shifts that are given out are designed to closely follow historical data. Therefore, there are no fixed shifts such as morning and night shifts; the features of the shifts may vary from day to day, again based on the demand and the intuition of the chief of operations. However, the company has not yet gathered data for a whole year. Also, as the number of drivers increases, it has become more difficult and time consuming to produce the shifts, especially as the chief of operations has other responsibilities as well.

There is a need for a model that can efficiently produce the shifts on a weekly basis. Such a model would ensure that there are enough shifts for everybody, with the historical data of the demand as an input.

1.2 Problem Formulation

The aim of this thesis is to propose a model that produces shifts based on historical data. Such a model should be able to follow the characteristics of the demand given as an input, while ensuring that all drivers get enough working hours each week. There are other conditions that must be considered as well, such as the properties of a working shift, the driver type and other practicalities.

Some of the properties of a working shift are directly prescribed by the Swedish Transport Workers' Union. However, Bzzt AB aims to eventually expand its operation internationally, and thus these properties may vary. For that reason, we introduce these properties as parameters rather than fixed numbers.

• A shift's length may vary depending on the driver type (i.e., full-time or part-time driver).
• A break must be given after a certain number of hours, for safety reasons as well as for the drivers' overall well-being.
• Additional shifts and overtime work can only be approved by the chief of operations.

The last item on the list above is there to ensure that the shifts being given out maximize the chance of meeting the overall demand. Naturally, some shifts are more popular than others and some shifts may be left over. There could be a scenario where the leftover shifts are the ones with the highest chance of satisfying the demand. The idea is then to ensure that these important shifts are taken at all times, despite their popularity. This can be done by minimizing the abundance of the shifts.


Lastly, the suggested solution should be scalable and general enough to be able to handle different inputs. Again, this is because Bzzt AB will eventually open up its service in other cities and countries. The solution should not be overfitted to solely handle the operation in Stockholm. Other places may have other regulations, which means other specific constraints may need to be implemented. For that reason, the solution must be generalizable and robust enough to be built upon.

1.3 Related Works

Shift design and employee scheduling are important, non-trivial optimization problems. It is widely known that the problems are NP-hard and that optimality cannot always be guaranteed. Typically, the processes of scheduling and shift design are interrelated. One approach would be to tackle scheduling and shift design simultaneously; this would increase the generality of the model but also increase its size and complexity [2]. Splitting up the problem and solving it would make it easier to tackle [3], but it does not guarantee that a good solution, given the designed shifts, can be found. The latter approach was taken by [4], in which a local search method was utilized to improve initial solutions iteratively during the search.

Alternatively, one could focus solely on the scheduling problem. The nurse scheduling problem is a well-known example of such a problem: the aim is to find a way to assign nurses to shifts in an optimal way, given various constraints. Given the nature of the problem, there are several meta-heuristic ways to solve it. [5] conducted a comparative study of several well-known meta-heuristic methods, such as the Firefly algorithm, Particle swarm optimization, Simulated Annealing and the Genetic algorithm. [6] proposed that a combination of the Genetic algorithm and a local search algorithm would perform better than a solitary algorithm such as the Genetic algorithm.

Another similar problem is the minimum shift design problem. [7] focused solely on designing the shifts; they had an explicit constraint of minimizing the number of shifts and also made use of a local search algorithm. [8] presents a framework for tackling the minimum shift design problem by implementing the "Operating Hours Assistant" software. The software made use of a local search algorithm and mainly considered the shift design problem.


Chapter 2

Modelling

The aim of this chapter is to present the approach that was taken for modelling the problem. In the first section, the assumptions and the scope of the model are discussed. Next, a more detailed description is given of how the problem can be divided into two sub-problems. The last section of this chapter gives an overview of the suggested solution model.

2.1 Approach

The proposed scheduling model is going to be driven by the demand of customer orders. Thus, the first step in composing such a model is to compose a model of the demand. The historical data of customer orders is available and contains the following information:

• The time and date of a customer order.
• The place where the customer order was made.
• The customer's end destination.

All available data has been collected since the launch of Bzzt AB. At the time this thesis was written, about 10 months' worth of data had been collected for the Stockholm inner-city region.

The arrival of customer orders can be viewed as random, since it is not known when a customer order will arrive, but the arrival rate can be estimated with historical data. Naturally, the future customer order arrival rate may be estimated as a Poisson-distributed random variable. Such a statistical forecasting model is deemed to be outside the scope of this thesis. Therefore, the demand is modelled as deterministic instead.

The demand is represented as an h × d matrix, where h is the maximum number of consecutive hours that the service is operational and d is the number of days. Each day is discretized by hour, and thus each element of the matrix is the rounded-up average number of bookings during the given time slot. This effectively gives us a deterministic model where the demands are represented as integers.

Finally, Bzzt AB also has data on how many orders each driver may accept per working hour. With this data and the historical data of customer orders, we can estimate the number of drivers that are in demand per hour and per day.
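For illustration only, the demand matrix described above could be assembled along the following lines in Python (the thesis does not provide its code; the column name order_time, the 24 × 7 shape and the use of pandas are assumptions):

```python
# Minimal sketch (assumed, not the thesis implementation) of building the h x d
# demand matrix: average bookings per hour and weekday, rounded up to integers.
import numpy as np
import pandas as pd

def build_demand_matrix(orders: pd.DataFrame, hours: int = 24, days: int = 7) -> np.ndarray:
    ts = pd.to_datetime(orders["order_time"])            # "order_time" is a hypothetical column
    per_slot = (
        orders.assign(hour=ts.dt.hour, weekday=ts.dt.weekday, date=ts.dt.date)
              .groupby(["weekday", "hour", "date"])
              .size()                                     # bookings per (weekday, hour, calendar date)
              .groupby(level=["weekday", "hour"])
              .mean()                                     # average over the observed dates
    )
    demand = np.zeros((hours, days), dtype=int)
    for (weekday, hour), avg in per_slot.items():
        demand[hour, weekday] = int(np.ceil(avg))         # round-up average, as described above
    return demand
```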

2.2 Problem Structure

Despite having the demand as deterministic, the search space for an optimal solution is still vast, especially since each driver may have a unique weekly schedule with unique types of shifts. The problem can then instead be split into two parts. Because the identity of a driver can be disregarded, the problem can be approached in a non-conventional way. Furthermore, there are two types of drivers: full-time and part-time drivers. The two types of drivers are entitled to different amounts of working hours per week. They may also work overtime, or choose to work less one week and compensate in the following week. Our model takes into consideration the average working hours that the two types of drivers are entitled to. This gives a fair approximation, as the total working hours, especially for full-time drivers, should not vary substantially from week to week.

The problem can now be split into two sub-problems. The first is about how to distribute the drivers throughout the week; this gives us an approximate required number of drivers for each day of the week. The second is about how to schedule this prescribed number of drivers throughout the whole day. The following subsections go through the process of modelling the solution format for the two sub-problems.

2.2.1 Weekly Scheduling

The weekly scheduling problem can be modelled as a 0-1 integer linear program. We first define all the relevant sets and parameters before proceeding with the objective function and the constraints.

Definitions

Essentially, the model should decide whether a driver should work on a particular day. Because there are two types of drivers, the model calls for two binary decision variables, f_{i,d} and p_{j,d}, representing full-time and part-time drivers respectively.

Table 2.1: Decision Variables

  f_{i,d}    = 1 if full-time driver i works on day d, 0 otherwise
  p_{j,d}    = 1 if part-time driver j works on day d, 0 otherwise

Each driver is indexed with i or j, and each day of the week is indexed with d. Furthermore, we recall from Section 2.1 that the operational hours are indexed by h. The sets of full-time and part-time drivers are defined as I and J respectively. More generally, the indices are defined in Table 2.2 below.

Table 2.2: Indices and sets

  i ∈ I = {1, ..., F}      Full-time drivers
  j ∈ J = {1, ..., P}      Part-time drivers
  d ∈ D = {1, ..., D}      Days
  h ∈ H = {1, ..., H}      Hours
  t ∈ T = {full, part}     Driver type

Here F and P are the total numbers of available full-time and part-time drivers for the week, and D and H are the total numbers of operational days and hours, respectively, for the service offered.

Lastly, the parameters for the model are defined. The average working hours per day for the drivers are defined as hours_t, and the average working days per week as days_t. We note that each parameter may differ in value depending on the type of driver: a full-time driver may, for example, work longer hours and/or more days of the week than a part-time driver. As for the number of orders received per hour, we assume homogeneity across all drivers; thus, all drivers, on average, may accept about the same number of orders per hour. This parameter is defined as order_d, and we consider the variation it may have from day to day. It may, for example, be larger on a public holiday than on a regular weekday. An overview of the defined parameters can be seen in Table 2.3 below.

Table 2.3: Parameters

  hours_t    Average working hours/day for drivers of type t
  days_t     Average working days/week for drivers of type t
  order_d    Average accepted orders/hour for day d

Objective Function & Constraints

The objective for the model is to meet the demand as closely as possible. The objective function is formulated as

$$\sum_{d}\sum_{h}\left|\,\mathrm{demand}_{h,d} - \mathrm{order}_d\sum_{i} f_{i,d}\,\mathrm{hours}_{full} - \mathrm{order}_d\sum_{j} p_{j,d}\,\mathrm{hours}_{part}\right| \qquad (2.1)$$

From Section 2.1, demand_{h,d} comes from the demand matrix, where each element is based on historical data. The objective function in (2.1) is formulated so that the model aims to have an even spread of scheduled drivers. The absolute value is there to ensure this, and thus the minimum objective value is 0, which means that all the demands are met for each day of the week.

Each type of driver has a prescribed number of working days within a week. A full-time driver may, for example, work 5 days a week on average. This is represented as the constraint

$$\sum_{d} f_{i,d} = \mathrm{days}_{full} \qquad (2.2)$$

for full-time drivers, and

$$\sum_{d} p_{j,d} = \mathrm{days}_{part} \qquad (2.3)$$

for part-time drivers.

A summary of the complete model can be seen in (2.4) below.

$$\begin{aligned}
\min \quad & \sum_{d}\sum_{h}\left|\,\mathrm{demand}_{h,d} - \mathrm{order}_d\sum_{i} f_{i,d}\,\mathrm{hours}_{full} - \mathrm{order}_d\sum_{j} p_{j,d}\,\mathrm{hours}_{part}\right| \\
\text{s.t.} \quad & \sum_{d} f_{i,d} = \mathrm{days}_{full}, \quad i \in I \\
& \sum_{d} p_{j,d} = \mathrm{days}_{part}, \quad j \in J \\
& f_{i,d} \in \{0,1\},\ p_{j,d} \in \{0,1\}, \quad d \in D,\ h \in H
\end{aligned} \qquad (2.4)$$
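The thesis does not list its evaluation code, but as a rough sketch the objective and constraints of (2.4) can be checked for a candidate 0-1 assignment as follows (array shapes and helper names are assumptions):

```python
# Minimal sketch (assumed) of evaluating (2.4) for a candidate assignment.
import numpy as np

def weekly_objective(f, p, demand, order_d, hours_full, hours_part):
    """f: |I| x D 0-1 array, p: |J| x D 0-1 array, demand: H x D matrix,
    order_d: scalar or length-D array of accepted orders/hour."""
    coverage = order_d * (hours_full * f.sum(axis=0) + hours_part * p.sum(axis=0))  # per day d
    return np.abs(demand - coverage[None, :]).sum()      # |demand_{h,d} - coverage_d| over h, d

def is_feasible(f, p, days_full, days_part):
    """Constraints (2.2)-(2.3): every driver works exactly the prescribed number of days."""
    return bool(np.all(f.sum(axis=1) == days_full) and np.all(p.sum(axis=1) == days_part))
```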


2.2.2 Daily Scheduling

For the daily scheduling problem, the model should, for each driver, decide the following things:

• The start and end time of the shift
• The duration of the shift
• The time slot for a break

Again, we first define all the relevant sets and parameters before proceeding with the objective function and the constraints.

Definitions

We first introduce the set of all drivers, i.e., I ∪ J, as K, where k ∈ K. For each decision that needs to be made, a variable is introduced. The start time of the shift is decided by an integer variable x_k, and the duration of the shift by an integer variable y_k; implicitly, x_k + y_k gives the shift end time. Lastly, for the break time slot, another integer variable z_k is introduced.

We recall from Section 2.2.1 that the set H contains the operational hours for any given day in the week. With that said, the shift start and end times must be contained within the operational hours. Additionally, no driver should start or end their shift with a break. This gives us the following relations:

Table 2.4: Variables for driver k

  x_k    Shift start time, x_k ∈ H
  y_k    Shift duration, x_k + y_k ∈ H
  z_k    Break time, x_k < z_k < x_k + y_k

We also introduce N, which is the number of available cars. As the company grows, the number of cars also increases; alternatively, the number of cars may decrease from time to time due to servicing. Lastly, there is a limit to how many drivers may start and end their shifts simultaneously. This is to minimize congestion inside the garage.

Objective Function & Constraints

The input to the model is the demand matrix introduced in Section 2.1. More specifically, a column of the demand matrix is the input, as we are only interested in the demand of one particular day. The variables x_k, y_k and z_k are designed to map into a matrix of ones and zeros. The dimension of this matrix is H × K, where H is the total number of operational hours and K is the total number of drivers to be scheduled for the day. The element of the matrix is defined as driver_{h,k}, where

$$\mathrm{driver}_{h,k} = \begin{cases} 1 & \text{for } h \in [x_k,\ x_k + y_k] \\ 0 & \text{for } h = z_k \text{ or } h \notin [x_k,\ x_k + y_k] \end{cases} \qquad (2.5)$$

The objective function can now be formulated as


$$\sum_{h}\left|\,\mathrm{demand}_{h,d} - \mathrm{order}_d\sum_{k}\mathrm{driver}_{k,h}\right| \qquad (2.6)$$

where we note that demand_{h,d} and order_d vary from day to day, and thus the index d is included. Furthermore, we introduce two constraints: one to minimize congestion inside the garage, and one to ensure that there are always enough cars in reserve.

For the first constraint, we first define maxflow, which is the upper limit for the sum of cars going in and out of the garage. Naturally, a driver needs to park the car in the garage when his/her shift ends; this makes up the flow of cars going in. Conversely, a driver starts his/her shift by getting a car from the garage, which makes up the flow of cars going out. As the garage may be limited in space, congestion may occur if too many drivers end and start their shifts concurrently. Thus, maxflow is introduced as a soft constraint, which inherently means that, given enough demand, we may choose to violate it. A weight constant w_d is also assigned. This weight may differ from day to day to match the variance of the demand inputs.

The second constraint is a hard one and must always be satisfied. All cars have a limited run-time before they need to be recharged. It is currently estimated that, in order for the operation to avoid a car deficit, at most 40% of all available cars may be out in the field at any time. This gives us

$$\sum_{k} \mathrm{driver}_{k,h} \le 0.4N \quad \text{for all } h \le H - 2 \qquad (2.7)$$

where the condition for all h ≤ H − 2 signifies that it is allowed to utilize all available cars during the last 2 hours before closing, as each car's run-time is on average 2 hours before it needs to be recharged, and the cars are charged from closing time until the next opening time.

A summary of the complete model can be seen in (2.8) below.

$$\begin{aligned}
\min \quad & \sum_{h}\left|\,\mathrm{demand}_{h,d} - \mathrm{order}_d\sum_{k}\mathrm{driver}_{k,h}\right| + w_d\,\mathrm{maxflow} \\
\text{s.t.} \quad & \sum_{k} \mathrm{driver}_{k,h} \le 0.4N \quad \text{for all } h \le H - 2, \\
& d \in D,\ h \in H,\ k \in K
\end{aligned} \qquad (2.8)$$
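To make the mapping in (2.5) and the penalised objective in (2.8) concrete, a possible decoding and evaluation routine is sketched below (assumed, not the author's code). In particular, reading the w_d · maxflow term as a penalty on hours where shift starts and ends exceed maxflow is our interpretation of the soft constraint.

```python
# Minimal sketch (assumed) of decoding (x_k, y_k, z_k) into driver_{h,k} and
# evaluating (2.8). Hours are treated as 0-indexed slots 0..H-1.
import numpy as np

def decode(shifts, H):
    """shifts: list of (x_k, y_k, z_k); returns the H x K 0-1 matrix of (2.5)."""
    driver = np.zeros((H, len(shifts)), dtype=int)
    for k, (x, y, z) in enumerate(shifts):
        driver[x:x + y + 1, k] = 1        # working hours h in [x_k, x_k + y_k]
        driver[z, k] = 0                  # the break hour is not worked
    return driver

def daily_objective(shifts, demand_col, order_d, n_cars, max_flow, w_d):
    H = len(demand_col)
    driver = decode(shifts, H)
    working = driver.sum(axis=1)                              # drivers out per hour
    mismatch = np.abs(demand_col - order_d * working).sum()   # first term of (2.8)
    flow = np.zeros(H, dtype=int)
    for x, y, _ in shifts:
        flow[x] += 1                          # car leaves the garage at the shift start
        flow[min(x + y, H - 1)] += 1          # car returns at the shift end
    penalty = np.maximum(flow - max_flow, 0).sum()            # soft garage-flow constraint
    feasible = bool(np.all(working[:H - 2] <= 0.4 * n_cars))  # hard constraint (2.7)
    return mismatch + w_d * penalty, feasible
```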

More details will be given in the next chapter where a meta-heuristic method will be applied to solve the formulated problem.

2.3 Overview

This section provides an overview of the two proposed models from the previous sections. We recall that the problem has been split into two sub-problems: a weekly and daily scheduling problem. The complete structure of the approach is summarized below.

1. Produce a demand matrix by averaging historical data, where time is discretized per hour.

2. Deduce how many drivers are needed for each day of the week. This is done by solving (2.4), with the demand matrix as an input.


3. For each day of the week, solve (2.8), to produce a complete schedule. The columns in the demand matrix will serve as an input.

Due to the vast size of the solution domain and the complexity of both suggested models, meta-heuristic methods are employed to produce the solutions. The first model, (2.4), is a 0-1 integer linear programming model with only equality constraints, and such problems are known to be NP-hard. The second model, despite the resemblance, is not a linear programming model and requires a specific algorithm to produce the solution. In the next chapter, we go through the two meta-heuristic methods that were chosen to solve the two sub-problems.


Chapter 3

Methods

This chapter is about the two meta-heuristic methods that were chosen to solve the sub-problems formulated in the previous chapter: Simulated annealing and the Genetic algorithm. The two methods are among the most popular and widely used methods for solving scheduling problems. Robustness and simplicity were two of the more important factors considered when determining suitable methods. The methods must be robust enough to handle large amounts of input. Also, the methods should be simple enough that the parameters controlling them do not need much maintenance.

3.1 Simulated Annealing

Simulated annealing is the method selected to solve the weekly scheduling problem. It was selected because it has some of the fewest parameters to control among meta-heuristic methods. [9] gives a general framework for solving general 0-1 integer linear programming problems, which is adhered to in our implementation.

3.1.1 Concept & Definitions

Simulated annealing is a probabilistic method that mimics annealing in metallurgy. The process of cooling metal in a heat bath is called annealing; the idea is that by cooling a material slowly, larger crystals can form and defects are reduced. Correspondingly, in simulated annealing the algorithm slowly decreases its probability of accepting worse solutions as the temperature decreases. This allows the algorithm to accept less-than-optimal solutions early in the process and allows for a more extensive search for a global optimum.

Simulated annealing searches the solution space through random perturbations of the current state and then evaluates the acceptance probability of the new state. The acceptance probability is produced by the acceptance probability function, defined as

$$P = \exp\left(\frac{c_{old} - c_{new}}{T_{current}}\right) \qquad (3.1)$$

where c_new and c_old are the objective values of the new and old states respectively. The current temperature is defined as T_current; it is an important parameter tied to the iteration the algorithm is on. We observe that if c_old > c_new holds, then P > 1 and we accept the new state. However, if the opposite holds, we may still consider accepting the new state, depending on the acceptance probability. In the special case c_old = c_new, we have P = 1 and the new, equally good state is accepted.

For a worse state, the function in (3.1) produces a number between 0 and 1, which can be seen as a recommendation on whether or not to jump to the new state. This number is then compared to another randomly generated number between 0 and 1, which we define as r. Thus, we have two cases:

Table 3.1: Acceptance criteria

  P > r    Move to the new state
  P < r    Stay in the current state

We note that T_current is also key to how large P may become. Early in the process, the temperature is high and thus the chance that the algorithm accepts worse states is also high. As the temperature is lowered, the chance of a worse state being accepted is lowered as well.
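As a small illustration (an assumed helper, not the thesis code), the acceptance rule of (3.1) and Table 3.1 can be written as:

```python
# Minimal sketch of the acceptance test in (3.1)/Table 3.1 for a minimisation problem.
import math
import random

def accept(c_old: float, c_new: float, temperature: float) -> bool:
    if c_new <= c_old:                                   # equal or better: always accept
        return True
    p = math.exp((c_old - c_new) / temperature)          # in (0, 1) for a worse state
    return p > random.random()                           # P > r: move to the new state
```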

An overview of the algorithm can be seen below.

1. Generate a random solution and calculate its objective value. Set the initial temperature.
2. Generate a random neighbouring solution and calculate its objective value.
3. Compare the two solutions using the acceptance probability function; accept one solution and reject the other accordingly.
4. Update the current temperature, T_current.
5. Repeat steps 2-4 until an acceptable solution is found or the minimum temperature has been reached.

The overview above serves as a road map for the next section, where each step is described in detail.

3.1.2 Implementation

For the first step, a random solution needs to be generated. We refer back to the model (2.4) from Section 2.2.1:

$$\begin{aligned}
\min \quad & \sum_{d}\sum_{h}\left|\,\mathrm{demand}_{h,d} - \mathrm{order}_d\sum_{i} f_{i,d}\,\mathrm{hours}_{full} - \mathrm{order}_d\sum_{j} p_{j,d}\,\mathrm{hours}_{part}\right| \\
\text{s.t.} \quad & \sum_{d} f_{i,d} = \mathrm{days}_{full}, \quad i \in I \\
& \sum_{d} p_{j,d} = \mathrm{days}_{part}, \quad j \in J \\
& f_{i,d} \in \{0,1\},\ p_{j,d} \in \{0,1\}, \quad d \in D,\ h \in H
\end{aligned} \qquad (3.2)$$

A random solution is generated by first fulfilling the constraints in (3.2) and then randomizing the placement of the working days of each driver. In this way, we are guaranteed that all generated random solutions remain feasible. The objective value can then be calculated for the generated solution according to the objective function stated in (3.2).

Next, a neighbouring solution is generated by slightly perturbing the solution that was generated. The perturbation is done by selecting a random driver and altering his/her shift slightly. We then calculate the objective value of the neighbouring solution.

With the two solutions at hand, a decision must now be made as to which solution should be kept for the next iteration. The two scenarios that may occur are:

• c_old > c_new: the new, neighbouring solution is better, and we reject the old solution.
• c_old < c_new: the old solution is better. However, there is still a chance that the new solution is accepted, according to the acceptance probability defined in (3.1).

In the second scenario, we refer back to the acceptance criteria defined in Table 3.1 to decide which move to employ next.

After the algorithm has chosen the solution it wishes to proceed with, the last step is to update the current temperature. This is done by

$$T_{new} = \alpha\, T_{current} \qquad (3.3)$$

where α is a constant responsible for the rate of cooling. Thus, α directly affects the number of iterations the algorithm runs. The algorithm is terminated when the current temperature has reached a minimum temperature T_min.

It is generally accepted that the algorithm performs better when steps 2 and 3 are repeated several times before the temperature is updated. The number of repetitions may generally range from 100 to 1000 iterations. This part of the algorithm also ensures that the solution space is searched thoroughly. We define Σ as the number of iterations run at each temperature.
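Putting the pieces together, the loop described above could look roughly as follows. This is an assumed sketch, not the author's implementation: random_feasible_schedule, perturb and objective are hypothetical helpers for the solution generation, the neighbourhood move and (3.2); the default parameter values mirror Table 4.1; and keeping track of the best solution seen is an addition for convenience.

```python
# Minimal sketch (assumed) of the SA loop with Sigma inner iterations per temperature.
import math
import random

def simulated_annealing(random_feasible_schedule, perturb, objective,
                        t_initial=10.0, t_min=0.001, alpha=0.99, sigma=1000):
    current = random_feasible_schedule()
    c_current = objective(current)
    best, c_best = current, c_current
    temperature = t_initial
    while temperature > t_min:
        for _ in range(sigma):                           # repeat steps 2-3 Sigma times
            candidate = perturb(current)
            c_candidate = objective(candidate)
            delta = c_current - c_candidate
            if delta >= 0 or math.exp(delta / temperature) > random.random():
                current, c_current = candidate, c_candidate
                if c_current < c_best:
                    best, c_best = current, c_current
            if c_current == 0:                           # demand fully met, cf. (2.1)
                return current, c_current
        temperature *= alpha                             # geometric cooling, (3.3)
    return best, c_best
```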

The algorithm is summarized by the figure below.


Figure 3.1: Simulated Annealing Algorithm

3.2 Genetic Algorithm

The daily scheduling problem is a higher-dimensional problem than the weekly scheduling problem. Early trials showed that simulated annealing could not give consistent enough answers, and thus the genetic algorithm was chosen instead. Various surveys, such as [10] and [11], have revealed that the genetic algorithm is a powerful tool for solving a direct representation of a problem. This is very desirable in our case, since the proposed model should be generalized enough to adapt to various changes in the constraints; a direct representation of the problem simplifies the process of modifying it.

The genetic algorithm is a powerful search technique that is often used for solving NP-hard optimization problems. It is a more complex model than simulated annealing, but makes up for it with solution quality.

3.2.1 Concept & Definitions

The genetic algorithm (GA) is a search-based algorithm that aims to emulate the principles of genetics and natural selection. Inherently, the search process is probabilistic; however, GAs have a direction in the sense that the search is directed into regions with better performance. This is done by exploiting historical information. The process is akin to the passing on of genetic information according to natural selection: as observed by Charles Darwin, competition among individuals for survival results in the fittest individuals dominating the weaker ones.

GA differs from SA in the sense that SA only employs one solution and repeatedly enhances it until the algorithm meets its termination criteria. In GA, there is more than one solution: a pool, or population, of feasible solutions exists at the same time. GA then recombines and mutates these solutions, which produces new solutions. This particular process is repeated over various generations. Based on their objective values, these solutions are then assigned a fitness value. The fitness value of a solution determines its chance to mate and yield fitter solutions.

From the GA perspective, all solutions that exist within a population are referred to as individuals. The process of mating refers to the process of two solutions recombining with each other. Essentially, the fitter individuals are given higher chances to mate; thus, the genetic information of the solutions with higher quality tends to be passed on to the next generation of solutions.

Listed below are the terms that are frequently used throughout this thesis.

Table 3.2: Basic Terminology

  Population         A subset of all feasible solutions; it is updated every generation.
  Chromosome         A solution; it resides within the population.
  Gene               The position of one element in the chromosome.
  Allele             The value a gene takes for a particular chromosome.
  Fitness function   The objective function.
  Parents            A pair of solutions chosen to perform crossover.
  Offspring          Solutions made by combining the chromosomes of their parents.
  Crossover          Equivalent to mating; produces offspring.
  Mutation           Produces a neighbouring solution to the offspring.

3.2.2 Implementation

The first step in implementing the GA is to define the chromosome and its sub-parts. We refer back to Section 2.2.2 and to the variables defined there:

Table 3.3: Variables for driver k

  x_k    Shift start time, x_k ∈ H
  y_k    Shift duration, x_k + y_k ∈ H
  z_k    Break time, x_k < z_k < x_k + y_k

which gives us the following gene for driver k:

$$[x_k,\ y_k,\ z_k] \qquad (3.4)$$

and a chromosome, which contains one such gene for each of the K drivers. This allows the algorithm to have full control over the shift design. This is illustrated in the figure below.


Figure 3.2: Solution Encoding
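The thesis does not show its encoding code; a plausible sketch of creating a random chromosome in this encoding (with hypothetical bounds on the shift duration) is given below.

```python
# Minimal sketch (assumed) of the encoding in Figure 3.2: one [x_k, y_k, z_k] gene per driver.
import random

def random_gene(H=24, min_len=4, max_len=8):
    """min_len/max_len are hypothetical bounds on the shift duration y_k."""
    y = random.randint(min_len, max_len)        # shift duration
    x = random.randint(0, H - 1 - y)            # start time, so the shift fits in the day
    z = random.randint(x + 1, x + y - 1)        # break strictly inside the shift
    return [x, y, z]

def random_chromosome(n_drivers, H=24):
    return [random_gene(H) for _ in range(n_drivers)]
```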

The next step is to determine the fitness function. We recall that the solution represented in the chromosome must be decoded into driver_{k,h}, as described in Section 2.2.2. Thus, the fitness function is equivalent to

$$\begin{aligned}
\min \quad & \sum_{h}\left|\,\mathrm{demand}_{h,d} - \mathrm{order}_d\sum_{k}\mathrm{driver}_{k,h}\right| + w_d\,\mathrm{maxflow} \\
\text{s.t.} \quad & \sum_{k} \mathrm{driver}_{k,h} \le 0.4N \quad \text{for all } h \le H - 2, \\
& d \in D,\ h \in H,\ k \in K
\end{aligned} \qquad (3.5)$$

The next step is to determine the mechanism behind parent selection.

The parent selection process is done by a procedure called tournament selection. We introduce the parameter K, the tournament size. The selection is performed by randomly selecting K individuals from the current population. These individuals are then compared to one another, and the best individual out of the selected ones is chosen as a parent.
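A minimal sketch of this selection step follows (assumed, not the thesis code; the tournament size of 3 is only an example, and lower fitness is better since (3.5) is minimised).

```python
# Minimal sketch (assumed) of tournament selection for a minimisation problem.
import random

def tournament_select(population, fitnesses, k=3):
    """Pick k random individuals and return the one with the lowest fitness value."""
    contenders = random.sample(range(len(population)), k)
    winner = min(contenders, key=lambda i: fitnesses[i])
    return population[winner]
```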

After the parents have been selected, two offspring can be produced in a process called crossover. There are several ways to perform the crossover; we have chosen to utilize uniform crossover. Essentially, we treat each gene separately and flip a coin for each gene to decide from which parent it is inherited. The process is summarized in the figure below.


Figure 3.3: Crossover Process
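A possible gene-level implementation of this step is sketched below (assumed, not the thesis code; applying the coin flip independently to every whole [x_k, y_k, z_k] gene is our interpretation of the uniform crossover described above).

```python
# Minimal sketch (assumed) of uniform crossover on whole [x_k, y_k, z_k] genes.
import random

def uniform_crossover(parent_a, parent_b, swap_prob=0.5):
    child_a, child_b = [], []
    for gene_a, gene_b in zip(parent_a, parent_b):
        if random.random() < swap_prob:          # coin flip: exchange this driver's gene
            gene_a, gene_b = gene_b, gene_a
        child_a.append(list(gene_a))
        child_b.append(list(gene_b))
    return child_a, child_b
```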

With the offspring determined from the crossover, a mutation may occur. The mutation process is done by first selecting a random gene in the chromosome and then randomizing its values. Essentially, this means that a random driver is selected and his/her shift is altered; as such, a neighbouring solution is created. With the mutation, the offspring creation process is complete, and the offspring may now be considered for insertion into the current population.

Figure 3.4: Mutation Process

The survival selection process is done by comparing the offspring with their parents. After the selection is done, two individuals are eliminated and two are inserted into the population. Thus, the new population may contain both offspring, both parents, or a combination of the two.
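The mutation and survival steps described above could be sketched as follows (assumed, not the thesis code; gene_factory stands in for the re-randomisation of a single driver's shift, and keeping the best two of the four individuals is one way to realise the comparison of offspring and parents).

```python
# Minimal sketch (assumed) of the mutation and survival-selection steps.
import random

def mutate(chromosome, p_mutation, gene_factory):
    """With probability p_mutation, re-randomise the [x_k, y_k, z_k] gene of one random driver."""
    child = [list(gene) for gene in chromosome]
    if random.random() < p_mutation:
        k = random.randrange(len(child))
        child[k] = gene_factory()                # draw a fresh shift for driver k
    return child

def survival_selection(parents, offspring, fitness):
    """Keep the two best individuals out of the two parents and the two offspring."""
    pool = list(parents) + list(offspring)
    return sorted(pool, key=fitness)[:2]
```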

The algorithm is summarized by the figure below.


Figure 3.5: Genetic Algorithm


Chapter 4

Experimental Results

In this chapter, we present the results obtained by applying the two methods presented in the previous chapter: Simulated annealing (SA) and the Genetic algorithm (GA). The implementation was done in the Python programming language. Furthermore, all tests were run on an Intel(R) Core(TM) i5-6200U CPU at 2.40 GHz with 8 GB of RAM.

4.1 Weekly Scheduling Problem

We first present a proof of concept to show that SA is an appropriate method for solving the weekly scheduling problem. This is done by comparing the solution produced by SA with a known reference solution, given a random demand input.

A table with the parameters used for the first test case is presented below.

Table 4.1: Parameters for test case 1 (SA)

  T_initial = 10    T_min = 0.001    α = 0.99    Σ = 1000
  days_full = 5     days_part = 3    hours_full = 8    hours_part = 4    order_d = 1

The values in Table 4.1 are typical values. At the present time, there are about 50 full-time drivers and 50 part-time drivers. On average, a full-time driver works 5 days a week and a part-time driver 3 days a week. Also, a full-time driver is set to work 8 hours/day and a part-time driver 4 hours/day. Furthermore, the parameters that control the method are typical values for a general SA.

The results obtained with the above-mentioned parameters are presented below.

Table 4.2: Obtained solution

  Total drivers   Monday  Tuesday  Wednesday  Thursday  Friday  Saturday  Sunday
  Full-time       50      50       50         50        50      0         0
  Part-time       50      50       49         1         0       0         0

where the obtained objective value is 8 and the run time is 99 seconds. We compare the obtained solution with the reference solution below.


Table 4.3: Reference Solution

  Total drivers   Monday  Tuesday  Wednesday  Thursday  Friday  Saturday  Sunday
  Full-time       50      50       50         50        50      0         0
  Part-time       50      50       50         0         0       0         0

We note that one of the part-time drivers in Table 4.2 is misplaced: there should be no part-time driver scheduled for Thursday. This amounts to a 1% error. The algorithm is otherwise able to capture the overall characteristics of the optimal solution. However, Bzzt AB aims to expand its business and eventually open up its service in other major cities. Naturally, more drivers will then be required, and thus the next natural step is to determine the robustness of our proposed model.

4.1.1 Scalability

At the present time, there are about 100 drivers available for the Stockholm inner-city region, and the company expects to expand rapidly as the demand increases. Thus, in the table below, we present several test cases where we increase the number of drivers. These test cases were run with the same parameters as in Table 4.1 and with the same ratio of full-time to part-time drivers.

Table 4.4: Scalability test (SA)

  Total Drivers      200   400   800   1600
  Objective Value    24    80    280   1032
  Error [%]          2     4     6.3   11.6
  Run Time [s]       131   167   177   276

All the test cases were performed with an optimal solution that was known beforehand. This allows the error to be calculated as

$$\mathrm{error} = \frac{\sum \mathrm{drivers}_{misplaced}}{\sum \mathrm{drivers}}$$

We observe from Table 4.4 that the error increases approximately in proportion to the increase in the total number of drivers. The run time does not increase significantly until we reach 1600 drivers. The objective values, in this case, should be viewed as a measure of how many hours are lost to inefficiency, compared to the total hours that all drivers accumulate during the week. Thus, the objective values can be seen as acceptable for most of the cases presented above.

Again, all the test cases were run by keeping all parameters constant; the only parameter that was steadily increased was the total number of drivers. In the next section we present the effect of varying the parameters.

4.1.2 Parameter Analysis

The method-controlling parameters presented in Table 4.1 are supposed to be problem specific. There are, however, no exact guidelines as to how the parameters should be set; different problems may require different settings. Thus, in this section we present empirical evidence of how each parameter affects the algorithm.

All the test cases run for this section were performed by altering one parameter and keeping the rest constant. The settings are the same as those used for the proof of concept in the previous section. Also, the total number of drivers is set to 400, as it is likely that Bzzt AB may reach that number soon.


Below we present the results.

Table 4.5: Initial Temperature Variation

  T_initial          1      5     20     100
  Objective Value    104    80    80     72
  Error [%]          4.75   4     3.75   3.25
  Run Time [s]       134    197   231    276

We observe from Table 4.5 that as we increase T_initial, the error decreases to a certain degree. It appears that greatly increasing the initial temperature does not necessarily yield a great reduction in the error. The decrease is to be expected, as an increase in the initial temperature allows the algorithm to run longer and process more solutions, especially early in the process where the probability of accepting worse solutions is high.

Next, we vary the minimum temperature.

Table 4.6: Minimum Temperature Variation

  T_min              0.1    0.01    0.001   0.0001
  Objective Value    184    120     72      64
  Error [%]          8.75   5.25    3.25    3
  Run Time [s]       89.9   140.8   174.3   238.3

The minimum temperature is equivalent to the stopping criterion of the algorithm. As the temperature approaches a low value, the algorithm should focus more on local search and be less tolerant of sub-optimal solutions. Thus, lowering T_min allows the algorithm to extensively perform local search towards the end of the process. This particular behaviour can be seen in Table 4.6.

The parameter α controls the rate at which the temperature is decreased at every iteration. In the next table we present the effect of varying α.

Table 4.7: Variation of α

  α                  0.9     0.95   0.99    0.999
  Objective Value    456     264    104     8
  Error [%]          19.75   11.5   4.75    0.25
  Run Time [s]       15.8    35.2   174.9   2152.4

From Table 4.7, we observe that increasing α significantly increases the run time, but also the quality of the obtained solution. A high α means that the temperature decreases slowly, so the algorithm spends more time at every temperature level; this is crucial in the early stages. In the early stages, spending a longer time at a higher temperature essentially means that the algorithm may cover more ground in the solution space.

The last parameter we will analyze is Σ.


Table 4.8: Variation of Σ

  Σ                  100    500    1000    2000
  Objective Value    392    136    104     48
  Error [%]          17     6.25   4.5     2.25
  Run Time [s]       16.3   94.8   182.7   347.2

A higher Σ also gives better solution quality, albeit at the cost of a significantly increased run time.

4.2 Daily Scheduling Problem

In this section, similar to the previous one, we first present a proof of concept. The presentation follows the same structure as the previous sections. We begin with the parameters used for the first test case.

Table 4.9: Parameters for test case 1 (GA)

  pop_size = 10    p_crossover = 0.5    p_mutation = 0.9    generations = 65000
  H = 24    maxflow = 25    N = 100    K = 100    order_d = 1

Here the day length H is set to 24 hours and there are K = 100 drivers in total, of which only 25 (maxflow) may start and end their shifts simultaneously. Also, there are N = 100 available cars and, for the sake of simplicity, each driver may only process one order per hour (order_d = 1). The results are presented in the figure below.


Figure 4.1: Proof of Concept for GA

We observe that the model managed to follow the characteristics of the demand fairly well. Currently, Bzzt AB keeps its business open for up to 21 hours, but this might change, so 24 hours is set as the upper limit. The run time for solving the problem is about 35.5 minutes. The error can be estimated by

$$\mathrm{error}_{GA} = \frac{\mathrm{ObjectiveValue}}{\mathrm{TotalDemand}} \qquad (4.1)$$

where TotalDemand is the sum of the demand for the whole day. The error is computed to be 2.7% for our first test case.

4.2.1 Scalability

In this section, we study the scalability of our proposed model. Given a 60-minute run-time limit, we would like to find out the maximum number of drivers that may be scheduled for a particular day. We present the results in the table below.

Table 4.10: Scalability test (GA)

  Total Drivers      200     400     600     1000
  Objective Value    32      179     338     867
  Error [%]          2.9     8.1     10.2    15.7
  Generations        77548   35178   30089   20035


Again, all the test cases were performed with an optimal solution that was known beforehand, and the model was run with the same parameters as those presented in Table 4.9, the only exception being the number of generations, which varies. As the solution space increases significantly, the number of generations the algorithm manages to go through within 1 hour decreases. This is to be expected, as more computation is then required. Furthermore, we also observe that the error increases almost proportionally with the increase in total drivers, within 1 hour of run time. If more time is allowed, the model could further drive down the error.

4.2.2 Parameter Analysis

In this section we vary the parameters that directly affect the GA. The parameters are first set as in Table 4.9.

We first present the variation of the parameter pop_size.

Table 4.11: Variation of Population size

  Population Size    5      20     50    100
  Objective Value    46     45     58    50
  Error [%]          4.1    4      5.3   4.5
  Run Time [min]     36.6   38.9   48    48

We observe that the population size does not seem to affect the final objective value significantly. It does, however, impact the run time to some extent.

The next parameter we suspect to have a significant impact is the crossover rate p_crossover.

Table 4.12: Crossover Rate Variation

  Crossover Rate     0.1    0.25   0.75   0.9
  Objective Value    49     43     34     42
  Error [%]          4.4    3.9    3.1    3.8
  Run Time [min]     38.4   39.8   48     46.2

From Table 4.12, we notice that the crossover rate does not have a major impact on the final results.

The last parameter we study is the mutation rate p_mutation.

Table 4.13: Mutation Rate Variation

  Mutation Rate      0.1    0.5   0.8   0.99
  Objective Value    127    76    40    55
  Error [%]          11.5   6.9   3.6   5
  Run Time [min]     35.2   45    48    48

The mutation rate appears to have the biggest impact on the end result. This is reasonable because, in our case, a single driver does not have a significant impact on the overall objective value. Thus, to maintain variation throughout the process, a high mutation rate is required.


Chapter 5

Discussion

In this chapter, a discussion of the proposed model is given. We discuss further possibilities to improve upon the model by considering alternative meta-heuristic methods. The possibility of composing a different model altogether is also discussed. Such a model may consider the stochastic nature of the problem as well as a feedback-system integration, which would make the model more autonomous.

5.1 Proposed Model

The two models that we propose in this thesis are not without limitations. Mainly, the two models are very dependent on how the demand is modelled. Furthermore, by splitting the problem into two sub-problems, there could potentially be cases where some drivers do not get their minimum required working hours per week. Arguably, this particular issue may resolve itself, as all drivers are allowed to choose the shifts that fit them. However, there may still be an extreme case where all prescribed shifts are shorter than usual. This would cause the issue of the minimum required working hours not being met to persist.

Admittedly, due to the non-conventional nature of the problem, our proposed model cannot be exactly compared to some of the more conventional shift design and scheduling problems. Thus, many of the improvements can only be speculated about. Nevertheless, in the next section we go through some of the possible improvements that may be implemented in our current model.

5.1.1 Improvements

An improvement that may be made for the weekly scheduling problem (WSP) is to add further constraints, such as the number of available cars, the number of extra shifts, and the way the demand may be spread out throughout the day.

The number of available cars could possibly be introduced as a weight factor to limit the risk of running out of cars during any particular day. There is an inherent risk that there are simply too many drivers scheduled for a particular day and not enough cars to sustain the service throughout the day. Therefore, this is also related to the way the demand may be spread out. An extreme case worth considering is when the demand spikes for some hour during the day. The model only sees the total demand at the end of the day and would not consider the case where the demand is quite low for the majority of the day. The model might then go to an extreme to meet the demand for that particular day and overshoot by assigning too many drivers. This scenario can be averted by assigning weighting factors that consider the total number of available cars and the average demand per hour during the day.

The number of extra shifts can, in this case, be considered as the number of drivers that are willing to work overtime. Introducing extra shifts essentially means that the model considers more drivers to assign than there actually are. This would allow more flexibility for the drivers to choose their shifts and meet their weekly working-hour quota. However, there should be an extensive study of how many extra shifts can actually be considered; the amount of overtime per week should be limited by worker regulations as well as by how the weekly demand may look.

Another improvement would be to further consider the car-cycling aspect for the daily scheduling. Currently, this is represented by the constraint stating that no more than 40% of the available cars may be in use at the same time. However, the reality is that some cars may require a longer charging time than others, and vice versa. Thus, there is potential to incorporate a model that deals with how the cars may be rotated between being in use and being charged. Such a model could then suggest an upper limit on the number of available cars throughout the day, which would serve as a constraint for the daily scheduling problem.

Lastly, the daily scheduling problem could also concern itself with how the fleet of cars is managed. If all the cars out in the field are positioned optimally, this may reduce the actual demand. With optimal positions, the drivers could potentially increase the number of orders they are able to process per hour. Another model would be required, and it would be an iterative process between the daily scheduling model and the fleet management model in order to form a consensus.

5.1.2 Alternative Methods

Simulated Annealing (SA) and the Genetic Algorithm (GA) are some of the most commonly used methods for tackling scheduling problems. Naturally, some have proposed a combination of both methods, a Memetic algorithm. [12] presented a method of combining GA and SA, which has been shown to improve the overall solution quality. This was especially true for higher-dimensional problems, which is the case with our proposed model for the daily scheduling problem. Potentially, a hybrid algorithm may also be powerful enough to tackle the combined problem.

The combined problem is one where we assign an identity to each driver. It would thus be possible to keep track of whether all drivers are able to meet their weekly quota. Effectively, the overarching constraint between the two sub-problems would then be that each driver must meet his/her weekly quota.

The Memetic algorithm is a fairly new area of research within evolutionary computation, but it has shown some promising results. Admittedly, there would be more parameters to tune and adjust due to the complexity of the model.

5.2 Further Work

In this section we discuss some possible extensions to our proposed model. The extensions we present below are, we believe, some of the more obvious ones and may produce significant improvements to the overall model.

5.2.1 Stochasticity

The two models are demand dependent, and thus it is worth exploring the possibility of refining the demand model. A stochastic demand model would better reflect reality. The model would then try to produce solutions that maximize the chance of meeting the demand per time unit. Thus, a statistical forecasting model for the demand would be of great value.

Additionally, we could also introduce stochasticity into the general scheduling model itself. Because all drivers have the possibility to choose the shifts that suit them, some shifts may be more popular than others. There is some likelihood that some shifts will not be selected, and by introducing stochasticity we may detect these patterns. Eventually, a buffer may be introduced to cover some of these unpopular shifts, which would again maximize the chance of meeting the overall demand.

5.2.2 Feedback-System Integration

The meta-heuristic methods that were chosen require some maintenance. Also, as mentioned in Section 5.1.1, there are several aspects that can be considered, such as the batteries of the cars and fleet management. All of the above-mentioned aspects would benefit if we integrated our model into a feedback system. Through several iterations, the model could gradually refine its solutions by adjusting according to the feedback given. The model would then be more autonomous, and this could all be done with a machine learning method.


Chapter 6

Conclusion

In this thesis, two models for the two sub-problems of the shift design and scheduling problem have been proposed. Given some example problems, we have shown that our chosen methods work and are also scalable. As the models are meant to be run once for every new week, run time is not an issue.

For the first sub-problem, the weekly scheduling problem, SA has proven to be adequate. This is promising, as SA is a relatively simple method with few controlling parameters. Thus, overfitting the model will not be an issue in this case, and upgrading the model can be done with ease.

The second sub-problem is more complex, and this is reflected in the heuristic method required to solve it. GA has proven to be adequate in delivering near-optimal solutions without much fine-tuning of its controlling parameters. Further studies could be conducted to explore some of the operators within the algorithm.

A combination of the two methods (GA and SA) could be a powerful enough technique to tackle the two sub-problems simultaneously. This may hold especially true if run time is not of particular concern.


Bibliography

[1] Bzzt AB. The official Swedish homepage for Bzzt AB. https://www.bzzt.se/, accessed: 07/02/2018.

[2] F. Glover and C. McMillan. The general employee scheduling problem: An integration of MS and AI. Computers and Operations Research, 13(5):563–573, 1986.

[3] N. Balakrishnan and Richard T. Wong. A network model for the rotating workforce scheduling problem. Networks, 20:25–42, 1990.

[4] Nysret Musliu, Andrea Schaerf, and Wolfgang Slany. Local search for shift design. European Journal of Operational Research, 153:51–64, 2004.

[5] Snehasish Karmakar, Sugato Chakraborty, Tryambak Chatterjee, Arindam Baidya, and Sriyankar Acharyya. Meta-heuristics for solving nurse scheduling problem: A comparative study. 2nd International Conference on Advances in Computing, Communication, & Automation, 2016.

[6] Tad Gonsalves and Kohei Kuwata. Memetic algorithm for the nurse scheduling problem. International Journal of Artificial Intelligence and Applications, 6(4), 2015.

[7] Luca Di Gaspero, Johannes Gärtner, Guy Kortsarz, Nysret Musliu, Andrea Schaerf, and Wolfgang Slany. The minimum shift design problem. Springer Science+Business Media, 2007.

[8] J. Gärtner, N. Musliu, and W. Slany. Rota: a research project on algorithms for workforce scheduling and shift design optimization. AI Communications: The European Journal on Artificial Intelligence, 14(2):83–92, 2001.

[9] D.T. Connolly. General purpose simulated annealing. Journal of the Operational Research Society, 1992.

[10] D. Abramson and J. Abela. A parallel genetic algorithm for solving the school timetabling problem. Technical Report, Division of I.T., C.S.I.R.O., 1991.

[11] H.-L. Fang. Investigating genetic algorithms for scheduling. MSc Dissertation, Department of Artificial Intelligence, University of Edinburgh, 1992.

[12] D. Adler. Genetic algorithms and simulated annealing: A marriage proposal. Proc. IEEE Int. Conf. Neural Networks, vol. 2, San Francisco, CA, 1993.

[13] E. Aycan and T. Ayav. Solving the course scheduling problem using simulated annealing. Advance Computing Conference (IACC 2009), IEEE International, 2009.
