
Degree project in

Value of Stochasticity in Hydropower

Planning Optimization

MARKO VISTICA

Stockholm, Sweden 2012

XR-EE-ES 2012:008 Electric Power Systems


Acknowledgment

Firstly, I would like to thank Hubert Abgottspon, my supervisor at ETH, for agreeing to supervise this project and for all the guidance and help during these past six months.

A very special thanks goes to Yelena Vardanyan, my supervisor at KTH, who gave me plenty of valuable advice. I also have to thank my examiner, Prof. Lennart Söder, for accepting this thesis proposal.


Abstract

With market liberalization, efficient use of resources is becoming increasingly important for players in the market. To achieve that, different optimization techniques have been developed which enable better operational efficiency. These techniques can be divided into two categories, depending on their time horizon:

• Yearly time horizon – mid-term hydropower scheduling
• Daily time horizon – short-term hydropower scheduling

These two time horizons account for two case studies presented in this thesis.

In the first case study (mid-term planning), the focus is on determining the power plant's optimal operating strategy while taking into account the uncertainty in inflows and prices. Stochastic dynamic programming has been chosen as the mid-term optimization technique. Since stochastic dynamic programming calls for a discretization of control and state variables, it may suffer from the curse of dimensionality; therefore, the modeling of the stochastic variables is important.

By implementing a randomized search heuristic, a genetic algorithm, into the existing stochastic dynamic programming schema, an attempt is made to find the optimal way of representing the stochasticity. Two price models are compared based on the economic quality of the result.

The results support the idea of using search heuristics to determine the optimal stochasticity setup; however, some deviations from the expected results occur.

The second case study deals with short-term hydropower planning, with a focus on satisfying a predefined demand schedule while obtaining maximum profit. Since short-term hydropower planning is a nonlinear and nonconvex problem, the main focus is on the linearization of the unit performance curves, as well as on satisfying the technical constraints from the power plant perspective. This optimization technique also includes the water value in the solution. The problem has been solved by means of mixed integer linear programming.

The results from the second case study are fully in line with expectations, and it is shown that the mixed integer linear programming approach gives good results with good computational times.


Contents

Acknowledgment
Abstract
List of figures
List of tables
List of Acronyms
List of Symbols
Chapter 1. Motivation and outline
1.1. Motivation
1.2. Objective
1.3. Outline
Chapter 2. Literature survey
2.1. Stochastic dynamic programming
2.2. Modeling of stochastic variables
2.3. Evolutionary algorithms
2.4. Short-term optimization - mixed integer programming
Chapter 3. Model overview and previous work done
3.1. Basic formulation
3.2. Hydropower plant model
3.2.1. Hydropower plant description
3.2.2. Modeling of the hydropower plant
3.3. Modeling of stochastic variables
3.4. Stochastic dynamic programming
3.4.1. Introduction to dynamic programming
3.4.2. Profit-to-go function (PTG)
3.4.3. Backward recursion
3.4.4. Forward step
3.4.5. Drawbacks of SDP
3.4.6. Advantages of SDP
3.5. Introduction to short-term planning
3.6. Power plant model
4. Genetic algorithm
4.1. Introduction to evolutionary algorithms
4.2. Genetic algorithm
4.3.1. Representation
4.3.2. Initialization
4.3.3. Parent selection
4.3.4. Crossover
4.3.5. Mutation
4.3.6. Fitness function
4.3.7. Performance
5. Mixed integer linear programming
5.1. General introduction
5.2. MILP modeling
5.3. Building matrices
5.4. Branch and cut method
6. Implementation of the solution
6.1. General introduction – case study 1
6.2. Mixed integer linear programming
6.2.1. Generating testing scenarios
6.2.2. Problem formulation
6.3. Stochastic price modeling for SDP
6.3.1. Geometric Brownian motion
6.3.2. Stochastic representations
6.3.3. Determining node values
6.3.4. Determining transition probabilities
6.4. Genetic algorithm in the SDP schema
6.5. General introduction – case study 2
6.6. MILP approach – case study 2
6.6.1. Unit curve linearization
6.6.2. Problem formulation
6.6.3. Water values
7. Results
7.1. Case study 1
7.1.1. MILP solution
7.1.2. Genetic algorithm
7.2. Case study 2
8. Conclusion and future work
8.2. Future work
Appendix 1. Power plant data
A.1. Case study 1 data
A.2. Case study 2 data
Appendix 2. Short-term hydropower planning using the genetic algorithm – basic concept and implementation
A2.1. Implementation of the solution


List of figures

Figure 1. Inflows to the upper reservoir
Figure 2. Power plant model
Figure 3. Decision making under uncertainty
Figure 4. Power plant model - case study 2
Figure 5. Global and local maximum
Figure 6. Genetic algorithm flowchart
Figure 7. Single point crossover
Figure 8. Mutation
Figure 9. Performance curve of a genetic algorithm
Figure 10. Branch-and-cut algorithm
Figure 11. Feasible solution for LP and MILP problems
Figure 12. Implementing GA and MILP procedures
Figure 13. Some Monte Carlo inflow scenarios
Figure 14. Price scenarios
Figure 15. Price scenarios
Figure 16. Stochastic modeling as Markov chain
Figure 17. Creating price intervals
Figure 18. Implementing GA into SDP framework
Figure 19. Schedule and price for the short-term planning
Figure 20. Unit performance curves for different head values
Figure 21. Unit performance curve
Figure 22. Piecewise linearization of unit performance curve
Figure 23. Water value curve
Figure 24. Yearly schedule of the power plant
Figure 25. Upper reservoir solution
Figure 26. Lower reservoir solution
Figure 27. Optimal stochasticity setup over 56 phases
Figure 28. Analysis of different stochasticity setups
Figure 29. Fitness function evolution
Figure 30. 24 h schedule of designated power plants
Figure 31. Operating schedule for Stalden 2
Figure 32. Operating schedule for Zermeiggern 1
Figure 33. Operating schedule for Zermeiggern_pump_2
Figure 34. Discharge data for Zermeiggern1 turbine
Figure 35. Hourly profit with associated prices
Figure 36. Reservoir level Mattmark within a 24 hour period
Figure 37. Reservoir level Zermeiggern within a 24 hour period
Figure 38. 24 hour schedule with GA
Figure 39. Mattmark content using GA


List of tables

Table 1. Different coding possibilities
Table 2. Data concerning different units
Table 3. Profits over different scenarios
Table 4. Analysis of different stochasticity setups
Table 5. Piecewise linearization data
Table 6. Daily inflows into the reservoirs
Table 7. Case study 1 power plant data
Table 8. Case study 2 power plant data


List of Acronyms

GA genetic algorithm

MILP mixed integer linear program
SDP stochastic dynamic programming
PTG profit-to-go function


List of Symbols

Sets

set of generating power plants
set of pumping power plants
set of indices of a plant
number of periods in the horizon
set of scenarios

Parameters

electricity price at stage t
cost of spilling water
technical minimum of a power plant
minimum power output of power plant i
minimum water discharge of power plant i
maximum water discharge of power plant i
maximum water discharge of block l of power plant i
slope of block l of power plant i
γ   value of stored water in €/m3
schedule at stage t

Variables

profit-to-go function
v_t   upper reservoir filling at stage t in MWh
lower reservoir filling at stage t in MWh
q_t   inflow at stage t in MWh
u_t   turbined energy at stage t
b_t   pumped energy at stage t
spilled energy from the lower reservoir at stage t
spilled energy from the upper reservoir at stage t
secondary control reserve bid price
amount of secondary control offered
Π   profit
power output from power plant i in time period t
pumped power in power plant i in time period t
discharge from power plant i in time period t
water discharge from block l of plant i in time period t
binary variable giving the on/off status of power plant i in period t
binary variable indicating whether the water discharge exceeded the maximum amount in block l
shutdown status at time t


Chapter 1. Motivation and outline

1.1. Motivation

With respect to market liberalization and increasing market competition, efficient use of resources is of great importance for hydropower producers. Failing to utilize these resources may lead to low profits, unreliable supply and losing ground to the competition [1]. That is why increasing effort is being put into hydropower planning optimization.

This is usually done over different time intervals, typically ranging from one hour to a couple of years. In the first part of the thesis, the main focus is on mid-term planning, which refers to a year-long planning period with weekly or monthly intervals. The aim of mid-term planning is to maximize the profit over this time span, while taking into account seasonal evolutions of parameters like inflows or prices. The key decisions are how much water to store in the reservoir and how much to discharge from the power plant in each stage. Mid-term planning also serves as a frame for short-term planning, which is performed over a period of one day to a couple of days. The aim of short-term planning is to satisfy the demand.

This thesis consists of two case studies: case study 1 deals with mid-term planning, while short-term planning is covered in case study 2. The two case studies are carried out independently of each other, meaning that no results from one case study influence the results of the other.

The planning is performed for a typical hydropower plant that is described in more detail in Chapter 3 of this thesis. When considering mid-term planning, one has to realize that not all information is known at the moment of decision making, and thus the decisions are made under uncertainty. The variables causing uncertainty are:

• Uncertain water inflow into the reservoirs
• Uncertain market prices

These variables, because of their nature, will from now on be referred to as stochastic variables. There are several methods suitable for solving stochastic problems (e.g. stochastic linear programming, stochastic dynamic programming, dual dynamic programming and similar), and stochastic dynamic programming has been chosen for this thesis, as it can handle the non-linearity and non-convexity of the problem, as well as take into account the randomness of the stochastic variables.

…input data (prices) with respect to some evaluation criteria (e.g. profit, execution time, etc.). This would allow a more efficient use of stochasticity. That is what this thesis deals with, by implementing the genetic algorithm into the stochastic dynamic programming frame.

Short-term planning uses the data obtained by mid-term planning to form an operating schedule on a day-ahead basis. This is the focus of the second part of the thesis. The planning is performed for a real Swiss power plant, whose description can be found in Chapter 3. The main goal of short-term planning is to satisfy the load curve, while maintaining feasible operation. The load curve is a plot of the load variation over time; it can be drawn for several time intervals, but the daily load curve is of interest in this thesis.

This operating schedule is going to be obtained by implementing the mixed integer linear programming method.

1.2. Objective

The goals of this thesis are:

• Implementing the genetic algorithm into the existing stochastic dynamic programming frame used for mid-term hydropower planning

• Determining the optimal stochasticity setup with respect to the final profit

• Use MILP to perform a short-term optimization of a typical Swiss hydropower plant

As an additional task, it was planned to perform the short-term planning by means of a genetic algorithm. However, due to time limitations, this task could not be completed. The implementation part can be found in Appendix 2 and can serve as a basis for future research.

1.3. Outline


Chapter 2. Literature survey

This chapter provides a review of literature that could be of interest when researching stochasticity in mid-term hydropower optimization. Firstly, an overview of literature concerning stochastic dynamic programming is given, followed by the modeling of stochastic processes. Afterwards, literature focusing on evolutionary algorithms is introduced, as well as some examples of its use with other optimization methods. Finally, literature on mixed integer programming is presented.

2.1. Stochastic dynamic programming

P. Kall and S.W. Wallace (1994) [2] wrote a basic textbook on stochastic programming that introduces the topic, ranging from basic concepts to more advanced views. It starts off by presenting basic concepts, followed by the introduction of dynamic systems. Here, Bellman's principle of optimality is introduced, as well as scenario trees and stochastic dynamic programming. Chapter 3 covers recourse problems and decomposition methods for stochastic programs, such as the L-shaped decomposition method. Chapters on probabilistic constraints and preprocessing follow, and the final chapter deals with network problems.

A. Eichorn (2010)[3] in his workshop presentation gives an overview of stochastic programming application in power systems. He starts with an overview of the power system and the introduction of mid-term planning. He then introduces the concept of stochastic programming, starting with linear programming and afterwards introducing two stage and multi-stage stochastic programming. Following this, a dynamic approach is introduced as he covers stochastic dynamic programming and stochastic dual dynamic programming.

2.2. Modeling of stochastic variables

T.G. Siqueira, M. Zambelli, M. Cicogna, M. Andrade and S. Soares (2006)[28] compare different streamflow models which are used as input data in hydropower planning optimization. Three models were introduced – one based on average values of historic data, one based on probability distribution functions and the final one adopts a Markov chain based on a lag-one periodical auto-regressive model. The results have shown that all models lead to similar results, with only minor differences and thus the deterministic approach (first introduced model) can be used for optimizing complex multi-reservoir problems.

M. Birger, A. Gjelsvik, A. Grundt and K. Karesen (2001) [23] deal with price modeling in mid-term hydropower operation optimization. The optimization technique used is SDP, and the price model is based on a Markov chain principle, where in each step a different number of price nodes is identified and transition probabilities are calculated. A method for the inclusion of extreme prices is offered, as is a method for modeling long-term price uncertainty.

2.3. Evolutionary algorithms

A.E. Eiben and J.E. Smith (2003) [8] dedicate a chapter of their textbook to evolutionary algorithms. It starts with an introduction to the algorithm, explained in the form of a flowchart as well as pseudo-code. It then describes the specific stages of the algorithm, including an explanation of the fitness function. This is followed by a set of examples, for instance a solution to the knapsack problem. The chapter closes with applications of the algorithm to optimization problems.

E. Alba and C. Cotta (2004) [4] introduce the concept of evolutionary algorithms. They start from its origin in biology and connect it with optimization problems. An overview of the main types of evolutionary algorithms is given, as well as applications in industry. Special focus is put on explaining the different stages of the algorithm, with accompanying examples.

G. Jones (2002) [5] discusses evolutionary algorithms and their applications in a book chapter. He starts by introducing genetic algorithms and gives a canonical code of the algorithms. Afterwards, he introduces evolutionary strategies and evolutionary programming. Finally, he deals with applications of these algorithms in computational chemistry.

T. Back, D.B. Fogel and Z. Michalewicz (1997) [10], in their book “Evolutionary Algorithms and Their Standard Instances”, explain the basics of genetic algorithms, such as the principle of operation, and provide pseudocode. They also cover the theory and application of genetic algorithm processes like crossover, representation and mutation.

2.4. Short-term optimization - mixed integer programming

G-W. Chang et al. (2001) [21] present experiences with mixed integer linear programming in short-term hydro scheduling. They consider MILP a powerful tool for solving large-scale short-term planning problems. In this paper, they introduce the model with explanations of the variables, the objective function and the constraints. They tested the model on two power systems, and it gave satisfying results in reasonable time.

…simulated and results are presented. It is shown that this method is not suitable for very large systems, since the accuracy of the results declines.

A. Borghetti, C. D’Ambrosio, A. Lodi and S. Martello (2008) [25] focus on the short-term hydro planning of a pumped-storage power plant with head dependency. The focus is on the model explanation and on the linearization process, which introduces binary variables. Afterwards, they introduce an enhanced linear model that is compared to the previous one. It is shown that the enhanced model, though having higher computational times, gives better results.


Chapter 3. Model overview and previous work done

This chapter explains the models used in the thesis. For case study 1, the hydropower plant in question is described, followed by the modeling of the stochastic variables and the theoretical background on stochastic dynamic programming. For case study 2, a brief introduction to short-term scheduling is given, followed by an explanation of the model used.

Case study 1 – determining optimal stochasticity setup

3.1. Basic formulation

Mid-term scheduling is performed in this case study in a time horizon of one year, with a time resolution of one week. As already mentioned, it has to take into account the stochasticity of inflows and prices during this period. The aim of the optimization is to maximize the profit, while taking into account the value of stored water in the reservoir.

Mid-term scheduling problems are large-scale, nonlinear and nonconvex problems that would normally require a large computational effort if solved directly. The SDP method has been chosen for this thesis, as the complexity of the problem is easily overcome, since only one stage is considered at a time. Also, in comparison to other stochastic solution methods like SDDP, it gives results of similar quality [27].

Discretization and proper modeling of the stochastic variables are important for result accuracy. By implementing the genetic algorithm into the SDP frame and using it as a tool, an attempt is made to find the optimal discretization setup of the prices.

3.2. Hydropower plant model

3.2.1. Hydropower plant description

The hydropower plant in use is a typical Swiss pumped-storage hydropower plant with two reservoirs. The upper reservoir is the seasonal reservoir, and the water from the upper reservoir is used to produce electricity. The lower reservoir serves as a balancing reservoir whose purpose is, basically, to refill the upper reservoir during off-peak hours. There is also the possibility of spilling water from the reservoirs for physical (full reservoir, large inflows) or economic reasons.


Figure 1. Inflows to the upper reservoir

Since the actual data is confidential, Fig. 1 shows an example of an inflow so the correlation and seasonality can be better understood. One can notice that there is almost no inflow in the first third of the year and in the last couple of months, due to the fact that most water is accumulated as snow or ice. Around the beginning of May the snow starts melting and the inflow increases rapidly. The inflow period between May and late October accounts for essentially the entire yearly inflow. This strong seasonality is the reason for building large reservoirs, so that production does not depend on the inflow itself.


Figure 2. Power plant model

3.2.2. Modeling of the hydropower plant

As one can see from Fig. 2, the reservoir content is denoted by v, while the turbined and pumped energy are denoted by u and b, respectively. The inflow is denoted by q and the spilled water by s. All the quantities shown are expressed in MWh/h.

Physically, reservoir filling, inflows and spillage are measured in m3, but in order to simplify building the optimization problem, a conversion factor (kWh/m3) is introduced:

3.8 kWh/m3

The efficiencies of the turbine and the pump also need to be included in the calculations, and they are:

η = 0.85

The efficiencies of the turbine and pump need to be included in the model when pumping possibilities exist: if one unit of energy is to be stored by pumping, 1/η units of energy have to be consumed.
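The conversion and the pumping penalty described above can be illustrated with a small sketch. This is only an illustration of the arithmetic; the function and variable names are our own, and the numbers (3.8 kWh/m3, η = 0.85) are taken from the text.

```python
# Illustrative constants from the text; names are our own choice.
CONVERSION_KWH_PER_M3 = 3.8   # energy content per m3 of stored water
ETA = 0.85                    # pump/turbine efficiency

def stored_energy_kwh(volume_m3: float) -> float:
    """Energy content of a water volume in the upper reservoir."""
    return volume_m3 * CONVERSION_KWH_PER_M3

def pumping_energy_kwh(stored_kwh: float) -> float:
    """Grid energy needed to store `stored_kwh` in the reservoir by pumping."""
    return stored_kwh / ETA

# Storing 1000 m3 of water holds 1000 * 3.8 = 3800 kWh,
# but pumping it up consumes 3800 / 0.85 ≈ 4470.6 kWh from the grid.
energy = stored_energy_kwh(1000.0)
cost = pumping_energy_kwh(energy)
```

This makes the round-trip loss explicit: pumping is only profitable when the price spread between off-peak and peak hours exceeds the efficiency penalty.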

In order for the solutions to be within the boundaries of what is physically and technically possible, a set of constraints needs to be determined. Two types of constraints are applied to this model:

• Equality constraints
• Variable bounds

Equality constraint

v_{t+1} = v_t + q_t - u_t + b_t - s_t   (3.1)

As one can see from (3.1), the equality constraint links the state variable v in successive time steps and describes how the state is reached. The reservoir content is considered a state variable, as it mathematically describes the state of a dynamic system. The reservoir level at time period t+1 depends on the inflow between t and t+1 (q_t), the amount of water that was spilled during the same period (s_t) and the decisions on the amount of turbined and pumped water (u_t, b_t) that were made at the beginning of stage t.

Variable bounds

Variable bounds impose a technical or physical limit on the introduced variables. The set of inequality constraints can be seen below:

v_min ≤ v_t ≤ v_max
u_min ≤ u_t ≤ u_max   (3.2)
b_min ≤ b_t ≤ b_max
s_min ≤ s_t ≤ s_max

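The reservoir balance (3.1) and the variable bounds (3.2) can be sketched as a small step function. All quantities are in MWh, as in the model; the bound values and the function names below are invented for illustration.

```python
# Hypothetical reservoir limits for the bound check (made-up numbers, MWh).
V_MIN, V_MAX = 0.0, 1000.0

def next_reservoir_level(v, q, u, b, s):
    """Reservoir content at stage t+1 given content v, inflow q,
    turbined energy u, pumped energy b and spillage s at stage t,
    following the balance equation v_{t+1} = v_t + q_t - u_t + b_t - s_t."""
    return v + q - u + b - s

def is_feasible(v):
    """Check the variable bound on the reservoir content."""
    return V_MIN <= v <= V_MAX

# One step: v = 500, inflow 80, turbine 120, pump 30, no spill
# -> 500 + 80 - 120 + 30 - 0 = 490 MWh, within bounds.
v_next = next_reservoir_level(v=500.0, q=80.0, u=120.0, b=30.0, s=0.0)
```

In the optimization, (3.1) enters as an equality constraint for every stage, while (3.2) simply restricts the admissible values of each variable.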

3.3. Modeling of stochastic variables

There are two stochastic variables that cause uncertainty in this optimization problem and they are water inflows and market prices. Modeling of these stochastic variables will be shown later in this chapter, while for now the focus is on introducing the concept of stochasticity.

If one has to optimize a process lasting over several time stages, the uncertain nature of the variables will greatly influence the decision-making process. One can assume that all variables are known when period t is reached and the system is in state x_t. Based on the current state of the variables, but also on future expectations, a decision has to be made. The optimal decision is obtained by the process of optimization. This decision is then applied to the system in stage t, which leads to the system moving into a different state.

After this point, the operator of the process has no control over the system, as stochastic processes begin to occur, and they could lead the system into several different states at time t+1. The probability of reaching these states can be estimated from the probability distributions of the stochastic processes.

When the system reaches time stamp t+1, it is in state x_{t+1}, where another decision has to be made by the operator, in the same way as at stage t.

To illustrate this process better, Fig. 3 is presented.
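The stochastic transition described above can be sketched as sampling the next state from estimated transition probabilities. The states and probabilities below are invented for illustration; in the thesis model they would come from the distributions of the inflow and price processes.

```python
import random

# Invented example states and transition probabilities (they would normally be
# estimated from the probability distributions of the stochastic processes).
states = ["low price", "medium price", "high price"]
probabilities = [0.25, 0.50, 0.25]

def sample_next_state(rng: random.Random) -> str:
    """Draw the state reached at t+1 according to the transition probabilities."""
    return rng.choices(states, weights=probabilities, k=1)[0]

# One possible realization of the stochastic transition:
rng = random.Random(42)
outcome = sample_next_state(rng)
```

The operator cannot influence which of these states is reached; the decision at t+1 is made only after the realization is observed.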


3.4. Stochastic dynamic programming

3.4.1. Introduction to dynamic programming

The term dynamic programming was introduced by Bellman to describe the theory dealing with multi-stage decision processes. When these decision processes account for uncertainty, the term stochastic dynamic programming is used. The principle and the main idea behind stochastic dynamic programming are given in the following example.

A system that evolves over T time periods is considered. At time period t, x_t and u_t are used to represent the state of the system and the control action, respectively. This means that the state of the system is determined by its history:

x_{t+1} = f_t(x_t, u_t)

If the goal of the optimization is to maximize the profit over the entire time horizon, the objective function might look like (3.3):

max E[ Σ_{t=0}^{T} P_t(x_t, u_t) ]   (3.3)

This is, of course, a quite elaborate problem, and a tail subproblem of maximizing the profit from time k to time T can be considered:

max E[ Σ_{t=k}^{T} P_t(x_t, u_t) ]   (3.4)

According to Bellman, no matter how state x_k has been reached, the remaining decisions must be optimal for the tail subproblem. Stochastic dynamic programming will first solve all tail subproblems for the final stage, followed by the previous one, and so on. The original problem is solved at the final step by using the solutions of all tail subproblems.

This approach has proved to give good results, as it breaks a big problem down into smaller, easily solvable subproblems.

3.4.2. Profit-to-go function (PTG)

Stochastic dynamic programming is one of the more common optimization techniques used for stochastic problems. For a multi-stage problem like a mid-term optimization problem, one decision is going to be made for each state in each stage. It should be noted that the decisions for all but the first stage depend on the outcome of the stochastic variables [2]. The output of the SDP optimization problem is the so-called “profit-to-go” function: the PTG describes how much profit is expected to be obtained in the future, depending on the current state of the state variables, if the optimal policy is applied to the system.

In a hydropower planning optimization problem, the profit depends entirely on the state variables and can be written in recursive form, as shown in (3.5):

F_t(v_t, p_t) = max E[ P_t(u_t, b_t, p_t) + F_{t+1}(v_{t+1}, p_{t+1}) ]   (3.5)

subject to the equality constraint (3.1) and the variable bounds (3.2).

As can be seen, the profit depends only on the reservoir content and the current prices. It should also be mentioned that the PTG at stage T+1 cannot be calculated and is thus considered zero.

3.4.3. Backward recursion

The aim of backward recursion is to find a PTG function for each stage in the optimization process. Since this type of simulation cannot deal with continuous variables, as it would have to “try out” every single combination of state and control variables, those variables need to be discretized into a finite number of possibilities. As this is stochastic dynamic programming, the distributions of the stochastic variables need to be taken into account.

There are several steps to follow when performing the backward recursion:

1. Backward recursion starts at the end of the optimization period, at time step T. The stochastic variables need to be discretized into a predetermined number of values.
2. For each of the possible reservoir fillings, energy prices and control variables (turbining and pumping), the profit is calculated. The combination of values that leads to the best profit is saved in a “look-up” table that the operator uses to determine the optimal control action. This procedure is then repeated for every stage until the first one.

However, the optimization also takes into consideration the information from stage T-n+1, which contains information from the previous stage, and so on. This way, the optimization time span as a whole is taken into account.

As can be seen, the backward recursion basically gives an optimal policy for how a power plant should be operated. In order to test this policy, the so-called forward step is introduced. The pseudocode for the SDP formulation used in this thesis is as follows:

, ,  0

for each stage   , -1, …, 1

for each reservoir content level 

for each turbining possibility

for each pumping possibility 

for secondary control possibility 

for each spot price possibility ,

calculate profit for selected values using 

end for

calculate expected profit over all realizations of the stochastic process ,

end for

end for end for

select  and  giving maximum profit for selected reservoir content level

end for

determine maximum profits for all reservoir content levels end for
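The recursion above can be sketched as a toy example. This is a minimal illustration, not the thesis model: all names and numbers (levels, prices, probabilities, actions, inflow) are invented, the price is treated as i.i.d. rather than a Markov chain, and pumping and secondary control are omitted.

```python
# Toy backward recursion over a discretized reservoir (invented numbers).
LEVELS = [0, 1, 2]                  # discretized reservoir content
PRICES = {"lo": 20.0, "hi": 60.0}   # discretized spot price per unit turbined
P_PRICE = {"lo": 0.5, "hi": 0.5}    # price probabilities (i.i.d. for simplicity)
ACTIONS = [0, 1]                    # units of water turbined per stage
INFLOW = 1                          # deterministic inflow per stage
T = 3                               # number of stages

def backward_recursion():
    """Return the profit-to-go table and the optimal policy (look-up table)."""
    ptg = {v: 0.0 for v in LEVELS}  # PTG at stage T+1 is zero by convention
    policy = {}
    for t in range(T, 0, -1):       # walk backwards through the stages
        new_ptg = {}
        for v in LEVELS:
            best_val, best_u = float("-inf"), None
            for u in ACTIONS:
                if u > v:           # cannot turbine more than is stored
                    continue
                v_next = min(v - u + INFLOW, max(LEVELS))  # spill above v_max
                # expected immediate profit plus expected future profit
                val = sum(P_PRICE[s] * (PRICES[s] * u + ptg[v_next])
                          for s in PRICES)
                if val > best_val:
                    best_val, best_u = val, u
            new_ptg[v] = best_val
            policy[(t, v)] = best_u  # look-up table entry for the forward step
        ptg = new_ptg
    return ptg, policy

ptg, policy = backward_recursion()
```

Even this tiny example shows the structure of the method: a profit-to-go value per discretized state and a look-up table mapping each (stage, state) pair to its best decision.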

3.4.4. Forward step

At t = 0, the initial state of the system is known, meaning the reservoir filling as well as the price is known. The operator then checks the look-up table and implements the optimal decision for the system's state. Based on the stochastic inflows, the system ends up in one of the possible states at t = 1 (as explained in Fig. 3). There, an optimal decision is made again, based on the current state, and this procedure is repeated until the final stage T is reached.
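The forward step can be sketched as a simulation against a given look-up table. The table, the inflow distribution and the bounds below are invented for illustration; in practice the table is the output of the backward recursion.

```python
import random

# Hypothetical look-up table from a backward recursion:
# (stage, reservoir level) -> units of water to turbine.
POLICY = {
    (1, 0): 0, (1, 1): 1, (1, 2): 1,
    (2, 0): 0, (2, 1): 1, (2, 2): 1,
    (3, 0): 0, (3, 1): 1, (3, 2): 1,
}
V_MAX, T = 2, 3

def forward_step(v0, rng):
    """Simulate one trajectory of reservoir levels v_0, v_1, ..., v_T."""
    trajectory = [v0]
    v = v0
    for t in range(1, T + 1):
        u = POLICY[(t, v)]              # optimal decision for the current state
        inflow = rng.choice([0, 1])     # stochastic inflow realization
        v = min(v - u + inflow, V_MAX)  # reservoir balance, spilling above V_MAX
        trajectory.append(v)
    return trajectory

path = forward_step(v0=2, rng=random.Random(7))  # one simulated trajectory
```

Running many such trajectories with different random inflow realizations gives an estimate of the profit distribution obtained under the policy, which is how the policy from the backward recursion is tested.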

3.4.5. Drawbacks of SDP

The main drawback of using SDP is computational intractability, also known as the curse of dimensionality. Since the optimization algorithm requires discretization of all state and control variables, the computational effort may become too high if a detailed model is wanted [3]. For instance, if 4 variables are each discretized into 50 values, then in each stage 50 · 50 · 50 · 50 = 6,250,000 possibilities need to be evaluated in order to find the optimal solution.
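As a quick sanity check on the combinatorics, with the discretization assumed in the text (4 variables, 50 values each):

```python
# The curse of dimensionality in numbers: the per-stage evaluation count
# grows exponentially with the number of discretized variables.
n_values, n_variables = 50, 4
combinations_per_stage = n_values ** n_variables  # 50^4
```

Adding a fifth variable at the same resolution would multiply the count by another factor of 50, which is why the choice of discretization is so important.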

This discretization problem and the effort to find an optimal stochasticity setup is further addressed in this thesis, as it presents the core problem of Case study 1.

3.4.6. Advantages of SDP


Case study 2 – short-term hydropower planning

3.5. Introduction to short-term planning

As mentioned previously, with market liberalization a lot of power producers face new challenges, with the final goal of maximizing their profits and maintaining their position in the market. This problem is also addressed in short-term optimization, which is formulated as an optimization problem where the aim is to determine the unit commitment, maximize the profit and meet the system demand, while taking into account various constraints [26].

The purpose of STHS is to develop schedules for the hydro plants for a period ranging from several hours to a couple of days[25]. In this case, an optimization period of 24 hours will be taken into account.

STHS problems are nonlinear, discrete, nonconvex and large-scale. Nonlinearity and nonconvexity come from the relation between the output power and the discharges, while the discreteness comes from the on/off status of the power plants [21].

A Mixed Integer Linear Programming (MILP) approach has been chosen in this thesis as it allows for easy addition of constraints, and the nonlinearities can be incorporated into the model by piecewise linearizing the unit performance curves. In addition, by using binary variables, the discreteness of the problem can be modeled easily.


3.6. Power plant model

The plant in question is an existing Swiss power plant with two reservoirs. The plant configuration can be seen in Fig. 4.

The power plant is a pumped-storage power plant, where the lower reservoir serves as a balancing reservoir, while the upper reservoir is a seasonal reservoir. It has five turbines and two pumps:

Turbines:
• Stalden 1
• Stalden 2
• Zermeiggern 1
• Zermeiggern 2
• Saas Fee

Pumps:
• Zermeiggern 1 pump
• Zermeiggern 2 pump

The power plant data can be found in Appendix 1. It should be noted that turbines Stalden 1 and Stalden 2 have larger available power than turbines Zermeiggern 1 and Zermeiggern 2. As is the case with the power plant from Case study 1, this power plant is subject to seasonal inflows; Fig. 1 shows a representation of the inflows. Equations (3.1) and (3.2) are valid for this system as well.

All the parameters are modeled as follows:

• Reservoir content – m3
• Inflows – m3/s
• Power output – MW
• Water value – EUR/m3
• Discharge – m3/s

Since inflows and discharges are given per second while the optimization uses hourly time steps, a multiplication factor is used to bring them to the same scale:

1 m3/s = 3600 m3/h

Unlike in mid-term planning, the relation between the discharge and the power output has to be modeled precisely, and this is done through the use of unit performance curves. The concept behind unit performance curves is explained in later chapters.


4. Genetic algorithm

The aim of this chapter is to introduce basic concepts of genetic algorithms. All the concepts are purely theoretical and are not directly applied in the implementation of this thesis. For the practical implementation, Chapter 6. should be consulted.

4.1. Introduction to evolutionary algorithms

The ability of living creatures to survive in the most remote and isolated areas and adapt to the most hostile environments is the result of nature's mechanism called evolution. The efficiency of evolution as an optimization process has sparked interest among scientists who deal with optimization techniques, and a whole branch of techniques, called evolutionary algorithms, was developed based on Darwinian theory. These algorithms try to mimic the process of evolution as closely as possible in order to find good solutions to a problem. Several different evolutionary algorithm techniques are discussed in [5] and the most common are:

• Genetic algorithms
• Evolutionary programming
• Evolution strategies

All three algorithms can yield optimal solutions given complex, multimodal and discontinuous search spaces[5].

The main focus in this thesis is going to be on the genetic algorithm, as it can be implemented into the existing framework easily and also handles large search spaces very well.

It is worth mentioning that evolutionary algorithms fall into the category of stochastic optimizers, which means they operate with a degree of randomness. When going through the search space, they sample a wide variety of areas while also trying to identify promising areas for future sampling [7].

Figure 5. Global and local maximum

Figure 5 provides an example of a search space with some distinct points. Finding an optimal solution means finding a minimum or a maximum of the function, depending on the requirements. What distinguishes a good optimization method from a poor one is the ability to find the global optimum, as opposed to being stuck at a local optimum. This will be discussed later in more detail.

Terminology

Since evolutionary algorithms are based on biological theory, it is necessary to introduce the proper terminology used in these optimizations [4][7][8]:

population – a group of P individuals that the algorithm manipulates. Each individual is composed of one or more chromosomes

fitness value – measure of solution quality

parent selection – process of selecting best individuals based on fitness value that then form the basis for the new generation

crossover – a form of recombination where parents produce children


4.2. Genetic algorithm

As previously mentioned, the genetic algorithm (GA) is one of the more common evolutionary algorithms and is based on the Darwinian theory of evolution. A simple flowchart of a GA and its phases can be seen in Fig. 6.

Figure 6. Genetic algorithm flowchart

The algorithm starts with a set of solutions called the initial population. This population can be determined randomly or be pre-determined. Based on a fitness function, all the individuals in the population are assigned a fitness value which describes the quality of the individual's solution. Several individuals are then chosen as parents, which means they serve as a basis for the creation of the next generation. This choice is made based on the fitness value, so the higher the fitness value, the higher the probability of an individual becoming a parent. By performing crossover on the parents, children are created and a new generation is formed. Mutation is then performed on the new population. This process is repeated until a stopping criterion is reached.


Generate an initial population of n individuals x_1, ..., x_n
Evaluate the fitness value f(x_i) of each individual in the generation
While (stopping criteria are not satisfied)
    Select parents from the current population based on their fitness
    Perform crossover on the selected parents in order to generate children and form a new population
    With a mutation probability P_m, perform mutation on the new population
    The new population becomes the current population
End
Return best value
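A minimal sketch of this loop in Python, maximizing a toy one-dimensional function over bit strings; the fitness function, tournament selection, population size and rates are illustrative choices, not the setup actually used in this thesis:

```python
import random

def fitness(bits):
    """Toy fitness: decode the bit string as an integer x and score -(x - 10)^2,
    so the optimum is the string encoding 10."""
    x = int("".join(map(str, bits)), 2)
    return -(x - 10) ** 2

def select_parent(population, rng):
    """Tournament selection: the fitter of two random individuals wins."""
    a, b = rng.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def genetic_algorithm(n_bits=5, pop_size=20, p_mut=0.01, generations=50, seed=0):
    rng = random.Random(seed)
    # Random initial population scattered over the search space
    population = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_population = []
        while len(new_population) < pop_size:
            p1 = select_parent(population, rng)
            p2 = select_parent(population, rng)
            cut = rng.randrange(1, n_bits)      # single point crossover
            new_population.append(p1[:cut] + p2[cut:])
            new_population.append(p2[:cut] + p1[cut:])
        for child in new_population:            # mutation with low probability
            for i in range(n_bits):
                if rng.random() < p_mut:
                    child[i] = 1 - child[i]
        population = new_population
    return max(population, key=fitness)

best = genetic_algorithm()
print(best, fitness(best))
```

The stopping criterion here is simply a fixed number of generations; a fitness threshold or stagnation test could equally serve.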

4.3. Genetic algorithm components

4.3.1. Representation

Representation is the first step in any GA, since its purpose is to translate the real-world problem into a formulation that is computationally solvable.

The most common representation is in the form of a string of numbers, where each part of the string represents a piece of information from the original setup. Usually, a bit-string representation is used, where all the elements are either 0 or 1. As a simple example, if a potential solution to the GA problem is the integer 10, this integer would be represented by the bit string 1010.

There are several bit coding possibilities [8] and the most common ones are:

• Gray code
• Standard binary coding

The difference between the two is that with Gray code, all successive values differ by only one bit, while this may not be the case with standard binary coding [11].

Table 1. Different coding possibilities

Integer   Gray code   Standard binary coding
7         0100        0111
8         1100        1000
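The one-bit-difference property of Gray code can be illustrated with a small conversion routine (a standard construction, not specific to this thesis):

```python
def to_gray(n):
    """Convert an integer to its Gray code representation."""
    return n ^ (n >> 1)

def hamming(a, b):
    """Number of differing bits between two values."""
    return bin(a ^ b).count("1")

# Successive integers differ by exactly one bit in Gray code
for i in range(7, 9):
    print(i, format(to_gray(i), "04b"))
print(hamming(to_gray(7), to_gray(8)))   # 1 bit in Gray code
print(hamming(7, 8))                     # 4 bits in standard binary
```

This reproduces the values in Table 1: 7 maps to 0100 and 8 to 1100 in Gray code, differing in a single bit, while their standard binary codes 0111 and 1000 differ in all four bits.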


4.3.2. Initialization

Initialization stands for the process of generating the first population of individuals, where each individual represents a solution to the problem. When defining the initial population, one should take into account several parameters:

• Population size – the numbers of individuals in the population

• Population diversity – based on individual’s location in the search space

One would generally like to have as large an initial population as possible; however, that would significantly increase the computation time, as more evaluations need to be done, so a balance needs to be found. The size initially depends on the representation, the genetic operators used, and similar factors [5].

There are also two possibilities for forming the initial population, the more common one being to create it randomly so that the individuals are scattered all over the search space. The other option is manually seeding the initial population with worthy values obtained from similar research or experience.

The population diversity is extremely important for a GA optimization. Having a diverse population means having solutions in all areas of the search space, which then makes it easier to search for regions with higher quality solutions [8]. Ideally, one would rather have a smaller population that is spread out than a large population concentrated in one region of the search space.

4.3.3. Parent selection

After the initial population is generated and the fitness values of all the individuals are known, it is time to perform a parent selection process. This process, alongside crossover, is responsible for the very basic concept of GA and that is the survival of the fittest.

This process selects the best individuals in a population, based on their fitness value and they are called parents. The parents then undergo a change and create offspring that forms the next generation.


4.3.4. Crossover

After the parents have been selected, a crossover operator is applied to them; this operator takes characteristics from both parents and creates two offspring individuals. The crossover forces the children to have parts of chromosomes from both parents, thus improving combinatorial diversity, which could lead to exploring new areas of the search space with fitter results than the current ones. Which parts of the parent chromosomes are used is decided by random drawing, and crossover is thus considered to be a stochastic operator [3].

A simple example of how a crossover can be performed is the single point crossover. A cut point in the parents' chromosomes is chosen randomly and the portions after the cut are exchanged between the parents, thus forming two new children. This is shown in Fig. 7 [17].

Figure 7. Single point crossover

One should also notice that this method is biased towards low-order schemas. This is because, if the fixed positions in a schema are far apart, there is a high probability that some binary values will be altered, thus losing quality characteristics. This disadvantage is reduced in the following two methods [5].

4.3.5. Mutation

However, too much new information in the system would destroy the quality solutions (schemas), which is why the mutation probability is set to low values (typically around 1%) [5].

Mutation is performed after crossover, so it acts on children rather than parents.

The principle behind mutation is that a random number is drawn and, if the number falls below the mutation probability, the mutation occurs.

Figure 8. Mutation

4.3.6. Fitness function

Once the population is created, the respective solutions need to be evaluated in order to determine their fitness in the environment. In optimization problems, the environment is the objective function. In effect, the fitness function evaluates how good a certain solution is for the problem of interest. This makes the fitness function one of the most crucial components of a GA.


4.3.7. Performance

Figure 9. shows a typical performance curve of a genetic algorithm.

Figure 9. Performance curve of a genetic algorithm

It shows that most of the progress towards the optimal solution is made in the first iterations, while the curve flattens out later on. From this figure, some conclusions concerning the initial population and the stopping criteria can be drawn. Firstly, the previously discussed possibility of setting a meaningful initial population, based on some other simulation, now seems rather unreasonable. This is because it will only take a couple of iterations for a GA with a random initial population to "catch up" with one that started with a meaningful initial population. Also, long optimization runs might prove unnecessary, since the solutions obtained at the very end of the curve in Fig. 9 will only slightly differ from those obtained at the halfway point.

This phenomenon is occurring because of the effectiveness of crossover and mutation functions. They enable effective movement through the search space and quick elimination of large non-optimal areas[3].


5. Mixed integer linear programming

5.1. General introduction

The most commonly used optimization models are linear models, i.e. models in which all functional relations are linear; the methods used to solve such problems are generally known as linear programming [12].

However, the assumption that all variables can take real values is not always valid in practice. There are many problems where the integrality of the variables is very important, and this leads to another group of methods commonly known as mixed integer linear programming (MILP). Unlike in linear programming, some of the variables are integers (or binaries, in which case one speaks of mixed binary programming).

MILP has established itself as a popular optimization technique for decision-making problems under uncertainty, whether in operations control, scheduling, artificial intelligence or similar fields. Relevant literature on various applications can be found in [18][19].

MILP as such is a subset of the larger field of mathematical programming. In mathematical programming, one has a model that represents a real-life system and a set of variables whose manipulation tries to mimic what actually happens in the system. This model is also represented by a function that serves as a connector between input variables and output data. By optimizing this function, one evaluates the quality of the solution obtained by the decision making in the process. In order to ensure that the system behaves as it would in real life, a set of constraints and boundaries is introduced [12].

MILP usually deals with large-scale, complex problems whose variables are interdependent. An approach where each possible solution is explicitly examined would therefore be computationally intractable, so MILP uses a technique called implicit enumeration. According to [12], implicit enumeration is: "A method of solving integer programming problems, in which tests that follow conceptually from using implied upper and lower bounds on variables are used to eliminate all but a tiny fraction of the possible values, with implicit treatment of all other possibilities."


5.2. MILP modeling

As said previously, MILP is a minimization or a maximization of a linear function that is subject to linear constraints. Most common form can be seen below:

 ∑ (5.1.) subject to ∑,  ,   1, … . ,  (5.2.) ∑,  ,    1, … . ,  (5.3.)  0,   1, … ,  (5.4.) , … ,   (5.5.)

As one can see, by introducing (5.5.) a normal linear problem is turned into a mixed integer problem. Equation (5.1.) represents the objective function of the model and that is the functions who’s solutions needs to be optimized.

In general, in order to build a proper MILP model, three steps are needed:

1. Determine the variables that represent the model, x_1, ..., x_n
2. Build the objective function that represents the wanted solution to the problem
3. Build constraints and boundaries so the model behaves within the set limitations

Different solution techniques exist for solving these kinds of problems, and the most popular ones are:

• Branch and bound
• Cutting planes
• Branch and cut


5.3. Building matrices

In order to implement a MILP problem in MATLAB, some thought has to be put into arranging the constraints into matrices suitable for solvers. CPLEX, the solver used in this thesis, supports the use of both equality and inequality constraints. That means that equations (5.2) and (5.4) will form an inequality matrix, while equation (5.3) will form an equality matrix. The size of these matrices depends exclusively on the number of time stages used in the optimization.

In order to explain the formation of the matrices, a simple example is provided below.

Example

A pumped-storage power plant is examined. The reservoir level is dependent on the turbining rate as well as the pumping rate. The inflow is only accounted for in the upper reservoir. The aim of the optimization is to maximize the profit over a period of one month with a daily time step.

Now the setup of the problem is as follows:

It is seen that the state vector X is composed of 4 variables:

• Turbining rate
• Pumping rate
• Content of the upper reservoir
• Content of the lower reservoir

The equality constraints have the form

A_eq · X = b_eq

where the matrix A_eq collects the coefficients of the control and state variables and the vector b_eq represents the constant values in the equations. An example of the equality equations over different time stages is shown below.

V_up(1) − V_up(0) + Q_turb(1) − Q_pump(1) = I(1)
V_up(2) − V_up(1) + Q_turb(2) − Q_pump(2) = I(2)
...
V_up(T) − V_up(T−1) + Q_turb(T) − Q_pump(T) = I(T)

Finally, after the equations for all the time stages are written, the suitable matrix form is built.

Since these matrices are large and contain very few non-zero elements, sparse matrices are used in MATLAB to prevent memory problems. The other equality and inequality matrices are built in the same way.
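As an illustration of how such an equality matrix can be assembled (in Python rather than MATLAB, with a simplified single-reservoir balance V(t) − V(t−1) + Q(t) = I(t) as an assumed stand-in for the actual constraints):

```python
def build_equality_matrix(n_steps):
    """Build the rows of A_eq for V(t) - V(t-1) + Q(t) = I(t), t = 1..n_steps.
    Variable ordering: x = [Q(1..T), V(1..T)]; V(0) and the inflows end up in b_eq.
    Each row is kept as a sparse {column: coefficient} dict, since almost all
    entries are zero (the same reason MATLAB sparse matrices are used)."""
    rows = []
    for t in range(n_steps):
        row = {t: 1.0, n_steps + t: 1.0}      # +Q(t), +V(t)
        if t > 0:
            row[n_steps + t - 1] = -1.0       # -V(t-1)
        rows.append(row)
    return rows

rows = build_equality_matrix(3)
print(rows)   # at most 3 non-zeros per row out of 6 columns
```

With 8832 hourly stages, each row still has only a handful of non-zeros, which is why a dense representation would waste memory.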

5.4. Branch and cut method

The branch and cut method is used extensively in many commercial solvers (for instance CPLEX) and achieves good results in reaching optimal solutions.


Figure 10. Branch-and-cut algorithm

As seen from Fig. 10, this method is based on a search tree that is made up of nodes. Each node stands for a linear programming sub-problem that needs to be evaluated. The starting point for this method is performing linear programming relaxation of the MILP problem. LP relaxation is a relatively straightforward process in which all integrality constraints are removed and replaced with continuous equivalents. A small example is shown below

    0,1

    0    1


Figure 11. Feasible solution for LP and MILP problems

In this figure, the straight red lines represent the continuous constraints and the area between them is the feasible region of the LP problem. The blue crosses represent the feasible solutions of the integer problem, and it is now clear that the feasible region of the LP problem is larger than that of the MILP problem.

After the LP relaxation has been done, cutting planes for the first (root) node are introduced and an incumbent solution is found. This solution is the current best solution that satisfies all the integrality requirements. Cuts are added to the node as long as the relaxed solution violates them; once no violated cuts are found, the process is stopped.

Once the cutting process has stopped, branching occurs. Here, the main problem is divided into sub-problems, thus generating new nodes in the node tree. For each sub-problem, the integrality constraints are again relaxed and cutting planes are added. Once this phase is done, the obtained node solution is checked against the integrality constraints. If the solution satisfies all the constraints and its value is bigger than the current incumbent value, it becomes the new incumbent. If the relaxed solution violates the integrality constraints but its value is greater than the current incumbent, the node is branched further; if its value is less than the current incumbent, the node is fathomed. If, at any time during this process, a node becomes infeasible, it is removed from the tree [20].

This process is then performed until there are no more active nodes and the optimal solution has been found.
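The branching logic can be illustrated with a toy branch and bound on a 0/1 knapsack, where the LP relaxation is solved greedily by value density (a classical textbook construction, not the thesis' implementation; real branch and cut solvers such as CPLEX additionally add cutting planes at each node, which is omitted here):

```python
def lp_bound(values, weights, capacity, fixed):
    """LP relaxation bound with some variables fixed to 0/1 (None = free).
    Free items are taken greedily by value/weight; the last one fractionally.
    For knapsack this greedy fill is the exact LP optimum."""
    cap = capacity - sum(w for w, f in zip(weights, fixed) if f == 1)
    if cap < 0:
        return float("-inf")                  # infeasible node: prune
    bound = sum(v for v, f in zip(values, fixed) if f == 1)
    free = sorted(((v, w) for v, w, f in zip(values, weights, fixed) if f is None),
                  key=lambda vw: vw[0] / vw[1], reverse=True)
    for v, w in free:
        take = min(1.0, cap / w)
        bound += v * take
        cap -= w * take
        if cap <= 0:
            break
    return bound

def branch_and_bound(values, weights, capacity):
    n = len(values)
    best_val, best_sol = 0, [0] * n
    nodes = [[None] * n]                      # root node: all variables free
    while nodes:
        fixed = nodes.pop()
        if lp_bound(values, weights, capacity, fixed) <= best_val:
            continue                          # bound below incumbent: fathom
        if None not in fixed:                 # integer-feasible leaf
            val = sum(v for v, f in zip(values, fixed) if f)
            if val > best_val:
                best_val, best_sol = val, fixed   # new incumbent
            continue
        i = fixed.index(None)                 # branch on the first free variable
        for b in (0, 1):
            child = fixed[:]
            child[i] = b
            nodes.append(child)
    return best_val, best_sol

print(branch_and_bound([60, 100, 120], [10, 20, 30], 50))
```

Most of the search tree is never visited: nodes whose relaxation bound falls below the incumbent are fathomed immediately, which is the "implicit enumeration" described in the previous chapter.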



6. Implementation of the solution

Case study 1

6.1. General introduction – case study 1

As said previously, the goal of this case study is to determine the optimal stochasticity setup in mid-term hydropower planning. This is done for two main reasons:

• Computational tractability
• Quality of the results

Having presented the theoretical introduction to all the optimization and programming techniques in previous chapters, the implementation of solution will be given.

A flowchart of the solution can be seen in Fig.12:

Based on historical values, one can generate a certain number of testing scenarios and use them to determine the "ideal profit" (by means of MILP), meaning the profit a company would obtain if it knew all the inflows and all the prices for the upcoming year. This is, of course, not realistic, and therefore one tries to get as close as possible to this ideal value. This is where stochastic dynamic programming steps in. The aim of stochastic dynamic programming is to determine the optimal policy, essentially a lookup table, where decisions are suggested based on the current state of the system.

This policy is then tested with the same testing scenarios previously used in the MILP calculation. The profits of both techniques are then compared and a setup as close as possible to the optimum is sought. The role of the genetic algorithm is to generate the initial population of stochastic setups and, based on the phases presented in Ch. 4, generate new populations that are closer to the optimal value.

A more detailed look at how these steps were implemented can be found in the following chapters.

6.2. Mixed integer linear programming

6.2.1. Generating testing scenarios

In order to obtain results that are as close to the optimal ones as possible, several inflow and price scenarios need to be created. The starting point for creating scenarios is the available historical data for inflows and prices.

Since the price data is available in hourly increments, while the inflows are known as daily values, it was assumed that the distribution of inflows during the day is constant:

Inflow_day = 24 m3  ⇒  Inflow_hour = 24/24 = 1 m3/h

As the reservoir is comparatively large, hourly inflow changes should not noticeably influence the state of the system; hence this assumption was made.

Unfortunately, since the data is known only for the last couple of years, it was not possible to use actual yearly data as scenarios, and a distribution had to be assumed. All the variables were represented with a normal distribution,

 ~ ,  (6.1.)

where  is the mean value and  is the standard deviation on an hourly basis. That basically means that for each hour, values from different years were taken and their mean and standard deviation were calculated.

The distributions are not modeled in much detail, and it should be noted that no potential correlation between the variables was taken into account.

The Monte Carlo method is a technique that uses random numbers and probability distributions to obtain meaningful results. Normally, given the set of input parameters and the accompanying equations, one gets a certain output. The idea behind the Monte Carlo simulation is to evaluate this model using a set of random parameters as inputs. These parameters are generated from the probability functions of the variables (inflows, prices), thus mimicking the sampling procedure of the actual population [16]. This means that in each hour, a number of random values were drawn from the normal distribution of the variable. In this thesis, 10 inflow and 10 price scenarios were defined and some of them can be seen in Fig. 13 and Fig. 14.
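The sampling just described can be sketched as follows; the hourly means and standard deviations are illustrative placeholders for the values estimated from the historical data:

```python
import random

def monte_carlo_scenarios(means, stds, n_scenarios, seed=0):
    """Draw n_scenarios time series; the value for each hour is sampled
    independently from N(mean_h, std_h), as in eq. (6.1)."""
    rng = random.Random(seed)
    return [[rng.gauss(m, s) for m, s in zip(means, stds)]
            for _ in range(n_scenarios)]

# Hourly inflow statistics for a short illustrative horizon
means = [1.0, 1.2, 1.5, 1.3]
stds = [0.1, 0.1, 0.2, 0.15]
scenarios = monte_carlo_scenarios(means, stds, n_scenarios=10)
print(len(scenarios), len(scenarios[0]))
```

Sampling each hour independently matches the no-correlation assumption stated above; correlated scenarios would require sampling from a joint distribution instead.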

Figure 13. Some Monte Carlo inflow scenarios

As can be seen, the scenarios are good representations of the inflow data shown in Fig. 1 and its seasonality, and should therefore account for meaningful results.

The price scenarios can be seen in figure 14.


Figure 14. Price scenarios

6.2.2. Problem formulation

It has already been mentioned that MILP is used to find an “ideal” solution to the optimization problem, taking into consideration known values of inflows and prices. Price and inflow scenarios from the previous chapter are used to test the model.

There are several assumptions that need to be taken into account:

• All turbining, pumping, spilling and reservoir variables are continuous

• The secondary control on offer is either 0 MW or 40 MW which is the maximum possible amount

• There is one binary variable – the one that states whether the secondary control is on offer or not

• When secondary control is on offer, the turbining amount has to be at least the amount of technical minimum

• No pumping is possible when secondary control is offered

All the calculations were done in MATLAB R2011b with CPLEX optimization toolbox.

Time frame

The chosen time frame for the simulation is 8832 hours (368 days), which corresponds to a year plus a few extra days. The simulation is done in hourly intervals.


Objective function

The objective of the optimization is to maximize the profit:

   ∑,·  ,·  ,· , ,· , ,·  ,·  (6.2.)

where , is the price of the secondary control for an hour t and  is the amount of

secondary control offered. Other expressions stand for turbining, pumping and spilling profits/loses.

The objective function is subject to a set of equality and inequality constraints:

Equality constraints

The equality constraints revolve around the upper and lower reservoir levels.

V_up(t) = V_up(t−1) + I(t) − Q_turb(t) + Q_pump(t) − Q_spill_up(t)   (6.3)

V_low(t) = V_low(t−1) + Q_turb(t) − Q_pump(t) − Q_spill_low(t)   (6.4)

(with all flows expressed as hourly volumes). The variable u(t) is a binary variable stating whether the secondary control is on offer or not. The time variable t is modeled so that it accounts for the state of the period at the end of time t. So, for instance, V_up(t) is the upper reservoir level at the end of time t, which means after all the turbining/pumping occurred and the inflows were taken into account.

Inequality constraints

The inequality constraints set the limits for turbining and pumping. For instance, (6.5) ensures that the combined turbining and secondary control offer does not exceed the maximum turbining value. (6.6) shows that in case the secondary control is offered, the turbining has to be at least the amount of the technical minimum. Finally, (6.7) sets the constraint that no pumping is possible when secondary control is offered.

Q_turb(t) + P_SC·u(t) ≤ Q_turb_max   (6.5)

Q_turb(t) − Q_min·u(t) ≥ 0   (6.6)

Q_pump(t) ≤ Q_pump_max·(1 − u(t))   (6.7)

Boundary conditions

0 ≤ V_up(t) ≤ V_up_max   (6.8)
0 ≤ V_low(t) ≤ V_low_max   (6.9)
0 ≤ Q_spill_up(t) < ∞   (6.10)
0 ≤ Q_spill_low(t) < ∞   (6.11)

Boundary conditions assign upper and lower values to variables that have not yet been considered, which are reservoir levels and spillage possibilities.

6.3. Stochastic price modeling for SDP

Optimal operation of hydropower plants depends heavily on spot price forecasts. There are several ways of building these forecasts, and the most popular are based on historical prices or on a forward simulation of some kind. These forecasts are given as a certain number of future price scenarios, within a yearly horizon and a weekly time step. Since historic data is available from only the last 5 years, it would provide a limited insight; thus a forward simulation is chosen.

6.3.1. Geometric Brownian motion

Geometric Brownian motion is an easy-to-use but nevertheless sensible price model that takes into account the stochasticity of the price and determines possible evolutions of the price over a predetermined period. It captures the fact that prices become more uncertain with time.

  1  ̂   · √ ·  (6.12)

Equation (6.12) represents the discrete time version of the Geometric Brownian motion.  is the random variable representing the price, ̂ is the drift term,  is the volatility and the distribution  is standard normal.

As seen, in order to simulate possible future prices, the current price and the volatility need to be known. The volatility used in (6.12) is obtained from historic data and normalized to an appropriate time frame if necessary. This is done as follows:

σ_T = σ·√T   (6.13)

where T is the time frame to which the volatility is normalized [24].

The volatility was calculated in the following manner:

• A time series of a couple of days was chosen as a base for the calculation
• Logarithmic price changes between consecutive prices were calculated
• The volatility within the price series was calculated
• Using (6.13), the volatility was normalized to the appropriate time frame

For this thesis, 50 scenarios are generated by the Geometric Brownian motion, with a horizon of one year and weekly time steps.
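The two steps above, volatility estimation and scenario generation via (6.12), can be sketched as follows; the drift, initial price and parameter values are illustrative assumptions, not the thesis' actual inputs:

```python
import math
import random

def historical_volatility(prices):
    """Volatility as the sample standard deviation of logarithmic price changes."""
    log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(log_returns) / len(log_returns)
    var = sum((r - mean) ** 2 for r in log_returns) / (len(log_returns) - 1)
    return math.sqrt(var)

def gbm_scenarios(p0, drift, sigma, n_steps, n_scenarios, dt=1.0, seed=0):
    """Discrete-time Geometric Brownian motion, eq. (6.12):
    P(t+1) = P(t) * (1 + drift*dt + sigma*sqrt(dt)*eps), eps ~ N(0,1)."""
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        path = [p0]
        for _ in range(n_steps):
            eps = rng.gauss(0.0, 1.0)
            path.append(path[-1] * (1 + drift * dt + sigma * math.sqrt(dt) * eps))
        scenarios.append(path)
    return scenarios

# 50 yearly scenarios with weekly steps, as in the thesis
paths = gbm_scenarios(p0=50.0, drift=0.001, sigma=0.05, n_steps=52, n_scenarios=50)
print(len(paths), len(paths[0]))
```

Because the noise term is multiplied by the current price, the scenario fan widens over the horizon, reproducing the growing price uncertainty mentioned above.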

Figure 15. Price scenarios

Figure 15. shows 50 scenarios generated by the Geometric Brownian motion that are used by SDP to determine the optimal policy of the hydropower plant.

6.3.2. Stochastic representations

Now that 50 scenarios have been created, a discrete Markov chain model is made. The model is shown in Fig.16. According to [25], the prices in one week are correlated with prices in the previous week, which justifies the use of Markov chain model.

It consists of a given number of price nodes in each time step and transition probabilities p_ij that determine the probability of the price being in node j at time step t+1 if at time step t the price was in node i.

It should be noted that the number of nodes, and thus the transition probabilities, can vary from one time step to another. In Fig. 16, the first time step has 4 price nodes, while the second one has 3 price nodes. In this thesis, the minimum number of nodes in a time step is 1, while the maximum is 4. This has been chosen since having more price representations in a single time step would significantly prolong the simulation time.

Figure 16. Stochastic modeling as Markov chain

There are two processes required in this model:

• determining node values
• determining transition probabilities

6.3.3. Determining node values

The first step is to specify the number of price nodes and calculate the values that are going to represent the nodes.

The principle of determining node values is as follows: the scenarios are generated by the Geometric Brownian motion and each scenario has a value for each of the time steps in the optimization horizon, which means there is one value per scenario for each time step. All the prices within one time step are sorted by value [23].

If n nodes are to be created in a time step t, the price range is divided into n intervals. This is done by taking the minimum price P_min and the maximum price P_max and dividing the space in between into n equally wide intervals.

Figure 17. Creating price intervals

Those intervals may contain different numbers of price scenarios. The node value is determined by calculating the mean value of all the prices within an interval.

6.3.4. Determining transition probabilities

The second step in the process is determining the transition probabilities between time steps t and t+1, for every t.

They are calculated as shown in (6.14):

p_ij = N_ij / N_i   (6.14)

where N_i is the number of scenarios belonging to node i in time step t, and N_ij is the number of scenarios from node i in time step t that belong to node j in time step t+1.
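Under these definitions, the node values and transition probabilities can be estimated from the scenario fan roughly as follows; the node counts and scenario prices are illustrative, not values from the thesis:

```python
def build_nodes(prices, n_nodes):
    """Split the price range at one time step into n equally wide intervals
    and return (node mean values, node index of each scenario)."""
    lo, hi = min(prices), max(prices)
    width = (hi - lo) / n_nodes
    idx = [min(int((p - lo) / width), n_nodes - 1) if width > 0 else 0
           for p in prices]
    values = []
    for j in range(n_nodes):
        members = [p for p, i in zip(prices, idx) if i == j]
        values.append(sum(members) / len(members) if members else None)
    return values, idx

def transition_probabilities(idx_t, idx_t1, n_from, n_to):
    """Estimate p_ij = N_ij / N_i by counting scenario transitions."""
    counts = [[0] * n_to for _ in range(n_from)]
    for i, j in zip(idx_t, idx_t1):
        counts[i][j] += 1
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs

prices_t = [10, 12, 30, 32]    # scenario prices at time step t
prices_t1 = [11, 29, 31, 33]   # the same scenarios at time step t+1
_, idx_t = build_nodes(prices_t, 2)
_, idx_t1 = build_nodes(prices_t1, 2)
print(transition_probabilities(idx_t, idx_t1, 2, 2))
```

Because the scenarios are tracked individually between time steps, each row of the resulting matrix sums to 1 whenever its node is non-empty.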

6.4. Genetic algorithm in the SDP schema

Fitness function

The fitness function is the difference between the profits obtained by MILP and by SDP. This means that the fitness function tries to find a stochastic setup that brings the profit obtained by SDP as close as possible to the "ideal" one of MILP, as seen in (6.15):

f = (1/N_s) · Σ_s (Profit_MILP,s − Profit_SDP,s)   (6.15)

where the sum runs over the N_s testing scenarios.
