

IT 16 043

Degree project, 30 credits, June 2016

Pre-Runtime Scheduling of an Avionics System

Max Block

Department of Information Technology



Abstract

Pre-Runtime Scheduling of an Avionics System

Max Block

This master's thesis addresses a scheduling problem arising when designing avionics – the electronic systems used on aircraft. Currently, the problem is commonly solved using mixed integer programming (MIP). We prove that the scheduling problem at hand, which is similar to the well-studied cyclic job shop scheduling problem, is NP-hard. Furthermore, we propose an approach using constraint programming (CP) – a programming paradigm where entities called constraints define the relations between variables. Constraints do not specify a step or sequence of steps to execute, but rather the necessary properties of a solution. The CP approach, implemented in the high-quality free OscaR CP solver, manages around 1500 tasks in total over 10 processors within a 10-minute timeout, which is good enough for CP to be investigated further as a possible paradigm for solving the considered scheduling problem. We also compare Gurobi Optimizer, a high-quality commercial MIP solver, to Gecode, another high-quality free CP solver, when both are run on a model of the problem described in the MiniZinc modelling language.

Examiner: Edith Ngai    Reviewer: Pierre Flener    Supervisor: Elina Rönnberg


Acknowledgements

I would like to thank Isak Bohman for letting me use his scheduling problem instance generator when developing a generator for instances of this thesis' problem.

Thank you, Jean-Noël Monette, for giving me a solid foundation for constraint modelling both in MiniZinc and in general. Thank you, also, for taking the time to answer all my questions.

And last, but certainly not least, I would like to thank Pierre Flener. Thank you for kindling my interest in CP, for helping me structure my thesis work, and for tirelessly and thoroughly helping me fine-tune my report.


Contents

1 Introduction
  1.1 Setting
  1.2 Questions
  1.3 Scope

2 Background
  2.1 Real-Time Scheduling
    2.1.1 The basics
    2.1.2 The precedence relation
    2.1.3 Hard and soft real-time systems
    2.1.4 On-line vs. pre-runtime scheduling
  2.2 MiniZinc
  2.3 OscaR
  2.4 Gantt charts
  2.5 Random Forests
    2.5.1 Decision Trees
    2.5.2 Random Forests

3 Job Shop Scheduling
  3.1 Problem Statement
  3.2 Cyclic Job Shop Scheduling

4 Constraint Programming
  4.1 Constraint Satisfaction Problems
  4.2 Propagation
    4.2.1 Propagators
    4.2.2 Consistency
    4.2.3 Propagation step
  4.3 Global Constraints
  4.4 Search
    4.4.1 Branching
    4.4.2 Exploration
  4.5 Constrained Optimisation Problems
  4.6 Implied Constraints
  4.7 Symmetry Exploitation
  4.8 Reification

5 Scheduling of an Avionic System
  5.1 Problem Statement
  5.2 Problem Complexity
  5.3 Characteristics of Instances in Avionics

6 Constraint Model
  6.1 Instance Data and Derived Constants
  6.2 Decision Variables
  6.3 No Task is Crossing the Cycle Boundary
  6.4 Tasks on the Same Timeline May Not Overlap
  6.5 Respecting Precedences
  6.6 A Chain Must Completely Finish Before Its Next Iteration

7 Test Instances
  7.1 Algorithm Idea
  7.2 Tasks
  7.3 Chains
  7.4 Input Parameters

8 Search
  8.1 The First-Fail/Best-First Principle
  8.2 Min/Min Scheme
  8.3 SetTimes Branching
  8.4 Conflict Ordering Search
    8.4.1 Last-Conflict Based Reasoning
    8.4.2 Conflict Ordering Search
  8.5 Accumulated Failure Count Based Variable Selection

9 Experimentation
  9.1 Instance Selection
    9.1.1 Machine Characteristics
    9.1.2 Selecting Test Instances
    9.1.3 Selected Test Instances/Test Method
  9.2 OscaR Experimentation
    9.2.1 Machine Characteristics
    9.2.2 Results
    9.2.3 OscaR Model Evaluation
  9.3 Solver Technology Comparison
    9.3.1 Machine Characteristics
    9.3.2 Experimental Setup
    9.3.3 Results
    9.3.4 Conclusion

10 Conclusion

11 Future Work
  11.1 Search Based on Instance Structures
  11.2 Implied Constraints and Symmetry Breaking
  11.3 Optimisation
  11.4 Limited Discrepancy Search

References


1 Introduction

1.1 Setting

A modern aeroplane hosts a large amount of advanced electronics; electronics in an aircraft setting is called avionics. Avionic systems include communications, information-gathering sensors, processing units refining the gathered information, equipment presenting the refined data to the pilot, and the hundreds of systems that are fitted to an aircraft to perform individual functions. These can be as simple as an external flashing light, as complicated as the targeting system of a military aircraft, or anything in between.

Some units update the state of the aircraft repeatedly during operation, giving rise to a complex flow of data between different units and thus putting requirements on the order of the activities' executions. Furthermore, for an avionic system it is not sufficient that the logical result of a computation is correct; it is crucial that the result is produced at the correct time, and missing deadlines could lead to devastating events. In other words, avionic systems are examples of hard real-time systems; see Section 2. The problem is similar to that of the Cyclic Job Shop Problem defined in Section 3.

During the last two decades or so, a large part of the avionics industry has switched architecture paradigm from federated systems, where each subsystem has its own computers performing its own functions, to an integrated architecture called integrated modular avionics (IMA). IMA is a hard real-time multiprocessor system where applications share hardware resources.

The idea was to make more efficient use of a smaller amount of avionics, and the IMA approach can cut the number of processors by as much as half, as is the case for the Airbus A380 suite. Such an architecture necessitates strict requirements on the spatial and temporal partitioning of the system to achieve fault containment, and a common standard for this partitioning is ARINC 653 [25, 29].

Typically, the IMA architecture gives rise to multiprocessor scheduling problems that become computationally demanding for large-scale instances. The introductory parts of the PhD thesis [1] provide an extensive introduction to the area of scheduling avionic systems. Finding an efficient solution method to such problems is necessary for being able to use IMA.

1.2 Questions

This master's thesis addresses a sub-problem arising in the scheduling of an avionic system, defined in Section 5, and it is part of a research collaboration between Linköping University and Saab. The goal is to solve the problem using a constraint programming approach implemented as a scheduling tool. Constraint programming is defined in Section 4, and the implementation is described in Sections 6 and 8. The purpose of the thesis is to evaluate the performance of this scheduling tool and determine whether the approach is appropriate for the problem at hand. A description of the experimental process is found in Section 9, the results in Section 9.2, and the conclusion in Section 10. In Section 11, we list some possible future extensions to the scheduling tool.

1.3 Scope

Scheduling of real-time systems can refer to either on-line scheduling, where scheduling decisions are made during runtime, or pre-runtime scheduling, where the schedule is decided at compile time. This project considers pre-runtime scheduling, and all activities are repeated within a certain time frame, making the schedule cyclically repeated with requirements spanning between the end and the beginning of the schedule.

The focus of this thesis is to find a constraint-programming-based solution capable of finding schedules for instances of real-life size within reasonable time and with reasonable computational resources.

For confidentiality reasons, the evaluation instances are not real avionics data, but they share characteristics with real instances with respect to task durations, dependency metadata, etc. Part of the thesis is designing a tool for generating test instances. This is further described in Section 7.

2 Background

The following material contains the necessary terminology as well as some background to the problem stated in Section 5.

2.1 Real-Time Scheduling

A more thorough description of real-time scheduling and real-time systems is given by Kopetz in [15].

2.1.1 The basics

In scheduling problems, one considers a finite set of units of execution, called tasks, subject to a finite set of constraints. In this thesis, a task τ is a triple ⟨EST, LCT, δ⟩, interpreted as an earliest starting time, a latest completion time, and a duration, respectively. The LCT may also be called the deadline. The tasks are to be placed in a schedule such that all tasks are scheduled, every task starts no earlier than its EST, ends no later than its LCT, and executes for its duration.

Tasks share a set of resources, corresponding to e.g. CPUs or places in memory, and scheduling is assigning a starting time and a resource to each task such that the start time is at least the EST and the start time plus the duration is at most the LCT. The resources usually have an upper bound on concurrent usage, such as a single-core processor, which only one task at a time may be scheduled to use. Furthermore, tasks are usually related by precedence, restricting the order in which tasks may be executed. A schedule satisfying all scheduling constraints is called feasible.

In some settings, there is an evaluation function taking a feasible schedule as input and mapping it to a real number. The solutions can then be ordered, and an optimal schedule is one either minimising or maximising the evaluation function. An optimal schedule could, e.g., be the one minimising the end time of the last task.

Sometimes, a task is also associated with a value describing how much of a resource is consumed during execution. However, in this thesis, all tasks use exactly one resource unit each, making such values superfluous.

A cyclic scheduling problem is a scheduling problem where all tasks must occur an indefinite number of times. Cyclic scheduling problems exist in many domains, including:

• Parallel computing, where loop scheduling problems occur when designing optimising compilers for parallel-architecture computers [11].

• Automated manufacturing systems, where many activities usually correspond to cyclic operations [2]. For instance, tasks executed by industrial robots on mass-production line machines are repeated cyclically. Also, the maintenance tasks executed by technicians on these robots occur cyclically during the production process after a fixed time of machining.

• Many common human activities can be viewed as cyclic activities. For instance, timetabling activities in a school typically consist of finding a week-long cycle to be repeated over a certain duration such as the whole school year or semester.

2.1.2 The precedence relation

Tasks may be subject to precedence constraints. Precedence is a binary relation over tasks where (τ1, τ2) is in the relation if and only if τ1 has finished executing before τ2 starts to execute. That is, the start time of τ1 plus its duration is at most the start time of τ2. The precedence relation in an instance can be viewed as a directed graph.

We denote that the dependency (τ1, τ2) is an element in this relation by τ1 ≺ τ2. Each dependency is here associated with a gap range [γmin, γmax]. At least γmin time units must pass between the end of execution of τ1 and the beginning of τ2, and the same span may be at most γmax units long.
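Written out, a dependency τ1 ≺ τ2 with gap range [γmin, γmax] therefore requires

end(τ1) + γmin ≤ start(τ2) ≤ end(τ1) + γmax,

where start(τ) and end(τ) = start(τ) + δ(τ) denote the start and end times of τ. For example, if τ1 ends at time 10 and the gap range is [2, 5], then τ2 may start anywhere in 12..15.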

2.1.3 Hard and soft real-time systems

Real-time systems are often divided into three classes: hard, firm, and soft. In hard real-time systems it is absolutely crucial that deadlines are not missed, and failing to meet a deadline leads to system failure and often severe consequences [15]. For example, systems for medical purposes may be hard, since missing deadlines may cost lives.

In firm and soft real-time systems, the system is not considered to have failed even after missing a few deadlines. A system where results have no practical use after a missed deadline is often called firm, whereas late results in soft systems may still have some use. A system forecasting weather is firm, since a missed deadline is (usually) not disastrous but the computed result is not worth anything once the time of the forecast has passed.

2.1.4 On-line vs. pre-runtime scheduling

Scheduling can be either on-line or pre-runtime. On-line scheduling is done during operation and consists of following a set of pre-defined rules for determining which task a processing unit should work on. On-line scheduling is usually very memory-efficient and will often lead to a high throughput of tasks due to its flexibility. In some cases, however, on-line scheduling will lead to tasks missing their deadlines, especially in non-pre-emptive systems, where an executing task may not be stopped before its completion. For example, assume a processing unit has just started on a task taking a large amount of time to compute when another task with an early deadline is deployed. It may then be impossible to finish the second task before its deadline, making the schedule infeasible.

Pre-runtime scheduling, on the other hand, is done before operation and consists of constructing a schedule that is then followed during runtime. Such schedules can be extremely large and thus memory-consuming. On the other hand, finding a feasible, or even optimal, schedule guarantees that no task will ever miss its deadline, making such schedules suitable for hard real-time systems.

On-line scheduling algorithms are usually based on making locally good decisions and are therefore low in time complexity. An example is earliest deadline first (EDF), where, whenever a scheduling event occurs, such as a task finishing, the task closest to its deadline is the next to be scheduled for execution. Finding the task with the earliest deadline is linear, and therefore polynomial, in the number of tasks to be scheduled.

Pre-runtime scheduling, on the other hand, is a harder problem to solve if feasible or even optimal schedules are required. Cyclic scheduling problems can be roughly classified into two main categories: problems without resource constraints and problems with resource constraints. The first category includes the following two problems:

• The basic cyclic scheduling problem (BCS) consists of assigning starting times from R to a finite set of tasks in a schedule, where no resource constraints are considered. Each task is to be scheduled ℓ times. The aim is to minimise the total runtime divided by the number ℓ as ℓ goes to infinity. The tasks are subject to uniform constraints, which are constraints (i, j, k) on tasks τi, τj stating that τi must have executed k times more than τj. These constraints are called uniform since k is constant. The problem is stated by Hanen and Munier in [11], and proven to be solvable in O(n³ · log n) time, where n is the number of tasks.

• In [21], Munier extends BCS to the basic cyclic scheduling problem with linear precedence constraints (BCSL), where k in BCS is no longer constant but a linear polynomial k(ℓ). Naturally, the polynomial k(ℓ) is positive on N.

The second category, with resource constraints limiting the number of concurrent tasks, includes:

• The hoist scheduling problem (HSP), which can be found in the electronics industry, where printed circuit boards must undergo a sequence of chemical and electro-plating operations in successive tanks, corresponding to resources. There are robots or hoists transporting the parts that must avoid collisions on a common track. An infinite number of parts must be processed, and the objective is to minimise the average cycle time in the long run. For one robot executing one job consisting of a set of tasks, the problem is (strongly) NP-hard [2].

• The cyclic job shop problem, discussed in Section 3.2.

2.2 MiniZinc

MiniZinc [20] is a constraint modelling language for constraint problems; models are usually easy to read and can be used as input for back-ends from a range of different solving techniques. There are, among others, back-ends using constraint programming (Section 4) or SAT solving, a field arising from the problem of finding a truth assignment satisfying a given Boolean formula. An example model can be seen in Figure 1.

2.3 OscaR

OscaR [22] is a Scala toolkit for solving Operational Research (OR) problems, and the abbreviation stands for “Scala in OR”. The techniques currently available in OscaR are:

• Constraint programming (CP) (module based on the former Scampi).

• Constraint-based local search (module based on the former Asteroid).

• Linear programming.

• Discrete event simulation.


include "globals.mzn";

int: S = 3;    % side length of a sub-square
array[1..9, 1..9] of var 1..9: puzzle;

% All different in rows
constraint forall (i in 1..9) (
    alldifferent( [ puzzle[i,j] | j in 1..9 ] ) );

% All different in columns
constraint forall (j in 1..9) (
    alldifferent( [ puzzle[i,j] | i in 1..9 ] ) );

% All different in sub-squares
constraint forall (a, o in 1..S) (
    alldifferent( [ puzzle[(a-1)*S + a1, (o-1)*S + o1] | a1, o1 in 1..S ] ) );

solve satisfy;

Figure 1: A constraint model of the Sudoku problem, written in the MiniZinc modelling language. Adapted from [19].

• Derivative free optimisation.

• Visualisation.

This thesis will only use the CP module of OscaR.

2.4 Gantt charts

A Gantt chart is a type of bar chart illustrating a project schedule. It is named after Henry Gantt, an American mechanical engineer and management consultant, who developed the chart in the 1910s. However, it had already been independently invented by the Polish economist, engineer, and management researcher Karol Adamiecki in 1896.

Gantt charts are very useful for illustrating schedules such as those discussed in this report.

Figure 2 depicts a Gantt chart representing a system for some time frame. Each box corresponds to the execution of a task and each arrow to a dependency. Sometimes a row represents a single task for visibility reasons, as in Figure 3. Such cases are identified by the row labelling (using τ) or announced in the text. Mostly, a row represents a resource usable by at most one task at a time (called unary).

2.5 Random Forests

We discuss decision trees, a machine learning method for classification or regression, as well as the extended method using a forest of decision trees.

2.5.1 Decision Trees

Decision tree learning uses a decision tree as a predictive model, mapping variable observations to conclusions about the target value. An example based on survival data from the Titanic is shown in Figure 4. Beginning at the root, we predict whether a person survives or not by progressing downwards along the labelled branches. A male aged 9 would go left twice from the root and end in a red leaf representing likely death, whereas a female would go right once into a green leaf representing likely survival. This is an example of classification, where the outcome is either survival or death. Decision trees can also be used for regression, where the predicted outcome is a real number, for example representing the predicted price of a consumer good.

Figure 2: Example Gantt chart over timelines p1 to p4.

Figure 3: Three tasks τ1, τ2, τ3 on separate rows.

Algorithms for constructing decision trees usually work top-down, choosing the variable that best splits the training data, see [27]. Therefore, decision trees can be used for evaluating the importance of the different observed variables.

2.5.2 Random Forests

Random forests [14], or random decision forests, operate by constructing a multitude of decision trees at training time. Each tree is trained on a subset of the input data, and for regression the resulting random forest model outputs the mean prediction of all trees in the forest. Random forests correct for the individual decision trees' habit of overfitting to their training set; see [13].

3 Job Shop Scheduling

We define the job shop scheduling problem and extend it to the cyclic job shop scheduling problem.

3.1 Problem Statement

The job shop scheduling problem (abbreviated JSP) is a problem where a number of tasks τ1, τ2, . . . , τk are to be scheduled over a set of m distinct resources, e.g. processors.


Figure 4: A decision tree showing a prediction of the survival of passengers on the Titanic based on passenger observations, branching on sex, age (e.g. Age ≤ 9.5), and sib + sp, the number of siblings and spouses of the evaluated person aboard the ship. Red leaves indicate predicted death and green leaves predicted survival.

Each task τ has an earliest start time (EST), a latest completion time (LCT), and a duration δ. Furthermore, τ is associated with a resource rτ. While a task is using a resource, no other task may use the same resource. Such a resource with capacity one is called unary. Pre-emption is not allowed, meaning that a task that has started to execute must execute in its entirety.

Lastly, there is a set of precedence constraints of the form τi ≺ τj, specifying that the task τi must be completed before τj can begin. Each precedence constraint is associated with a range [γmin, γmax] specifying that the time between the end of τi and the beginning of τj is within this range.

The problem may either be one of schedule optimisation or one of determining feasibility. In the first case, we aim for a solution having minimal makespan, which is the maximum completion time of all tasks, or having maximal throughput, which is the number of completed tasks within some time frame, or some other optimisation metric. If determining feasibility, we aim to find a schedule where the makespan is at most some upper limit, or prove that no such schedule exists.
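To make the feasibility variant concrete, below is a minimal MiniZinc sketch of a JSP decision model; it assumes simple precedences without gap ranges and a fixed horizon maxspan, and all parameter names (m, dur, res, pre) are chosen for this sketch rather than taken from the thesis.

include "globals.mzn";

int: k;                          % number of tasks
int: m;                          % number of unary resources
int: maxspan;                    % fixed horizon for the feasibility question
set of int: TASKS = 1..k;
array[TASKS] of int: dur;        % duration of each task
array[TASKS] of 1..m: res;       % resource used by each task
array[int, 1..2] of int: pre;    % each row (i, j) states that task i precedes task j

array[TASKS] of var 0..maxspan: start;

% every task must finish within the horizon
constraint forall(t in TASKS)(start[t] + dur[t] <= maxspan);

% tasks sharing a unary resource may not overlap
constraint forall(r in 1..m)(
  disjunctive([start[t] | t in TASKS where res[t] = r],
              [dur[t]   | t in TASKS where res[t] = r]));

% simple precedences: task i ends before task j starts
constraint forall(p in index_set_1of2(pre))(
  start[pre[p, 1]] + dur[pre[p, 1]] <= start[pre[p, 2]]);

solve satisfy;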

More on the topic of the job shop scheduling problem can be found in [2, 3].

3.2 Cyclic Job Shop Scheduling

In the cyclic job shop problem (CJSP), we assume that the set of tasks is a template which we wish to repeat indefinitely. For example, the tasks may represent steps towards building a widget in a factory. Some parts usually need to be made in a particular order, so some tasks are subject to precedence constraints. In the CJSP, the construction of a single widget may span over more than one cycle.

In a typical optimisation CJSP, we will want to overlap the construction of more than one widget to make efficient use of the resources. In [10], it is shown that a non-cyclic schedule for some given task set may outperform any cyclic schedule with respect to average makespan, i.e., the average cycle time when the number of instances of the task set goes to infinity. There are advantages to using cyclic schedules, however. First, cyclic schedules are much easier to implement; we only need to communicate a short sequence of actions. Second, the cost of computing optimal non-cyclic schedules grows exponentially with the number of widgets [3].

However, for the problem discussed in this thesis, instances of the same type of widget may not be made in parallel. In addition, the same instance of a task is not allowed to execute over more than one cycle. Furthermore, the aim will only be to find a feasible schedule rather than a schedule that is optimal in some sense.

More on the subject of the cyclic job shop problem as well as a formal definition can be found in [2, 3, 10].


4 Constraint Programming

Constraint programming (CP) is a programming paradigm wherein relations between variables are stated in the form of constraints. A constraint does not specify how to find a solution, but rather a property of any solution to a given problem, making CP a form of declarative programming.

In a constraint programming solver, propagators are posted with the sole responsibility of making sure their respective constraints are not violated. The efficiency of a CP toolkit is highly correlated with how well its propagators are implemented.

Even though the terminology of CP may not be familiar, the basic concepts should be familiar to anyone who has tried to solve a Sudoku puzzle. Sudoku is a number puzzle in the form of a 9 × 9 grid, divided into nine pairwise disjoint 3 × 3 boxes. A Sudoku starts with some cells prefilled with numbers, called clues. The puzzle is solved when each cell in the grid is filled with a whole number in 1, 2, ..., 9, satisfying three rules:

1. The numbers in each row must be different from each other.

2. The numbers in each column must be different from each other.

3. The numbers in each 3 × 3 box must be different from each other.

A Sudoku is an example of a constraint satisfaction problem (CSP), defined in Section 4.1. Section 4.5 will extend CSPs with an objective function in order to find a best solution to a given problem. Sections 4.2, 4.3, and 4.4 will further describe the methods of CP.

4.1 Constraint Satisfaction Problems

A constraint satisfaction problem (CSP) is a triple ⟨V, D, C⟩ consisting of a finite set V of decision variables, a domain function D defined on the variables vi in V mapping each variable to a set, and a finite set of constraints C. The store is the current collection of domains. The CSP is solved when all decision variables in V have been assigned a single value within their respective domains such that all constraints are satisfied.

A store is said to be (strictly) stronger than another store if each of its (non-empty) domains is a subset of its counterpart in the weaker store and at least one domain is a proper subset. The strongest possible stores are those where all domains are singleton sets; such a store corresponds to a solution to the CSP if all constraints are satisfied.

In the case of Sudokus, each cell corresponds to a variable and each variable has the domain {1,2,...,9}. Each of the three rules in the previous section applies to nine groups of nine cells each, so in total the rules can be expressed using 27 constraints, each on nine variables.

Solving a CSP using a constraint programming solver is an iterative process alternating between two states: propagation and search. During propagation, values are removed from the domains until no more values can be removed by the propagators. During search, the solver divides the search space into mutually exclusive alternatives, thereby creating a search tree. In other words, if the solver branches on a variable, the intersection of the domains of that variable in any two branches is empty and the union of the domains in all branches is the domain before branching. Branching can make other decisions than variable assignments, e.g. the order of tasks in a schedule, as seen in Section 8.3. A partial assignment is any node different from the root; there, the branching heuristics will have made a decision not necessarily logically implied by the existing constraints.

For example, in one branch the solver may try to place the number 4 in a given cell of a Sudoku, and exclude 4 from the domain of that cell in the other branch. The solver traverses the resulting tree, where a leaf is either a solution to the CSP or a node where finding a solution is no longer possible. Propagation and search will be described in the following sections.


Figure 5: Example Sudoku (5a) and one of its constraints (5b). (a) A Sudoku puzzle with 30 clues. (b) The domains of the variables in column #4 after propagating only AllDifferent(col#4); the rest of the variables all still have domain {1,...,9}.

4.2 Propagation

The purpose of propagation is to try to prune values from the current store, thus creating a stronger store and getting closer to finding a solution. Pruning is performed by removing from the domains values that do not take part in any solution; such values are called unsupported. Propagation is the difference between constraint programming and brute-force search.

4.2.1 Propagators

A propagator is an implementation of a constraint, working on the current store and pruning unsupported values. Strictly speaking, a value is called supported with respect to a propagator if it can be combined with values from the domains of all other variables handled by the constraint such that the constraint is satisfied. The set of variables handled by the constraint is called its scope.

For example, assume a variable V has a domain of {1, 2, 3, 4} and a constraint V ≤ 3. A propagator may restrict the domain to {1, 2, 3}.

A propagator may remove the support of a value when it has pruned values from the store. When a propagator cannot achieve more by running an additional time, it is said to be at fixpoint.

If a propagator finds that it can never prune any stronger domain, then it is subsumed, and the solver will never run it again in the current subtree.

If a propagator prunes the domain of a decision variable to the point that the domain becomes the empty set, then we say that the current node in the search tree is failed. Failed nodes signal that there cannot be any solution in that part of the search space.

4.2.2 Consistency

A store where every value in the domain of some variable occurring in a constraint has support is called value consistent for that constraint. Propagating to value consistency is relatively weak, as it might not detect that the node is the root of a subtree with no solution. For example, assume we have three variables x, y, z where each variable has domain {1, 2}. A propagator enforcing to value consistency that these three variables be pairwise different, i.e., the conjunction of (x ≠ y), (x ≠ z), and (y ≠ z), is not going to fail; x = 1 has support in y = 2 and z = 2. Quite clearly, y and z cannot both be 2 in a solution. However, this is not detected by value consistency. Similarly, y = 1 has support in x = 2 and z = 2, and z = 1 in x = 2 and y = 2.

A stronger consistency level is domain consistency (DC). A store is domain consistent for a constraint if every value in every domain of the decision variables takes part in a solution.

Domain consistency usually implies that the search space is going to be relatively small. However, the solver may spend a long time in each node if the constraints are costly to propagate to domain consistency, and the extra cost compared to the decrease in search space may not be justified.

A compromise is bounds consistency (BC). In BC, for every decision variable and for the lower and upper bounds of its domain, there exist values in the domains of the other variables such that the collection of values forms a solution.

Generally, there is no way of knowing which consistency level is ‘best’, and experimentation is needed.

4.2.3 Propagation step

The CP solver has a set of propagators in its state. In every step of propagation, the propagators are run in some order until one of the following has occurred:

• The current node has failed, implying no solutions, and backtracking is necessary.

• All domains in the store are singletons, implying the solver has found a solution.

• No domains are empty, at least one domain contains more than one element, and all propagators are either at fixpoint or subsumed. In order to solve the problem, search is necessary.

4.3 Global Constraints

A constraint is not limited to one or two or any other fixed number of variables. A global constraint is a constraint over an arbitrary number of variables. The motivation behind global constraints is to give one propagator all available information rather than dividing the information between several constraints.

Each of the constraints of the Sudoku problem is of the form “all 9 variables in the row (or column/box) take different values”. This type of constraint is very common, and it is implemented in the global constraint AllDifferent, first formulated in [17], with a domain-consistent propagator described in [26]. The result after propagating AllDifferent on column #4 of the Sudoku in Figure 5a to DC is seen in Figure 5b. Propagating AllDifferent again on the same column is not going to do anything, so the propagator is at fixpoint.

Global constraints are often superior to their logical decomposition into constraints of fixed arity. For example, AllDifferent(x, y, z) is decomposable into the conjunction of (x ≠ y), (x ≠ z), and (y ≠ z). Assuming the domains of all three variables are the set {1, 2}, it is apparent to a human that assigning the three variables different values is impossible, as we saw when we defined value consistency earlier in this section.
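A MiniZinc sketch of this three-variable example: with a domain-consistent propagator for the global constraint, the infeasibility is detected at the root node, whereas the decomposition into binary disequalities (shown commented out) is value consistent on its own and needs search to discover the conflict.

include "globals.mzn";

var 1..2: x;
var 1..2: y;
var 1..2: z;

% global constraint: a DC propagator fails immediately (3 variables, only 2 values)
constraint alldifferent([x, y, z]);

% equivalent decomposition: each binary disequality is value consistent on its own,
% so no single propagator detects the conflict without search
% constraint x != y /\ x != z /\ y != z;

solve satisfy;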


Figure 6: Example search tree. Solutions are represented by a green diamond, the blue nodes are unresolved nodes, and the red nodes are failed. The black lines represent decisions made by the brancher.

There are many existing global constraints, and much ongoing research regarding them; AllDifferent is only one out of hundreds described in the Global Constraint Catalog [7]. However, at the time of this thesis, no general standard describes which constraints should be implemented in a toolkit, and toolkits differ both in the set of implemented global constraints and in implementation quality.

4.4 Search

Propagation by itself is generally not enough to find solutions to a given problem, as the set of propagators would have to prune every domain to a singleton set, which is unlikely for most problems. Between every two steps of propagation there is a search step. Search is divided into branching and exploration. Branching defines how to build the search tree, and exploration how to traverse it.

The order in which alternatives are considered can strongly impact the efficiency of the algorithm. An example of this can be seen in Figure 6. Exploring the leaves from right to left in this representation is going to lead to a solution faster than from left to right.

4.4.1 Branching

Branching describes a search tree by selecting an unfixed variable and partitioning its domain into subsets. Each subset describes a decision on which values the chosen decision variable can take in a solution.

A branching decision is often divided into a variable selection and a value selection. There is no definite general best way to branch for any problem, and there are many common heuristics. In Figure 7c, every propagator is at fixpoint and the search tree branches into the subtrees represented by Figures 7d and 7e, respectively.

There is nothing restricting the domain partition to exactly two subsets; e.g., one could select a variable in a node of the search tree and create a child for each value in its domain. In this report, however, branchings will be binary.


Figure 7: Propagation and search on the example Sudoku from Figure 5. (a) Result after AllDifferent(row#3) when continuing from Figure 5b; note that the domain of the cell in row 3, column 4 shrinks further. (b) Result after propagating AllDifferent on the highlighted box; the highlighted cell with only a 4 left has been reduced to a singleton domain, so it is fixed. (c) Every AllDifferent propagator is at fixpoint. (d) The branch where the encircled 5 has been assigned as a branching decision; AllDifferent on the highlighted row reduces the domains to the point where the cell in column 5 cannot take any value. (e) Backtracking to (c) and entering the branch where 1 is assigned to the encircled cell. (f) The solved Sudoku.


4.4.2 Exploration

A common method for exploration is depth-first search (DFS), where the leftmost unexplored branch is explored until reaching a leaf corresponding to a failed node or a solution. If the leaf is a failed node, the solver will backtrack, traversing the branches in reverse order until a node with an unexplored branch is found.

4.5 Constrained Optimisation Problems

A constrained optimisation problem (COP) is a CSP with an objective function defined on the solution space, mapping solutions to a value. The objective is either to maximise or to minimise the value of the objective function.

A common search paradigm for COPs is branch-and-bound (BAB). BAB performs a systematic search of the solution space in the form of a rooted tree with branches. Each time a solution is found, a constraint forcing every subsequent solution to be better is posted. If the algorithm can detect that the current branch cannot lead to a better solution than the current best, then that part can be removed, or pruned, from the tree.
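As a small MiniZinc illustration, written for this sketch rather than taken from the thesis, turning a satisfaction model into a COP only requires stating an objective; CP back-ends then typically explore the search tree with branch-and-bound.

include "globals.mzn";

int: maxspan = 20;                        % horizon for this toy example
set of int: TASKS = 1..3;
array[TASKS] of int: dur = [4, 6, 5];     % task durations
array[TASKS] of var 0..maxspan: start;    % start-time decision variables

% the three tasks share one unary resource
constraint disjunctive(start, dur);

% makespan: completion time of the last task
var 0..maxspan: makespan;
constraint makespan = max([start[t] + dur[t] | t in TASKS]);

solve minimize makespan;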

4.6 Implied Constraints

It is sometimes possible to improve a constraint model by adding more constraints than are needed for a correct solution. Such constraints are called implied constraints, since they are logically implied by the constraints in the base model. Being logically redundant, such constraints are sometimes called redundant constraints; the term redundant constraint can also refer to an implied constraint that does not improve propagation or running time. Implied constraints do not change the set of solutions, but rather increase the propagation capabilities of the solver by providing more information to it.

An implied constraint reduces search if, at some point during search, a partial assignment fails because of the implied constraint where the search would have continued without it.

Sometimes there are global constraints immediately subsuming implied constraints, and implied constraints may only be useful because a suitable global constraint does not exist. It should be noted that one can have too much of a good thing even when it comes to implied constraints, even when the implied constraints cannot be replaced by a global constraint.

4.7 Symmetry Exploitation

A (constraint) symmetry of a CSP P is a permutation of the variable-value pairs that preserves the satisfaction of the constraints of P, and hence also preserves the solutions of P.

A symmetry σ maps any variable-value pair (xi, ai) to another, σ(xi, ai). A variable symmetry affects only the variables, so any (xi, ai) is mapped to (xσ(i), ai).

The existence of symmetries in a CSP implies that both solutions and non-solutions may be represented more than once in the search space; a (non-)solution can be permuted to another (non-)solution using a symmetry σ. We hope to exploit an identified symmetry in order to reduce the search space, hopefully meaning shorter runtime. Such exploitation of symmetry is usually called symmetry breaking.
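As a hypothetical MiniZinc illustration: two identical tasks that may not overlap on a shared unary resource can be swapped in any solution, and the resulting swap symmetry can be broken by ordering their start variables.

int: T = 10;                          % toy horizon
array[1..2] of var 0..T: start;       % two interchangeable tasks of duration 1

% the two tasks may not overlap
constraint start[1] + 1 <= start[2] \/ start[2] + 1 <= start[1];

% symmetry breaking: keep only the solutions where task 1 starts first,
% discarding the mirrored half of the search space
constraint start[1] <= start[2];

solve satisfy;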


4.8 Reification

Let us assume, without loss of generality, that the domain D contains exactly two distinguished values {0, 1} for encoding false and true, and let us call a Boolean variable a variable with initial domain D. From a logical point of view, the reification of a constraint c is a constraint, written B ↔ c, that contains one extra Boolean variable B which is true if and only if the constraint c is satisfied. In effect, D |= (B ↔ c) ⇔ ((B = 1 ∧ c) ∨ (B = 0 ∧ ¬c)).

From a constraint programming point of view, the reification of c is the constraint c ⇔ B.

Reification is useful for constructing new functionality using existing constraints. For example, assume we want exactly two out of the three constraints c1, c2, c3 to hold. This is equivalent to

(c1 ∧ c2 ∧ ¬c3) ∨ (c1 ∧ ¬c2 ∧ c3) ∨ (¬c1 ∧ c2 ∧ c3)

which, in turn, is equivalent to the reified constraint

(c1 ⇔ B1) ∧ (c2 ⇔ B2) ∧ (c3 ⇔ B3) ∧ (B1 + B2 + B3 = 2)

The benefit becomes even more obvious if we instead assume we have even more constraints, of which exactly two should hold. For 10 constraints, a non-reified formulation would need a disjunction of (10 choose 2) = 45 conjunctions of 10 constraints, each with exactly 2 non-negated ones. A reified version would need only a conjunction of 11 constraints: the first 10 reify the original constraints into Boolean variables and the last one ensures that exactly 2 of the reified variables are true.
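In MiniZinc, reification is obtained simply by using a constraint in a Boolean context; below is a sketch of the "exactly two out of three" example, with three hypothetical arithmetic constraints over x and y standing in for c1, c2, c3.

var 0..10: x;
var 0..10: y;

var bool: b1; constraint b1 <-> (x + y <= 10);   % B1 <-> c1
var bool: b2; constraint b2 <-> (x = y);         % B2 <-> c2
var bool: b3; constraint b3 <-> (x > 2 * y);     % B3 <-> c3

% exactly two of the three reified constraints must hold
constraint bool2int(b1) + bool2int(b2) + bool2int(b3) = 2;

solve satisfy;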

5 Scheduling of an Avionic System

In this section, we will define the problem of interest for this thesis as well as characteristics of the instances occurring in real avionics systems.

5.1 Problem Statement

• Each processor is represented by a timeline of the same length T. One cycle consists of T time units.

• For each timeline, there is a known set of tasks that shall execute on it.

– Tasks may not span over two cycles by starting before the end of one cycle and ending in the next, thus overlapping the cycle boundary.

• Each task has an earliest start time, a latest completion time, and a duration; all these are constants that have T as upper bound if we index time from 0 to T .

• There are dependencies between tasks, with minimum and maximum (non-negative) gaps. The tasks taking part in the dependency may or may not be part of the same timeline.

– τ1 ≺ τ2 is respected either if τ2 is put later than τ1 on the same timeline, or if τ2 is scheduled in such a way that the first cycle ends before τ2 is executed. In the latter case, τ2 would run its first time in a cycle after the first execution of τ1. However, the gap constraint must still be respected. In Figure 2, the first task on p4 depends on the last task of p3 in the previous cycle.

– The dependencies may form chains of length k, specifying the order of several tasks: {τ1 ≺ τ2, τ2 ≺ τ3, . . . , τk−1 ≺ τk}.


– The last task of a chain must end before the next chain starts.

– Chains may overlap from one cycle to the next.

5.2 Problem Complexity

In [10], Hanen shows that the decision version of CJSP is NP-hard. Here, we show that the problem discussed in this thesis is also NP-hard, by reduction from CJSP. In this section, the problem we want to show is NP-hard is referred to as the avionic scheduling problem (ASP). The constraints of ASP are identical to those of CJSP, with the addition of the non-overlapping of chains and the requirement that any given task instance must terminate in the same cycle in which it started.

Theorem 1. ASP is NP-hard.

Proof. Assume we are given some input for CJSP:

⟨n, T, {τ1, . . . , τk}, {τi ≺ τj, . . .}⟩,

interpreted as a number of timelines, a timeline length, a set of tasks, and a set of precedences, respectively. We transform this into an instance of ASP by the following steps:

Transforming timelines: Both n and T are trivially carried over unchanged, clearly in polynomial time.

Transforming tasks: We want the resulting ASP instance to have a solution exactly when the CJSP instance has one. If the task set is not altered, a resulting schedule for ASP would imply a solution to CJSP. However, failure to find a solution to ASP would not imply that the CJSP instance is unsolvable, since we do not allow overlapping the cycle boundary in ASP.

In order to create an instance where the tasks may overlap the cycle boundary, each task τi = ⟨ESTi, LCTi, δi⟩ is transformed into δi tasks of unit length, with new dependencies of gap 0 relating them.

Let the jth of these tasks be denoted τij, and for simplicity we omit the duration:

τij = ⟨ESTij, LCTij, 1⟩ = ⟨ESTij, LCTij⟩

Between each pair τij, τi(j+1), we add a dependency τij ≺ τi(j+1) with associated range [0, 0]. This forces all unit tasks to execute without interruption as soon as the first task has executed, and we have a chain of tasks without gaps.

The first of these new tasks, τi0, has earliest starting time ESTi. Even if we let all other tasks have EST equal to 0, the precedences imply that the task τij cannot start before ESTi + j.

Similarly, the LCT of the last task, τi(δi−1), is LCTi. All other tasks can be given LCT equal to T, since the dependencies will make sure they end before the next task in the chain. Furthermore, this is done in polynomial time.

Transforming dependencies: The original set of dependencies is left unaltered, with the addition of the dependencies implied by the splitting into unit-length tasks.

Transforming into chains: The notion of chains is not used in CJSP, so the set of chains in the input for ASP is the empty set.

Conclusion: We have seen how to, in polynomial time, transform an instance of CJSP into an instance of ASP such that one is solvable exactly when the other is. Since CJSP is NP-hard by [10], so is ASP. This concludes the proof.
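For concreteness, consider the splitting step on a single hypothetical task τi = ⟨2, 9, 3⟩, i.e., EST 2, LCT 9, and duration 3. It is replaced by the unit-length tasks

τi0 = ⟨2, T, 1⟩,  τi1 = ⟨0, T, 1⟩,  τi2 = ⟨0, 9, 1⟩,

together with the dependencies τi0 ≺ τi1 and τi1 ≺ τi2, each with gap range [0, 0], so the three unit tasks must execute consecutively, mimicking the original task of duration 3 while allowing the chain to straddle a cycle boundary.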


5.3 Characteristics of Instances in Avionics

The instances of interest in this thesis project are subject to the following constraints:

• The number of timelines is between 3 and 30.

• There are between 10² and 10⁴ tasks on each timeline.

• The timeline length T is between 10⁷ and 10⁹.

• The sum of the durations of the tasks of a timeline covers between 10% and 70% of T.

• The number of dependencies typically ranges from 1 to 4 times the number of tasks, and each task is included in at least one dependency.

• Chains involve 6 tasks. The first three tasks are on one timeline, and the rest on a second timeline. Typically, up to 4 chains start with the same three tasks and continue on different timelines. 90% of all dependencies are involved in at least one chain.

6 Constraint Model

We propose a constraint model for the problem at hand. Each proposed constraint is discussed in its own subsection. We use the shorthand notation 1..ℓ for the set {1, 2, ..., ℓ}.

6.1 Instance Data and Derived Constants

• n: the number of timelines (processors),

• T: the timeline length,

• {τ1, . . . , τk}: a set of tasks, each associated with a timeline, where
  – EST(τ): the earliest start time of τ,
  – LCT(τ): the latest completion time of τ,
  – δ(τ): the duration of τ,
  – EST(τ), LCT(τ), and δ(τ) all take values in 0..T,
  – ντ is the set of tasks executing on timeline ν,
  – ECT(τ), the earliest completion time, is equal to EST(τ) + δ(τ),

• {τi ≺ τj, . . .}: a set of precedence constraints, each with an associated gap [γ1, γ2],

• {ch1, ch2, . . .}: a set of chains. Each chain consists of a sequence of dependencies τa ≺ τb, τb ≺ τc, . . ., such that the first task in each dependency but the first is the second task in the prior dependency.

6.2 Decision Variables

For each τ in the set of tasks, we have a decision variable start(τ) with domain 0..T, which should be interpreted as the start time of τ with respect to the beginning of a cycle.

In order to secure consistency and make modelling easier, the redundant variable end(τ) is used, constrained to be equal to start(τ) + δ(τ). This redundant variable is used to simplify stating the problem constraints in Sections 6.3 through 6.6; however, it is not used when searching for solutions in the search space. By setting the domain of end(τ) to 0..T for each τ, we ensure that no task continues executing after the timeline has ended.
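A sketch of how this instance data and these decision variables could be declared in MiniZinc; the array-based names (est, lct, dur, timeline, endt) are chosen for this sketch, and the thesis' actual OscaR model is not reproduced here.

int: n;                                   % number of timelines
int: T;                                   % timeline (cycle) length
int: k;                                   % number of tasks
set of int: TASKS = 1..k;

array[TASKS] of 0..T: est;                % earliest start times
array[TASKS] of 0..T: lct;                % latest completion times
array[TASKS] of 0..T: dur;                % durations
array[TASKS] of 1..n: timeline;           % timeline of each task
array[TASKS] of int: ect = [est[t] + dur[t] | t in TASKS];   % earliest completion times

array[TASKS] of var 0..T: start;          % start time within a cycle
array[TASKS] of var 0..T: endt;           % redundant end-time variable

% channel the redundant end-time variables to the start times
constraint forall(t in TASKS)(endt[t] = start[t] + dur[t]);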


Figure 8: Spans when a task can be executed, for the two cases arising when comparing EST to LCT; the figure shows two cycles separated by the cycle boundary. In the first case, EST is smaller than LCT. In the second case, EST is larger than LCT. The case where both are equal implies that the task has duration 0 and may be handled by the first constraint.

6.3 No Task is Crossing the Cycle Boundary

For any task, either the earliest starting time is before the latest completion time, or the LCT precedes the earliest starting time. The two cases are depicted in Figure 8, where the grey bars represent the span between the EST and the LCT of a task τ. In the second case, the span wraps over two cycles.

If the earliest start time is before the latest completion time, then we only need to express that the end is in the span ECT(τ)..LCT(τ).

For any task where the LCT is before the earliest start time, there are two different cases we need to take into consideration: either the task executes in the same cycle as the earliest starting time, or it executes in the next cycle before the deadline. Each case corresponds to a subset of the timeline.

The respective cases are handled by the conjunction of the following constraints:

∀τ : EST(τ) ≤ LCT(τ) → end(τ) ∈ ECT(τ)..LCT(τ)

∀τ : EST(τ) > LCT(τ) → end(τ) ∈ (δ(τ)..LCT(τ) ∪ ECT(τ)..T )

It should be noted that, although implications usually are bad for runtime in constraint models, the premises of the implications here are constant expressions depending on the instance data rather than on variable assignments. Thus, exactly one of the two conclusions is enforced.
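A MiniZinc sketch of the two cases, reusing the declarations from the sketch in Section 6.2; since the comparison est[t] <= lct[t] involves only instance data, it is evaluated at flattening time and only one of the two membership constraints is posted per task.

% no task crosses the cycle boundary
constraint forall(t in TASKS)(
  if est[t] <= lct[t]
  then endt[t] in ect[t]..lct[t]
  else endt[t] in ((dur[t]..lct[t]) union (ect[t]..T))
  endif
);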

6.4 Tasks on the Same Timeline May Not Overlap

Many constraint programming solvers support the Disjunctive constraint defined in the Global Constraint Catalogue [8]. Disjunctive takes a collection of tasks as argument, where each task is assumed to have a set of possible start times and durations. Disjunctive enforces that no two tasks of non-zero duration overlap, as illustrated in Figure 9.¹ Describing this constraint becomes:

∀ν ∈ 1..n : Disjunctive(ντ)

which adds one Disjunctive constraint per timeline.

¹ Zero-length tasks may be used for modelling reasons, where sets of tasks may be put in equal-sized arrays. Since the cardinalities of the task sets may differ, zero-length tasks may be added.
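Reusing the declarations from the sketch in Section 6.2, this becomes one disjunctive constraint per timeline in MiniZinc; grouping the tasks via the timeline array is an assumption of this sketch.

include "globals.mzn";

% tasks on the same timeline may not overlap
constraint forall(v in 1..n)(
  disjunctive([start[t] | t in TASKS where timeline[t] = v],
              [dur[t]   | t in TASKS where timeline[t] = v]));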


Figure 9: Non-overlapping tasks τ1, τ2, and τ3.

Figure 10: Example of a precedence constraint satisfied by executing the second task of the dependency τ1 ≺ τ2 early in the next cycle. The figure depicts two cycles separated by the cycle boundary.

6.5 Respecting Precedences

Respecting precedences within the same cycle. Remember that a precedence constraint is a tuple (τ1, τ2) together with a range [γmin, γmax] corresponding to a minimum and a maximum gap for the dependency. The tuple may also be denoted by τ1 ≺ τ2. If τ1 is scheduled before τ2 in a cycle, the dependency is satisfied if and only if the gap constraint is satisfied as well.

The start time of τ2 is at least the end time of τ1 plus the minimum gap. Additionally, the start time of τ2 is at most the end time of τ1 plus the maximum gap. In other words, the following two inequalities must hold:

start(τ2) ≥ end(τ1) + γmin
start(τ2) ≤ end(τ1) + γmax

which can be combined into the expression

start(τ2) ∈ (end(τ1) + γmin) .. (end(τ1) + γmax)

and further simplified into

(start(τ2) − end(τ1)) ∈ γmin..γmax     (6.1)

Each precedence constraint must also be respected when it crosses a cycle boundary. Imagine a task set consisting of τ1, τ2, where τ2 has a duration of 1 unit and must be executed in the first time unit. Assume τ1 can be executed at any time and executes for 1 unit. Furthermore, there is a dependency τ1 ≺ τ2 with gap [0, 0]. The only feasible schedule is when τ1 is scheduled in the last time unit and τ2 in the first, making the gap constraint satisfied. In order for this to work, τ2 cannot be executed in the very first cycle. Figure 10 depicts this situation. It is apparent that τ2 directly follows τ1 when the next cycle starts, satisfying the dependency constraint.
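A MiniZinc sketch of constraint (6.1), again reusing the declarations from the sketch in Section 6.2; the dependency arrays dep, gmin, and gmax are names chosen for this sketch, and only the case where both tasks of a dependency execute in the same cycle is covered; the cycle-crossing case discussed above requires an additional disjunction that is not shown here.

array[int, 1..2] of int: dep;     % each row (i, j) encodes a dependency: task i precedes task j
array[int] of int: gmin;          % minimum gap of each dependency
array[int] of int: gmax;          % maximum gap of each dependency

% within-cycle case of constraint (6.1)
constraint forall(d in index_set_1of2(dep))(
  (start[dep[d, 2]] - endt[dep[d, 1]]) in gmin[d]..gmax[d]);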

References
