
DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2017

Evaluating pheromone intensities and 2-opt local search for the Ant System applied to the Dynamic Travelling Salesman Problem

Evaluating pheromone intensities and 2-opt local search for the Ant System applied to the Dynamic Travelling Salesman Problem

Lagerqvist, Klas

Svensson, Erik R.

June 4, 2017

Degree Project in Computer Science, DD142X

Supervisor: Jeanette Hellgren Kotaleski


Abstract

Sammanfattning (Swedish summary)

Contents

1 Introduction
  1.1 Purpose
  1.2 Problem statement
  1.3 Scope
  1.4 Outline
2 Background
  2.1 TSP
  2.2 Dynamic TSP
  2.3 Self-organization and stigmergy
  2.4 Double Bridge Experiments
  2.5 Development of artificial ants
  2.6 Ant System
  2.7 Ant Colony Optimization (ACO)
  2.8 2-opt local search
3 Method
  3.1 Algorithm
  3.2 Measuring performance and robustness
  3.3 TSP Instances
  3.4 Making TSP instances dynamic
  3.5 Setting Q values
  3.6 Setting α and β
  3.7 Setting the ant population's size
  3.8 Number of iterations
  3.9 Test method
4 Results
  4.1 Performance
  4.2 Robustness
5 Discussion
  5.1 On the results
    5.1.1 Dynamic TSP
    5.1.2 Static TSP
    5.1.3 2-opt
  5.2 On the methods
    5.2.1 Choice of parameters
    5.2.2 On the measurements
  5.3 Future studies
6 Conclusion


Chapter 1

Introduction

Ants are social insects that perform a variety of different tasks such as building colonies, foraging and caring for their offspring. Individual ants do not possess any significant intelligence, but when multiple ants work together, they have the capability of carrying out quite complex tasks.

During the 20th century, a great deal of research on ant behavior was performed in an effort to understand the processes by which ants are able to behave in such complex and seemingly intelligent ways. One aspect of ant behavior that has been studied is the foraging behavior of certain ant species. As it turns out, ants are good at finding short paths between a food source and their nest; that is, they try to make sure not to take any detours on the way to the food and on the way home. The mechanism that allows the ants to find a shorter path instead of a longer one is called stigmergy [10, 3], a topic that is covered in the next chapter.

By drawing inspiration from the path-finding techniques employed by foraging ants, a new field of heuristic algorithms, called Ant Colony Optimization (ACO) algorithms, arose during the early 1990s [6, 17]. ACO is essentially a meta-heuristic that can be used to solve different kinds of NP-hard optimization problems [6]. A meta-heuristic is a broad problem-solving technique that can be applied to many different problems, as opposed to a heuristic, which is usually tailored to one specific problem.

The first ACO algorithm invented was the Ant System (AS), which was used to solve the Travelling Salesman Problem (TSP) [7, 5]. Since then, ACO algorithms have been able to achieve world-class performance for many different optimization problems [7]. However, the TSP still serves as a benchmarking problem for many novel ACO algorithms.

1.1 Purpose

Since the DTSP is a more realistic problem, it is beneficial to have good algorithms for solving it. As ACO depends on sending out multiple ants to continuously improve and explore the graph, such algorithms might have the potential to be good at solving the DTSP.

1.2 Problem statement

In this thesis, we study a modified version of the first ACO algorithm, the Ant System, to which 2-opt local search has been added. We test how well the algorithm performs when solving both the TSP and the DTSP, using different parameter configurations that determine how much pheromone the artificial ants can deposit on the edges of the graph. Moreover, each test is run with 2-opt both enabled and disabled to study how it impacts the performance.

1.3 Scope

There are multiple ways of measuring the performance of a heuristic algorithm for a certain problem: the time complexity and speed of the algorithm could be measured, or the quality of the solution that is generated. Since processing power and the number of processing cores are continuously increasing, allowing for greater parallelism and faster run-times, we decided not to look at run-times and instead focus on the end result generated by the algorithm and how robust it is when faced with the continuous changes of the DTSP. We focus on how the amount of pheromone deposited and 2-opt local search affect the performance and robustness of the algorithm.

1.4 Outline

Chapter 2

Background

In order to fully understand the ideas behind the ant-inspired algorithmic techniques, it is necessary to review some of the studies performed on the behavior of ants and other social insects, and closely related concepts such as stigmergy and self-organization. We will also describe the original ACO algorithm, the Ant System, as it makes up the foundation of our algorithm. To begin with, however, we will give a brief overview of the Travelling Salesman Problem.

2.1 TSP

The Travelling Salesman Problem (TSP) is a well known NP-hard problem. In the problem, a set of cities $v_1, v_2, \ldots, v_n$ is given, as well as the distance between each pair of cities. A salesman starts in one of the cities, needs to visit every other city exactly once, and then return to the starting city. The goal is for the salesman to make the route through the cities, also called the tour, as short as possible.

More formally, the tour can be expressed as $v_{i_1}, v_{i_2}, \ldots, v_{i_n}$, where $v_{i_1}$ denotes the starting city. The salesman must minimize the total distance travelled, $\sum_{j=1}^{n-1} d(v_{i_j}, v_{i_{j+1}}) + d(v_{i_n}, v_{i_1})$ [12], where $d(v_i, v_j)$ denotes the distance between the two cities $v_i$ and $v_j$. From now on, cities will be referred to as vertices, and when the TSP needs to be distinguished from its dynamic counterpart, it will be referred to as the Static TSP (STSP).
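To make the tour-length objective concrete, here is a short Python sketch (added for illustration; it is not code from the thesis, and the names tour_length and coords are ours) that evaluates the total distance of a closed tour over points in the plane:

```python
import math

def tour_length(tour, coords):
    """Total length of a closed tour.

    tour   -- list of vertex indices in visiting order, e.g. [0, 3, 1, 2]
    coords -- list of (x, y) positions, one per vertex
    """
    def d(a, b):
        (x1, y1), (x2, y2) = coords[a], coords[b]
        return math.hypot(x1 - x2, y1 - y2)

    # Sum over consecutive edges, plus the closing edge back to the start.
    return sum(d(tour[j], tour[j + 1]) for j in range(len(tour) - 1)) + d(tour[-1], tour[0])

# Example: visiting the corners of a unit square in order gives a tour of length 4.
print(tour_length([0, 1, 2, 3], [(0, 0), (1, 0), (1, 1), (0, 1)]))
```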

2.2 Dynamic TSP

2.3 Self-organization and stigmergy

A complex system is a system composed of different entities or agents interacting with each other. It is common for such a system to possess a global behavior that is more complex than the behavior of the entities at the local level. An ant colony is an example of a complex system, in which the behavior of the colony as a whole is rather complex, but where the ants themselves do not possess any significant intelligence [17]. The emergence of this highly structured and complex behavior observed at the colony level, arising from simple interactions between the ants, is called self-organization [2].

A concept that is crucial to understanding the self-organizational capabilities of ants, and other social insects, is stigmergy. Stigmergy is essentially a form of indirect communication by which animals modify their environment in order to communicate [18, 11]. The concept was originally introduced by Grassé in 1959, in order to explain the way in which termites use a form of indirect communication when they construct their nests. The termites infuse pheromone into pellets of mud which they lay randomly on the ground. Other nearby termites can sense the pheromone, and it stimulates them to lay their mud pellets on the same spot. This process continues, and if the density of termites in the region where the mud is piling up is high enough, a nest will start to take form [6].

Since its inception, the concept of stigmergy has been applied to other social insects, such as ants. Ants' foraging behavior also works by means of stigmergy: they lay trails of pheromone on the ground as they walk to a food source and on the way back to the nest [6]. When other ants come across a trail of pheromone, it increases the probability of them following that trail, so this indirect communication allows ants to guide each other in the direction of the food. In essence, stigmergy may refer to any form of indirect communication whereby agents modify their environment to communicate [11].

2.4 Double Bridge Experiments

The self-organizing properties of foraging ants were studied in great detail in 1990, when Deneubourg and his colleagues [3] conducted an experiment on Argentine ants of the species Iridomyrmex humilis. In the experiment, a nest was connected to a food source via two different bridges of equal length, so the ants were limited to exiting the nest and then choosing one of the bridges to walk across (Figure 2.1a).

Initially when the ants left the nest, each ant randomly picked one of two bridges to walk across. As the ants walked across the bridges, they deposited pheromone trails along their path. Due to random fluctuations, more ants would tend to select one of the bridges, thereby leaving behind a greater amount of pheromone on it. As a result, subsequent ants would have a higher probability of choosing that bridge, as they prefer stronger trails of pheromone over weaker. The amount of pheromone on the more pheromone intensive bridge would steadily increase, eventually leading the ants to converge to that bridge.

Figure 2.1: Double bridge experiment. (a) Equal bridge length; (b) unequal bridge length.

From these experiments, Deneubourg and his colleagues developed a stochastic model which could describe the ants' behavior at any given time. In their model they chose to make two simplifications, but it was deemed that these simplifications would not affect the model's accuracy [3, 10]. A description of the model and its formulas is omitted, but the interested reader can find them in Dorigo and Stützle's book Ant Colony Optimization [6].

2.5 Development of artificial ants

As shown by the experiments, the Argentine ants possess an ability to find the shortest path to a food source via indirect communication. This ability was intriguing enough that computer scientists decided to try to mimic the ants' behavior in algorithms [6].

2.6 Ant System

The first ant inspired algorithm was the Ant System (AS), which was invented by Marco Dorigo and his colleagues in 1991 [5]. Dorigo drew inspiration from the insights uncovered during the double bridge experiments, to create an algorithm that could solve the TSP. To solve the problem, Dorigo created artificial ants that were set to traverse the graph, dropping an artificial pheromone along their way, until they had converged to an adequate tour. A modified version of this algorithm, which contains an additional local search step, is what is studied in this thesis, and the algorithm must therefore be described in detail.

The idea behind the AS algorithm is that digital ants are set to traverse a graph, where they semi-randomly pick which path to take. Each ant starts at a random vertex and traverses the graph by moving to an unvisited vertex in each step (lines 7 - 14).

In each step, when an ant selects the next vertex to move to, it favours edges (paths connecting the vertices) that have a higher amount of pheromone on them, similar to the real ants. However, the artificial ants also favour shorter edges over longer ones, so they try to move to the closest neighboring vertices. The process of selecting the closest neighbors is referred to as the nearest neighbor heuristic. Edges with more pheromone on them, and edges with lower weight, will be selected with a higher probability.

Each ant repeats this selection until it has visited every vertex, and finally it returns to the starting vertex to complete the tour. After all of the ants have created their respective tours, pheromone is deposited along the path each ant took through the graph (lines 19 - 24). This differs from how real ants behave, as they deposit pheromone in real time.

After each iteration of the algorithm, the pheromone values in the graph are lowered to simulate pheromone dissipation (lines 16 - 18), just as real pheromone dissipates after a while. Dissipation is supposed to lead the ants to converge to one path: the pheromone value along that path increases, while the pheromone values along the other edges dissipate and are forgotten [17]. The parameters used in the algorithm are defined as follows:

• τij represents the amount of pheromone on edge (i, j).

• α and β determine how much the ants favour the pheromone and the nearest neighbor heuristic, respectively (line 12).

• ρ is the amount of pheromone dissipation (its value ranges between 0 and 1).

• AntNr determines the size of the ant population.

• Q affects how much pheromone is deposited in the depositing phase (lines 19 - 23). The amount of pheromone deposited on each edge in a tour is Q/ln, where ln is the length of the tour built by ant n.

Data:
    A complete non-directional graph G = (V, E)
    A distance function d : E → R
    A set of parameters α, β, ρ, τ0, Q, AntNr
Result:
    An approximation of the shortest Hamiltonian cycle for G

 1  Begin
 2      foreach edge (i,j) ∈ E do
 3          τij ← τ0;
 4      end
 5      while stop criteria not met do
 6          foreach ant n ∈ {1,...,AntNr} do
 7              put ant n on a random starting vertex of V;
 8              while ant n has not visited all vertices of V do
 9                  let i be the vertex on which ant n is currently located;
10                  let Cand be the set of unvisited vertices;
11                  randomly choose j ∈ Cand with respect to probability:
12                  pij = (τij)^α ∗ (1/dij)^β / Σ_{l ∈ Cand} (τil)^α ∗ (1/dil)^β;
13                  move ant n to vertex j;
14              end
15          end
16          foreach edge (i,j) ∈ E do
17              τij ← τij ∗ (1 − ρ);
18          end
19          foreach ant n ∈ {1,...,AntNr} do
20              let ln be the length of the cycle built by ant n;
21              foreach edge (i,j) of the cycle built by ant n do
22                  τij ← τij + Q/ln;
23              end
24          end
25      end
26      return best Hamiltonian cycle built in the last iteration
27  End
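To show how the initialization, tour-construction, dissipation and deposit steps above fit together, the following is a minimal Python sketch of the Ant System. It mirrors the pseudocode, but the function name, the default parameter values and the fixed iteration count used as the stop criterion are our own illustrative assumptions, and for convenience the sketch returns the best tour found over all iterations rather than only the last one.

```python
import math
import random

def ant_system(coords, alpha=1.0, beta=2.0, rho=0.5, tau0=1.0, Q=100.0,
               ant_nr=10, iterations=100):
    """Minimal Ant System sketch following the pseudocode above (illustrative only)."""
    n = len(coords)
    d = [[math.dist(coords[i], coords[j]) for j in range(n)] for i in range(n)]
    tau = [[tau0] * n for _ in range(n)]              # pheromone tau_ij on each edge
    best_tour, best_len = None, float("inf")

    def tour_len(tour):
        return sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))

    for _ in range(iterations):                       # stop criterion: fixed number of iterations
        tours = []
        for _ in range(ant_nr):                       # each ant builds a tour
            start = random.randrange(n)
            tour, visited = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in visited]
                weights = [(tau[i][j] ** alpha) * ((1.0 / d[i][j]) ** beta) for j in cand]
                j = random.choices(cand, weights=weights)[0]   # selection probability p_ij
                tour.append(j)
                visited.add(j)
            tours.append(tour)
        for i in range(n):                            # pheromone dissipation (lines 16-18)
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour in tours:                            # pheromone deposit Q / ln (lines 19-24)
            ln = tour_len(tour)
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += Q / ln
                tau[j][i] += Q / ln                   # the graph is non-directional
            if ln < best_len:
                best_tour, best_len = tour, ln
    return best_tour, best_len

# Example usage on a few random points:
# random.seed(0)
# pts = [(random.random(), random.random()) for _ in range(20)]
# print(ant_system(pts))
```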

2.7 Ant Colony Optimization (ACO)

Since the invention of the original AS algorithm, many scientists have created improved versions of this algorithm, as well as modified it to solve other optimization problems than the TSP. Such algorithms are nowadays referred to as Ant Colony Optimization algorithms, in short ACO algorithms [6, 4].

2.8 2-opt local search

2-opt local search is a heuristic algorithm that can be used to solve the Travelling Salesman Problem, or to improve an existing solution generated by another TSP algorithm. 2-opt systematically selects two vertices in the tour, vi and vj with i < j: vi is disconnected from vi−1 and connected to vj+1, and likewise vj is connected to vi−1. If this change improves the tour, it is kept; otherwise it is reverted.
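As an illustration (our own sketch, not the implementation used in the thesis): reconnecting vi−1 with vj and vi with vj+1 is equivalent to reversing the tour segment between vi and vj, which is how the move is usually coded. The snippet below keeps applying improving 2-opt moves until none remain.

```python
def two_opt(tour, dist):
    """Repeatedly apply improving 2-opt moves until no improvement remains.

    tour -- list of vertex indices (closed tour; the last vertex connects back to the first)
    dist -- function dist(a, b) returning the distance between two vertices
    """
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = tour[i - 1], tour[i]          # edge (v_{i-1}, v_i)
                c, e = tour[j], tour[(j + 1) % n]    # edge (v_j, v_{j+1})
                # Keep the move only if it shortens the tour.
                if dist(a, c) + dist(b, e) < dist(a, b) + dist(c, e):
                    tour[i:j + 1] = reversed(tour[i:j + 1])  # reconnect by reversing the segment
                    improved = True
    return tour
```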


Chapter 3

Method

This chapter describes how we measured the algorithm's performance when solving Static and Dynamic TSP problems. We cover which problem instances were selected for the STSP tests. The same static instances were also used for the DTSP tests; thus, we describe a process for introducing a real-time element into these static problems, which turns them into DTSP instances.

A portion of the chapter is dedicated to explaining how the numerical parameters were set up for the different TSP instances. We also provide a table that exhaustively describes the parameter configurations used for determining pheromone levels.

3.1 Algorithm

The algorithm studied is a slightly modified version of the Ant System, where a 2-opt local search step has been added between the tour construction step and the pheromone dissipation step. We also created a switch so that the local search step could be turned on and off, allowing us to run experiments both with and without local search enabled (see Algorithm 2). The best path of each iteration of the algorithm is stored.

Begin
    Initialize pheromones;
    for 1 through 100 iterations do
        Each ant constructs a path;
        Optional: 2-opt local search improves each path;
        Update pheromones;
        Store best path of current iteration;
    end
End

3.2 Measuring performance and robustness

Many different ways of measuring the performance of ACO algorithms have been invented by researchers studying ACO algorithms and dynamic optimization problems in general [1, 19]. We used a model for measuring performance which was proposed by Rand and Riolo as a means of measuring the behaviour of Genetic Algorithms [16]. Mavrovouniotis and Yang showed that this model could be applied when studying ACO algorithms tailored to their implementation of the DTSP [14]. In the model, performance refers to the quality of the solutions produced by the algorithms, not to how fast the algorithm runs. Thus, better performance means better solutions, and when it comes to the TSP, a better solution means a tour with a shorter length. Since this thesis also deals with the DTSP, we needed to be able to measure how well the algorithm could handle real-time changes in the TSP instance. Therefore the robustness measurement proposed by Rand and Riolo [16] was also used.

For performance, both the average and the best performance can be measured. We only measured the best performance $P^{best}$ for each of our tests. The average performance could be calculated, but the AS algorithm generally returns the best solution found thus far when it terminates, so it made sense to study the best performance. $P^{best}$ is calculated as follows:

$$P^{best} = \frac{1}{G}\sum_{i=1}^{G}\left(\frac{1}{N}\sum_{j=1}^{N} P^{best}_{ij}\right)$$

$P^{best}_{ij}$ is the best solution produced in iteration $i$ and run $j$ of the algorithm. It is calculated by finding the shortest tour produced by the ant population at the end of each iteration. These values are averaged over all runs $N$, for each iteration $i$. Then these values are averaged again over the number of iterations $G$, producing a single measurement for each test.

Robustness gives a measurement of how much the algorithm is able to adapt to changes in the problem instance, and this tells us how consistent the algorithm is in producing good solutions. We chose to look at the best robustness $R^{best}$ because we only measured the corresponding performance value $P^{best}$. $R^{best}_{ij}$ is calculated as the quotient of the best performance of iteration $i$ divided by the best performance of iteration $i + 1$, during run $j$. If the solution quality declines in iteration $i + 1$ compared to iteration $i$, then the robustness value will be less than 1.

$$R^{best}_{ij} = \begin{cases} 1, & \text{if } P^{best}_{ij} / P^{best}_{(i+1)j} > 1 \\ P^{best}_{ij} / P^{best}_{(i+1)j}, & \text{otherwise} \end{cases}$$

$R^{best}$ is then calculated in exactly the same way as $P^{best}$: by averaging the best robustness $R^{best}_{ij}$ over the number of runs and the number of iterations.

$$R^{best} = \frac{1}{G}\sum_{i=1}^{G}\left(\frac{1}{N}\sum_{j=1}^{N} R^{best}_{ij}\right)$$

Rand and Riolo’s measurements for performance and robustness were useful since they could be applied to both Static and Dynamic TSP instances; the only information they needed in the calcula-tions were the best solution for every iteration. Information about the underlying problem instance and whether it was dynamic or static in its nature, was irrelevant.

3.3 TSP Instances

When testing TSP algorithms, TSP instances from the library TSPLIB are commonly used [5, 14]. In our experiments five TSPLIB instances of varying sizes were used. These were: eil76, kroA100, kroB200, pr299 and gr666.

The number in each instance name specifies the number of vertices in the graph. eil76, kroA100 and kroB200 were selected because they were used by Mavrovouniotis and Yang when they developed and tested a novel algorithm for solving the DTSP [14]. We therefore assumed that these instances had been carefully selected to work with the DTSP. pr299 and gr666 were chosen to find out what results the algorithm would produce if run on slightly larger problems.

3.4 Making TSP instances dynamic

We repeated this swapping procedure until 10% of the vertices had been switched, and this process was repeated between each iteration, thereby permuting 10% of the vertices each time. Which vertices got permuted was determined randomly. The length of the optimal tour would remain the same because the vertices were simply reordered; the only thing that changed was the ordering of the vertices in the optimal tour.
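The first part of the swap procedure's description was lost in extraction, so the sketch below is one plausible reading of the paragraph above, under our own assumptions: randomly chosen pairs of vertices exchange their coordinates, so roughly 10% of the vertices are permuted between iterations while the set of locations, and therefore the optimal tour length, stays the same.

```python
import random

def permute_vertices(coords, fraction=0.10, rng=random):
    """Swap the coordinates of randomly chosen vertex pairs in place,
    so that roughly `fraction` of the vertices change position in the labelling."""
    n = len(coords)
    count = max(2, int(n * fraction))
    count -= count % 2                          # pairing requires an even number
    chosen = rng.sample(range(n), count)        # which vertices get permuted is random
    for a, b in zip(chosen[::2], chosen[1::2]):
        coords[a], coords[b] = coords[b], coords[a]
```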

3.5 Setting Q values

The Q parameter in AS controls the amount of pheromone deposited on each edge, and it has to be configured individually for each problem instance. These Q values have to be found empirically by testing different values. As mentioned in the background, if the optimal solution to an instance is known beforehand, a good starting point is to try a Q value close to the length of the optimal tour [17]. For the instances used in our experiments, all optimal solutions were known.

Accordingly, we tried various Q values close to the optimal solution until a rough Q value that gave promising results had been found, for each static instance. Then we interpolated between the Q value found and 0 to get three more Q values. This produced five different Q values for each instance (including 0). While we did these initial configuration tests, 2-opt was disabled. All of these values, along with the optimal length, are displayed in Table 3.1.

The reasoning behind using multiple Q values for each test was to find out how the amount of pheromone deposited by the ants affected the algorithm. We were interested in seeing to what extent different pheromone intensities could affect the solutions produced.

Table 3.1: All Q values for each instance

Instance    Q1    Q2        Q3        Q4        Q5        Optimum
gr666       0     100 000   200 000   300 000   400 000   294 358
pr299       0     17 500    35 000    52 500    70 000    48 191
kroB200     0     10 000    20 000    30 000    40 000    29 437
kroA100     0     7 500     15 000    22 500    30 000    21 282
eil76       0     300       600       900       1200      538
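The interpolation described above corresponds to an evenly spaced grid between 0 and the empirically found Q value, as the table shows; a trivial illustrative snippet (not the authors' code):

```python
def q_grid(q_max, steps=5):
    """Evenly spaced Q values from 0 up to the empirically found q_max."""
    return [q_max * k / (steps - 1) for k in range(steps)]

print(q_grid(30000))   # kroA100: [0.0, 7500.0, 15000.0, 22500.0, 30000.0]
```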

3.6 Setting α and β

3.7 Setting the ant population's size

The artificial ants used in the algorithm will henceforth be referred to as the ant population. Therefore the number of ants, AntNr, in the algorithm will be referred to as the ant population's size. During initial experiments with the number of ants on STSP instances, we did not find any significant differences in the results between 10 and 100 ants. As each ant required a bit of processing to complete its tour, we opted to use 10 ants to decrease the run-times of the algorithm.

3.8 Number of iterations

In the initial tests we also noticed that increasing the number of iterations after a certain point did not produce better results. Furthermore, running the algorithm more than 100 iterations caused a slight problem as more iterations meant that the pheromone levels would reach zero for certain paths, due to prolonged pheromone dissipation. This would mean that for certain vertices, all edges would have zero chance of getting selected, which led to the ants getting stuck.

3.9 Test method


Chapter 4

Results

This chapter presents the results from the tests. The first section will cover the performance results: how well the algorithm was able to produce short tours. The second section covers the robustness results: how consistently good solutions were produced.

4.1 Performance

Figures 4.1 through 4.5 show the performance results from the tests run on each TSP instance. Each bar represents the Pbest value from each test. The lighter shaded bars indicate tests that were run with lower Q values, and the darker shaded bars represent tests that were run with higher Q values.


Figure 4.1: Performance results for eil76 test instances


Figure 4.3: Performance results for kroB200 test instances


Figure 4.5: Performance results for gr666 test instances

The largest Q value for each instance generally gave the best performance for the STSP, whereas the lower Q values yielded better performance for the DTSP. The highest Q value actually reduced the performance for the DTSP without 2-opt, compared to the second highest Q value. The only exception was eil76, where the two highest Q values yielded poor performance compared to the three lower values. eil76 also achieved a much poorer performance for the Dynamic TSP without 2-opt compared to the same tests for the other instances.

The 2-opt local search for improving the tours had a strong influence on the performance, as it evened out much of the noticeable differences between the performance achieved with different Q values. The effect that the Q values have on the performance can still be observed, but it is much less discernible.

4.2 Robustness

Figures 4.6 through 4.10 present the robustness results in both the Static and Dynamic TSP tests. Each bar represents the Rbest value from each test. It should be noted that a higher robustness value signifies better robustness (as opposed to performance, where a lower value is better), with 1 being the best possible robustness.

Figure 4.6: Robustness results for eil76 test instances


Figure 4.8: Robustness results for kroB200 test instances


Figure 4.10: Robustness results for gr666 test instances

A pattern can be observed throughout the tests: the highest Q value yielded the best robustness for STSP, and the same value also resulted in the worst robustness for DTSP. This pattern can be observed in almost all of the tests. 2-opt improved the robustness by a noticeable amount in every case. The best possible robustness was higher for STSP, when configured with the right Q value. The worst possible robustness for STSP (when lower Q values were used), was roughly the same as the best possible robustness for the DTSP.


Chapter 5

Discussion

This chapter discusses the results, the methods used in the experiments and possible future studies. It is divided into a separate discussion on the results, on the method and on future studies.

5.1 On the results

5.1.1 Dynamic TSP

In the double bridge experiment on foraging ants conducted by Goss and colleagues [10], the stochastic model they made suggested that if a shorter bridge was introduced after the ants had walked across a longer bridge for some time, the ants would still largely stick to the longer path. The model suggested this because in the time before the shorter bridge was introduced, the ants were able to deposit a large amount of pheromone on the longer one, making the shorter bridge seem unattractive.

In our implementation of the DTSP, the weights of some edges are shifted around in each iteration, but the pheromone is not. This seems to cause the artificial ants to behave in a way similar to what was predicted by Goss and colleagues: they tend to walk along a path where pheromone has been laid out, even though that path may have become longer than other paths due to the dynamic changes made to the edge weights. The artificial ants only look at the amount of pheromone on the next edge in each step and how long that edge is, so if the amount of pheromone is large enough, it might influence them to walk down a "bad" edge, even though the edge has a large weight. This is what our results indicate when the Q value was set too high, since the worst performance was obtained for the highest Q values. The highest Q value allows the ants to deposit a larger amount of pheromone, thereby making them favor their old, pheromone-intensive paths more.


These findings were reflected in the robustness results as well. The highest Q value reduced the robustness, which tells us that the ants had a harder time adapting to changes in the graph. When relying more on the semi-random nearest neighbor heuristic, the ants were not affected by previous iterations, since they had no pheromone to influence their decision making. As a result, they traversed the graph in each iteration under the same circumstances as in the first iteration, which explains how they were able to produce a relatively consistent performance.

5.1.2 Static TSP

For the STSP the results were quite the opposite. Here the highest Q value resulted in the best performance. This indicates that the pheromone affected the solution quality positively. The results from the dynamic and the static tests almost seem to be opposites of each other: when a Q value improved the solutions for the static tests, the same Q value damaged the quality for the corresponding dynamic tests.

5.1.3 2-opt

2-opt local search was definitely a good way to improve the performance and the robustness. With the help of 2-opt, the algorithm was able to produce better solutions more consistently. However, 2-opt may increase the run-times by a noticeable amount. Therefore we think that the feasibility of 2-opt has to be determined depending on the situation, because in a real-life environment, the additional run-time may be impractical.

5.2 On the methods

5.2.1 Choice of parameters

nearest neighbor factor starts to take over. For this reason, alternate configurations where β ≠ 2α could be tried instead.

5.2.2 On the measurements

5.3 Future studies

We limited this study to testing only one ACO algorithm: the original Ant System. There are plenty of other ACO algorithms, such as MAX-MIN Ant System and Ant Colony System [4], that can be applied to the DTSP. However, our results seem to suggest that any ACO algorithm in which the ants spread and follow trails of pheromone in a manner similar to the Ant System will probably encounter similar problems when trying to solve the DTSP. This is because the purpose of pheromone in the Ant System is to help the ants converge to a near-optimal path, whereas in the DTSP that path is continuously changing.


Chapter 6

Conclusion


Bibliography

[1] Hajer Ben-Romdhane, Enrique Alba, and Saoussen Krischen. “Best practices in measuring algorithm performance for dynamic optimization problems”. In: Soft Computing 17 (6 2013), pp. 1005–1017.

[2] Eric Bonabeau et al. “Self-organization in social insects”. In: Trends in Ecology and Evolution 12 (5 1997), pp. 188–193.

[3] J.-L. Deneubourg et al. “The Self-Organizing Exploratory Pattern of the Argentine Ant”. In: Journal of Insect Behavior 3 (2 1990), pp. 159–168.

[4] Marco Dorigo, Mauro Birattari, and Thomas Stützle. “Ant Colony Optimization”. In: IEEE Computational Intelligence Magazine 1 (4 2006), pp. 28–39.

[5] Marco Dorigo, Vittorio Maniezzo, and Alberto Colorni. Positive feedback as a search strategy. Tech. rep. Politecnico di Milano, Dipartimento di Elettronica, June 1991.

[6] Marco Dorigo and Thomas Stützle. Ant Colony Optimization. The MIT Press, 2004. isbn: 0-262-04219-3.

[7] Marco Dorigo and Thomas Stützle. "Ant Colony Optimization: Overview and Recent Advances". In: Handbook of Metaheuristics. Ed. by Michel Gendreau and Jean-Yves Potvin. Boston, MA: Springer US, 2010, pp. 227–263. isbn: 978-1-4419-1665-5. doi: 10.1007/978-1-4419-1665-5_8.

[8] Matthias Englert, Heiko Röglin, and Berthold Vöcking. "Worst Case and Probabilistic Analysis of the 2-Opt Algorithm for the TSP". In: Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms. Jan. 2007, pp. 1295–1304.

[9] Casper Joost Eyckelhof and Marko Snoek. "Ant Systems for a Dynamic TSP: Ants caught in a traffic jam". In: Ant Algorithms: Third International Workshop. Ed. by Marco Dorigo, Gianni Di Caro, and Michael Sampels. Sept. 2002, pp. 88–99. doi: 10.1007/3-540-45724-0_8.

[10] S. Goss et al. "Self-organized Shortcuts in the Argentine Ant". In: Naturwissenschaften 76 (12 1989), pp. 579–581.

[11] Francis Heylighen. "Stigmergy as a universal coordination mechanism I: Definition and components". In: Cognitive Systems Research 38 (2016), pp. 4–13.

[12] Jon Kleinberg and Éva Tardos. Algorithm Design. Pearson Education Ltd., 2014. isbn: 978-1-292-02394-6.

[14] Michalis Mavrovouniotis and Shengxiang Yang. "A memetic ant colony optimization algorithm for the dynamic travelling salesman problem". In: Soft Computing 15 (7 2011), pp. 1405–1425.

[15] Pablo Rabanal, Ismael Rodríguez, and Fernando Rubio. "Solving Dynamic TSP by Using River Formation Dynamics". In: Fourth International Conference on Natural Computation. Vol. 1. Oct. 2008, pp. 246–250. doi: 10.1109/ICNC.2008.760.

[16] William Rand and Rick Riolo. "Measurements for understanding the behavior of the genetic algorithm in dynamic environments: a case study using the Shaky Ladder Hyperplane-Defined Functions". In: Proceedings of the 7th annual workshop on Genetic and evolutionary computation. (Washington DC, USA). Ed. by Franz Rothlauf. ACM, June 2005, pp. 32–38. doi: 10.1145/1102256.1102263.

[17] Christine Solnon. Ant Colony Optimization and Constraint Programming. John Wiley & Sons, Inc, 2010. isbn: 978-1-84821-130-8.

[18] Guy Theraulaz and Eric Bonabeau. “A Brief History of Stigmergy”. In: Artificial Life 5 (2 1999), pp. 97–116.
