
UPTEC F 19028

Degree project (Examensarbete), 30 credits, June 2019

Placement support for signal intelligence units

Olle Frisberg


Faculty of Science and Technology, UTH unit

Visiting address: Ångströmlaboratoriet, Lägerhyddsvägen 1, Hus 4, Plan 0

Postal address: Box 536, 751 21 Uppsala

Telephone: 018 – 471 30 03

Fax: 018 – 471 30 00

Website: http://www.teknat.uu.se/student

Abstract

Placement support for signal intelligence units

Olle Frisberg

The goal of this thesis was to develop an optimization model that automatically finds optimal groupings for signal intelligence units, aiming to maximize surveillance capability in a user-defined target area. Consideration was taken to transportation possibilities, type of terrain, and the requirement of radio communication between the direction finders. Three scenarios were tested, each providing its own topographical challenges. Several derivative-free optimization methods were implemented and evaluated: global methods that find approximate groupings using a geometrical model developed for this purpose, and the local method pattern search, which used the already existing model. Particle swarm and a genetic algorithm turned out to be the best global solvers. A grouping found by a global method was then improved by pattern search, which evaluates possible groupings nearby. The greatest practical challenge for particle swarm and pattern search was finding feasible placement points given a desired direction and step length. Workarounds were developed, allowing for more dynamic search patterns. For future use, the placement support should be tested on more scenarios with different prerequisites, and the approved terrain types have to be adjusted according to the kind of vehicle carrying the direction finder.

ISSN: 1401-5757, UPTEC F 19028
Examiner (Examinator): Tomas Nyberg
Subject reader (Ämnesgranskare): Di Yuan
Supervisors (Handledare): Petter Bivall and Leif Festin


Populärvetenskaplig sammanfattning (popular science summary)

When signal intelligence units are deployed to listen for radio traffic or radar signals in order to position a transmitter in a given target area, the deployment involves a lot of manual work based on an experience-based approach. Consideration must be taken to whether transportation to the deployment sites is possible; typically two to four units are used. The terrain must also permit deployment; a car cannot, for example, be placed in water.

The purpose of this work was to create a placement support for the Swedish Defence Research Agency (Totalförsvarets forskningsinstitut, FOI) that automatically places the signal intelligence units so that the transmitter being listened for can be positioned in as large an area as possible. Several different optimization methods have been developed and investigated: global search methods that find good groupings of signal intelligence units from a geometric perspective, and a local search method that starts from a geometrically good grouping and then tries to improve the solution by evaluating possible points nearby with a real wave propagation model.

In the three test scenarios that were used, the fast geometric model that was developed turned out to agree relatively well with the real wave propagation model.

The methods' parameters were optimized so that the results improved and the running time decreased. For future work, the methods and the geometric model should be tested on more complex scenarios, and the terrain needs to be assessed as usable or not based on the type of vehicle carrying the sensors.


Contents

1 Introduction
  1.1 Background
  1.2 Requirements
  1.3 About the project
  1.4 Determining a position
2 Problem description
  2.1 Optimization problem formulation
  2.2 Size of solution space
  2.3 Computing a result
  2.4 Test scenarios
  2.5 Experience-based techniques
3 Theory
  3.1 Derivative-free optimization
    3.1.1 Generalized pattern search
    3.1.2 Random search
    3.1.3 Particle swarm
    3.1.4 Genetic algorithm
  3.2 Surrogate model
  3.3 Flood fill
4 Method
  4.1 Flood fill for generating the placement grid
  4.2 Finding feasible points
    4.2.1 Square neighborhood
    4.2.2 Choosing the highest feasible point
  4.3 Initial positions
    4.3.1 Particle swarm implementation
    4.3.2 Surrogate model implementation
  4.4 Recursive random search implementation
  4.5 Pattern search implementation
    4.5.1 Pattern search with constant recursive square size
    4.5.2 Pattern search with dynamic square size
  4.6 Genetic algorithm implementation
  4.7 Combined solvers
5 Results
  5.1 Feasible placement points
  5.2 Surrogate model
  5.3 Recursive random search
  5.4 Particle swarm
  5.5 Genetic algorithm
  5.6 Initial positions
  5.7 Pattern search
    5.7.1 Different methods of choosing a feasible point
    5.7.2 Opportunistic run
  5.8 Combined solvers
  5.9 Global optimum in smaller scenario
6 Analysis
  6.1 Feasible placement points
  6.2 Surrogate model versus real model
  6.3 Optimization methods
    6.3.1 Particle swarm
    6.3.2 Recursive random search and genetic algorithm
    6.3.3 Function calls per iteration for RRS, PSO and GA
    6.3.4 Pattern search
    6.3.5 Combined solvers
  6.4 Global optimum
  6.5 Future work
7 Conclusions
Appendices
A Result tables
  A.1 Data for initial positions
  A.2 Data for selection types
  A.3 Data for polling types
  A.4 Data for combined solvers


Terminology

EW - Electronic warfare
EA - Electronic attack
EP - Electronic protection
ES - Electronic support
C2 - Command and control
SIGINT - Signal intelligence
DF - Direction finder
FU - Fusion unit
Skip zone - Region where a transmission cannot be received.
Direction of arrival (DOA) - A radio or radar signal's direction of origin, also called angle of arrival (AOA).
Jammer - Signal-blocking device.
Brute-force search - Evaluation of all possible solutions.
Hyperparameter - Parameter of a method, not of the problem itself.
Meta-optimization - Optimization performed to find optimal hyperparameters of a method.
Cardinal points/directions - North, east, south, and west (N, E, S, and W).
Intercardinal points/directions - NE, SE, SW, and NW.
Positive basis - Positively independent vectors that span R^n.
GPS - Generalized pattern search
PSO - Particle swarm optimization
RRS - Recursive random search
GA - Genetic algorithm


Acknowledgment

I would like to thank everyone who has been involved in this project and helped me along the way. A special thanks to my supervisor at FOI, Petter Bivall, who helped me stick to the main goal of the project and for all the proofreading of this report. A big thanks to Magnus Dahlberg and Hanna Lindell, who helped me with the simulation framework and shared their practical expertise, to Leif Festin for the practical insights into signal intelligence, and to my subject reader Di Yuan at Uppsala University for the constructive feedback concerning the report and the optimization methods.

Olle Frisberg

Linköping, May 2019


1 Introduction

Electronic warfare (EW) comprises military operations that use the electromagnetic spectrum to discover, exploit, influence, obstruct, or prevent the enemy's use of the spectrum.

EW has been used since the early 1900s and is an increasingly important part of our weapons and Command and Control (C2) systems[5]. EW is usually divided into three subgroups: electronic attack (EA), electronic protection (EP), and electronic support (ES). One important part of ES is the ability to position the enemy's transmitters, e.g. radio and radar, with signal intelligence (SIGINT). A receiver designed to determine a signal's direction of origin is usually referred to as a direction finder (DF).

With one stationary DF it is possible to obtain a direction (or bearing) to the transmitter, but not a location. If the DF is moving (for instance on an airplane), a position can be determined with only one DF by taking several measurements at different times and positions. With (at least) two directions, the transmitter location lies at their intersection. One common method to compute this intersection when the directions are straight lines (bearings) is triangulation[1].
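As a minimal illustration of the triangulation step, the Python sketch below intersects two bearing rays from known DF positions. The function name and interface are hypothetical; a real DF fusion also weighs the measurement uncertainties discussed in Section 1.4.

```python
import math

def triangulate(p1, b1, p2, b2):
    """Intersect two bearing rays from DF positions p1 and p2.
    Bearings b1, b2 are angles in radians from the positive x-axis.
    Returns the intersection point, or None for parallel bearings."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # Solve p1 + t1*d1 = p2 + t2*d2 as a 2x2 linear system for t1.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        return None  # parallel bearings: phi = 0, positioning impossible
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

For example, two DFs at (0, 0) and (10, 0) with bearings of 45 and 135 degrees locate the transmitter at (5, 5), where the angle φ between the bearings is the ideal 90 degrees.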

1.1 Background

Today, when SIGINT is used for locating transmitters, two to four DFs are positioned in patterns based on experience, often involving a lot of manual work. Consideration must be taken to the type of terrain the DF is placed on; a car antenna cannot, for example, be placed in a lake or in a forest with tall trees. It must also be possible to transport the DF to the desired location.

1.2 Requirements

The goal of this project was to develop an optimization model that automatically finds the optimal positions of the DFs. The user should be able to input a placement area A_G where it might be possible to place the DFs, a target area A_T in which the model should maximize surveillance capability, and a region of interest A_ROI with the stronger constraint that it has to be covered by the sensors and should be prioritized. Another requirement was the ability to drop the coverage constraint on A_ROI if it could not be fully satisfied. The model should consider terrain type and accessibility at the proposed DF locations.

1.3 About the project

The project was conducted at the Swedish Defence Research Agency (FOI), at the Department of Electronic Warfare Assessment (Telekrigvärdering). All work was implemented as a new sub-module in the existing simulation framework EWSim (Electronic Warfare Simulation interface model), developed by FOI. The scenario planning part of EWSim is called NetScene, which is a Geographic Information System (GIS). Screenshots from the GIS are included to provide basic understanding of the problem and to show the different test scenarios used.

Due to both information security and intellectual property rights, this report does not include any of the implemented code. However, all methods are described thoroughly and should be possible for the reader to implement.


1.4 Determining a position

In order to position a transmitter, the DFs must be able to communicate their bearings to a common unit for fusion, here called a fusion unit (FU). When the FU receives information from the DFs, the data is combined to determine the transmitter's position.

The level of accuracy depends on the angle (φ) between the DFs with respect to the transmitter, and on several properties of the DFs. The quality of the determined position is represented by an uncertainty ellipse, see Figure 1. When the angle φ is 90 degrees, the uncertainty ellipse is a circle with the smallest possible positioning uncertainty.

When φ is 0 degrees, one of the uncertainty axes becomes infinitely long, i.e. positioning is not possible[1].

Figure 1: The yellow line with bi-directional arrows shows that a two-way communication link exists between the DFs. The blue lines show the bearings on the transmitter in red.


2 Problem description

2.1 Optimization problem formulation

When reformulating the assignment as a constrained optimization problem, one obtains

\[
\begin{aligned}
\max_{X}\quad & C(X)|_{A_T} \\
\text{subject to}\quad & X_i \notin \Omega_{BT}, \\
& C(X)|_{A_{ROI}} = 100\%, \\
& \text{transportation is possible}, \\
& \text{communication exists},
\end{aligned}
\tag{1}
\]

where X is a vector with the DF positions, C(X)|_A the coverage in area A, and Ω_BT the set of points with bad terrain type.

The coverage C(X)|_A in an area A is calculated as

\[
C(X)|_{A} = \sum_{i=1}^{N_A} C_i(X)
\tag{2}
\]

where C_i(X) is the coverage in one grid point of area A.

C_i(X) was calculated from existing models for electromagnetic wave propagation with respect to terrain, DF, and transmitter equipment. That communication must exist simply means that bearing data can be communicated via radio to the FU. The third constraint in Equation 1, that transportation is possible, means that the DFs can be transported to the desired locations via roads and via approved terrain connected to the roads. The different coverages and regions can be seen in Figure 2.

Figure 2: The top rectangular coverage shows where it is possible to place the DFs. Red corresponds to feasible points, and the polygon defines a constraining area that the units have to be placed within. The bottom coverage shows the positioning quality in the target area. The outer polygon is A_G and the inner polygon A_ROI. Red corresponds to a successful positioning, green to a bearing, blue to detection, and transparent to no detection.


The last constraint, that communication must exist, is indirectly fulfilled: if it is violated, C(X) will have a smaller value, so the constraint can be dropped. Since the approved placement points do not change, it is possible to precompute a list or table, Ω, of feasible points that have both an accepted terrain type and a transportation possibility. One could also make C(X)|_{A_ROI} = 100% part of the objective function to be maximized by assigning a large weight W_ROI to it; this also makes it possible to drop the constraint by setting W_ROI = 0, or to control how much it should be prioritized.

With these three modifications to Equation 1 we have

\[
\begin{aligned}
\max_{X}\quad & f(X) = C(X)|_{A_T} + W_{ROI}\, C(X)|_{A_{ROI}} \\
\text{subject to}\quad & X_i \in \Omega
\end{aligned}
\tag{3}
\]

where f(X) is the final objective function to be maximized.

2.2 Size of solution space

The geographical data that was used contained terrain type (forest/sea/marsh etc.) and height every 25 meters. It is not an unrealistic scenario that the approved placement area of the DFs spans 50 × 50 km, corresponding to 4 000 000 placement points.

Let us assume that only 0.1% of these have an allowed terrain type and are close to a road. With three DFs, this corresponds to a feasible region of 4000³ = 6.4 · 10¹⁰ positioning possibilities. If a low-resolution coverage diagram takes one second to calculate, finding the exact solution with a brute-force search would take about 2029 years.
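The estimate can be checked with a few lines of arithmetic (the 0.1% feasibility and the one-second evaluation time are the assumptions stated above):

```python
grid_points = (50_000 // 25) ** 2        # 25 m grid over 50 x 50 km
feasible = int(grid_points * 0.001)      # assume 0.1% feasible -> 4000 points
evaluations = feasible ** 3              # three DFs -> all possible groupings
years = evaluations * 1.0 / (3600 * 24 * 365)  # at 1 s per coverage diagram
print(grid_points, evaluations, int(years))    # -> 4000000 64000000000 2029
```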

2.3 Computing a result

A common metric in optimization is the number of function calls needed before the algorithm converges. In the present work, a function call corresponds to the computation of a coverage diagram in A_T and A_ROI, see Equation 3 and the bottom rectangle in Figure 2. Every time a DF is moved, the coverage has to be recalculated. Every grid point in the diagram holds a value C_i(X) that represents the positioning quality as a decimal number:

\[
C_i(X) =
\begin{cases}
0, & \text{if undetected} \\
0.1, & \text{if detected} \\
0.3, & \text{if bearing} \\
0.3 + 0.6\, e^{(t - v)/10000}, & \text{if positioned and } v > t \\
0.9, & \text{if positioned and } v \le t
\end{cases}
\tag{4}
\]

where v is the positioning variance and t the positioning threshold. The values between 0.3 and 0.9 are continuous and depend on the positioning variance: with high variance the value is close to a bearing (0.3), and with low variance it is close to the best possible positioning quality, 0.9.
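A direct Python rendering of the scoring rule is given below, assuming the exponential term reads e^{(t−v)/10000}, which matches the described behavior (values falling from 0.9 toward 0.3 as the variance grows). The string-valued `status` argument is an illustrative simplification of the real model's detection states.

```python
import math

def point_score(status, v=None, t=1.0):
    """Positioning quality C_i(X) for one grid point, per Equation 4.
    status: 'undetected' | 'detected' | 'bearing' | 'positioned';
    v: positioning variance, t: positioning threshold."""
    if status == "undetected":
        return 0.0
    if status == "detected":
        return 0.1
    if status == "bearing":
        return 0.3
    if v <= t:
        return 0.9          # best possible positioning quality
    return 0.3 + 0.6 * math.exp((t - v) / 10000.0)
```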

Computing the objective function value, i.e. summing up the coverage diagram values, can take from 0.1 seconds in a small area with a low-resolution grid to minutes in a large area with high resolution. The positioning quality calculation was already parallelized with CPU threads, so parallelizing an optimization method would probably not utilize any more cores, just give rise to more overhead from swapping threads in and out. One alternative to speed up the process could be to parallelize the coverage calculation on the GPU, but such approaches are outside the scope of this thesis.

2.4 Test scenarios

Three different scenarios, labeled S1-S3 in Figure 3 and Table 1, were used to compare the methods under different conditions with respect to problem size, the distribution of the constraining areas, topography, terrain types, and road accessibility.

Testing in different scenarios is important in order not to over-fit the optimization methods' parameters to one specific terrain, since the placement support should perform equally well in all parts of Sweden.

Table 1: Specifications for the three scenarios. Feasible points is the number of points in A_G available for placement; total points is the total number of points in A_G.

     Feasible points   Total points   Percent feasible
S1   99 306            486 080        20.4%
S2   240 961           448 840        53.7%
S3   57 652            335 844        17.2%

In Figure 3a (S1) the terrain is close to the ideal case: the elevation is constant (or very smooth) everywhere, with no concrete obstacles preventing line of sight to the different positions in A_T and A_ROI. Hence the smooth, non-noisy circles of bearings in blue from the DFs, and the intersecting bearing regions that make up the successful positioning regions in red. For S2 and S3 the theoretical bearing and positioning regions are no longer easy to see. The result is much more noisy, and the positioning quality varies in an unpredictable and discontinuous way. Bear in mind that only the coverage from one possible grouping per scenario is visible in Figure 3.


(a) Scenario 1 (S1)

(b) Scenario 2 (S2)

(c) Scenario 3 (S3)

Figure 3: The three different scenarios that were tested.


2.5 Experience-based techniques

There are a few experience-based conditions that could be used to steer the optimization process.

Firstly, the angle between the DFs, as seen from the center of A_ROI, should be 90 degrees. However, this does not guarantee that the solution is optimal, or even good.

For instance, there could be high forest or even a mountain between the DFs, preventing communication, or a DF could be positioned in a hole, which also results in a skip zone. In the same way, there could be obstacles between the DFs and the transmitter.

However, solutions involving a too small φ, meaning a short distance between the DFs, do not have to be tested, as they would provide a result with too low accuracy, see Section 1.4.

Secondly, in order to maximize range and reduce obstacles between a DF and the transmitter, a grid with the height of the terrain in meters (a.k.a. height coverage) is often used (today manually) to place the DFs at the highest possible altitude, so that the signals can propagate as freely as possible. Having a high antenna achieves the same result.

Thirdly, to get a bearing on as many points as possible in A_T and A_ROI, the DFs should be placed as close to the target area(s) as possible. This is not always possible, but the user can choose the shape of A_G freely and where it should be located.

Practically, this means that the DFs in S1 should be positioned to the south, in S2 to the south east, and in S3 to the west.


3 Theory

3.1 Derivative-free optimization

Derivative-free (gradient-free, model-based, black-box, direct search) optimization has become a huge research area. It is used in many real-world engineering problems where computing the gradient or Hessian is too expensive or simply not possible at all.

The objective function could, for example, be hidden inside a binary file, come from physical experiments, or come from a complex computer simulation[7][14]. In this case, f(X) is very noisy (due to the heavy terrain dependency) and too expensive for ∇f(X) to be computable.

Derivative-free methods have been around since the early 1960s, when Spendley et al. proposed their simplex-based method (later refined by Nelder and Mead) and Hooke and Jeeves introduced the pattern search method in 1961[14].

In 1997 the term generalized pattern search was coined by Torczon as a subfamily of derivative-free methods, in order to distinguish the deterministic pattern search methods (which have similar convergence properties) from stochastic methods like genetic algorithms and random search algorithms, which were developed without convergence analysis in mind[16].

The advantage of pattern search methods is that they do not require a lot of function calls and are, under mild conditions, guaranteed to converge to a stationary point[16][9]. However, other well-known derivative-free methods like genetic algorithms and particle swarm cover a greater part of the search space and often find a better solution, at the cost of a significant increase in the number of function calls[9].

Many practical solvers therefore combine multiple methods to get both global coverage and local convergence[14].

3.1.1 Generalized pattern search

Generalized pattern search methods include all methods that poll points a step length ∆ away from the current point, according to some pattern, in order to find a better solution. If a better solution is found, it is chosen as the current point in the next iteration and ∆ is increased by a factor k1 ≥ 1. The number of points equals the number of directions. If a better solution cannot be found, the step size is decreased by a factor k2 such that 0 < k2 < 1. A poll that computes the objective function value for all points and chooses the best is called complete. A poll that chooses the first evaluated point that is better is called opportunistic[14][12].

The pattern consists of multiple direction vectors v_i that form a positive basis of the space R^n, where n is the number of dimensions of the problem. That the pattern is a positive basis means that no direction vector can be a positive combination of the other direction vectors, and that every point in R^n can be expressed as a positive linear combination of the direction vectors. One can prove that the number of vectors, |{v_i}|, in a positive basis must satisfy the inequality n + 1 ≤ |{v_i}| ≤ 2n[3].

A positive basis at the lower bound (n + 1) is called a minimal basis, and one at the upper bound (2n) a maximal basis. These two are the most commonly used bases in practice[14][12].
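As an illustration, a complete-poll pattern search in two dimensions with the maximal basis {E, N, W, S} might look as follows. This is a generic sketch, not the thesis implementation; all parameter values are arbitrary.

```python
def pattern_search(f, x0, step=8.0, k1=2.0, k2=0.5, min_step=0.5):
    """Complete-poll generalized pattern search in 2D, maximizing f.
    Uses the maximal positive basis {E, N, W, S} (2n directions)."""
    x, fx = x0, f(x0)
    basis = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    while step >= min_step:
        polls = [(x[0] + step * dx, x[1] + step * dy) for dx, dy in basis]
        best = max(polls, key=f)          # complete poll: evaluate all points
        if f(best) > fx:
            x, fx = best, f(best)
            step *= k1                    # successful poll: expand the step
        else:
            step *= k2                    # failed poll: shrink the step
    return x, fx
```

On a smooth concave test function such as f(x, y) = −((x − 3)² + (y − 4)²), the search contracts onto the maximum at (3, 4).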

3.1.2 Random search

One of the simplest methods for derivative-free global optimization is random search (RS), sometimes called pure random search. RS chooses a number of random points in the whole search space and returns the point with the highest function value. Many other global optimization methods try to improve upon RS, and the method is therefore often used for comparison when benchmarking[19].

An alternative to RS is recursive random search (RRS), also called pure adaptive search. After a number of points in the whole search space have been evaluated and the best point has been found, the search space shrinks to a hyper-rectangle centered at the best point. RS performs exploration of the search space, while RRS performs both exploration and exploitation by reducing the search space to interesting regions[19][18].
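The shrink-and-recentre idea can be sketched as below. Function name, sample counts, and shrink factor are illustrative choices, not values from the thesis; the shrunken rectangle is allowed to extend past the original bounds for simplicity.

```python
import random

def recursive_random_search(f, bounds, samples=50, shrink=0.5, rounds=5, seed=1):
    """RRS sketch: sample uniformly, then recentre and shrink the
    hyper-rectangle around the best point found so far. f is maximized."""
    rng = random.Random(seed)
    (lo_x, hi_x), (lo_y, hi_y) = bounds
    best, best_f = None, float("-inf")
    for _ in range(rounds):
        for _ in range(samples):
            p = (rng.uniform(lo_x, hi_x), rng.uniform(lo_y, hi_y))
            fp = f(p)
            if fp > best_f:
                best, best_f = p, fp
        # Exploitation step: shrink the rectangle around the incumbent.
        wx = (hi_x - lo_x) * shrink / 2
        wy = (hi_y - lo_y) * shrink / 2
        lo_x, hi_x = best[0] - wx, best[0] + wx
        lo_y, hi_y = best[1] - wy, best[1] + wy
    return best, best_f
```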

3.1.3 Particle swarm

Particle swarm optimization was introduced in 1995 and works by the principle of swarm intelligence[6]. A number of particles (NP) are generated that search the n-dimensional problem space in order to find a better solution.

The velocity of a particle is updated from three main components in Equation 5: the current velocity v_i, the local best solution p_i known to the particle, and the global best solution p_g that the swarm has found so far[15].

\[
v_i = w v_i + c_1 r_1 (p_i - x_i) + c_2 r_2 (p_g - x_i)
\tag{5}
\]

The factor w is the inertia and controls what impact the last velocity of a particle has on the update. The particle searches more locally for smaller values of w and explores more of the search space for larger values of w. The value is typically decreased during a run, from 0.9 to 0.4 according to [4] and from 1.4 to 0 according to [15]. r_1 and r_2 are two randomly generated variables such that r_1, r_2 ∈ [0, 1]. c_1 and c_2 steer how much influence the local and global best solutions have on the new velocity. Low values of c_1 and c_2 let the particle travel far away from the best known positions until it is pulled back[4]. Typical values are around 2, giving a mean of 1 when multiplied with r_1 and r_2[6]. The values mentioned above are just rules of thumb for general problems and should be meta-optimized for a specific problem.

Since each generation (or time iteration) of particle swarm evaluates NP objective function values, the total number of function calls will be G · NP, where G is the number of iterations. It has been shown that imposing a maximum number of function calls on particle swarm results in poor performance. A better stopping criterion is to analyze the distribution of the particles: the maximum distance to the best solution is small if the swarm has converged[20][8].
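The velocity update of Equation 5 with a linearly decreasing inertia can be sketched as follows for a 2D maximization problem. Swarm size, iteration count, and the square search box are illustrative, not the thesis configuration.

```python
import random

def pso(f, bounds, n_particles=20, iters=60, w=0.9, c1=2.0, c2=2.0, seed=1):
    """Particle swarm sketch in 2D, maximizing f over a square box.
    Velocity update follows Equation 5; inertia decays from w to 0.4."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(n_particles)]
    V = [[0.0, 0.0] for _ in range(n_particles)]
    P = [x[:] for x in X]                   # per-particle best positions p_i
    Pf = [f(x) for x in X]
    g = max(range(n_particles), key=lambda i: Pf[i])
    gbest, gbest_f = P[g][:], Pf[g]         # global best p_g
    for it in range(iters):
        wt = w - (w - 0.4) * it / iters     # linearly decreasing inertia
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (wt * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # clamp to box
            fx = f(X[i])
            if fx > Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx > gbest_f:
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f
```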

3.1.4 Genetic algorithm

Another commonly used heuristic derivative-free optimization method is the genetic algorithm (GA), which dates back to the beginning of the 1950s, when scientists first started to study artificial intelligence by trying to mimic natural reproduction and mutation[2][10]. Similar to PSO and RRS, GA also has an initial population of randomly generated solutions (referred to as individuals), and in every iteration (generation) the method tries to modify the population in the hope of finding a new best individual. GA does this with the following steps in every generation[11]:

• Compute the objective function value (fitness) for each individual

• Copy some of the best individuals (referred to as elites) unchanged to the new generation, in order not to forget the "best found so far" solutions

• Select parents for the next generation based on the fitness values

• Fill the rest of the new generation by performing mutation on one parent or crossover with two parents

Equation 6 shows how many individuals of each type a population contains:

\[
N_{tot} = N_{elites} + N_{crossover} + N_{mutations}
\tag{6}
\]

where the values of N_tot and N_elites are parameters chosen by the practitioner. Whether the rest of the new population (once the elite individuals have been copied) should be crossover or mutation individuals is determined by the crossover probability P_cross. For example, P_cross = 0.8 means that on average 80% of the remaining individuals will be crossovers and 20% mutations.

One way of selecting which individuals should be used as parents is to sort the population in descending order according to their fitness values and then generate a probability distribution according to Equation 7:

\[
P(X_i) = \frac{\text{fitness}(X_i)}{\sum_{j=1}^{N_{tot}} \text{fitness}(X_j)}
\tag{7}
\]

where X_i is an individual. When the probabilities have been computed, a random number 0 ≤ r ≤ 1 is generated for each new individual, and the first individual i in the old population such that r ≤ \sum_{j=1}^{i} P(X_j) is used as a parent for the new individual. Observe that the sum of all P(X_i) equals one, since this forms a cumulative distribution. This approach of selecting parents ensures that individuals with higher fitness are more likely to be selected[2].
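This fitness-proportional (roulette-wheel) selection can be sketched as below; instead of normalizing the fitness values first, it is equivalent to scale the random number by the fitness total. The helper name is illustrative, and non-negative fitness values are assumed, as with the coverage objective.

```python
import random

def select_parent(population, fitnesses, rng=random):
    """Fitness-proportional (roulette-wheel) selection per Equation 7.
    Assumes non-negative fitness values."""
    total = sum(fitnesses)
    r = rng.random() * total      # equivalent to comparing r in [0,1] to sums of P(X_i)
    cumulative = 0.0
    for individual, fit in zip(population, fitnesses):
        cumulative += fit
        if r <= cumulative:
            return individual
    return population[-1]         # guard against floating-point round-off
```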

What mainly distinguishes GA from RRS and PSO is the crossover operation. The crossover philosophy is that combining good parts (genes) from the parents can produce an even better child.

3.2 Surrogate model

In order to minimize the number of expensive function calls, a surrogate function can be used that has some insight into the real model and is more lightweight to compute[17]. The surrogate can then be used to perform a global search for finding out approximately, though not exactly due to its lower accuracy, where the global optimum might be[17][14].

3.3 Flood fill

A common strategy for finding connected components inside an image is the flood-fill algorithm[13]. Given a starting pixel with a certain color, flood fill finds all connected pixels with the same color (or a similar color, up to some threshold). This is done by adding surrounding pixels with the same color to a stack and then repeating the process for each pixel in the stack. When the stack is empty, all connected pixels have been found. There are two variants of the algorithm: flood-fill4, where only the cardinal neighbors (N, E, S, and W) are considered, and flood-fill8, where the intercardinal neighbors (NE, SE, SW, and NW) are also considered[13].


4 Method

4.1 Flood fill for generating the placement grid

For a point to be a feasible candidate for DF placement, it has to be inside the placement polygon, have an allowed terrain type, and be connected to a road via points with allowed terrain types (i.e. be accessible). One way of solving this problem is to first find all regions consisting only of road points or points with allowed terrain types, and then ignore regions not containing any roads, since such regions would be inaccessible.

When applying flood fill from Section 3.3 to merge roads with approved terrain, a pixel corresponds to a grid point and its value is binary, either one or zero. A grid point has the value one if it represents an allowed terrain type or is part of a road, and lies inside the placement polygon; otherwise it is given the value zero. To find all regions instead of just one, flood-fill8 was applied to every point that had the value one and did not already belong to another region.

4.2 Finding feasible points

Regardless of optimization method, the problem of finding a feasible grid point (x_i, y_i) ∈ Ω for a DF i remains. The simplest way of choosing only grid points in Ω would be to store the feasible points in a list. This method would, however, lose the geometric connection between the points, a property the objective function is highly dependent upon.

4.2.1 Square neighborhood

One approach to finding a feasible point nearby is to search an area around the selected point (x, y), for example a square ranging from the north-west corner (x − w/2, y − w/2) to the south-east corner (x + w/2, y + w/2), where w is the square width. One question that arose was how large the square should be when searching for feasible points. Dividing the grid into too many squares would result in very few feasible directions, especially in S3, which had very few approved placement points. On the other hand, dividing the grid into too few squares would result in very few iterations, and the same optimization problem would have to be solved again inside the large squares.

Another approach is to dynamically increase the square width by a factor k_s > 1 if no feasible points were found in the area, see Algorithm 1. In this way a feasible point will always be found, given a high value of the maximum width w_max, unless Ω is empty.

Algorithm 1: Dynamic square size
Input: Infeasible grid point p = (x_c, y_c)
Output: Feasible grid point p_f = (x_f, y_f)
if p is feasible then
    return p
end
w = 2
while w ≤ w_max do
    S = square with center at p and width w
    for q ∈ S do
        if q is feasible then
            return q
        end
    end
    w = k_s · w
end

4.2.2 Choosing the highest feasible point

Instead of choosing the first feasible point in each square (see Algorithm 1), another possibility is to choose the highest feasible point in the square, thereby mimicking how the positioning is done manually by taking the terrain height information into account (see Section 2.5). Note the importance of forbidding multiple DFs in one square when choosing the highest feasible point; otherwise several DFs would end up in the same position.

4.3 Initial positions

Six different types of initial positions were tested to investigate their performance.

The first approach was to generate a large number of random groupings and choose the combination with the largest minimum distance between all possible pairs of DFs.

The second approach also employed random position generation, but selection favored the largest minimum angle between the possible position pairs and the geometric center of the region of interest polygon AROI, in order to use the first experience-based condition in Section 2.5.

The third and fourth approach used RS and RRS described in Section 3.1.2 along with the surrogate model.

In the fifth approach, particle swarm was used along with the surrogate model.

Pattern search with the surrogate function was added for comparison and was expected to perform worse than the more globally spanning search of particle swarm.

4.3.1 Particle swarm implementation

For the problem in Equation 3, a particle has two dimensions per DF. The particles' initial positions were chosen randomly from the available feasible points. The initial velocity for a particle i in the x-direction was chosen as vix in Equation 8

vix = kinit Nx (2r − 1)    (8)

where Nx is the total number of grid points in the x-direction, kinit ∈ [0, 1] and r is a random number in the range [0, 1]. viy was chosen in the same way.

With two DFs, the position and velocity vector for a particle i was structured as

vi = [ vi,DF1,x   vi,DF1,y   vi,DF2,x   vi,DF2,y ]    (9)

which makes it easy to compute the Euclidean distance in Equation 10 between a particle i and the best known solution vgbest, in order to evaluate a distributed stopping criterion, as

d(vi, vgbest) = √( Σ_{j=1}^{N} (vij − vgbest,j)² )    (10)

where N is equal to the number of DFs times two. A maximum limit on the number of iterations without improvement (stall iterations) was set to 20 in case the swarm should not converge.
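A sketch of the distance computation and the distributed stopping test, assuming flat particle vectors structured as in Equation 9; the function names are made up, and the velocity criterion is interpreted here as the average velocity magnitude:

```python
import math

def distance(v_i, v_gbest):
    """Euclidean distance between a particle and the best known solution (Eq. 10)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v_i, v_gbest)))

def swarm_converged(positions, velocities, v_gbest, tol=10.0):
    """Distributed stopping criterion: both the average position distance to the
    global best and the average velocity magnitude must fall below tol."""
    zero = [0.0] * len(v_gbest)
    d_pos = sum(distance(v, v_gbest) for v in positions) / len(positions)
    d_vel = sum(distance(v, zero) for v in velocities) / len(velocities)
    return d_pos < tol and d_vel < tol
```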

4.3.2 Surrogate model implementation

A more lightweight model was implemented, since global methods like particle swarm require a large number of function calls, which would take too long to evaluate with the real model. The surrogate model only considered the 2D geometry and ignored obstacles (terrain height) that might exist between the DFs and the target and between the DFs and the FU. Neither did it take jammers into account. It did, however, take feasible points into account, since such data were pre-computed. Figure 4 presents the different regions along with the corresponding score. A bearing in region 1 had one score and positioning in region 2 had a higher score. The score for every point in the target grid was summed in the same way as with the real wave propagation model.

Figure 4: The different score-regions for the surrogate model. Points in R1 provided a score of bearing; points in R2 provided a score of bearing plus a fraction of the positioning score constant according to Equation 11. The bottom rectangle corresponds to the target area polygon (AT or AROI).


The surrogate model also took the angle into account by computing the score in region 2 as

ScoreR2 = ScoreBearing + ScorePosition ∗ φ/90    (11)

where angles with φ > 90 were shifted to φ = 180 − φ. Equation 11 made sure that when φ = 0 only a bearing score was returned. If there were more than two DFs, the best score among all possible pairs of DFs was chosen. Notice that the surrogate model makes use of both the first and third experience-based conditions in Section 2.5.
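The pairwise angle scoring of Equation 11 can be sketched as follows; the two score constants are placeholders, not values from the thesis:

```python
SCORE_BEARING = 1.0   # score when a target point is covered by a bearing (region 1)
SCORE_POSITION = 2.0  # positioning score constant for cross bearings (region 2)

def region2_score(phi):
    """Score for a target point seen by a pair of DFs with cut angle phi degrees."""
    if phi > 90:
        phi = 180 - phi  # obtuse cut angles map to their acute complement
    return SCORE_BEARING + SCORE_POSITION * phi / 90.0
```

With phi = 0 the pair contributes only the bearing score, exactly as Equation 11 requires.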

4.4 Recursive random search implementation

Algorithm 2: Recursive random search
Input: Ncandidates, kdecrease
Output: Grouping
bestGrouping = RandomSearchProcedure(Ncandidates);
w = winitial;
while w > wthreshold do
    for i ∈ Nfncalls do
        grouping = RandomGroupingProcedure(bestGrouping, w);
        if grouping better than bestGrouping then
            bestGrouping = grouping;
        end
    end
    w = kdecrease ∗ w;
end
return bestGrouping;

The recursive random search method was implemented as Algorithm 2. It starts off by performing a regular random search (named RandomSearchProcedure) in the whole search space AG to get an initial grouping, after which the square width w is initialized. The method then evaluates random feasible groupings with RandomGroupingProcedure by generating a random feasible point in a square with width w centered at the current best DF position. This is repeated Ncandidates times per iteration, and the square width w is then reduced by a factor 0 < kdecrease < 1 until some minimum threshold width wthreshold is reached. An example iteration is illustrated in Figure 5.


Figure 5: Example iteration of RRS with Ncandidates = 5 and three DFs. The white circles are the best DF positions from the current best grouping and the black dots are random points that have been generated in a square neighborhood around the best positions. After the five groupings have been evaluated, the squares will shrink and five new groupings will be generated inside the smaller squares.
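A compact sketch of Algorithm 2, abstracting the grouping generation into two caller-supplied functions (all names hypothetical):

```python
def recursive_random_search(score, random_grouping, nearby_grouping,
                            n_candidates, k_decrease=0.9,
                            w_initial=100.0, w_threshold=1.0):
    """RandomSearchProcedure followed by a shrinking local random search.
    `nearby_grouping(best, w)` draws a feasible grouping in width-w squares
    around the current best DF positions."""
    best = max((random_grouping() for _ in range(n_candidates)), key=score)
    w = w_initial
    while w > w_threshold:
        for _ in range(n_candidates):
            candidate = nearby_grouping(best, w)
            if score(candidate) > score(best):
                best = candidate
        w *= k_decrease  # shrink the squares for the next iteration
    return best
```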

Since kdecrease, winitial and Ncandidates are constant fixed parameters, the total number of function calls will always be the same and linearly proportional to Ncandidates. It is also possible to see from Algorithm 2 that the total number of iterations will be equal to

#Iterations = #Function calls / Ncandidates = log(wthreshold/winitial) / log(kdecrease)    (12)

and since wthreshold < winitial, the number of function calls is proportional to

#Function calls ∝ −1 / log(kdecrease)    (13)

which lets the method automatically set kdecrease given a user-defined value on the total number of function calls or iterations.
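Inverting Equation 12 gives the decrease factor directly from a user-defined iteration budget, for example:

```python
def k_decrease_for(n_iterations, w_initial, w_threshold):
    """Solve Equation 12 for k_decrease so that exactly n_iterations
    width reductions take w_initial down to w_threshold."""
    return (w_threshold / w_initial) ** (1.0 / n_iterations)
```

For instance, reaching w_threshold = 1 from w_initial = 100 in 10 iterations requires k_decrease ≈ 0.63.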

An alternative to generating a random x- and y-value inside a square as in Figure 5 would be to generate a random angle and radius, and decrease the maximum radius instead of the square width.

4.5 Pattern search implementation

With three DFs, each having two search dimensions (latitude and longitude), the theoretical number of direction vectors must satisfy 3 ∗ 2 + 1 = 7 ≤ |{vi}| ≤ 3 ∗ 2 ∗ 2 = 12 [3], see Section 3.1.1. In this problem, however, each DF must have at least three directions to be movable to all points, i.e. the number of directions must be at least nine. In this implementation, the maximal basis was used with all cardinal directions per DF, i.e. one direction per cardinal point per DF, which (with three DFs) corresponds to the upper limit |{vi}| = 12.

The initial value of ∆ was chosen according to Equation 14

∆ = k ∗ min(Nx, Ny)    (14)

where Nx is the total number of grid points in the x-direction and similarly for Ny. The factor k had a value between 0 and 1. For low values of k the method begins the search close to the initial grouping, whereas high values of k initially make the method search far away from the initial grouping.


4.5.1 Pattern search with constant recursive square size

In Figure 6, one DF tries to find a feasible point a step size d = ∆ away from the current solution by searching through a square of points for the first or highest feasible one. If none of the four new solutions along the cardinal directions yields an improvement, the step size decreases and therefore also the square size. The DF moved if an improvement was found, and the new position became the current solution in the next iteration. When ∆ = 1, the neighboring points were tested.

Figure 6: Example iteration of pattern search with constant recursive square size with one DF (circle). The squares are the new positions, centered d = ∆ points away from the current solution, in which to search for feasible points. At most one feasible point in every square was tested, either the highest or the first one.
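One poll iteration of this variant can be sketched as follows, where f is the objective over groupings and find_feasible is a helper in the spirit of Algorithm 1 (both hypothetical):

```python
# One cardinal poll direction per compass point: east, west, north, south.
DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def poll_step(f, grouping, delta, find_feasible):
    """Try moving each DF delta grid points in each cardinal direction and
    return the first improving grouping; None tells the caller to halve delta."""
    base = f(grouping)
    for i, (x, y) in enumerate(grouping):
        for dx, dy in DIRECTIONS:
            target = find_feasible(x + dx * delta, y + dy * delta)
            if target is None:
                continue  # no feasible point in the square around the trial point
            trial = list(grouping)
            trial[i] = target
            if f(trial) > base:
                return trial
    return None
```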

4.5.2 Pattern search with dynamic square size

Two different approaches of using a dynamic square size (see Algorithm 1) were tested: first with no limit on the maximum square size, and later with an upper limit equal to the static square size, marked d/2 in Figure 6.

4.6 Genetic algorithm implementation

An individual was structured in the same way as the position and velocity vector in PSO (see Equation 9). The probability for an individual in the old population to be selected was computed according to Equation 7.

The mutation operation was performed by selecting a random point in a square neighborhood with the square center at the selected individual (chosen by Equation 7), similar to how random nearby groupings were generated in RRS (see Section 4.4), and the width was likewise decreased by a factor kdecrease for every population. The same stopping condition as for RRS was applied, i.e. when the square width condition w < wthreshold was fulfilled.

The crossover operation was implemented by cloning the DF positions from parent 1 and then replacing one of these DFs (chosen randomly) with the closest DF in parent 2. Both parents were chosen by Equation 7 but were forbidden to be the same. The new crossover-grouping was considered successful if the replaced point from parent 2 was further away than some distance limit from the old points from parent 1, and if the child was not equal to either of the parents. Mutation was performed on unsuccessful crossover-groupings. Two unsuccessful crossover-children are shown in Figure 8.

In the best case, this approach of producing a crossover-child can lead to the result shown in Figure 7 if the scenario looks like the one in Figure 2. Notice that it is not possible to do a crossover with only one x- or y-coordinate, since this would yield an unfeasible grouping in most cases.
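The crossover operation might be sketched like this, using Euclidean distance via math.dist; d_min stands for the hypothetical distance limit:

```python
import math
import random

def crossover(parent1, parent2, d_min):
    """Clone parent1, replace one randomly chosen DF with the closest DF from
    parent2, and report whether the child counts as a successful crossover."""
    child = list(parent1)
    i = random.randrange(len(child))
    child[i] = min(parent2, key=lambda p: math.dist(p, child[i]))
    others = [p for j, p in enumerate(child) if j != i]
    # Successful only if the replacement is far enough from the kept points
    # and the child differs from both parents; otherwise mutate instead.
    success = (all(math.dist(child[i], p) >= d_min for p in others)
               and child != list(parent1) and child != list(parent2))
    return child, success
```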

Figure 7: An example of how a successful crossover-grouping can be produced by combining the DF-positions to the left in parent 1 (green circles) and the DF-position to the right in parent 2 (red circle).

(a) Distance is smaller than limit. (b) The child is equal to parent 1.

Figure 8: Two examples of failed crossover-groupings when the replaced DF in parent 1 is the green circle to the right.

4.7 Combined solvers

Depending on the amount of resources available in terms of the number of function calls, the solvers and the models can be combined in two ways: one fast combination and one slow.

The fast combination first runs a global search using the surrogate model; the top 100 groupings are then evaluated with the real model, since maximizing the surrogate does not necessarily mean that the real model is maximized. The grouping with the highest value for the real model is then used as the initial grouping for a local search. The result will most likely not be as accurate as with the slow combination, but the number of function calls decreases drastically.

The slow combination uses the real model for both the global search and the local search. This most probably leads to the most accurate result, at a cost of thousands of function calls.
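The fast combination can be sketched as a small pipeline; every name here is a placeholder for the components described above:

```python
def fast_combination(surrogate, real_model, local_search, global_candidates):
    """Rank globally found groupings by the cheap surrogate, re-evaluate the
    top 100 with the expensive real model (the surrogate optimum need not be
    the real optimum), then refine the winner with a local search."""
    top = sorted(global_candidates, key=surrogate, reverse=True)[:100]
    initial = max(top, key=real_model)
    return local_search(initial, real_model)
```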


5 Results

The methods described in the last section have been compared and different values of the hyperparameters have been tested. Since all methods are stochastic or depend on randomly generated initial groupings, many runs were executed per method and per parameter value to see how they performed on average and in the worst case. The value of WROI was set to 10 for both f(X) and the surrogate model in all tests.

5.1 Feasible placement points

(a) Approved terrain type. (b) Feasible points.

Figure 9: Approved terrain type points in (a) and roads in the area have been joined to (b) using flood-fill.

Note how the approved terrain type points in Figure 9a have been filtered with all road-accessible areas to the result in (b). As required by the given constraints, there are no feasible points outside the placement polygon.


5.2 Surrogate model

Figure 10: 1000 possible groupings were randomly generated and evaluated on both the real objective function and the surrogate function with a grid size of 20. Panels (a), (b) and (c) show surrogate function value against objective function value for S1, S2 and S3, respectively.

Based on the data presented in Figure 10, there seems to be a good correlation between the lightweight model and the real wave propagation model in S1. In S2 and S3 there is still a correlation, but it is not as strong. If the models correlated perfectly, all points in Figure 10 would lie on a straight line.

Averaged over 4160 function evaluations, the surrogate model was 243 times faster than the real objective function when both of them were parallelized on CPU threads.

5.3 Recursive random search

The result when Ncandidates was swept from 6 to 165, with a constant value kdecrease = 0.9 (recall Algorithm 2), can be seen in Figure 11. The total number of function calls increases linearly with more candidates (see Equation 12), but there is little difference in the final result when Ncandidates > 40: only around 2%, compared to 10% when Ncandidates = 10.


Figure 11: Average result for ten runs with recursive random search using the surrogate model when varying the number of candidates per iteration with a grid size of 50.

The number of function calls in Figure 12b increased according to Equation 13 when varying kdecrease and was the same for all three scenarios, since winit and wthreshold were the same.

(a) Surrogate value. (b) Function calls.

Figure 12: Average result over 20 runs with a grid size of 50 when varying kdecrease.

5.4 Particle swarm

The distributed stopping criteria from Equation 10, with the values in Equation 15,

davg position(vi, vgbest) < 10
davg velocity(vi, vgbest) < 10    (15)

were used to define when the swarm had converged, as a trade-off between execution time and function value improvement. Values below 10 rarely improved the result. A maximum number of function calls was also enforced in case the swarm should not converge.


Figure 13: Average result for five runs with particle swarm using the surrogate model when varying the number of particles with a grid size of 40.

With a varying number of particles NP and fixed parameters kinit = 0.05, w = 0.5, c1 = 1, c2 = 1, the result in Figure 13 was achieved. The surrogate value seems to stabilize for NP > 25 and barely increases at all when NP > 80 for all three scenarios.

(a) Surrogate value. (b) Time.

Figure 14: Average result over ten runs with a grid size of 30 when varying the square width increase factor ks.

The value of ks does not affect the final result of particle swarm (see Figure 14a), but it does affect the execution time (Figure 14b).

Particle swarm did not converge if a constant square size was used, which made a dynamic square size necessary.

Since particle swarm used the surrogate model, which was very fast, it turned out that finding the highest point in each square took too long, since it involves looping through all points in a square to search for the highest one. That time could instead be spent on more function calls and thus more generations.


5.5 Genetic algorithm

For the measurements in Figure 15, the static parameters were set to Nindividuals= 60, Nelites= 10, kcrossover = 0.5 and kdecrease = 0.95.

(a) Normalized surrogate value when sweeping the number of individuals. (b) Normalized surrogate value when sweeping the number of elites. (c) Normalized surrogate value when sweeping the crossover ratio.

Figure 15: Average result over 20 runs with a grid size of 50 when sweeping over different parameters. Observe that the vertical axes are normalized and do not start from 0.

The number of function calls

• increases linearly with the number of individuals

• decreases linearly with the number of elites

• is independent of the crossover ratio, since mutation is performed after a failed crossover operation

Experiments were made with shrinking the population size instead of performing mutation on a failed crossover, but this led to a large variance in the final surrogate function value.


5.6 Initial positions

The different ways of choosing initial positions are summarized in Table 2. The methods had a maximum time limit of three seconds per run, which was always reached by MD, MA and RS. The other four methods may or may not have converged before reaching the time limit.

Table 2: Initial position methods

MD   Minimum distance          Largest minimum distance between the possible DF pairs
MA   Minimum angle             Largest minimum angle between the possible DF pairs
RS   Random search             Largest surrogate value
RRS  Recursive random search   Largest surrogate value when shrinking the search space
PSO  Particle swarm            Particle swarm with the surrogate model
GA   Genetic algorithm         Genetic algorithm with the surrogate model
GPS  Pattern search            Pattern search with the surrogate model and k = 0.5

(a) Average result. (b) Worst case result. (c) Average function calls. (d) Maximum function calls.

Figure 16: Worst case and average result of 10 runs when choosing initial positions using the surrogate model and the different methods in Table 2. The minimum distance limit was fixed to 1000 meters when generating the initial groupings. A grid size of 50 was used and the data can be found in Appendix A.1.


Figure 16 presents the surrogate value after an initial position was chosen using the different methods. PSO had the highest function value in all three scenarios. The run-times for RRS and GA are highly adjustable by choosing different values of kdecrease. The parameter values for RRS were chosen as Ncandidates = 40 and kdecrease = 0.95 in order to get approximately the same average run-time as PSO, and therefore a fairer comparison. The parameters for GA were set to Nindividuals = 60, Nelites = 20, kcrossover = 0.7 and kdecrease = 0.95. The initial delta factor for GPS was set to k = 0.5 in order to make the method test solutions far away, since the initial grouping was randomized. Observe that MD and MA do not perform any surrogate function calls except one when computing the final value, and that only a few function calls are made for GPS compared to RS, RRS, PSO and GA.


5.7 Pattern search

5.7.1 Different methods of choosing a feasible point

The different approaches of selecting feasible points were tested by first letting particle swarm run with the surrogate model for two seconds; the best grouping was then used as the initial position for pattern search.

Table 3: Selection types

SH  Static highest    Highest point with square size proportional to the step length ∆
SF  Static first      First point with square size proportional to the step length ∆
DH  Dynamic highest   Highest point with least possible square size
DF  Dynamic first     First point with least possible square size

(a) Objective function value without wmax. (b) Function calls without wmax. (c) Objective function value with wmax set to half of the step length. (d) Function calls with wmax set to half of the step length.

Figure 17: The different methods listed in Table 3 for selecting a feasible point with pattern search. (a) and (b) are without any maximum width (wmax) for the dynamic method, and (c) and (d) are with wmax set to the same value as the static square width. The result was averaged over 10 runs and a grid size of 30 was used. The data can be found in Appendix A.2.


5.7.2 Opportunistic run

Particle swarm was again used to find the initial positions for pattern search when the different polling types were tested. The complete run evaluated (at most) all 12 surrounding points and chose the best improvement. The opportunistic run evaluated one direction at a time and stopped when the first improvement was found (if any). The sorted opportunistic run first evaluated all 12 surrounding points with the surrogate function, sorted them in descending order, and then evaluated one at a time with the real objective function, stopping when the first real improvement was made.
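The sorted opportunistic poll can be sketched as follows; surrogate, real_f and neighbors (the up to 12 candidate groupings) are stand-ins for the components above:

```python
def sorted_opportunistic_poll(real_f, surrogate, current, neighbors):
    """Order the poll candidates by the cheap surrogate, then evaluate them one
    at a time with the expensive real objective, stopping at the first real
    improvement over the current grouping."""
    base = real_f(current)
    for candidate in sorted(neighbors, key=surrogate, reverse=True):
        if real_f(candidate) > base:
            return candidate
    return None  # complete poll failed; the step length would be reduced
```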

(a) Average objective function value. (b) Average function calls. (c) Worst case (minimum) objective function value. (d) Worst case (maximum) function calls.

Figure 18: Average result, (a) and (b), and worst case result, (c) and (d), for the three tested polling types over ten runs per type and scenario with a grid size of 30.

5.8 Combined solvers

Since the real objective function was very expensive to compute, a decision was made to only check the final result for the three most promising initial position methods in Figure 16. The fast combination used the surrogate model for the global search and then GPS with the sorted opportunistic polling type for the local search with the real model. The slow combination used the real model for both the global and local search, with the complete polling type for GPS. The same parameter values as in Section 5.6 were used.


(a) Fast combination, excluding surrogate function calls. (b) Slow combination.

Figure 19: Average number of function calls for the fast and slow combination with a grid size of 50. The fast combination only did around 100 function calls with the real model and the slow combination around 2500.

Table 4: Average objective function value with the fast combination

        S1      S2      S3
RRS   201.28  193.19  85.26
PSO   201.68  193.90  84.99
GA    201.27  195.17  87.07

Table 5: Average objective function value with the slow combination

        S1      S2      S3
RRS   200.70  198.98  89.10
PSO   201.67  201.94  90.84
GA    200.88  200.52  91.36

Table 6: Average difference in percent between the fast and slow combination

        S1    S2   S3
RRS   -0.3   3.0  4.5
PSO    0.0   4.2  6.9
GA    -0.2   2.7  4.9

The results in Figure 19 and Tables 4 and 5 were averaged over 10 runs per initial position method, and a grid size of 50 was used. All data (including minimum function values and maximum function calls) can be found in Appendix A.4. Observe that all initial position methods performed better in S1 with the fast combination even though it used the surrogate model; see the first column in the two tables.


5.9 Global optimum in smaller scenario

A brute force search was made in a smaller scenario with only 65 feasible points and two DFs, i.e. 4225 combinations, to see if the optimization method could find the global optimum. A fast combination with particle swarm and the surrogate model found the global optimum, before pattern search even began, in 100/100 runs. When pattern search was tested with random initial positions, it found the global optimum in 5/100 runs.


6 Analysis

6.1 Feasible placement points

Generating a grid with feasible points given a placement polygon, terrain types and roads with the flood-fill algorithm gave a good result when the input data were in the correct size and format. Some practical problems with converting the roads to the same grid size as the height and terrain type data took some time and are still not completely free from error; see the missing road part to the left in the grouping area in Figure 3c. In the future, one feasible grid per DF should be generated, because different vehicles can travel across different terrain types and not all vehicles require roads for transportation to be possible, e.g. boats can be positioned in water and aircraft anywhere.

6.2 Surrogate model versus real model

Using a surrogate model for the global search resulted in a great speedup (see Section 5.2). The huge speedup is probably because the real wave propagation calculation was not completely isolated from the rest of the environment: every function call with the real model requires that the DFs in the GUI actually move. One alternative to the surrogate model could be to separate the computations from the user interface to achieve a fairer comparison between the surrogate and the real model.

The outliers in the top-left of Figure 10 (c) were a problem, because when a global search found one of them as an initial guess for a local solver, the final result ended up far away from the global optimum. The problem was solved by evaluating the real model on the best 100 solutions from the global solver and picking the best one as the initial guess.

Another problem with the surrogate model is that a maximum range has to be specified manually for each DF, such that the radii (at least approximately) match the circles in Figure 4. It would be possible to set these radii automatically by brute force searching for the closest point to the DF that has a bearing, or by using the signal-to-noise ratio and checking when the distance yields some threshold value. A third limitation is that the surrogate model does not take jammers into account.

6.3 Optimization methods

Comparing the different initial position methods turned out to be a real challenge. The first three methods in Table 2 did not converge, so a decision had to be made on how long they should be allowed to run. Setting the time limit too far from the convergence time of the last three methods would give the methods different preconditions and therefore make the result unfair. Setting the same number of surrogate function calls for the last four methods would also be unfair, since PSO and GA (in addition to function calls) perform a lot more computations than RRS and GPS. Testing different variants of the local search also led to questions about how the measurements should be performed, since GPS is highly dependent on its initial grouping. A decision was made to let PSO find an initial grouping, since this is how the local search will be used in practice, i.e. a global solver finds a good initial grouping for the local solver. The drawback is that the initial groupings were all very similar, and the result may therefore be strongly dependent on the tested scenarios.

References
