DEGREE PROJECT IN MATHEMATICS, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2018

Automating and optimizing pile group design using a Genetic Algorithm

ARIAN ABEDIN

WOLMIR LIGAI

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF ENGINEERING SCIENCES


Automating and optimizing pile group design using a Genetic Algorithm

ARIAN ABEDIN

WOLMIR LIGAI

Degree Projects in Optimization and Systems Theory (30 ECTS credits)
Degree Programme in Industrial Engineering and Management (120 credits)
KTH Royal Institute of Technology year 2018

Supervisor at Tyréns AB: Mahir Ülker-Kaustell
Supervisor at KTH: Per Enqvist


TRITA-SCI-GRU 2018:259 MAT-E 2018:57

Royal Institute of Technology

School of Engineering Sciences

KTH SCI

SE-100 44 Stockholm, Sweden
URL: www.kth.se/sci


Abstract

In bridge design, a set of piles is referred to as a pile group. The design process for pile groups currently employed by many firms is manual, time-consuming, and produces pile groups that are not robust against placement errors.

This thesis applies the metaheuristic method Genetic Algorithm to automate and improve the design of pile groups for bridge column foundations. A software tool is developed and improved by implementing modifications to the Genetic Algorithm. The algorithm is evaluated by the pile groups it produces, using the Monte Carlo method to simulate errors for the purpose of testing the robustness. The results are compared with designs provided by the consulting firm Tyréns AB.

The software is terminated manually, and generally takes less than half an hour to produce acceptable pile groups. The developed Genetic Algorithm software produces pile groups that, under the Monte Carlo simulations, are more robust than the manually designed pile groups to which they are compared. However, due to their visually disorganized designs, the pile groups produced by the algorithm may be difficult to get approved by Trafikverket. The software might require further modifications addressing this problem before it can be of practical use.


Sammanfattning

In bridge design, a set of piles is referred to as a pile group. When designing pile groups, many firms currently apply manual and time-consuming processes, which do not lead to pile groups that are robust against placement errors.

This thesis applies a metaheuristic method called the Genetic Algorithm to automate and improve the design process for pile groups. A software tool is developed and improved stepwise through modifications of the algorithm. The algorithm is then evaluated by Monte Carlo simulation of placement errors, assessing the robustness of the designed pile groups. The results obtained are compared with finished pile group designs provided by the consulting firm Tyréns AB.

The developed software is terminated manually and generally requires no more than half an hour to generate acceptable pile groups. The developed algorithm and software produce pile groups that are more robust than the designed pile groups with which they are compared. The pile groups created by the developed algorithm have a disordered structure; approval of these pile groups by Trafikverket may therefore be difficult to obtain, and further modifications addressing this problem may be needed before the algorithm is usable in practice.


Acknowledgements

First and foremost, we would like to express our greatest gratitude and appreciation to Mahir Ülker-Kaustell, our supervisor at Tyréns AB, who formulated the problem treated in this thesis and whose help and guidance throughout the work has been of utmost importance. Without him, this thesis would not have been possible.

Further, we would like to thank Anna Jacobson, head of the bridge department at Tyréns AB, who gave us the opportunity to work with this interesting problem.

We also want to thank our supervisor at KTH Royal Institute of Technology, Per Enqvist, for his support and for allowing us to treat the problem presented in this thesis. His feedback on the report has been especially valuable.

Finally, we would like to thank our families for their unwavering support throughout this work, and so much more.


Contents

1 Introduction
  1.1 Background
  1.2 Purpose and scope
  1.3 Outline of the thesis

2 Mathematical model
  2.1 Pile group analysis
  2.2 Mathematical pile group model
  2.3 Combining loads to calculate pile forces

3 Optimization and the Genetic Algorithm
  3.1 Optimization
    3.1.1 Optimization problems
    3.1.2 Size and complexity
    3.1.3 Optimization methods
  3.2 Metaheuristic methods
    3.2.1 Genetic Algorithm
    3.2.2 Ant Colony Optimization
    3.2.3 Particle Swarm Optimization
    3.2.4 Basic Local Search
    3.2.5 Iterated Local Search
    3.2.6 Simulated Annealing
    3.2.7 Tabu Search
    3.2.8 Choice of metaheuristic
  3.3 Genetic Algorithm
    3.3.1 Population and generation
    3.3.2 Fitness value
    3.3.4 Crossover
    3.3.5 Mutation

4 Implementation of GA
  4.1 Method
  4.2 Using the Genetic Algorithm
    4.2.1 Pile cap as a grid
    4.2.2 Initial population
    4.2.3 Fitness function
    4.2.4 Pile distance constraints
      4.2.4.1 Pile head distance constraints
      4.2.4.2 Pile body distance constraints
    4.2.5 Problem formulation
    4.2.6 Elite clones
    4.2.7 Crossover
    4.2.8 Mutation operator and mutation rate
  4.3 Termination

5 Development and evaluation of the software
  5.1 Fitness functions
  5.2 Crossover changes
  5.3 Mutation rates
  5.4 Discretization of the variables
  5.5 Error simulation

6 Results
  6.1 Results for pile groups produced by GA
  6.2 Gradual improvement of GA pile group - 22 piles

7 Discussion
  7.1 GA software
  7.2 Results

8 Conclusions and suggestions for future research
  8.1 Conclusions
  8.2 Suggestions for future research

B Division of labor
  B.0.1 Mathematical model
  B.0.2 Genetic Algorithm
  B.0.3 Development and evaluation of the software
  B.0.4 Results
  B.0.5 Discussion, conclusion and future research


Chapter 1

Introduction

Bridges are structures built to provide passage over a river, chasm, road, or other obstacle. To provide such passage, bridges naturally have to withstand the forces acting on them under normal circumstances. For this purpose, a deep foundation is utilized when the soil is weak and compressible. The deep foundation transfers these forces to the bedrock beneath the soil, which provides the necessary stability [4]. Such a foundation is comprised of a number of piles going from the bedrock up to a concrete block on the bottom of a bridge column, referred to as a pile cap. Safely transferring forces to the bedrock requires the piles to be positioned on the pile cap in such a way that they can tolerate forces in all directions that may occur, given all possible event scenarios for the bridge [25]. This leads to the problem of designing a group of piles with respect to their positions on the pile cap, their angles relative to the pile cap, the pile lengths and the number of piles used. An image of the relevant parts of a bridge structure is presented in Figure 1.1.


A configuration of piles is referred to as a pile group. The distribution of forces amongst these piles is highly sensitive to pile head positions, pile lengths, pile angles and the number of piles present. Forces acting on the piles arise from different types of varying loads, such as traffic, wind and thermal effects. Permanent loads on the bridge structure, arising due to the structure’s own weight, are also present.

These variable and permanent loads are combined according to some specified rules, and the most unfavorable load combination for each pile is determined, yielding the corresponding pile forces for the entire pile group. As each pile is constrained to tolerate a maximum force and a minimum force, the requirement on each pile group is that the forces acting on any pile do not exceed the tolerable amount of force in the most unfavorable load situation [25]. For clarity, an image of a basic pile is displayed in Figure 1.2 [1].


1.1 Background

Tyréns AB is a consulting firm in the civil engineering market, and it is in their interest to make the pile group design process more efficient. When designing a pile group, Tyréns currently employs heuristic and manual techniques, which are time-consuming and require experience and knowledge in pile group design in order to obtain feasible configurations. This results in a lengthy process of trial and error to obtain an acceptable pile group design.

Automation of the pile group design process resulting in a feasible design would be highly preferable. Heuristic approaches result in pile groups that are feasible in regards to the forces present but do not necessarily ensure optimality with respect to robustness. Factors such as variations in soil properties can cause the piles to occasionally deviate from the designed angles and positions during the process of driving piles into the earth. Due to these deviations, piles cannot be driven into the ground with high precision. An example of a deviation in pile angle is shown in Figure 1.3.

Figure 1.3: Left image: designed pile group. Right image: actual pile group driven into the bedrock. The angle of the right-most pile in the actual pile group differs from the design. Exaggerated for clarity.

As such, pile heads connect to the pile cap at a different position than specified, or point at a different angle than intended. These deviations require a re-assessment of the pile group and may in some cases require driving in additional piles, or even require a modification to the pile cap in order to increase its size. Such actions lengthen the foundation laying process as well as increase the costs. Therefore, it is desirable for the initial pile group designs to be more robust against the aforementioned deviations, i.e. to remain feasible in terms of pile forces and distances between piles. This would reduce the costs of laying foundations for bridges and similar structures and improve the safety of such structures.

Previous research on pile groups for various purposes has used the optimization method Genetic Algorithm. In addition, the Genetic Algorithm has been shown to require less computing time than some alternative methods for a similar problem [3]. A discussion of comparisons with other metaheuristic methods is presented in Section 3.2. Based on past research and comparisons to other metaheuristic methods, the Genetic Algorithm is chosen for solving this problem.

1.2 Purpose and scope

The purpose of this thesis is to automate the design of feasible pile groups by use of a Genetic Algorithm, and to optimize pile groups with regard to robustness within practical time limits. The algorithm is developed as software in MATLAB, and modifications are made to solve the problem more effectively. Monte Carlo simulations will be run on the pile groups produced by the Genetic Algorithm in order to evaluate their robustness.

1.3 Outline of the thesis

The outline of this thesis is the following:

In Chapter 2, the mathematical model describing the pile groups will be presented.

In Chapter 3, optimization theory is discussed briefly, and an analysis of metaheuristic methods and the Genetic Algorithm is given.

In Chapter 4, the implementation of the Genetic Algorithm for this specific problem and the problem formulation is presented.


In Chapter 5, the GA software is modified and evaluated with regard to algorithm efficiency.

In Chapter 6, the produced results are presented.

In Chapter 7, a discussion of the problem and the results is presented.

Lastly in Chapter 8, conclusions drawn regarding the work are stated and suggestions for future research are presented.


Chapter 2

Mathematical model

In this chapter, the mathematical model describing the pile groups and the arising pile forces given a set of load combinations is formulated. A short introduction will be given in Section 2.1 and the mathematical model is formulated in Section 2.2 and Section 2.3.

2.1 Pile group analysis

Various events taking place in the proximity of a bridge are considered, and every event gives rise to various forces acting on the bridge and therefore the supporting columns. One event can simply be a car decelerating on the bridge, while another could be a strong wind blowing from a certain angle. The forces acting on the columns are computed for each event, and these events are used to calculate potential forces onto the piles [25].

Each pile can tolerate a certain amount of force, and by testing several different events, it is possible to calculate the maximal and minimal forces acting on any given pile, for any considered combination of events. These maximal and minimal forces acting upon a pile will have to be within the tolerable force levels, defined later in this chapter, which would imply the ability of the bridge to withstand the considered events.

2.2 Mathematical pile group model

In this section, the mathematical computations necessary to obtain the maximal and minimal forces acting on individual piles are presented. The calculations are divided into two parts: one for computing the permanent forces arising due to the weight of the construction itself, and one for computing the variable forces. Both types of forces are thereafter combined to create forces for different load combinations. This yields the maximal and minimal forces acting on an individual pile, given the most unfavorable load combinations [25].

The forces acting on a single pile are dependent on pile head positions in the xy-plane of the pile cap, pile tilt, rotation angles as well as the number of piles in the pile group. Therefore, the transformation of forces from the bridge column to any specific pile head is dependent on the arrangement of all the piles in the physical system.

The aforementioned pile properties define an individual pile i, and are used to determine the robustness and feasibility of the pile group. The coordinates of each pile foot are calculated as

(x_{f,i}, y_{f,i}, z_{f,i}) = (x_{h,i}, y_{h,i}, z_{h,i}) + L_i \cdot (\sin\theta_i \cos\varphi_i, \sin\theta_i \sin\varphi_i, \cos\theta_i),  (2.1)

where (x_{h,i}, y_{h,i}, z_{h,i}) is the location of pile head i on the pile cap, L_i is the length of pile i, \theta_i is the tilt angle of pile i and \varphi_i is the rotation angle of pile i. Next, the bedrock at the feet of the piles is represented by the plane equation

ax + by + cz + d = 0.  (2.2)

Since each pile foot coordinate (x_{f,i}, y_{f,i}, z_{f,i}) should lie on the bedrock, (2.1) is combined with (2.2), yielding

a(x_{h,i} + L_i \sin\theta_i \cos\varphi_i) + b(y_{h,i} + L_i \sin\theta_i \sin\varphi_i) + c(z_{h,i} + L_i \cos\theta_i) + d = 0.  (2.3)

Solving (2.3) for L_i results in

L_i = \frac{-(a x_{h,i} + b y_{h,i} + c z_{h,i} + d)}{a \sin\theta_i \cos\varphi_i + b \sin\theta_i \sin\varphi_i + c \cos\theta_i}.  (2.4)
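As a concrete check of this geometry, the pile length and foot coordinates can be computed by solving (2.3) for L_i. The following Python fragment is purely illustrative (function name, argument layout and coordinate conventions are assumptions); the thesis software itself is written in MATLAB.

```python
import numpy as np

def pile_length_and_foot(head, theta, phi, plane):
    """Length of a pile from its head to the bedrock plane, and the
    resulting pile foot coordinates.

    head  : (x_h, y_h, z_h), pile head position on the pile cap
    theta : pile tilt angle
    phi   : pile rotation angle
    plane : (a, b, c, d), coefficients of the bedrock plane
            a*x + b*y + c*z + d = 0
    """
    a, b, c, d = plane
    xh, yh, zh = head
    # Direction cosines of the pile axis, as in Eq. (2.1).
    dx = np.sin(theta) * np.cos(phi)
    dy = np.sin(theta) * np.sin(phi)
    dz = np.cos(theta)
    # Solve a(x_h + L dx) + b(y_h + L dy) + c(z_h + L dz) + d = 0 for L.
    L = -(a * xh + b * yh + c * zh + d) / (a * dx + b * dy + c * dz)
    foot = np.array([xh + L * dx, yh + L * dy, zh + L * dz])
    return L, foot
```

For a vertical pile (θ = 0) whose head sits at the origin and a horizontal bedrock plane at z = 10, the computed length is exactly the head-to-plane distance, and tilting the pile increases the length while keeping the foot on the plane.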

The piles are modelled as compressible springs with one degree of freedom along their length, so it is necessary to calculate their stiffness. The stiffness matrix for each individual pile is given as

K_i = \frac{AE}{L_i} \begin{bmatrix} d_{x_i}^2 & d_{x_i} d_{y_i} & d_{x_i} d_{z_i} \\ d_{x_i} d_{y_i} & d_{y_i}^2 & d_{y_i} d_{z_i} \\ d_{x_i} d_{z_i} & d_{y_i} d_{z_i} & d_{z_i}^2 \end{bmatrix}  (2.5)

where A is the cross-sectional area of a pile, E is the Young's modulus of the pile material, which describes tensile elasticity, and d_{x_i} = \sin\theta_i \cos\varphi_i, d_{y_i} = \sin\theta_i \sin\varphi_i and d_{z_i} = \cos\theta_i describe the direction of pile i along the three axes. Since the piles all have the same properties, A and E are the same for all piles. The matrices K_i, i = 1, \ldots, n are combined on the diagonal of a large sparse matrix K to create a stiffness matrix for all piles

K = \begin{bmatrix} K_1 & 0 & \cdots & 0 \\ 0 & K_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & K_n \end{bmatrix}  (2.6)

where n is the number of piles.
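The per-pile matrix (2.5) is a rank-one outer product of the direction cosines, and (2.6) is a block-diagonal assembly. A minimal Python sketch of both steps follows; names and signatures are invented for illustration, the thesis software being written in MATLAB.

```python
import numpy as np

def pile_stiffness(A, E, L, theta, phi):
    """Eq. (2.5): 3x3 stiffness matrix of one pile modelled as an
    axial spring with direction cosines (dx, dy, dz)."""
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return (A * E / L) * np.outer(d, d)

def global_stiffness(A, E, lengths, thetas, phis):
    """Eq. (2.6): place the per-pile 3x3 blocks on the diagonal of a
    3n x 3n matrix (stored dense here; sparse in a real implementation)."""
    blocks = [pile_stiffness(A, E, L, t, p)
              for L, t, p in zip(lengths, thetas, phis)]
    n = len(blocks)
    K = np.zeros((3 * n, 3 * n))
    for i, Ki in enumerate(blocks):
        K[3 * i:3 * i + 3, 3 * i:3 * i + 3] = Ki
    return K
```

For two vertical piles (θ = 0) with AE/L = 1, each diagonal block reduces to diag(0, 0, 1), and all off-diagonal blocks are zero.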

The transformation of forces from the point of applied force on the pile cap to the pile heads is performed using a constraint matrix. These constraints define the rigid body connection between the bridge column, or the point of applied force, and the pile heads. The connection is between three degrees of freedom (d.o.f.) of translation from the column, and six d.o.f. at the pile heads. Since there are more d.o.f. than constraint equations, the constraint matrix has more columns than rows [10]. The constraint matrix is defined as

C = \begin{bmatrix} I_{3\times3} & 0 & \cdots & 0 & -I_{3\times3} & c_1 \\ 0 & I_{3\times3} & \cdots & 0 & -I_{3\times3} & c_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & I_{3\times3} & -I_{3\times3} & c_n \end{bmatrix}  (2.7)

where c_i is

c_i = \begin{bmatrix} 0 & z_{h,i} & -y_{h,i} \\ -z_{h,i} & 0 & x_{h,i} \\ y_{h,i} & -x_{h,i} & 0 \end{bmatrix}.  (2.8)

The matrix (2.7) is then partitioned into an invertible and a non-invertible part, corresponding to constraints to be retained and constraints to be 'condensed' out. This is performed because the solution would not be unique, due to the equation system having more d.o.f. than equations [10]. The partition is

C = \begin{bmatrix} C_c & C_r \end{bmatrix} = \begin{bmatrix} I & C_r \end{bmatrix}.  (2.9)

As such, the solution to the constraint equations defined by the matrix is given by

T = \begin{bmatrix} -C_c^{-1} C_r \\ I_{6\times6} \end{bmatrix} = \begin{bmatrix} -C_r \\ I_{6\times6} \end{bmatrix}  (2.10)


which is used as a transformation matrix,

K_r = T^T K T,  (2.11)

and is further transformed into a Green's matrix by

D_G = T K_r^{-1}.  (2.12)

Finally, to calculate the forces on the pile heads for small position shifts of the pile heads, the transformation is

F_{p,i} = \frac{AE}{L_i} \begin{bmatrix} d_{x_i} & d_{y_i} & d_{z_i} & 0 & 0 & 0 \end{bmatrix} D_{G,i},  (2.13)

where D_{G,i} denotes the rows of D_G that correspond to a given pile i. Each F_{p,i} is a 1 × 6 vector calculated for each pile, and therefore the full matrix with n piles is

F_p = \begin{bmatrix} F_{p,1} \\ F_{p,2} \\ \vdots \\ F_{p,n} \end{bmatrix}.  (2.14)

F_p is a matrix that linearly translates forces from the bridge column to forces affecting any individual pile connected to the pile cap. Since pile positions and angles are the only varying values used to obtain this matrix, the translation of forces is directly related to the configuration of piles.

2.3 Combining loads to calculate pile forces

For any given bridge column to be considered, there is a set of possible loads. Some of these loads are permanent such as the bridge’s own weight, and some loads are variable depending on which events occur on the bridge such as cars driving by or a strong wind blowing in a certain direction.

Each possible load, both permanent and variable, has a varying strength ψ, often ranging from 0.6 to 1.5, depending on the severity of the event. This is used to express a variance in the strength of the loads. The varying intensities are taken into account, and several such loads are combined to create a hypothetical event for a bridge. The loads are expressed as a matrix with forces in each of the Cartesian coordinates, moments, intensities and an indicator of whether the corresponding load is a permanent or a variable one.


F = \begin{bmatrix} F_x & F_y & F_z & M_x & M_y & M_z & \psi_{max} & \psi_{min} & \text{Permanent} \\ F_x & F_y & F_z & M_x & M_y & M_z & \psi_{max} & \psi_{min} & \text{Permanent} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ F_x & F_y & F_z & M_x & M_y & M_z & \psi_{max} & \psi_{min} & \text{Variable} \\ F_x & F_y & F_z & M_x & M_y & M_z & \psi_{max} & \psi_{min} & \text{Variable} \end{bmatrix}

Each hypothetical event is expressed as a combination of several forces and moments F in the three Cartesian coordinates [25]. The transformation of forces is linear at this point:

F_{pile} = F \cdot F_p  (2.15)

where F is the combined load matrix and F_p is the force transformation matrix in (2.14). Hence, the forces F_{pile} acting on the piles are computed. All such F_{pile} are calculated, and the greatest as well as the lowest values are determined.

In order to only assess the cases that lead to maximal or minimal forces, ψ is assigned the value that results in the most disadvantageous contribution of each load. If a variable load is actually advantageous, that load is not considered in the combination. This process is visualized in Algorithm 1 below.


Algorithm 1 Combining loads

procedure PermanentLoads
    TotalPermMax = 0
    TotalPermMin = 0
    for i = 1 to piles do
        ForcesPerm = F · Fp(i)
        for j = 1 to permanent loads do
            PermMax = max(ForcesPerm)
            PermMin = min(ForcesPerm)
        end for
        TotalPermMax = TotalPermMax + PermMax · ψmax
        TotalPermMin = TotalPermMin + PermMin · ψmin
    end for
end procedure

procedure VariableLoads
    TotalVarMax = 0
    TotalVarMin = 0
    for i = 1 to piles do
        ForcesVar = F · Fp(i)
        for j = 1 to variable loads do
            VarMax = max(ForcesVar)
            VarMin = min(ForcesVar)
        end for
        if VarMax · ψmax > 0 then
            TotalVarMax = TotalVarMax + VarMax · ψmax
        end if
        if VarMin · ψmin < 0 then
            TotalVarMin = TotalVarMin + VarMin · ψmin
        end if
    end for
end procedure

procedure CombineLoads
    PileForcesMax(i) = TotalPermMax(i) + TotalVarMax(i)
    PileForcesMin(i) = TotalPermMin(i) + TotalVarMin(i)
end procedure
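The core rule of Algorithm 1, that permanent loads always contribute with their worst-case ψ while variable loads contribute only where they are unfavorable, can be sketched in Python. This is a deliberately simplified, hypothetical fragment operating on one force value per load and pile; the thesis software is written in MATLAB and works on full load combinations.

```python
import numpy as np

def combine_loads(pile_forces, permanent, psi_max, psi_min):
    """Most unfavorable total max/min force per pile.

    pile_forces : (n_loads, n_piles) array, force on each pile from each load
    permanent   : boolean array of length n_loads (True = permanent load)
    psi_max, psi_min : intensity factors per load
    """
    n_piles = pile_forces.shape[1]
    total_max = np.zeros(n_piles)
    total_min = np.zeros(n_piles)
    for j in range(pile_forces.shape[0]):
        fmax = pile_forces[j] * psi_max[j]
        fmin = pile_forces[j] * psi_min[j]
        if permanent[j]:
            # Permanent loads always contribute.
            total_max += fmax
            total_min += fmin
        else:
            # A variable load is included only where it is unfavorable.
            total_max += np.where(fmax > 0, fmax, 0.0)
            total_min += np.where(fmin < 0, fmin, 0.0)
    return total_max, total_min
```

With one permanent load of 100 kN (ψmax = 1.35, ψmin = 1.0) and one variable load of −50 kN, the variable load is dropped from the maximum total (it is advantageous there) but included in the minimum total.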

The resulting objects PileForcesMax and PileForcesMin are matrices of size n × n, where n is the number of piles; see (2.16) and (2.17). The diagonals of the matrices correspond to the maximal and minimal forces for each pile, and the diagonal elements are always the worst forces for their piles. Each row shows the forces on all the piles, given the load combination that gives the maximal or minimal force for the pile corresponding to the diagonal element of the row.


PileForcesMax = \begin{bmatrix} F^{max}_{1,1} & F^{max}_{1,2} & F^{max}_{1,3} & \cdots & F^{max}_{1,n} \\ F^{max}_{2,1} & F^{max}_{2,2} & F^{max}_{2,3} & \cdots & F^{max}_{2,n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ F^{max}_{n,1} & F^{max}_{n,2} & F^{max}_{n,3} & \cdots & F^{max}_{n,n} \end{bmatrix}  (2.16)

In this case, F^{max}_{1,1} would be the maximal force that pile 1 would be subjected to. Then F^{max}_{1,2}, F^{max}_{1,3}, \ldots, F^{max}_{1,n} are the forces acting on piles 2, 3, \ldots, n for the same load combination. F^{max}_{2,2} on the next row is the maximal force acting on pile 2, and no other row has a larger force on pile 2: F^{max}_{2,2} \geq F^{max}_{i,2}, \forall i = 1, \ldots, n.

PileForcesMin = \begin{bmatrix} F^{min}_{1,1} & F^{min}_{1,2} & F^{min}_{1,3} & \cdots & F^{min}_{1,n} \\ F^{min}_{2,1} & F^{min}_{2,2} & F^{min}_{2,3} & \cdots & F^{min}_{2,n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ F^{min}_{n,1} & F^{min}_{n,2} & F^{min}_{n,3} & \cdots & F^{min}_{n,n} \end{bmatrix}  (2.17)

Similarly, F^{min}_{1,1} is the minimal force acting on pile 1, F^{min}_{2,2} the minimal force acting on pile 2, and so on.

These greatest and lowest force values are required to be within a certain interval of tolerable force values for the piles to withstand the loads. Since only the diagonal elements are considered, F_{i,i} will be referred to as F_i. In this thesis, the interval of tolerable forces is

0 \leq F_i^{min} < F_i^{max} \leq 1000 \text{ kN}, \quad i = 1, \ldots, n.  (2.18)

The lower bound of the interval exists partially to avoid tension in a pile, since concrete has a low tensile strength [2], and partially because piles are divided into sections and any pulling force might cause them to disjoint.
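Condition (2.18) amounts to an elementwise bounds check on the diagonal forces. A minimal Python sketch (illustrative names; the 0 and 1000 kN bounds follow the interval above):

```python
import numpy as np

def pile_group_feasible(f_min, f_max, lower=0.0, upper=1000.0):
    """Check Eq. (2.18): every pile's extreme forces must lie in
    [lower, upper] kN (no tension, bounded compression)."""
    f_min = np.asarray(f_min)
    f_max = np.asarray(f_max)
    return bool(np.all(f_min >= lower) and np.all(f_max <= upper))
```

A group is rejected as soon as any pile sees tension (negative minimum) or a compressive force above the upper bound.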


Chapter 3

Optimization and the Genetic Algorithm

In this chapter, the mathematical prerequisites that describe optimization, metaheuristics and the Genetic Algorithm are presented.

In Section 3.1, mathematical optimization theory is discussed. Thereafter, in Section 3.2, metaheuristic methods are presented. Lastly, in Section 3.3, the Genetic Algorithm is presented in depth.

3.1 Optimization

Optimization is the selection of the best elements for the purpose of maximizing or minimizing the value of a certain function, under a set of constraints (or in some cases no constraints) which limit the selection of elements. The overall selection is evaluated by a function specific to the problem, called the objective function, and the value of this function is to be maximized or minimized [15]. In general, a constrained optimization problem can be defined mathematically as [18]

minimize f(x)
subject to h_i(x) = 0, i = 1, 2, \ldots, m
           g_j(x) \leq 0, j = 1, 2, \ldots, p
           x \in S,

where x = [x_1\ x_2\ \ldots\ x_n]^T is an n-dimensional vector of unknown decision or selection variables, and f, h_i and g_j are real-valued functions of the variables in x. S is a subset of n-dimensional space. The function f is the objective function, while h_i, g_j and S represent the constraints.
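As a toy instance of this general form, a small discrete problem can be solved by exhaustive search over S. The objective, constraint and search space below are invented purely for illustration:

```python
import itertools

# Toy instance of: minimize f(x) subject to g(x) <= 0, x in S.
def f(x):
    # Objective: squared distance from the point (2, 1).
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def g(x):
    # Inequality constraint g(x) <= 0, i.e. x1 + x2 <= 3.
    return x[0] + x[1] - 3

# Discrete search space S = {0,...,4} x {0,...,4}.
S = list(itertools.product(range(5), range(5)))
feasible = [x for x in S if g(x) <= 0]
best = min(feasible, key=f)  # -> (2, 1), with f(best) = 0
```

Here (2, 1) satisfies the constraint with equality and attains the objective's unconstrained minimum, so it is the optimal feasible point.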


3.1.1 Optimization problems

As previously mentioned, optimization problems can be constrained or unconstrained. Constraints imposed on a problem can be comprised of either simple bounds on the decision variables, or complex equalities and inequalities modeling the relationship between these variables [20].

The mathematical functions that model a system also determine the type of the corresponding optimization problem. For example, a linear program (LP) is an optimization problem in which the objective function is linear in the unknowns, and the constraints are comprised of linear equalities and inequalities [18]. Other common types of problems include quadratic programming (QP), in which the objective function is quadratic in the unknowns and the constraints are linear, and nonlinear programming (NLP), where at least one of the functions is nonlinear. The word programming here does not refer to computer programming, but is instead a synonym for planning [15].

Mathematical optimization problems can also be stochastic or deterministic, as well as continuous or discrete. Stochastic programming is applied to probabilistic models of uncertainty, where some of the parameters and data involved are uncertain or unknown, while deterministic optimization problems involve known parameters [22]. Models in which some or all variables take on a discrete set of values are known as mixed integer programs (MIP) or pure integer programs respectively.

3.1.2 Size and complexity

The amount of computational resources required to solve an optimization problem is determined by its time and space complexity. Time complexity involves the number of iterations or search steps necessary to solve the problem, where each iteration takes a fixed amount of time, while space complexity involves the amount of computer memory necessary to solve the problem [21]. The complexity of an optimization problem depends on its size, i.e. the number of decision variables, the size of the solution space, and the type of objective function and constraints. It is also closely related to the complexity of the algorithm that is used, and many algorithms are tailored to a particular type of problem.


Computational problems in general can be divided into sets of complexity classes, depending on the asymptotic behavior of the computational resources necessary to solve the problem. For example, the complexity class P (polynomial) involves problems that can be solved by an algorithm with polynomial worst-case time complexity. This means that the time necessary to solve problems in P is bounded by a polynomial function O(n^k). Thus, for all problems in P, there exists an algorithm that can solve the problem in time O(n^k), for some k. Problems in other complexity classes, such as NP, are treated in a similar manner [21].

3.1.3 Optimization methods

For many problems, it may not be possible to use analytical methods to solve for a globally optimal solution, and reasonably feasible and optimal solutions, i.e. local optima, will have to suffice [15]. If a given problem is too large and complex, methods that intelligently search through the solution space may be employed instead of analytical methods [18]. Heuristic and metaheuristic methods are some such procedures.

3.2 Metaheuristic methods

A heuristic method is a procedure that is likely to discover a good solution which is not necessarily optimal, for a specific problem being considered [15]. The procedure is often an iterative algorithm where each iteration attempts to find a better solution than found previously. Heuristic methods are often based on common sense on how to search for a good solution [15]. They are important tools for problems that are too complicated to be solved analytically, because such methods are utilized to find a feasible solution that is reasonably close to being optimal. Metaheuristic methods, or metaheuristics, are heuristic methods developed to be a general solution method that can be applied to a variety of different problems rather than designed to fit a specific one [15]. These are useful for avoiding having to develop new heuristic methods to fit a problem that cannot be solved optimally.

Blum and Roli (2003) summarize metaheuristic methods in the following points:

• Metaheuristics are strategies that "guide" the search process.
• The goal is to efficiently explore the search space in order to find (near-)optimal solutions.
• Techniques which constitute metaheuristic algorithms range from simple local search procedures to complex learning processes.
• Metaheuristic algorithms are approximate and usually non-deterministic.
• They may incorporate mechanisms to avoid getting trapped in confined areas of the search space.
• The basic concepts of metaheuristics permit an abstract level description.
• Metaheuristics are not problem-specific.
• Metaheuristics may make use of domain-specific knowledge in the form of heuristics that are controlled by the upper level strategy.
• Today's more advanced metaheuristics use search experience (embodied in some form of memory) to guide the search [6].

There are many popular and tried metaheuristic methods, and some of the most noteworthy ones will be described in short. Metaheuristics can be divided into groups using various criteria. Here, they are divided into population-based methods and trajectory methods.

3.2.1 Genetic Algorithm

The Genetic Algorithm, henceforth sometimes referred to as GA, is one metaheuristic among several others based on evolution, such as Evolutionary Programming, Evolutionary Computation and Evolution Strategies. The Genetic Algorithm is a stochastic search-based optimization algorithm inspired by the process of evolution by means of natural selection, primarily fit for nonlinear optimization problems where gradient-based methods cannot be applied due to lack of smoothness of the objective function [13, 12]. The algorithm makes use of processes such as selection, crossover, and mutation to improve upon a set of solutions and converge towards an optimal solution. Each solution consists of a set of properties, and it is through manipulation of these properties that the Genetic Algorithm can converge towards a good solution [16]. Due to its use of a set of solutions, it can be described as a population-based method.
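The ingredients named above (a population of solutions, selection, crossover and mutation) can be combined into a minimal GA skeleton. This is a generic sketch with assumed operators and parameters, not the thesis implementation:

```python
import random

def genetic_algorithm(fitness, random_solution, crossover, mutate,
                      pop_size=50, generations=100, mutation_rate=0.1):
    """Minimal GA skeleton: keep the fitter half of the population
    (truncation selection), refill with recombined, occasionally
    mutated children. Higher fitness is better."""
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = crossover(a, b)
            if random.random() < mutation_rate:
                child = mutate(child)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

With an averaging crossover and Gaussian mutation, this skeleton quickly concentrates a population of one-dimensional solutions around the maximum of a simple fitness function such as −x².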

3.2.2 Ant Colony Optimization

Ant Colony Optimization, or ACO, is an algorithm based on the behavior of ants finding the shortest path to an objective. It is performed by defining a completely connected graph whose vertices are components of a solution, with paths between vertices. The solutions are often limited to completely feasible solutions. ACO utilizes artificial ants to walk through the graph, and these ants leave behind a pheromone trail based on the fitness of the solution. The vertices the ants walk through are combined into a solution. The pheromone trail affects the other ants' probability of choosing a path, but this pheromone trail becomes weaker over time if no ant walks over it again [6]. This creates a search method that attempts to find the most popular path, which corresponds to a solution. As there must be several ants, ACO is also a population-based method [5].

3.2.3 Particle Swarm Optimization

The Particle Swarm Optimization method is inspired by migrating flocks of birds attempting to reach a destination. One solution is one bird, and the flock is spread out over an area. The flock attempts to fly the path that the bird closest to the destination took, and so in PSO, each solution attempts to mimic the best solution. On the way to the best solution, each solution estimates its own area for possible improvements [11]. This would allow the PSO to scan a large area, but collectively search towards a certain direction.

3.2.4 Basic Local Search

Basic Local Search, also called Iterative Improvement, is one of the most basic metaheuristics. It works by testing solutions near its current solution and moving to a solution that is better than the current one. The selection criterion can differ, such as picking the first improving solution found or the best solution in the neighborhood. Since this algorithm always gets stuck in the first local optimum, its performance is unsatisfactory [6]. Instead, it can be modified or combined with other metaheuristics.

3.2.5 Iterated Local Search

Iterated Local Search is a general trajectory method and, due to its simplicity and generality, it can be combined with other metaheuristics [6]. It starts from an initial point and performs a local search to find a local optimum. Once such an optimum is found, the algorithm perturbs the solution and continues with a local search. If another local optimum is found, it perturbs the best of these optima and continues. The perturbations must be random, but must also be neither too strong nor too weak. A strong perturbation would essentially make this algorithm a restarting local search, while a weak one would not allow it to leave the proximity of its first local optimum [6, 12].


3.2.6 Simulated Annealing

Simulated Annealing is an algorithm that allows moves resulting in worse solutions than the current one, with the goal to eventually find a global optimum. It starts with an initial solution and a so-called temperature parameter T. At each iteration, a new solution is randomly sampled and evaluated based on the current solution, the new solution, and T. The new solution is accepted if it has a better objective function value, but worse solutions are also accepted with a certain probability. This probability depends on T, and a higher T results in a higher probability [6, 12]. The temperature T is decreased during the search process, which leads the algorithm to initially explore the feasible space but converge towards better solutions as T is decreased. The rate at which T decreases is important to the convergence towards a global optimum [6, 12].
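The acceptance rule described above can be sketched in a few lines. This is an illustrative sketch only: the neighbor function, starting temperature `T0` and the geometric `cooling` factor are assumptions for demonstration, not part of the software developed in this thesis.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, T0=1.0, cooling=0.95, iters=200):
    """Minimize f from x0; worse moves are accepted with probability exp(-delta/T)."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(iters):
        y = neighbor(x)
        fy = f(y)
        delta = fy - fx
        # Always accept improvements; accept worse moves with probability exp(-delta/T)
        if delta <= 0 or random.random() < math.exp(-delta / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling  # geometric cooling schedule (an illustrative choice)
    return best, fbest
```

Note that a slower cooling schedule explores more but takes longer, which is exactly the tuning sensitivity discussed in Section 3.2.8.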

3.2.7 Tabu Search

Any application of Tabu Search includes a local search procedure as a subroutine [15]. It performs a search in close proximity to its current solution, and picks a direction where the new solution is an improvement. The key strategy of Tabu Search is allowing moves that do not improve upon the solution. This creates a risk of leaving a local optimum only to move back towards it. For that purpose, Tabu Search keeps forbidden moves in a list called a ”tabu list”. This theoretically prevents the algorithm from going in circles, letting it instead move towards a global optimum [15, 12]. Oftentimes it is impractical to implement a tabu list of complete solutions because managing such large data is inefficient. Solution attributes are stored instead, which are components belonging to solutions, moves, or differences between two solutions [6]. This creates a problem where an attribute can forbid solutions that were never visited. A solution to this problem is the creation of aspiration criteria that allow the algorithm to move towards a forbidden solution if it meets an aspiration criterion [6]. One such criterion is selecting solutions that are better than the current best one.
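A minimal sketch of the tabu list and aspiration criterion on a toy one-dimensional problem might look as follows; the `tabu_size`, iteration budget and neighborhood are illustrative assumptions, and complete solutions (rather than attributes) are stored for simplicity.

```python
from collections import deque

def tabu_search(f, x0, neighbors, tabu_size=5, iters=50):
    """Minimize f; recently visited solutions are forbidden (tabu),
    unless they beat the best found so far (aspiration criterion)."""
    x = x0
    best, fbest = x0, f(x0)
    tabu = deque([x0], maxlen=tabu_size)
    for _ in range(iters):
        # Admissible moves: not tabu, or tabu but better than the best so far
        candidates = [y for y in neighbors(x) if y not in tabu or f(y) < fbest]
        if not candidates:
            break
        x = min(candidates, key=f)  # best admissible move, even if worse than current
        tabu.append(x)
        if f(x) < fbest:
            best, fbest = x, f(x)
    return best, fbest
```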

3.2.8 Choice of metaheuristic

Although there are several algorithms fit for the purpose of optimizing problems that are too large and complex to be solved analytically, the nature of this problem makes certain algorithms unfit to be considered. Exhaustive treatment of the complexity and nature of the problem treated in this thesis will be presented in Chapter 4.


• Simulated Annealing is potentially a good algorithm for finding a global optimum, but it is very dependent on the speed of reduction of T . The optimal speed is unknown and should be tested, but a speed too low would take longer time than otherwise necessary with other algorithms. This is because a low speed would be similar to a brute force method of randomly finding optima. It would be more desirable to use a method that does not need a lot of testing and altering due to slight changes in the problem, such as analyzing different pile groups.

• Tabu Search would need to keep track of tabu lists. However, for problems that are nonlinear, discrete and complex, the risk of forbidding unexplored solutions that share an attribute with an explored solution is high. Minor changes can have major effects in this problem, and therefore, unintentionally forbidding any unexplored solutions is viewed as a weakness. Although aspiration criteria would offset this, the algorithm would be similar to the population-based methods but without any population. As such, Tabu Search would have a tendency to get stuck in a local optimum and have a difficult time finding the local path towards an improvement.

• The Iterated Local Search would be a good method for finding local optima. One weakness is that perturbations cannot be too strong or too weak, which results in a problem similar to Simulated Annealing, since this would require testing and altering the perturbations for different pile groups. However, due to its simplicity, the perturbations may be left to random chance, and the algorithm could occasionally find the right perturbation strength randomly. Since the Iterated Local Search is quick in terms of computing time, having several too weak or too strong perturbations does not necessarily result in extreme computing times.

• PSO performs very well in some types of problems [11]. In problems similar to the one treated in this thesis however, there is no general direction towards which all solutions could migrate. Attempting to mimic the best solution could either converge all solutions towards the same local optimum or not improve the solutions at all. Since pile forces have to be evaluated as a whole pile group, mimicking a few of the piles from the best solution will not necessarily result in any improvement if the other piles are not compatible with the mimicked piles.


• ACO results in a needlessly complicated and convoluted implementation. Using ACO, the artificial ants pick a random trail on a graph to the end, and an entire path is one solution. To define the pile group problem as a graph, it would be necessary to create a much larger graph than the problem itself, to account for all possible pile positions and angles. In addition, the order in which the solutions are created is irrelevant, since pile groups are evaluated when all piles are assigned. Therefore, ACO is deemed impractical for larger cases of the described problem. Combined with the fact that graphs are typically made to take only feasible solutions into account, ACO would not be fit for this problem.

• GA studies with different objectives have been performed on pile group problems [3, 7, 8, 17], which encourages the choice of GA for the problem treated in this thesis. The algorithm produces satisfactory solutions for problems of this nature, and is in general an acceptable technique for difficult problems where classic optimization methods fail due to unsteadiness, non-differentiable functions, noise, and other factors [16]. Developing a GA software tailored to this specific problem is expected to improve the computing speed over the general GA.

Another aspect in which GA differs from many other algorithms is how well it finds local optima. For the described problem, finding a global optimum is not necessary and it is enough to find as good of a local optimum as reasonably possible. GA is fit for solving problems with many local optima [16], but the algorithm does not know whether a solution is globally optimal or not. There are several proposed modifications to GA that may increase the probability of finding a global optimum [16].

If finding a global optimum is of utmost importance, using metaheuristic methods such as Simulated Annealing [27] might be a decent choice, although in this particular problem, doing so would require a lot of time due to the large solution space. Thus, GA offers a compromise between time and result, although there are some problems in which the algorithm performs worse in both time and result than other metaheuristics [11], which implies potential for improvement using another metaheuristic.


3.3 Genetic Algorithm

For the reasons mentioned above, the choice of GA for solving this problem is deemed justified. The remainder of this chapter is dedicated to explaining the algorithm.

3.3.1 Population and generation

In GA, a population is a group of solutions, also referred to as individuals [16]. For a given optimization problem, a group of potential solutions is initially generated, i.e. the first generation. This generation is far from an optimum and does not necessarily have to be feasible, but serves as a beginning to a long chain of generations that will eventually converge towards an optimum. Thus, a generation is the population in a particular iteration.

3.3.2 Fitness value

Within a population, each solution or individual has a corresponding fitness value calculated with a fitness function [19]. The fitness value indicates how good, or fit, a solution is. Solutions with better fitness values have a higher chance to be used in crossovers, and the very best solutions in a generation may be cloned to the next generation.

3.3.3 Elite clones

Within each population, some number of the most fit solutions are chosen to carry over to the next generation of solutions as clones. These solutions guarantee that the best fitness value of each generation will either be maintained from one generation to another or improved upon, while providing properties of higher quality within a population. An example of two elite clones carrying over from one generation to the next is shown in Figure 3.1.

Figure 3.1: Elite clones (E) carrying over to the next generation. The P’s represent parents and the C’s represent their children.


3.3.4 Crossover

Crossover is a reproduction function. Within each generation, a smaller group of solutions is selected to combine their properties in order to create new solutions, and these new solutions are considered a new generation. The crossover group is called parents, and the ”genes” of two (or more) such parents are combined in a randomized way to produce one (or more) new solution, their child [26]. It is these children that make up a new generation, and since parent solutions mostly have good fitness values, the new generation is expected to be better in terms of fitness.

The crossover is a convergence operator which directs solutions towards local optima, since the more ”fit” parents have a higher chance of producing children and thus, future descendants will share similarities. The time performance of GA, as well as the avoidance of premature convergence, is highly impacted by the particular choice of crossover technique [26]. Two common examples are given below:

1. The k-point crossover randomly selects k shared crossover points in the chromosome strings of two parents, and the data between the crossover points is swapped and combined to produce two new children [26]. A simple example is shown in Figure 3.2.

Figure 3.2: Illustration of the k-point crossover.

2. The uniform crossover selects individual genes from each parent instead of whole chromosome segments, with the probability of a gene being inherited from parent 1 being p1, and the corresponding probability for parent 2 being p2 = 1 − p1 [26]. Figure 3.3 demonstrates the concept.


Figure 3.3: Illustration of the uniform crossover.
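The two crossover techniques can be sketched as follows. This is a generic illustration on lists of genes, under the assumption of equal-length parents; it is not the thesis implementation.

```python
import random

def uniform_crossover(parent1, parent2, p1=0.5):
    """Each gene is inherited from parent1 with probability p1, else from parent2."""
    return [g1 if random.random() < p1 else g2
            for g1, g2 in zip(parent1, parent2)]

def k_point_crossover(parent1, parent2, k=1):
    """Swap whole chromosome segments between k randomly chosen crossover points."""
    points = sorted(random.sample(range(1, len(parent1)), k))
    child1, child2 = list(parent1), list(parent2)
    swap = False
    prev = 0
    for cut in points + [len(parent1)]:
        if swap:  # alternate segments are exchanged between the children
            child1[prev:cut], child2[prev:cut] = child2[prev:cut], child1[prev:cut]
        swap = not swap
        prev = cut
    return child1, child2
```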

3.3.5 Mutation

Each solution may be subject to mutations; random changes in one or more properties (genes) of a solution according to some probability, referred to as the mutation rate. Mutations exist in order to maintain and introduce diversity into a population. With more diversity, the algorithm has a lower risk of ending up in suboptimal local minima [14]. Different types of mutation operators exist, including, but not limited to:

• bit flip mutation, where a random bit of data in the chromosome is flipped,

• mutations that replace the value of a randomly selected gene with a random value from a given probability distribution, and

• mutations that interchange the values of two randomly selected genes.

A bit flip mutation is illustrated in Figure 3.4.

Figure 3.4: Illustration of a bit flip mutation.

In other words, mutation is a divergence operator, and increases the extent to which the algorithm searches the solution space. A mutation probability that is too high results in a completely random search that will hinder convergence, and a probability that is too low will result in convergence towards the local optima most similar to the most fit initial individuals.
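Two of the listed mutation operators can be sketched as follows; the per-bit mutation `rate` and the binary chromosome encoding are illustrative assumptions.

```python
import random

def bit_flip_mutation(chromosome, rate=0.01):
    """Flip each bit independently with probability `rate` (the mutation rate)."""
    return [1 - bit if random.random() < rate else bit for bit in chromosome]

def swap_mutation(chromosome):
    """Interchange the values of two randomly selected genes."""
    i, j = random.sample(range(len(chromosome)), 2)
    mutated = list(chromosome)
    mutated[i], mutated[j] = mutated[j], mutated[i]
    return mutated
```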


Now, the GA described in Section 3.3 is a simple, general version that can in theory be implemented for any defined optimization problem [16]. The GA implemented for the specific problem described in this thesis will be presented in Chapter 4 and Chapter 5. A flowchart of the simple, general version of GA is shown in Figure 3.5.

Figure 3.5: Flowchart describing the most basic GA.
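The flow in Figure 3.5 can be sketched as a generic minimization loop. All operator arguments are problem-specific callables, the fixed generation count stands in for the stop criterion, and every parameter value here is an illustrative assumption.

```python
import random

def genetic_algorithm(fitness, random_individual, crossover, mutate,
                      pop_size=20, n_elites=2, generations=100):
    """Minimal GA following the flowchart: elitism, crossover, mutation.
    A lower fitness value is better (minimization)."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):  # stop criterion: fixed number of generations
        population.sort(key=fitness)
        elites = population[:n_elites]  # elite clones carry over unchanged
        children = []
        while len(children) < pop_size - n_elites:
            # Parents drawn from the fitter half of the population
            p1, p2 = random.sample(population[:pop_size // 2], 2)
            children.append(mutate(crossover(p1, p2)))
        population = elites + children
    return min(population, key=fitness)
```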

Before moving on, some basic terminology regarding GA in the context of the work presented in this thesis is shown in Table 3.1.


GA terms                  Practical terms
Population                All pile groups
Generation                Pile groups in an iteration
Individual or solution    A single pile group
Parents                   Pile groups used to produce a new pile group
Child                     A pile group produced from combining previous generation parents
Elite clone               A pile group unchanged from the previous generation

Table 3.1: GA terms in the context of the pile group problem.


Chapter 4

Implementation of GA

In this chapter, the implementation and performance enhancements of GA for the specific problem of designing and optimizing pile groups are described. Section 4.1 gives a short introduction explaining the method employed. Section 4.2 and Section 4.3 present the GA implementation.

4.1 Method

In order to solve the problem of automating and optimizing the design of pile groups, a proof of concept software is developed in MATLAB. The function of the software is divided into two parts: one is the Genetic Algorithm that performs the automation and the optimization, and the other is the calculations using the mathematical model described in Chapter 2 that define the constraints and the physical forces in the model.

Initially, the problem was researched thoroughly by studying previous pile group designs and through interviews with Mahir Ülker-Kaustell from Tyréns AB, in order to understand the problem of designing pile groups. The summary of that research was presented in Chapter 2. Afterwards, genetic algorithms were researched for the purpose of understanding the ideas behind them, and to be able to develop a software utilizing such an algorithm on this specific problem. The research on genetic algorithms was presented in Chapter 3.

Numerical values used in this problem were gathered from previous pile group designs and queries to Tyréns staff. Included are values for pile cross section area, pile material Young’s modulus, pile length, pile angle of approach relative to the pile cap, pile cap dimensions and the number of piles used.


Later in this chapter, the application of GA onto the problem is described, and the calculations on the physics and constraints are presented.

4.2 Using the Genetic Algorithm

The software requires definitions of parameters such as the number of piles used or pile cap dimensions, before initializing the GA process. The specific case used as a starting point is a large bridge column, anchoring one side of a bridge with 70 piles. The parameters for this case are presented in Table 4.1.

Parameters                                                   Example values
Population size                                              20
Number of piles                                              70
Pile cross section area                                      729 cm²
Pile elastic modulus                                         32 · 10⁹
Pile cap dimensions                                          120 × 180
Size of grids                                                1.2 m
Allowed pile tilt interval                                   5°–20°
Allowed minimum distance between pile heads                  1.2 m
Allowed minimum distance between piles underground           1.2 m
Allowed maximum pile force                                   1000 kN
Allowed minimum pile force                                   0 kN
Desired pile force                                           500 kN
Definition of equation plane                                 z + 10 = 0
Force point of application on the pile cap                   0 × 138
Penalty value                                                70
Mutation rate                                                560⁻¹
Strength of angle mutation interval                          5°–15°
Number of best individuals selected as potential parents     4
Number of random individuals selected as potential parents   5
Number of elite clones                                       2

Table 4.1: The parameters needed by the GA, with values from the example case.

4.2.1 Pile cap as a grid

Pile caps, the large plates of reinforced concrete that connect a supporting column above and the piles below, vary from case to case in terms of dimensions and design. These dimensions can be determined concurrently with the pile group design, which means that should a pile cap’s size be inadequate for designing a fitting pile group, it is possible to design a pile cap with different dimensions before producing the actual


pile cap.

As piles have standard widths, a pile group design should not put piles close together relative to their width plus a certain margin. In addition, there are regulations for minimum pile head distances [25]. Due to this, the pile cap is divided into a grid, with all squares in the grid being of the same size. Using this grid, it is possible to require that at most one pile can be anchored to a grid square in the design. This helps the pile designs be more robust in terms of positioning sensitivity, but reduces the number of possible solutions and thus slightly reduces the complexity of the problem. The grid system is visualized in Figure 4.1.

Figure 4.1: Pile cap with the grid system visualized.

A reduction in complexity leads to fewer possible designs for any given case, which could lead to a solution that may be globally optimal in the grid-separated design,


but suboptimal in a free-placement design. For the purpose of maximizing robustness, even small margins on the scale of centimeters could be valuable. Therefore, to preserve some of the complexity lost due to the grid design, the piles are allowed to be placed anywhere within their grid squares as long as there is no other pile inside the same square. This creates a possibility of two piles being too close despite being in separate grid squares, if the squares are adjacent and the piles are anchored near a shared edge. Many possible solutions are thereby reintroduced into the model, but the grid no longer automatically forces a margin of error around the pile position.

Apart from increasing robustness in terms of pile positioning sensitivity, dividing the pile caps into grids results in drastic improvements of the software efficiency. One huge improvement is in the penalty constraint calculation, and another is for the crossover and mutation processes. This will be discussed further in Sections 4.2.3, 4.2.7 and 4.2.8.

4.2.2 Initial population

An initial population, comprised of a number of pile group designs, is created by producing the correct quantity of piles for each pile group. Each such design has an identical number of piles, and the number of designs, or individuals, is determined in the variable definition and can be changed by the user. The tilt and rotation of each pile, and each pile’s position on the pile cap, are random and uniformly distributed. This results in a suboptimal and infeasible population of pile groups. However, this randomized population is the most efficient way of starting the GA iterations. The different piles throughout the entire generation will be combined, selecting only the best combinations among them through the process of crossovers, which will be improved further through mutations.

Furthermore, a random creation solves the problem of the GA iterations being cold-started if the pile groups in the first generation were too similar. Such a homogenized generation would rely heavily on mutations to create unique individuals, but mutations are rare, and it would therefore take several iterations before the initial population gave rise to a well-differentiated population. Randomizing the pile groups also makes the method function equally well for differently sized pile caps and for other quantities of piles. Thus, this is a good general method of


creating an initial population.

Finally, another reason for employing randomized pile group generation over a more deliberate design of the initial population is that any heuristic design would be created with bias, possibly hindering the GA from reaching a good solution. An unbiased, evenly spread out generation is a simple and efficient start for a GA since it leads to broader searches in the solution space.

4.2.3 Fitness function

For the purpose of reducing a pile group design’s sensitivity to pile angles and positions, a fitness function that will lead the populations towards a robust design has to be chosen. Due to the mathematical model defined in Chapter 2, pile forces are directly related to and solely depend on pile positions and angles. Thus, it necessarily follows that robustness in terms of pile forces translates into robustness in terms of pile positions and angles, and a fitness function that maximizes the robustness in terms of pile forces is used for the algorithm. In addition, the fitness function should be constructed such that it grows faster for a pile group with a high (poor) fitness value than for one with a lower value.

The interval (2.18) gives the upper and lower bounds of tolerable forces acting on each pile, $F_u$ and $F_l$. Defining

$$ F_m = \frac{F_u - F_l}{2} = \frac{1000 - 0}{2} = 500 \; [\mathrm{kN}] $$

as the average of the two, and following the aforementioned conclusions, a robust pile group should have its worst pile exposed to a force as close to $F_m$ as possible. Thus, the fitness function is defined on the multi-objective form

$$ f(F_i^{\max}, F_i^{\min}) = \max_i\big((F_i^{\max} - F_m)^2\big) + \max_i\big((F_i^{\min} - F_m)^2\big). \qquad (4.1) $$

Minimizing (4.1) achieves what is sought after. The terms are squared since force values below $F_m$ are allowed. Therefore,

$$ F_l \leq F_i^{\min} < F_i^{\max} \leq F_u, \qquad i = 1, \ldots, n \qquad (4.2) $$

must hold for a pile group to be considered feasible in terms of forces due to (2.18), where $n$ is the number of piles in a pile group.
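As a sketch, the fitness function (4.1) and the force feasibility check (4.2) could be expressed as follows. Python is used here for illustration only; the thesis software is written in MATLAB, and the function names are hypothetical.

```python
def fitness(F_max, F_min, F_m=500.0):
    """Multi-objective fitness (4.1): squared distance of the worst-case pile
    forces from the desired force F_m. Lower is better."""
    return (max((f - F_m) ** 2 for f in F_max)
            + max((f - F_m) ** 2 for f in F_min))

def feasible(F_max, F_min, F_l=0.0, F_u=1000.0):
    """Force feasibility (4.2): every pile's force range must lie in [F_l, F_u]."""
    return all(F_l <= lo and hi <= F_u for lo, hi in zip(F_min, F_max))
```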


4.2.4 Pile distance constraints

4.2.4.1 Pile head distance constraints

The mathematical calculations for the distances between pile heads are somewhat computationally taxing, as all piles would have to be compared to each other. Thus, the number of times such a calculation would have to be performed is n(n − 1)/2, where n is the number of piles in a pile group.

To improve upon this, the computations that yield the distances between pile heads utilize the grid system described in Section 4.2.1. Since only one pile can be anchored to each grid square, calculating the distance of a pile to its neighbors requires between zero and four computations, depending on the occupancy of nearby grids. For an arbitrary pile i, the algorithm checks if there is a pile j in the grid square to the left and the three neighboring grid squares in the grid row below. If such a pile does exist, the distance between these two piles is computed as

$$ d_{i,j} = \sqrt{(x_{i,h} - x_{j,h})^2 + (y_{i,h} - y_{j,h})^2} \qquad (4.3) $$

where $(x_{i,h}, y_{i,h})$ are the pile head coordinates of pile $i$ and $(x_{j,h}, y_{j,h})$ are the pile head coordinates of pile $j$. If the distance is below a certain predefined length, typically $d_{i,j} < 1.2$ m, the differences between the pile head distances and the allowed distance are summed together into one number, which is used to penalize the fitness function:

$$ W_d = \sum_{i,j}^{n} (1.2 - d_{i,j}), \qquad d_{i,j} < 1.2, \quad \forall i, j = 1, \ldots, n, \; i \neq j. \qquad (4.4) $$
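A straightforward sketch of the penalty (4.4); for clarity it checks all pile pairs, rather than only the neighboring grid squares that the software restricts itself to.

```python
import math

def head_distance_penalty(heads, d_min=1.2):
    """Pile head penalty W_d (4.4): sum of the shortfalls below the minimum
    allowed head distance d_min, over all pile pairs. `heads` is a list of
    (x, y) pile head coordinates."""
    W_d = 0.0
    n = len(heads)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.hypot(heads[i][0] - heads[j][0], heads[i][1] - heads[j][1])
            if d < d_min:
                W_d += d_min - d  # only violating pairs contribute
    return W_d
```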

4.2.4.2 Pile body distance constraints

Obtaining the smallest distance between each pair of pile bodies is similarly computationally demanding since, again, given n piles the algorithm has to do n(n − 1)/2 computations. The grid system described in Section 4.2.1 cannot be utilized here since it only applies to the pile cap. Thus, the method that calculates the smallest distance for the piles reduces the computation time by avoiding the use of nested loops. The distance values are calculated by modeling each pile as an infinite line in 3-D space:

$$ \mathbf{L}(t) = \mathbf{L}_h + t\mathbf{v} \qquad (4.5) $$

where $\mathbf{L}_h$ is a vector of the pile head coordinates, $\mathbf{L}_f$ a vector of the pile foot coordinates and $\mathbf{v} = \mathbf{L}_f - \mathbf{L}_h$ is the direction vector of the line. Given two arbitrary lines, $\mathbf{L}_1(t_1)$ and $\mathbf{L}_2(t_2)$, $\mathbf{W}(t_1, t_2)$ is defined as

$$ \mathbf{W}(t_1, t_2) = \mathbf{L}_1(t_1) - \mathbf{L}_2(t_2) \qquad (4.6) $$

and is a vector that connects the two lines for arbitrary points given by $t_1$ and $t_2$. Computing the smallest distance, defined as

$$ W_{min} = \|\mathbf{W}(t_1^*, t_2^*)\| \qquad (4.7) $$

between $\mathbf{L}_1(t_1)$ and $\mathbf{L}_2(t_2)$ and the corresponding two points on each line, is equivalent to computing the values of $t_1$ and $t_2$ that minimize $\|\mathbf{W}(t_1, t_2)\|$, i.e. $t_1^*$ and $t_2^*$.

(4.7) is uniquely perpendicular to the direction vectors $\mathbf{v}_1$ and $\mathbf{v}_2$, i.e. $\mathbf{v}_1 \cdot \mathbf{W}_{min} = 0$ and $\mathbf{v}_2 \cdot \mathbf{W}_{min} = 0$. Thus, $t_1^*$ and $t_2^*$ are obtained by solving

$$ \begin{aligned} \mathbf{L}_{h,1} \cdot \mathbf{v}_1 + (\mathbf{v}_1 \cdot \mathbf{v}_1)t_1 - (\mathbf{L}_{h,2} + t_2\mathbf{v}_2) \cdot \mathbf{v}_1 &= 0 \\ \mathbf{L}_{h,1} \cdot \mathbf{v}_2 + (\mathbf{v}_1 \cdot \mathbf{v}_2)t_1 - (\mathbf{L}_{h,2} + t_2\mathbf{v}_2) \cdot \mathbf{v}_2 &= 0, \end{aligned} \qquad (4.8) $$

which yields

$$ \begin{aligned} t_1^* &= \frac{(\mathbf{v}_1 \cdot \mathbf{v}_2)\big(\mathbf{v}_2 \cdot (\mathbf{L}_{h,1} - \mathbf{L}_{h,2})\big) - (\mathbf{v}_2 \cdot \mathbf{v}_2)\big(\mathbf{v}_1 \cdot (\mathbf{L}_{h,1} - \mathbf{L}_{h,2})\big)}{(\mathbf{v}_1 \cdot \mathbf{v}_1)(\mathbf{v}_2 \cdot \mathbf{v}_2) - (\mathbf{v}_1 \cdot \mathbf{v}_2)^2} \\ t_2^* &= \frac{(\mathbf{v}_1 \cdot \mathbf{v}_1)\big(\mathbf{v}_2 \cdot (\mathbf{L}_{h,1} - \mathbf{L}_{h,2})\big) - (\mathbf{v}_1 \cdot \mathbf{v}_2)\big(\mathbf{v}_1 \cdot (\mathbf{L}_{h,1} - \mathbf{L}_{h,2})\big)}{(\mathbf{v}_1 \cdot \mathbf{v}_1)(\mathbf{v}_2 \cdot \mathbf{v}_2) - (\mathbf{v}_1 \cdot \mathbf{v}_2)^2}. \end{aligned} \qquad (4.9) $$

Computing (4.7) is the same as minimizing

$$ \|\mathbf{W}(t_1, t_2)\|^2 = (\mathbf{L}_{h,1} - \mathbf{L}_{h,2} + t_1\mathbf{v}_1 - t_2\mathbf{v}_2) \cdot (\mathbf{L}_{h,1} - \mathbf{L}_{h,2} + t_1\mathbf{v}_1 - t_2\mathbf{v}_2). \qquad (4.10) $$

Each pile $i$ is defined for $t_i \in [0, 1]$. Therefore, when calculating (4.9), if either $t_1^*$ or $t_2^*$ is outside of this interval, the corresponding $t_i$ is set to either 0 or 1 depending on whether $t_i < 0$ or $t_i > 1$, and the derivative of (4.10) is calculated with respect to the other $t$, giving the final solution pair $(t_1^\star, t_2^\star)$.

For example, if $t_1 > 1$, the derivative of (4.10) with respect to $t_2$ is computed with $t_1 = 1$ substituted, and is solved for $t_2$. If $t_2 \in [0, 1]$, the pair $(t_1^\star, t_2^\star)$ has been found. If $t_2$ is outside the interval, the same procedure is repeated.
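The closed form (4.9) together with the clamping procedure can be sketched as follows. This is an illustrative implementation, not the thesis's MATLAB code; the parallel-line branch and numerical tolerance are assumptions.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def segment_distance(h1, f1, h2, f2):
    """Smallest distance between two pile bodies, each modeled as a segment
    from head h to foot f (parameter t in [0, 1]), using the closed form (4.9)
    with the parameters clamped back into [0, 1] when they fall outside."""
    v1 = [q - p for p, q in zip(h1, f1)]
    v2 = [q - p for p, q in zip(h2, f2)]
    w0 = [p - q for p, q in zip(h1, h2)]          # L_{h,1} - L_{h,2}
    a, b, c = dot(v1, v1), dot(v1, v2), dot(v2, v2)
    d, e = dot(v1, w0), dot(v2, w0)
    denom = a * c - b * b                          # zero for parallel lines
    if denom > 1e-12:
        t1 = (b * e - c * d) / denom               # unconstrained optimum (4.9)
        t2 = (a * e - b * d) / denom
    else:
        t1, t2 = 0.0, e / c                        # parallel: fix t1, solve for t2
    # Clamp each parameter and re-minimize (4.10) in the other, as described above
    t1 = min(max(t1, 0.0), 1.0)
    t2 = min(max((b * t1 + e) / c, 0.0), 1.0)
    t1 = min(max((b * t2 - d) / a, 0.0), 1.0)
    w = [w0[i] + t1 * v1[i] - t2 * v2[i] for i in range(3)]
    return dot(w, w) ** 0.5
```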


The above calculations are performed for each possible pair of piles. The pile body distances are collected into a vector $[W_{min,1,2}, W_{min,1,3}, \ldots, W_{min,i,j}]$ for all distances under a certain predefined length, typically $W_{min} < 1.2$ m. The differences between the pile body distances and the allowed distance are summed together into one number, which will be used to penalize the fitness function:

$$ W_c = \sum_{i,j}^{n} (1.2 - W_{min,i,j}), \qquad W_{min,i,j} < 1.2, \quad \forall i, j = 1, \ldots, n, \; i \neq j. \qquad (4.11) $$

4.2.5 Problem formulation

Defining

$$ v_{i,g} = \begin{cases} 1 & \text{if pile } i \text{ is in grid square } g \\ 0 & \text{otherwise}, \end{cases} $$

allows for the implementation of the constraint

$$ \sum_{i=1}^{n} v_{i,g} \leq 1, \qquad g = 1, \ldots, m \qquad (4.12) $$

where $m$ is the number of grid squares on the pile cap. The expression (4.12) states that each grid square $g$ may at most contain a single pile. Apart from the pile head distance and pile body constraints, the allowed region for pile head positions on the pile cap is defined as

$$ x_i \in [x_{cap}^{min}, x_{cap}^{max}], \qquad i = 1, \ldots, n \qquad (4.13) $$
$$ y_i \in [y_{cap}^{min}, y_{cap}^{max}], \qquad i = 1, \ldots, n. \qquad (4.14) $$

Similarly, the allowed rotation and tilt angles of a pile are given by the constraints

$$ \phi_i \in [\phi_{min}, \phi_{max}], \qquad i = 1, \ldots, n \qquad (4.15) $$
$$ \theta_i \in [\theta_{min}, \theta_{max}], \qquad i = 1, \ldots, n. \qquad (4.16) $$

Lastly, the constraint that states that the length of each pile has to reach the predefined bedrock plane is given by

$$ L_i = \frac{-(a x_{h,i} + b y_{h,i} + c z_{h,i}) + d}{a \sin\theta_i \cos\phi_i + b \sin\theta_i \sin\phi_i + c \cos\theta_i}, \qquad i = 1, \ldots, n \qquad (4.17) $$

where the values for $L_i$ are used in the calculations that determine $F_i^{max}$ and $F_i^{min}$.
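A sketch of (4.17), which places the pile foot on the bedrock plane $ax + by + cz = d$; the sign of the computed length depends on the orientation conventions chosen for the plane normal and the pile direction vector, which are not fixed by the formula itself.

```python
import math

def pile_length(head, plane, theta, phi):
    """Pile length (4.17): signed distance along the pile direction
    (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta)) from the pile head
    to the bedrock plane a*x + b*y + c*z = d."""
    a, b, c, d = plane
    x, y, z = head
    direction = (a * math.sin(theta) * math.cos(phi)
                 + b * math.sin(theta) * math.sin(phi)
                 + c * math.cos(theta))
    return (-(a * x + b * y + c * z) + d) / direction
```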


The constraints for pile forces and distances between piles are used in the fitness function, and the remaining constraints are hard-coded into the software. To ensure that piles are not too close to each other in a design, a penalty function that increases the fitness value whenever this condition is not met is defined. This creates a relaxed problem with a larger feasible region.

The constraint (4.2) defines the forces that are considered feasible for a pile group. However, since the objective is to get the pile forces as close to Fm as possible, that force constraint is not explicitly stated due to being superfluous. The constraint will be fulfilled eventually if it is possible to do so, despite not being explicitly stated in the problem formulation.

Now, using (4.1), (4.4), (4.11) and (4.12)-(4.17), the minimization problem can be written as

$$ \begin{aligned} \underset{F_i^{\max},\, F_i^{\min}}{\text{minimize}} \quad & \Big( \max_i\big((F_i^{\max} - F_m)^2\big) + \max_i\big((F_i^{\min} - F_m)^2\big) \Big) \cdot (1 + W_d \cdot \beta) \cdot (1 + W_c \cdot \beta) \\ \text{subject to} \quad & \sum_{i=1}^{n} v_{i,g} \leq 1, \qquad g = 1, \ldots, m \\ & x_i \in [x_{cap}^{min}, x_{cap}^{max}], \qquad i = 1, \ldots, n \\ & y_i \in [y_{cap}^{min}, y_{cap}^{max}], \qquad i = 1, \ldots, n \\ & \phi_i \in [\phi_{min}, \phi_{max}], \qquad i = 1, \ldots, n \\ & \theta_i \in [\theta_{min}, \theta_{max}], \qquad i = 1, \ldots, n \\ & L_i = \frac{-(a x_{h,i} + b y_{h,i} + c z_{h,i}) + d}{a \sin\theta_i \cos\phi_i + b \sin\theta_i \sin\phi_i + c \cos\theta_i}, \qquad i = 1, \ldots, n. \end{aligned} $$

This minimization problem is a mixed integer nonlinear programming (MINLP) problem, as well as a multiobjective problem with two functions based on $F_i^{max}$ and $F_i^{min}$. The complexity of the problem arises from the combination of loads and the many degrees of freedom it entails, the non-invertible matrices defined in the mathematical model, the pile distance constraints, the constraints involving the maximal and minimal forces of each pile, and the way in which the objective function and grid constraints are defined.

The constraints are included as penalties with multiplier β in the fitness function as a form of constraint relaxation instead of an explicit constraint in order to allow for a larger feasible space, and thus lower the risk of ending up in suboptimal local minima.


If the constraints are defined explicitly, no infeasible solutions would be generated, but the algorithm could in theory be terminated before the desired optimal solution is achieved, if the only possible child bred by two parents is an infeasible one. With relaxed constraints, such solutions are allowed due to a larger feasible space, but these solutions have a larger fitness value.

By using constraints as a sum of distances shorter than the desired margin and increasing the fitness value multiplicatively, the function allows the generations to incrementally improve despite still being infeasible. Two piles at a distance of 0.3 m from each other will still yield a fitness value improvement if they move to 0.4 m from each other, despite not meeting the distance margin requirement of, for instance, 1.2 m minimum. The penalty term is zero for solutions that meet the pile distance requirement, and the GA finds better solutions by increasing robustness in terms of pile forces. This is different from usual relaxations such as Lagrangian relaxation [18]. Since the penalty multiplier term β disappears for feasible solutions, there is no requirement on β to be a specific value as long as it is large enough, unlike a Lagrangian multiplier. As such, it is assigned a sufficiently large value to discourage infeasible solutions.
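The relaxed objective can be sketched compactly (the default β = 70 matches the penalty value in Table 4.1; the function name is hypothetical):

```python
def penalized_fitness(raw_fitness, W_d, W_c, beta=70.0):
    """Relaxed objective: the force-based fitness (4.1) multiplied by penalty
    factors for pile head (W_d) and pile body (W_c) distance violations.
    Both penalty factors reduce to 1 for feasible solutions."""
    return raw_fitness * (1.0 + W_d * beta) * (1.0 + W_c * beta)
```

Because the penalties enter multiplicatively, shrinking a distance violation always lowers the fitness value, which is what lets infeasible generations improve incrementally.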

4.2.6 Elite clones

The number of elite solutions is chosen to be small relative to the number of total solutions within a population, so as to allow for more variation and randomness. The geometric properties of the piles of an individual determine the fitness value that it yields, and since these properties do not necessarily carry over from a solution to its offspring, the existence of elite individuals is justified mostly by them maintaining or improving the fitness value from one generation to the next. Although these elite individuals were initially exempt from mutations, it was later decided that they should mutate, under the condition that if a mutation worsens an elite individual, the mutation is reverted and the elite individual remains as it was previously.

4.2.7 Crossover

To account for the fact that solutions with a better fitness value should have a higher chance of reproducing, a number of solutions with the lowest fitness values are always included in the "parent" group, instead of assigning a higher reproduction probability to fitter parents. This preference for fitter parents allows quicker convergence towards optima, analogous to natural selection. The remaining parents are chosen at random, with no regard to their fitness values, in order to increase diversity. Two parents, including the elite clones, may only produce one child, rather than two or more, to further increase diversity within the population.

The crossover process is initially implemented by choosing two "parent" pile groups and applying the uniform crossover technique with p1 = p2 = 0.5, where p1 is the probability of a child inheriting a pile from parent 1, and p2 the corresponding probability for parent 2. Each pile is taken as a whole, with its geometrical properties untouched, and put into the "child", until the child pile group has as many piles as required. Due to this randomness, a child does not necessarily have a fitness value comparable to either parent. The non-linearity of the problem means that no single pile or small set of piles has any specific value; the piles can only be evaluated as a whole. Therefore, inherited piles do not necessarily carry the robustness of the parent pile group. In addition, identifying the strengths of a pile group in terms of the positions of individual piles is difficult, if not impossible.

Besides the problem of children not necessarily inheriting the fitness of their parents, it is quite possible that two feasible parents can create an infeasible child. Since the selection of piles would be random, a child inheriting a pile from one parent can also inherit another pile from the other parent that happens to be positioned very close to the first pile. This leads to the children being worse than either parent in some cases.

An improvement to this random crossover is the implementation of the grid system described in Section 4.2.1. It is designed such that children inherit piles randomly, unless the inherited pile is positioned in an occupied grid square. This greatly reduces, though does not completely eliminate, the instances of children having piles too close to each other: since piles can be positioned anywhere within a grid square, two piles can still end up too close without occupying the same square. Increasing the grid square size would reduce this risk further, though it would also reduce the complexity of the problem. The grid system thus introduces some structure into the crossover technique, which addresses some of the aforementioned issues and allows for faster convergence towards local optima. Algorithm 2 displays the pseudo code for the crossover function.


Algorithm 2 Crossover function
 1: procedure Crossover
 2:     ChildPileGroup = []
 3:     i = 1
 4:     while i < piles do
 5:         ChosenParent = Random 50 % of {Parent1, Parent2}
 6:         if ChosenParent(i) is in an empty grid then
 7:             ChildPileGroup = [ChildPileGroup ChosenParent(i)]
 8:             i = i + 1
 9:         end if
10:     end while
11: end procedure
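The pseudocode can be rendered in Python as follows. This is a sketch, not the thesis implementation: it uses 0-based indexing instead of the pseudocode's 1-based counter, and the grid-square size and the pile representation (head coordinates) are assumed for illustration.

```python
import random

GRID = 1.0  # assumed side length of one grid square [m]

def grid_square(pile):
    """Map a pile's head coordinates to its integer grid square."""
    x, y = pile[0], pile[1]
    return (int(x // GRID), int(y // GRID))

def crossover(parent1, parent2, rng):
    """Uniform crossover: each pile is inherited from a randomly chosen
    parent, but only accepted if its grid square is not yet occupied.
    As in the pseudocode, the loop simply retries on an occupied square;
    a production implementation would need a fallback to avoid stalling
    when both candidate piles land in occupied squares."""
    n = len(parent1)
    child, occupied = [], set()
    i = 0
    while i < n:
        chosen = parent1 if rng.random() < 0.5 else parent2
        pile = chosen[i]
        if grid_square(pile) not in occupied:
            child.append(pile)
            occupied.add(grid_square(pile))
            i += 1
    return child
```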

For a visual illustration of the crossover technique, see Figure 4.2 and Figure 4.3.


Figure 4.3: New pile group inheriting the piles with a probability of 50% from each parent. Colored for comparison.

4.2.8 Mutation operator and mutation rate

Mutation is a process that randomly changes a pile group in some way, and it is the main operator for finding better solutions once the algorithm converges towards a local minimum. It is the only operator that allows the population to reach solutions that are not combinations of the first generation. The initial population contains a limited number of unique piles, defined by their pile angles and positions. Since the initial population does not contain all possible piles, mutations that can change piles into completely new ones are necessary, as the crossover process does not change the piles themselves, merely their combination within a pile group.

Mutations occur on the children and the elite clones of a population, i.e. after the crossover process is finished and the next generation is fully formed. The implementation randomly mutates a pile group in any of the following ways:

• moving a pile in a pile group from one grid square to another, keeping its relative position inside the grid square,

• moving a pile somewhere else within its own grid square, and

• changing the tilt or rotation angle of a pile.
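The three moves above can be sketched as a single mutation operator. The pile representation (x, y, tilt, rotation), the grid-square size, the grid extent and the angle ranges are all illustrative assumptions, not parameters from the thesis software.

```python
import random

GRID = 1.0  # assumed side length of one grid square [m]

def mutate_pile_group(group, rng):
    """Apply one randomly chosen mutation move to one random pile."""
    group = list(group)
    i = rng.randrange(len(group))
    x, y, tilt, rot = group[i]
    move = rng.choice(["new_square", "within_square", "angles"])
    if move == "new_square":
        # Move to another grid square, keeping the local offset
        # within the square (assumed 10x10-square grid extent).
        ox, oy = x % GRID, y % GRID
        x = rng.randrange(-5, 5) * GRID + ox
        y = rng.randrange(-5, 5) * GRID + oy
    elif move == "within_square":
        # Reposition the pile anywhere within its own grid square.
        x = (x // GRID) * GRID + rng.uniform(0.0, GRID)
        y = (y // GRID) * GRID + rng.uniform(0.0, GRID)
    else:
        # Change the tilt or rotation angle (assumed degree ranges).
        tilt = rng.uniform(0.0, 15.0)
        rot = rng.uniform(0.0, 360.0)
    group[i] = (x, y, tilt, rot)
    return group
```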

