
Linköpings Universitet

Department of Management and Engineering (IEI)

Master’s Programme in Mechanical Engineering

Master Thesis 30 credits

ISRN: LIU-IEI-TEK-A–21/03976–SE

Validation of Black-and-White Topology Optimization Designs

Sharath Chandra Garla Venkatakrishnaiah

Harivinay Varadaraju

Examiner: Stefan Lindström

Supervisor: Shyam Suresh

2021

Linköpings Universitet

581 83 Linköping


Validation of Black-and-White Topology Optimization Results

Sharath Chandra Garla Venkatakrishnaiah
Harivinay Varadaraju

Master Thesis 2021

ISRN: LIU-IEI-TEK-A–21/03976–SE

Department of Management and Engineering
Division of Solid Mechanics

Linköpings Universitet
581 83 Linköping
Sverige

This document was prepared with LaTeX, March 2021. Template made by Anton Lydell, 2018.


FOREWORD

Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.


ACKNOWLEDGEMENTS

We would like to thank our supervisor Shyam Suresh for his valuable inputs, patience and his insights on this topic. Additionally, we would like to thank the Division of Solid Mechanics at LiU for giving us the opportunity to work with them and, last but not least, our families and friends who kept us motivated during these times.

Working on this thesis has given us a good understanding of topology optimization and has shown us a unique way to solve complex problems: break them down and achieve good results. We would also like to express our gratitude to MathWorks as well as Stack Overflow, whose communities pointed us in the right direction. We would also like to give a shoutout to maths, music, memes and coffee. However, we do not want to thank Migrationsverket, whose actions delayed our thesis.

Sharath Chandra Harivinay Varadaraju Linköping, March 2021


ABSTRACT

Topology optimization has seen rapid developments in its field, with algorithms getting better and faster all the time. These new algorithms help reduce the lead time from concept development to a finished product. Simulation and post-processing of geometry is one of the major development costs. Post-processing of this geometry also takes up a lot of time and depends on the quality of the geometry output from the solver to make the product ready for rapid prototyping or final production. The work done in this thesis deals with the post-processing of the results obtained from a topology optimization algorithm that outputs the result as a 2D image. A suitable methodology is discussed in which this image is processed and converted into a CAD geometry, all while minimizing deviation in geometry, compliance and volume fraction. Further on, a validation of the designs is performed to measure the deviation of the extracted geometry from the post-processed result.

The workflow is coded in MATLAB and uses an image-based post-processing approach. The proposed workflow is tested on several numerical examples to assess its performance, limitations and numerical instabilities. The code written for the entire workflow is included as an appendix and can be downloaded from https://github.com/M87K452b/postprocessing-topopt.


CONTENTS

Foreword
Acknowledgements
Abstract
1 Introduction
  1.1 Research Questions
  1.2 Problem definition
2 Literature Review
  2.1 Topology Optimization
    2.1.1 SIMP Method
  2.2 Geometry Smoothing
  2.3 Geometry Extraction
3 Methodology
  3.1 Overview
  3.2 Phase 1: Topology Optimization
    3.2.1 Compliance computation
    3.2.2 Heaviside Filter
  3.3 Phase 2: Geometry Smoothing
    3.3.1 Interpolation
  3.4 Validation
  3.5 Phase 3: Geometry Extraction
4 Implementation
  4.1 Phase 1: TO
  4.2 Phase 2: Geometry Smoothing
  4.3 Phase 3: Boundary Extraction
5 Results
  5.1 MBB Beam
  5.2 T-Beam with hole
6 Discussion & Analysis
  6.1 Influence of interpolation value
  6.2 Evolution of density through interpolation and thresholding
  6.3 Influence of Savitzky-Golay filter on extracted boundaries
  6.4 Performance Metrics
  6.5 Effect of non-discrete density values on compliance
  6.6 Limitations
    6.6.1 Extreme scenarios
    6.6.2 Evaluation of extreme scenarios
7 Conclusion
  7.1 Outlook
References
A Additional Calculations
  A.1 Stress Computation
B Additional examples
  B.1 Half-Wheel
  B.2 Short cantilever beam with end load
  B.3 Short cantilever beam with mid load
  B.4 Cantilever beam with two load cases
  B.5 Cantilever beam with fixed hole
  B.6 Cantilever beam with fixed hole and mid load
  B.7 L-Beam
  B.8 T-Beam

CHAPTER 1: INTRODUCTION

Topology Optimization (TO) is a mathematical methodology that optimizes a material domain within a prescribed design space, subject to predefined requirements and boundary conditions, such that the resulting domain meets a prescribed set of goals. Product development, traditionally speaking, is an iterative process where a design is created and simulated until it finally meets the design criteria. Moreover, the design can vary based on the experience of the designer as well as the lead time given for product development. This method is evidently time-consuming while still leaving room for enhancement of the structure. This is where TO can greatly reduce the lead time of the component while developing a component suited for a particular objective, which in turn reduces the amount of material used in its manufacture.

TO is usually done using large-scale commercial finite element (FE) software. Such software is usually expensive and uses code that has its limits when it comes to user control. Implementing a TO algorithm using tools such as MATLAB allows fine-tuning the code to user-specific needs. The result from TO is a density field on a regular grid. From a manufacturing perspective, a structure consisting of simply connected material and voids with a distinct interface between them is desired. Hence, a post-processing step is necessary to translate the density field into such a black-and-white structure. The geometry output from this post-processing step must therefore be of very high quality and extremely close to the smooth density field. In this thesis, a method is proposed that extracts a black-and-white solution with a distinct, smooth interface from the TO code and validates how much the extracted geometry deviates from the original solution. The methodologies proposed here can be used in research and development and in industry to quickly produce results for prototyping or even manufacture.

To begin with, a simple TO code is much easier to manipulate and understand as needed. One such code is the popular 88-line MATLAB code, which is an ideal starting point as it supports multiple load cases, is mesh independent and is, most importantly, a fast solver [3]. This code is also short and is validated by the numerous publications which support it as one of the best codes for understanding how TO works. However, this code has its limitations; it can only solve 2D optimization problems and uses only bilinear quad elements in its FE formulation. The optimized result from the code is in the form of a black-and-white image which needs to be manipulated to make it more suitable for our purposes.

1.1 Research Questions

1. How does the TO solver obtain and visualize the results?

2. What are some of the important parameters in the solver code that need to be considered for a good geometry output?

3. Do any changes need to be made to the resulting CAD geometry (obtained using TO) to make it into an FE ready model?

4. How does the extracted geometry hold up in validation against the original TO output?

5. Can the workflow, i.e. converting the black-and-white image to a CAD file, be automated?

1.2 Problem definition

This thesis deals with the formulation of a methodology, or workflow, to post-process the resulting geometry from the 88-line code to make it ready for production in as little time as possible. TO geometries are usually subjected to post-processing owing to the presence of rough surfaces, jagged edges or intermediate densities. If these rough edges end up in the final geometry, they can cause stress concentrations or even decrease the fatigue life of the structure. Hence, a fast and efficient way to post-process results is needed, paving the way to a smoother and more rapid product/prototype development.

Running the 88-line MATLAB code results in a 2D array which is represented as a grey-scale image. Such formats cannot be read by commercial CAD software, let alone be validated. The geometries from the image/matrix are extracted and post-processed, which results in structures that can then be checked using commercial FE software. This helps in validating the design numerically with respect to the strength requirements and in easily prototyping it using different additive manufacturing techniques, if needed for testing.

CHAPTER 2: LITERATURE REVIEW

The formulated research questions help to provide an overview of the limitations of TO results. Hence, it is good to study what has been done previously to overcome these problems. This also helps in establishing a methodology for the current thesis work.

With a preliminary search, one might find several methods to process data in 2D or 3D arrays and convert them into 3D CAD files [1, 13, 11]. After achieving the 3D geometry using one of these methods, the result needs to be cleaned up and validated. To reduce geometry clean-up, one ideally needs perfect black-and-white designs in TO; post-processing methods are available that help achieve such designs [23, 3, 16, 7]. Commercial code such as Spaceclaim by Ansys Inc. has a good built-in additive manufacturing suite, which can help in geometry modifications and shows procedures for efficient re-analysis after obtaining TO results [2]. An automated workflow is desired as it would reduce the time spent running multiple programs or codes to get the final geometry [24].

Literature shows that the most prominent way of resolving these problems consists of three phases, combined under a singular approach called 'Structural Design Optimization' [12]. The three phases are listed below and explained in detail in the following sections.

1. Topology optimization
2. Geometry smoothing
3. Boundary extraction

2.1 Topology Optimization

TO is the first phase in the process of structural design optimization. The TO algorithm decides where to place the material in a prescribed domain to obtain the best structural performance. Various methods have been established to perform TO, such as the density approach, SIMP and the level-set approach [17].

A generalized TO problem can be defined as

$$(\mathrm{TO}) \quad \begin{cases} \min\limits_{\mathbf{x}} & f(\mathbf{x}) \\ \text{subject to} & f_c(\mathbf{x}) \le \bar{f}_c, \quad c = 1, 2, 3, \ldots, n_c, \\ & 0 \le x_e \le 1, \quad e = 1, 2, 3, \ldots, n_e, \end{cases} \tag{2.1.1}$$

where $f(\mathbf{x})$ is the objective function, $f_c(\mathbf{x})$, with $c = 1, 2, \ldots, n_c$, are the constraint functions bounded by the constraint limits $\bar{f}_c$, and $x_e$ are the design variables.

In order to understand the TO problem, some basic definitions need to be established [4]. They are:

• Objective Function (f): This represents the design goal. This function defines how good the generated geometry is at every iteration.

• Design Variable (x): This function/vector describes the design and can be changed during optimization. Parameters of the geometry such as area, length, etc. are usually defined as design variables.

• State Variable (u): This function/vector represents the response of the structure. It is usually defined in terms of the displacement vector u = u(x).

Here the displacement vector u is calculated from standard finite element (FE) analysis as

$$\mathbf{u} = \mathbf{u}(\mathbf{x}) = K(\mathbf{x})^{-1} F,$$

where $K(\mathbf{x})$ is the global stiffness matrix and $F$ is the external load vector.
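As a concrete illustration, a minimal MATLAB sketch of this solve is given below. The variable names (K, F, freedofs, ndof) are assumptions following the conventions of the 88-line code; note that in practice the inverse is never formed explicitly, a direct sparse solve is used instead.

```matlab
% Minimal sketch (assumed names): solve K*U = F on the free degrees of
% freedom rather than forming K(x)^-1 explicitly. ndof: total dof count.
U = zeros(ndof, 1);
U(freedofs) = K(freedofs, freedofs) \ F(freedofs);  % u = K(x)^-1 * F
```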

The result from a TO problem solved in 2D is a 2D array in which each element takes a value between zero and one, ideally equal to one where material is present (shown in black) or zero where it is absent (shown in white).

2.1.1 SIMP Method

In this thesis, the modified SIMP approach is utilized for solving the TO problem [3]. This is one of the most common techniques used to obtain black-and-white designs. Here, each element is assigned a density $x_e$ that determines its Young's modulus $E_e$, as seen in Equation 2.1.2.


$$E_e(x_e) = E_{\min} + x_e^p (E_0 - E_{\min}), \quad x_e \in [0, 1], \tag{2.1.2}$$

where $E_0$ is the stiffness of the material, $E_{\min}$ is a very small value ($10^{-9}$) introduced to prevent singularities in the stiffness matrix, and $p$ is the penalization factor, typically $p > 1$; here $p = 3$. When $E_{\min} = 0$ in Equation 2.1.2, the classical SIMP approach is obtained.
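A minimal MATLAB sketch of this interpolation is shown below; the variable names (xPhys, E0, Emin, penal) follow the 88-line code, and the resulting element moduli are the values used when assembling the global stiffness matrix.

```matlab
% Minimal sketch (names as in the 88-line code): modified SIMP
% interpolation of Eq. (2.1.2), mapping densities to Young's moduli.
E0 = 1; Emin = 1e-9; penal = 3;
Ee = Emin + xPhys.^penal * (E0 - Emin);  % elementwise Young's modulus
```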

2.2 Geometry Smoothing

Phase two of structural design optimization is geometry smoothing. Traditionally used smoothing techniques involve increasing the number of elements or using some form of shape optimization algorithm on the raw TO results [19]. Since the extracted geometry is described in terms of boundaries, curve smoothing techniques are utilized here.

Multiple research papers have been published in which various methods calculate the curvatures of the extracted boundaries and replace them with basic geometry features [26]. Although automated, this method has its downsides, as complex results are very difficult to process. Some methods use a level-set approach to get black-and-white results and then perform a secondary TO using an adaptive mesh to get smooth edges on the geometry [19]. These steps require a large number of computations, which consume additional time.

2.3 Geometry Extraction

Phase three of structural design optimization is geometry extraction. The literature shows several boundary identification and geometry extraction methods. The most common techniques are listed below:

• Image processing [12].

• Model reconstruction using density distribution [20].

• Iso-density contours [10].

• Machine learning [9].

In this thesis, image processing is used for boundary identification and extraction. The reason for choosing this route is that solvers such as the 88-line code visualize results in image format. Any 2D result represented in an image format can be processed using this technique without changing the source code.

CHAPTER 3: METHODOLOGY

3.1 Overview

The formulated workflow is adapted from the 'Structural Design Optimization' concept [12]. A flowchart depicting the generalized workflow can be seen in Figure 3.1.


The entire workflow can be divided into three major phases. The process begins with the input definitions being used for TO, whose results, along with the same inputs, are passed to the next phase, i.e. geometry smoothing. The smoothed geometry is then validated and subsequently extracted.

3.2 Phase 1: Topology Optimization

The TO problem in this thesis is a compliance minimization problem under a volume constraint. The generalized TO problem presented in Equation 2.1.1, specialized to the compliance minimization problem, can be seen in Equation 3.2.1.

$$(\mathrm{P}) \quad \begin{cases} \min\limits_{\mathbf{x}} & C(\mathbf{x}) = \mathbf{u}^T K \mathbf{u} \\ \text{subject to} & \dfrac{V(\mathbf{x})}{V_0} = \bar{V}, \\ & 0 \le x_e \le 1, \quad e = 1, 2, \ldots, n_e, \end{cases} \tag{3.2.1}$$

where $C$ is the compliance, $V(\mathbf{x})$ is the current volume fraction, $V_0$ is the maximum volume fraction and $\bar{V}$ is the target volume fraction.

The volume fraction $V(\mathbf{x})$ in Equation 3.2.1 is calculated by taking the arithmetic mean of the elemental densities:

$$V(\mathbf{x}) = \frac{\sum_{e=1}^{n_e} \bar{x}_e}{n_e}, \tag{3.2.2}$$

where $e$ is the element index, $n_e$ is the total number of elements and $\bar{x}_e$ is the elemental density obtained from the Heaviside filter.

3.2.1 Compliance computation

The total compliance is defined by Equation 3.2.3:

$$C(\mathbf{x}) = \mathbf{u}^T K \mathbf{u} \tag{3.2.3}$$

When the geometry is subjected to more than one static load, the net compliance is the sum of the $n$ compliances due to each individual load, computed as seen in Equation 3.2.4:

$$C(\mathbf{x}) = \sum_{i=1}^{n} \mathbf{U}_i^T K_i \mathbf{U}_i \tag{3.2.4}$$

where $n$ is the total number of forces acting on the body, $\mathbf{U}_i$ is the global displacement vector due to force $i$ and $K_i$ is the global stiffness matrix for load case $i$.

The stresses in the geometry are computed as shown in Appendix A.1. This is the most common way of computing stresses in a standard FE analysis.

3.2.2 Heaviside Filter

Filters are usually incorporated into TO problems to avoid mesh dependency and prevent the formation of checkerboard patterns. Checkerboard patterns refer to regions of alternating solid and void elements ordered in a checkerboard-like fashion. It is known that checkerboard patterns occur due to bad numerical modelling of the stiffness, which leads to non-convergence of FE solutions [18].

The Heaviside filter is a density filter with a Heaviside step function that helps move intermediate densities ($\tilde{x}_e$), i.e. grey values lying between zero and one, to either one or zero. There are two benefits of using this filter: one is to obtain a minimum length scale in the optimized design, which avoids the formation of any features smaller than the minimum value, and the second is to obtain black-and-white solutions. The latter is why this filter was implemented into this workflow: it polarizes the density results, which in turn delivers a better geometry extraction, since fewer approximations are needed to remove the grey-scale values.

The intermediate densities $\tilde{x}_e$ are mapped onto the physical densities $\bar{x}_e$ using the Heaviside function, which behaves as shown in Figure 3.2(a): if $\tilde{x}_e > 0$ then $\bar{x}_e = 1$, and if $\tilde{x}_e = 0$ then $\bar{x}_e = 0$. To facilitate gradient-based optimization, the step is implemented as the smooth function in Equation 3.2.5, whose derivative with respect to the intermediate density $\tilde{x}_e$ is given in Equation 3.2.6 [3]:

$$\bar{x}_e = 1 - e^{-\beta \tilde{x}_e} + \tilde{x}_e e^{-\beta} \tag{3.2.5}$$

$$\frac{\partial \bar{x}_e}{\partial \tilde{x}_e} = \beta e^{-\beta \tilde{x}_e} + e^{-\beta}, \tag{3.2.6}$$

where $e$ in the exponentials is the mathematical constant and $\beta$ controls the smoothing. Here $\tilde{x}_e$ is computed using the density filter, see Equation 3.2.7:

$$\tilde{x}_e = \frac{1}{\sum_{i \in N_e} H_{ei}} \sum_{i \in N_e} H_{ei} x_i, \tag{3.2.7}$$


Figure (3.2): Density behavior and filtering control. (a) Heaviside step function; (b) density distribution w.r.t. filters.

$$H_{ei} = \max(0, r_{\min} - \Delta(e, i)), \tag{3.2.8}$$

where $r_{\min}$ is the minimum filter radius and $\Delta(e, i)$ is the center-to-center distance between elements $e$ and $i$.

The effect of the Heaviside filter on the density distribution, compared to the sensitivity filter and the density filter, is represented in Figure 3.2(b), where it is clearly visible that the Heaviside filter performs significantly better in eliminating intermediate densities. Therefore, to start with a clean density distribution for geometry smoothing and boundary extraction, the Heaviside filter is used.
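A minimal MATLAB sketch of the projection in Equations 3.2.5 and 3.2.6 is shown below; xTilde is an assumed name for the density-filtered field, and beta is typically raised by continuation during the optimization.

```matlab
% Minimal sketch (assumed names): Heaviside projection of the filtered
% densities xTilde, Eq. (3.2.5), and its derivative, Eq. (3.2.6).
beta  = 1;                                        % raised by continuation
xPhys = 1 - exp(-beta*xTilde) + xTilde*exp(-beta);
dxdxt = beta*exp(-beta*xTilde) + exp(-beta);      % used in the sensitivities
```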

3.3 Phase 2: Geometry Smoothing

The geometry smoothing workflow is formulated to handle 2D arrays. If the results from the TO are in the form of an image, they can be converted to a 2D array of size m×n, where m and n are the numbers of elements in the x and y directions of the original geometry, respectively; for visualization purposes, the image is shown in grey-scale. However, some grey values, i.e. values between zero and one, remain after filtering. These are not ideal and do not translate into real-world applications, where the material is either present or absent.

These grey values contribute to the overall volume fraction of the TO geometry, and eliminating them without compromising the optimized design is a challenge. From observation, the grey values usually appear at the edges or near the voids in the geometry, making it difficult to determine and extract the geometry. Approximating this boundary would again result in an incorrect volume fraction, rendering the entire process of topology optimization pointless.


Figure (3.3): Density & boundary representation - unsmooth data (28 × 28 pixels). (a) Grey results; (b) unsmooth black-and-white result.

Even if a proper boundary is defined, extracting it gives the geometry jagged edges. These jagged edges are due to the boundary extraction algorithm tracing around each square element, and they could give rise to stress singularities or even reduce the overall life of the structure. They are represented in Figures 3.3(a) and 3.3(b). The presence of these edges sparks the need for a smoothing operation before the geometry is extracted.

The workflow of the geometry smoothing phase is represented in Figure 3.4. The workflow starts with the input data being fed to TO, the creation of an interpolated meshgrid and the preparation of a filter for the nodal densities. The higher-density meshgrid increases the number of elements in the geometry in each direction, thus increasing the resolution of the data without losing any detail in topology. The scale of the new meshgrid is decided by the user, but eight times the initial value is found to be optimal with respect to detail and computational resources.

A 2D discrete convolution filter is prepared as shown in [3] to convert the elemental densities to nodal densities. This filter can be expressed as the modified density filter shown in Equation 3.2.7, with the weight factors $H_{ei}$ shown in Equation 3.2.8 acting as the filter kernel. This helps in interpolation and avoids floating-point issues while applying boundary conditions. The elemental densities are mapped to nodal densities using weight functions, and these mapped nodal densities are then linearly interpolated onto the meshgrid generated previously.

Figure (3.4): Flowchart of the geometry smoothing workflow.

3.3.1 Interpolation

Interpolation is used in image processing to increase the resolution of an image. In this case, the bilinear interpolation technique is used to increase the resolution of the results and obtain a smooth boundary. Bilinear interpolation is an extension of linear interpolation for bivariate data on a rectilinear 2D grid. Since interpolation generates new data within the range of a known data set, the unknown values can be treated as an unknown function $f(x, y)$. The value of $f$ at $(x, y)$ can be calculated from the values of $f$ at the four points $Q_{11} = (x_1, y_1)$, $Q_{12} = (x_1, y_2)$, $Q_{21} = (x_2, y_1)$ and $Q_{22} = (x_2, y_2)$, as represented in Figure 3.5 [25]. The interpolation is computed as presented in the following equations.

Figure (3.5): Bilinear Interpolation [25]

First, linear interpolation is performed in the x-direction:

$$f(x, y_1) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21})$$

$$f(x, y_2) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{12}) + \frac{x - x_1}{x_2 - x_1} f(Q_{22})$$

Then linear interpolation in the y-direction is performed to obtain the value of $f$ at the desired point $(x, y)$:

$$f(x, y) \approx \frac{1}{(x_2 - x_1)(y_2 - y_1)} \begin{bmatrix} x_2 - x & x - x_1 \end{bmatrix} \begin{bmatrix} f(Q_{11}) & f(Q_{12}) \\ f(Q_{21}) & f(Q_{22}) \end{bmatrix} \begin{bmatrix} y_2 - y \\ y - y_1 \end{bmatrix}$$
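As a sanity check of the formulas above, the short MATLAB snippet below evaluates them on an assumed unit cell with made-up corner densities; all the values are purely illustrative.

```matlab
% Worked example (illustrative values): bilinear interpolation on a unit
% cell with corner values f11, f21, f12, f22 and query point (x, y).
x1 = 0; x2 = 1; y1 = 0; y2 = 1;
f11 = 0; f21 = 1; f12 = 0; f22 = 1;             % corner densities
x = 0.25; y = 0.5;                              % query point
fxy1 = (x2-x)/(x2-x1)*f11 + (x-x1)/(x2-x1)*f21; % interpolate in x at y1
fxy2 = (x2-x)/(x2-x1)*f12 + (x-x1)/(x2-x1)*f22; % interpolate in x at y2
f = (y2-y)/(y2-y1)*fxy1 + (y-y1)/(y2-y1)*fxy2   % then in y  ->  0.25
```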

The interpolated nodal data created in the previous step introduces intermediate grey values, which are not desirable. To get the desired black-and-white results, the interpolated data is run through a Heaviside threshold projection filter that uses a tanh function [22]. The filter can be mathematically presented as

$$\bar{\tilde{x}}_i = \frac{\tanh(\beta\eta) + \tanh(\beta(\tilde{x}_i - \eta))}{\tanh(\beta\eta) + \tanh(\beta(1 - \eta))},$$

where $\bar{\tilde{x}}_i$ is the smoothed physical density, $\beta$ is the smoothing factor, whose value is the converged $\beta$ value from topology optimization, $\tilde{x}_i$ is the interpolated density, and $\eta$ is the threshold value.

The threshold value $\eta$ here is chosen as the arithmetic mean of the maximum and minimum values in the interpolated density matrix. Since the maximum and minimum values here are 1 and 0 respectively, $\eta$ turns out to be 0.5. This value was found to be optimal for single-phase thresholding in previous studies [22]: $\eta = 0.5$ provided the optimal geometry, with a solid boundary which was neither eroded nor dilated.
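A minimal MATLAB sketch of this projection is given below; xInterp and betaConv are assumed names for the interpolated density matrix and the converged β from the TO run.

```matlab
% Minimal sketch (assumed names): tanh-based threshold projection of the
% interpolated densities xInterp with eta = 0.5.
eta = 0.5;                 % mean of the min (0) and max (1) density
betaConv = 512;            % assumed converged beta from the TO continuation
xSmooth = (tanh(betaConv*eta) + tanh(betaConv*(xInterp - eta))) ./ ...
          (tanh(betaConv*eta) + tanh(betaConv*(1 - eta)));
```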

The tanh function behaves as shown in Figure 3.6(a). Coupled with the smooth Heaviside function, it pushes the final density values extremely close to 0 or 1, so the density distribution follows the same trend as the topology-optimized density. The density distributions of the raw topology-optimized density, the interpolated density and the threshold-projection-filtered density are compared in Figure 3.6(b).

Figure (3.6): Filtering and density distribution. (a) tanh function; (b) density distribution comparison.

The interpolation and filtering of the density have the desired effect on the boundary of the results. A comparison between the interpolated and the filtered interpolated densities can be seen in Figure 3.7. The interpolation factor here is 8, consistent with how the density is interpolated in the code. It is clear that the interpolated density is smoother, and that filtering by threshold projection results in a smooth boundary separation.

Now that the smooth geometry is obtained, the design can be validated, which is an optional step, and the boundary can be extracted. The validation process computes the compliance and stress profile of the new smooth geometry.


Figure (3.7): Density & boundary representation - smooth data (192 × 192 pixels). (a) Smooth grey results; (b) smooth black-and-white results.

3.4 Validation

Validation is an important step as it helps ensure that the results are reliable. Figure 3.8 shows the flowchart of the subroutine which calculates the compliance and the volume fraction. The smoothed density data from Phase 2 is in the form of nodal densities, so this data is resized into elemental densities, as the FE algorithm requires it in that form. The compliance, von Mises stress and final volume fraction are then computed.


After pre-processing the interpolated density, the data is run through a finite element solver to calculate the compliance as shown in Section 3.2.1. The von Mises stress is computed as shown in Appendix A and is visualized in the form of a contour plot.

3.5 Phase 3: Geometry Extraction

This phase deals with the extraction of the optimized geometry. One of the main challenges of geometry extraction is that the extracted boundaries must be as true to the smoothed geometry as possible and should not need much post-processing. Considering these needs, a workflow was developed which can be seen in Figure 3.9. The interpolated smoothed density data is first converted to a grey-scale image. The void regions are identified and then extracted. The extracted geometry is then smoothed and the number of nodes is optimized. The optimized curves are then written to a file.

The smoothed optimized density matrix taken as input to the boundary extraction subroutine is used as-is, without any conversion. This is convenient and removes the possibility of errors or loss of data that could arise from conversion. From testing and observation, processing the results as an image would incur more complexity and would not always give the best results. Hence, all of the data manipulation and processing is performed directly on the matrix result.

To begin the process of extraction, the geometry features need to be identified first. Since the density matrix of the smoothed geometry is binary, the geometry is identified by recognizing all of the black regions, while the white regions give the void regions. The boundaries are then identified for all of the recognized regions of interest (ROI) in the form of coordinates.

The stored coordinate data extracted is then fed into a Savitzky-Golay filter, which helps smooth out the small variations. Savitzky-Golay filters, or S-G filters for short, are often used in digital signal processing to eliminate small noise variations. An S-G filter smooths data by fitting a polynomial to the given set of data using discrete convolution coefficients with a finite impulse response, i.e., it smooths the data while eliminating small noise variations but keeps the overall shape of the curve unaltered. For fitting, the S-G filter employs the linear least squares method. Mathematically it is described as seen in the equation below [14, 15]:

$$Y_j = \sum_{i=-m}^{m} C_i Y_{j+i},$$

where $Y$ is the coordinate data, $j$ is the index of the coordinate data, $C_i$ are the predetermined convolution coefficients and $m$ is the half-width of the coefficient window. The filter smooths the X and Y coordinate data separately, as it is a 1D filter.


Another necessary step in boundary extraction is reducing the point density. The boundary extraction function captures every small detail, so the number of data points in the coordinate data is high and carries a lot of redundancy. This significantly increases the processing and memory resources required to handle the data. A boundary with redundant data is presented in Figure 3.10(a), where the red points are the vertices on the curve.

Figure (3.9): Flowchart showing the boundary extraction process

Therefore, the number of points describing the boundary needs to be reduced without compromising the geometry. To achieve this, the recursive Douglas-Peucker polyline simplification algorithm is employed [6]. This function downsamples and simplifies the boundary coordinate data without compromising the final geometry. The recursive algorithm used here is a modified version of the original, called the Ramer-Douglas-Peucker algorithm. Pseudocode replicating the algorithm is presented below.


Algorithm 1: Ramer-Douglas-Peucker Algorithm

function RDP(PointList, epsilon)
    dmax = 0
    index = 0
    end = length(PointList)
    for i = 2 to (end - 1) do
        d = perpendicularDistance(PointList[i], Line(PointList[1], PointList[end]))
        if d > dmax then
            index = i
            dmax = d
        end if
    end for
    if dmax > epsilon then
        recursive1 = RDP(PointList[1...index], epsilon)
        recursive2 = RDP(PointList[index...end], epsilon)
        ResultList = concat(recursive1[1...length(recursive1)-1], recursive2)
    else
        ResultList = {PointList[1], PointList[end]}
    end if
    return ResultList
end function

In comparison to Figure 3.10(a), the number of vertices (red points) in Figure 3.10(b) has decreased significantly. The 'simplified' versions of the boundary coordinates are written to text files, which can then be imported into a CAD software to make a 2D surface or an extruded 3D geometry.

Figure (3.10): (a) Boundary data with 12577 points; (b) simplified boundary with 36 points.

CHAPTER 4: IMPLEMENTATION

This chapter shows how the proposed methodology is implemented in MATLAB. The R2020b release of MATLAB is used here. Additionally, the Image Processing Toolbox and Signal Processing Toolbox for MATLAB are utilized.

4.1 Phase 1: TO

A popular TO code called 'the 88-line code' is used here. This code is written in MATLAB and is one of the most popular algorithms to date. It has been modified to include additional numerical examples along with the capability to compute stresses. To obtain a cleaner output from the solver, the Heaviside projection filter is implemented following the paper accompanying the 88-line code [3].

4.2 Phase 2: Geometry Smoothing

An interpolated meshgrid is created using a user-defined interpolation factor. Then, following the 88-line code, a discrete convolution filter is prepared to convert the elemental densities to nodal densities, and the elemental densities are mapped to nodal densities using the weight functions. Following that, the nodal densities are interpolated onto the high-resolution meshgrid using interp2 in MATLAB with the linear interpolation method. The interpolated densities are then passed through the threshold projection filter to obtain a definitive boundary.
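A minimal sketch of this step is shown below; the grid sizes and variable names (xNodal for the nodal densities, k for the interpolation factor) are assumptions consistent with the description above.

```matlab
% Minimal sketch (assumed names): bilinear interpolation of the nodal
% densities xNodal, size (nely+1)-by-(nelx+1), onto a k-times finer grid.
k = 8;                                   % user-defined interpolation factor
[X,  Y ] = meshgrid(1:nelx+1, 1:nely+1);
[Xq, Yq] = meshgrid(linspace(1, nelx+1, k*nelx+1), ...
                    linspace(1, nely+1, k*nely+1));
xInterp = interp2(X, Y, xNodal, Xq, Yq, 'linear');
```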


4.3 Phase 3: Boundary Extraction

The regions of interest are identified in MATLAB using regionprops. This function is part of the Image Processing Toolbox and measures the properties of image regions [21]. regionprops identifies and differentiates regions in the geometry based on their continuity and density values. Hence, the entire TO geometry is identified as a single body and the void regions are identified as separate bodies. The details of these regions are stored in an array and can be extracted and used as needed. Taking the length of this array gives the total number of identified regions, which is then used in a for loop to smoothen and extract the boundary coordinates.
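A minimal sketch of this identification step is given below. The thesis names only regionprops; the bwboundaries call used here for the actual boundary coordinates is an assumption, as is the variable xSmooth.

```matlab
% Minimal sketch (assumed names): identify solid/void regions in the
% binary smoothed density matrix xSmooth and trace their boundaries.
BW = xSmooth > 0.5;                        % binary material map
stats = regionprops(BW, 'Area');           % one entry per detected region
[bnds, ~] = bwboundaries(BW, 'holes');     % outer and hole boundaries
for r = 1:length(bnds)
    B = bnds{r};                           % n-by-2 [row col] boundary points
    % ... smooth with sgolayfilt and simplify with RDP, as in Phase 3
end
```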

The Savitzky-Golay filter is implemented using the built-in function sgolayfilt. This function takes in the user-defined polynomial order and the coordinate data from the previous step. Since the built-in function can only handle one-dimensional data, the x and y coordinate data are filtered separately and then combined back together.

Since the number of points on the boundary after sgolayfilt is unnecessarily high, the Ramer-Douglas-Peucker algorithm is used to downsample the coordinate data. It is implemented in MATLAB following the pseudocode shown in the Methodology chapter. The simplified coordinates are then written to a text file using the function dlmwrite. This text file can be modified to be read in the CAD software of choice.
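The last two steps could look like the sketch below; DouglasPeucker stands for the user-written RDP function following Algorithm 1, and the tolerance value is an illustrative assumption.

```matlab
% Minimal sketch (assumed names): simplify the smoothed boundary Bs and
% write it out for CAD import. DouglasPeucker follows Algorithm 1.
epsilon = 0.5;                            % assumed tolerance, element units
Bsimp = DouglasPeucker(Bs, epsilon);      % downsampled boundary points
dlmwrite('boundary_1.txt', Bsimp, 'delimiter', '\t');
```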

CHAPTER 5: RESULTS

This chapter presents the results of the workflow explained in the previous chapter. Several numerical examples, such as the MBB beam and the T-beam, are tested with the proposed workflow. The MBB beam is a classical test case in TO. A T-shaped beam with a hole is also tested; this is the most complex case, as the initial geometry is asymmetric. Several additional numerical examples, such as the half-wheel, cantilever beam and L-shaped beam, have been executed and are documented in Appendix B.

The material parameters for each of the numerical examples are as follows: Young's modulus $E_0 = 1$; Young's modulus of the void regions $E_{\min} = 10^{-9}$; Poisson's ratio $\nu = 0.3$; a filter radius of $0.03 \times nelx$, where $nelx$ is the number of elements in the x-direction; a penalization factor of 3; and an interpolation value of 8. All the test cases in this thesis are solved to find the optimal material distribution with respect to minimizing compliance with a fixed amount of material volume. The generalized version of this problem is defined in Section 3.2.

The computations presented in this thesis were executed on a computer with the following specifications and software:

Class            Laptop
CPU              Intel Core i7-7700HK @ 3.8 GHz
RAM              16 GB DDR3
GPU              Nvidia GTX 1060 MaxQ 6 GB GDDR5
OS               Ubuntu 20.04 LTS
MATLAB Version   R2020b


5.1 MBB Beam

The MBB beam test case used here has the same boundary conditions as the original paper [3]. The left end of the MBB beam has a symmetric boundary condition. A static load F is applied at the top left end, and a constraint in the y-direction is applied at the bottom right end. The boundary conditions, along with the design domain and the applied load, are shown in Figure 5.1, with length L = 50 mm. The design domain is discretized with 150×50 elements and a target volume fraction of 50%.

Figure (5.1): MBB beam showing boundary conditions and external load for optimization [3]

After solving the TO problem, the optimized model along with the stress distribution of the MBB beam is shown in Figures 5.2 and 5.3. One notices that the profile of the optimized model has rough edges as well as tiny holes within the geometry.

Figure (5.2): MBB beam result from Phase-1

Figure (5.3): Stress contour of unsmooth MBB beam

The same optimized model after the smoothing process can be seen in Figure 5.4, where an interpolation value of 8 was used. It can be seen that the smoothing process is very effective in removing the small holes as well as the jagged edges. As mentioned, the advantage of the proposed workflow is that this smoothing comes at a small cost in terms of volume fraction and compliance.

Figure (5.4): MBB result after smoothing process (Phase-2)

Figure (5.5): Stress contour of smooth MBB beam

Figure 5.5 shows the stress contours of the smoothed model. It should be noted that the stress contours in Figures 5.3 and 5.5 are quite similar with respect to the stress distribution and concentration, except that the latter has a much better-looking contour as it has more elements (interpolated matrix).

The smoothed model is passed on to Phase 3, where the geometry is extracted and simplified. The extracted geometry is visualized in Figure 5.6, but the true simplicity and accuracy of the workflow is seen when the boundaries are imported into a CAD software (Spaceclaim in this case). The extruded geometry is seen in Figure 5.7.

Figure (5.6): Extracted boundary of the smoothed MBB beam

Comparing the results, the unsmooth MBB beam has a compliance of 191.48 for a volume fraction of 0.499, and the smooth MBB beam has a compliance of 195.24 for a volume fraction of 0.508. This is a 1.92% change in the compliance value for a 1.74% increase in volume fraction.


Figure (5.7): Extruded MBB beam

5.2 T-Beam with hole

For the second example, a T-beam is taken, as seen in Figure 5.8. This is in itself an intricate geometry. To increase the complexity, a hole was added to one of the branches of the T-beam, which makes the initial geometry asymmetric. This geometry is subjected to the compliance minimization problem shown in Equation 3.2.1 with a volume fraction of 25%. The boundary conditions, along with the design domain and the applied loads, are shown in Figure 5.8, with L = 100 mm. The topmost edge is clamped and two static loads F are applied at each end of the horizontal beam. The model with this configuration has a total of 5000 elements.

Figure (5.8): T-beam with a hole showing boundary conditions and external loads

Phase-1 results are seen in Figure 5.9 and the stress contour plot in Figure 5.10. The jagged edges as well as the presence of a large number of grey elements can be seen, together with the asymmetry of the geometry due to the presence of the hole.

Figure (5.9): Density result from Phase-1


The smoothed model from Phase 2, along with the corresponding stress contours, is seen in Figures 5.11 and 5.12. The geometry was smoothed using an interpolation value of 8. The jagged edges as well as the grey values are gone and the overall geometry is smooth. The stress plot shows the same: the stress concentrations as well as the contours are similar to Phase 1 (Figure 5.10).

Figure (5.11): Smoothed T-beam from Phase-2


The boundaries extracted in Phase 3 can be seen in Figure 5.13, and the 3D geometry resulting from extrusion in Spaceclaim is seen in Figure 5.14. Looking at the 3D models, it is safe to say that they are ready for manufacture or can be further modified as the user desires.

Figure (5.13): Extracted boundary of the smoothed T-beam


Comparing the results, the unsmooth T-beam has a compliance of 326.24 for a volume fraction of 0.2499, and the smooth T-beam has a compliance of 301.77 for a volume fraction of 0.2584. This is a 7.50% change in the compliance value for a 3.25% increase in volume fraction.

CHAPTER 6: DISCUSSION & ANALYSIS

In this chapter, the results from the proposed methodology are probed and analysed. The reason for this scrutiny is to understand the pros and cons of the proposed methodology. Measures for further improvement are also laid out.

6.1 Influence of interpolation value

The interpolation value determines how many new data points are generated between the already existing data points. In this workflow, the density results obtained from TO are interpolated within the smoothing phase. The effects of interpolating these density results are discussed in this section. After testing, an interpolation value of 8 was selected. To show why 8 gives the best results, a comparison with a higher (12) and a lower (4) interpolation value is presented here. It is to be noted that as the interpolation value increases, the number of elements in each coordinate direction increases by a factor of that value, which significantly increases the computation time.

The result of varying the interpolation value for the MBB beam is presented in Figure 6.1. It is evident that as the interpolation value increases, the geometry boundaries get smoother. At first glance it may not seem like a significant difference, but further along the process, in boundary extraction, these results end up losing detail when extracted. This can be visualized in Figure 6.2, where the boundary for the lower interpolation value loses detail and almost looks polygonal. As the interpolation value increases, the number of nodes (represented as red dots) increases, thus maintaining the curvature of the geometry.


Figure (6.1): Final MBB geometry for interpolation values 4 (a), 8 (b) and 12 (c)

Figure (6.2): Extracted boundary of the MBB beam for interpolation values 4 (a), 8 (b) and 12 (c)

Figure (6.3): Smoothed T-beam geometry for interpolation values 4 (a), 8 (b) and 12 (c)


The T-beam, being a much more complex geometry than the MBB beam, shows a much clearer distinction when varying interpolation values. The jagged edges in the lower-value geometry stand out, see Figure 6.3(a). Figures 6.3(b) and (c) are much smoother and therefore desirable.

Figure (6.4): Extracted boundary of the T-beam for interpolation values 4 (a), 8 (b) and 12 (c)

The boundary curves extracted for the T-beam solidify the need for a higher interpolation value, as the effect is more distinct. For the lower interpolation value, the corners are rather sharp compared to the smooth, rounded corners of the void regions. These changes for the T-beam can be seen in Figures 6.4(a), (b) and (c).

A good way to see the differences due to interpolation is to compare the metrics for every numerical example. Table 6.1 shows these metrics, where Vf is the volume fraction, C is the compliance and T is the computation time.

Table (6.1): Performance results for different interpolation values

Case     Interpolation = 4      Interpolation = 8      Interpolation = 12
         Vf     C       T(s)    Vf     C       T(s)    Vf     C       T(s)
MBB      0.509  193.19  81.2    0.508  195.24  96.5    0.508  196.51  124.7
T-beam   0.258  300.50  106.7   0.258  301.77  123.7   0.258  300.83  194.5

The differences in volume fraction and compliance between interpolation values 8 and 12, although small, are present. These small changes in compliance are due to the more detailed contours present at higher interpolation values, which is supported by the difference in the metrics between interpolation values 4 and 8. It is also to be noted that an increase in interpolation value comes with a significant increase in computation time (T): for the MBB beam the increase is 21%, and for the T-beam it is 36%. Therefore, an interpolation value of 8 gives a good balance between output quality and computation time, and hence was chosen as the standard input for all numerical examples.


6.2 Evolution of density through interpolation and thresholding

The performance of the smoothing was demonstrated above, where the combined effects of interpolation and thresholding were showcased. In this section, the density changes through the different steps are tracked, providing insight into the workings of the geometry smoothing phase. The evolution of the density data for the MBB beam and the T-beam is presented in Figures 6.5 and 6.6.

Figure (6.5): Effect of smoothing steps on the MBB beam: (a) unsmooth data, (b) interpolated data, (c) smooth data

Figure (6.6): Effect of smoothing steps on the T-beam with hole: (a) unsmooth data, (b) interpolated data, (c) smooth data

Both cases successfully demonstrate how the smoothing operation handles undesirable features in the density data. For example, the small pin-hole present in Figure 6.5 is removed and the region around it is black-and-white. In the T-beam geometry, the region around the circular void has a lot of intermediate densities, which are all cleaned up. In addition, the jagged edges are smoothed and a definite boundary edge is formed in both examples.

In Figures 6.5(b) and 6.6(b), after the interpolation operation, all the local imperfections appear to be smoothed by the addition of many intermediate density values to the data, so there is no longer a definitive boundary to the geometry.


The density data when visualized as a distribution gives a quantitative measure of the transformation, see Figure 6.7.

In the thresholding step, the threshold projection filter polarizes the density values to the extremes, i.e. either 0 or 1, and the resulting data is black and white. Coupled with the uniformity induced by interpolation, this results in a smooth, definitive boundary.

Figure (6.7): Density distribution after the different steps: (a) MBB beam, (b) T-beam with hole

The density distributions shown in Figures 6.7(a) and (b) clearly show the distribution at every step, corresponding to the respective pictorial representations. It can also be observed that the final density distribution is better than the initial optimized density, since there are fewer intermediate values, which makes it closer to the ideal black-and-white solution.

6.3 Influence of Savitzky-Golay filter on extracted boundaries

The Savitzky-Golay filter, being a low-pass filter, removes kinks and smoothens the extracted boundaries, making them suitable for conversion to 3D geometry. Such kinks can cause problems when imported into a 3D CAD software, as they do not contribute to a clean geometry. Additionally, if they end up on a manufactured part, they might result in undesirable stress concentrations in those areas. In Figures 6.8(a) and 6.9(a), these kinks can be clearly seen at the curvatures. Figures 6.8(b) and 6.9(b) show the same geometries after the boundaries are passed through the Savitzky-Golay filter. This filter is one more step in the smoothing and extraction process towards a good overall geometry output.


Figure (6.8): Effect of the S-G filter on the extracted boundaries of the MBB beam: (a) not filtered, (b) filtered

Figure (6.9): Effect of the S-G filter on the extracted boundaries of the T-beam: (a) not filtered, (b) filtered

6.4 Performance Metrics

In this section, the performance of the code is evaluated in terms of accuracy and speed. The code package can compute and perform smoothing on 10 different numerical examples. All the numerical examples were executed and the results are tabulated in Table 6.2. The input values, boundary conditions and pictorial results for all the cases are presented in Appendix B.

Table 6.2 shows, for every case, the target volume fraction, the compliance as computed by the 88-line code, the final volume fraction of the smoothed geometry, the compliance computed for the smooth geometry and the total computation time.

From Table 6.2, it can be seen that the volume fraction is on target, with an average deviation of 2.0451% and a maximum deviation of 3.2507% for the T-beam with hole case. The compliance follows the same trend but with a slightly higher average deviation of 5.7739% and a maximum deviation of 15.6299% for the half-wheel case. The compliance deviation is due to the fact that the material distribution has changed and the final compliance is calculated for black-and-white data instead of gray data; it is thus reasonable to expect it to change. The deviations from target for both compliance and volume fraction are presented as bar graphs for better visualization in Figure 6.10.

Table (6.2): Performance metrics for all the cases

Case                       Target Vf   Compliance (unsmooth)   Vf (smooth)   Compliance (smooth)   Time (s)
MBB beam                   0.5         191.4968                0.5088        195.243               96.5199
Half-wheel                 0.5         17.20803                0.5165        20.3959               67.4252
Short Cantilever Endload   0.4         55.1832                 0.4099        57.2686               197.9899
Short Cantilever Midload   0.4         45.7072                 0.4040        46.1192               187.1894
Multiload Cantilever       0.4         60.3618                 0.4072        68.1684               110.1343
Plate with hole Endload    0.4         64.3958                 0.4059        67.7707               201.1839
Plate with hole Midload    0.4         55.1159                 0.4059        55.4905               230.6175
L-beam                     0.25        218.1832                0.2526        210.5477              99.349
T-beam                     0.25        325.2703                0.2576        300.7161              110.5704
T-beam with hole           0.25        326.2442                0.2584        301.773               123.75338


Figure (6.10): Percentage Error

Achieving fast simulation times is a desired quality in numerical computation. Figure 6.11(a) shows the time consumed by each stage. It is apparent that the significant contribution to the increased time comes from the validation process; the time taken to compute the final results is minimal if validation is turned off.

Figure (6.11): Computation time: (a) time for different stages, (b) percentage increase

This is made clear in Figure 6.11(b), which shows the percentage increase in computation time. The average time increase for computation without validation is 0.694%, with a maximum increase of 1.12% for the L-beam. The average time increase for computation with validation is 24.07%, with a maximum increase of 45.3% for the L-beam.

For this reason, the validation process can be toggled off manually. This will not affect the final geometry or the extracted boundary curves in any way, but the compliance and volume fraction of the final smooth geometry are then not computed.

6.5 Effect of non-discrete density values on compliance

In a compliance minimization problem, there exists an inverse correlation between compliance and volume fraction. However, comparing the values of compliance and volume fraction for unsmooth and smooth data in Table 6.2, this is not the case here. The same pattern has been observed in [16, 8]. It is due to the presence of grey values in the density data, which are non-discrete. This grey data takes up a significant portion of the final volume fraction. Observing the change in the number of grey values between smooth and unsmooth data provides a good idea of how much influence they have on the compliance.


Figure 6.12 shows the contribution of grey values to the final volume fraction and its effect on compliance. As the contribution of these grey values increases, the compliance is found to increase, and vice versa. These grey values are present in the transition regions; complex geometries such as the T-beam tend to have a much larger percentage change.

6.6 Limitations

The proposed methodology for structural topology optimization performs well and meets all the goals but it still comes with a few limitations which are discussed in this section.

6.6.1 Extreme scenarios

It is possible that the results from topology optimization contain large amounts of gray values. This can be caused by solving for low volume fractions or by using a filter other than the Heaviside filter. In these cases, the methodology proposed so far does not perform very well. To address this issue, a few intermediate steps, which can be toggled manually, have been implemented in addition to the methodology. These additional steps involve cleaning the data from the 88-line code, as clean input data is very important for the post-processing methodology, and image-based thresholding. The density data is cleaned by filtering it through the threshold projection filter used in the smoothing operation, explained previously in Section 3.3.

After this, the smoothing routine is called again and the smooth density data is thresholded using the MATLAB built-in function imbinarize, which uses graythresh to determine the threshold value for the now-smoothed density data. This method works if the thresholded data is binary without any threshold-induced artifacts, but at the cost of not hitting the target volume fraction. This scenario is evaluated in Section 6.6.2.
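A minimal sketch of this fallback thresholding is given below; xSmooth is an assumed name for the smoothed grey density matrix.

```matlab
% Minimal sketch (assumed names): image-based thresholding for the
% extreme (high-grey) scenario using Otsu's method.
level = graythresh(xSmooth);        % Otsu threshold for the density data
BW = imbinarize(xSmooth, level);    % binary black-and-white result
```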

6.6.2 Evaluation of extreme scenarios

As mentioned in Section 6.6.1, the undesired results are caused by the presence of a large amount of gray data, especially when solving the 88-line code under low volume fraction conditions. This section explores the cause, the undesired results and the response implemented to address them.

Running the 88-line solver at a low volume fraction for the MBB beam results in a highly gray output, as seen in Figure 6.13. Running this data through the proposed methodology results in a smooth geometry with a blurry boundary and the boundary curves shown in Figures 6.14(a) and (b). Detecting a definitive boundary in these figures raises issues.

Figure (6.13): MBB gray results (Volume fraction = 0.20)

Figure (6.14): MBB results for a low volume fraction of 0.20: (a) smoothed gray results, (b) boundary curves for the blurry boundary

The response subroutine executed under such conditions cleans the results from the 88-line code. The improved density distribution is shown in Figure 6.15; compared to the unaltered gray result in Figure 6.13, it is much preferred.

Figure (6.15): MBB filtered results (Volume fraction = 0.20)

Running this through the smoothing routine with image-based thresholding produces a clean and smooth geometry, as presented in Figure 6.16(a). The boundary extracted for this new smooth geometry is as desired, see Figure 6.16(b).

Figure (6.16): Clean MBB results for a low volume fraction of 0.20: (a) smoothed black-and-white results, (b) boundary curves

As demonstrated in this section, the shortcomings of the proposed methodology in extreme scenarios can be quickly addressed with a few adjustments and by reusing the already implemented techniques. But this comes at the cost of not hitting the target specifications prescribed in the inputs, especially the volume fraction. All the results from this section are presented in Table 6.3.

Table (6.3): Results for a low target volume fraction

                  Gray results   Cleaned results   Percentage difference
Volume fraction   0.2            0.2204            9.255
Compliance        733.45         532.55            27.378

From Table 6.3 it is evident that although the response for low-volume, high-gray inputs works, the accuracy suffers. The difference in compliance here is a direct result of the final geometry being black and white; thus, the change in compliance is reasonable. The focus here is on the difference in volume fraction, since it is one of the target specifications. One of the reasons for not using any type of image-based thresholding in the main methodology is that the volume fraction target is then not achieved. Nonetheless, the response routine implemented to address the high gray-value count helps in lowering the minimum usable volume fraction, presented in Table 6.4, with some compromises.

As discussed in Section 6.6.2, extreme cases can induce high gray values into the initial data, causing the final geometry to miss the specified requirements. This is due to the absence of a definitive boundary, which hinders boundary recognition. The minimum volume fraction limit for each case is presented in Table 6.4. The discussion in Section 6.6.2 also explains how the additional routine implemented to address the issue extends this minimum volume fraction limit.

Table (6.4): Minimum volfrac before geometry distortion

Case                          Min. volfrac
MBB beam                      0.25
Half-wheel                    0.25
Short cantilever endload      0.175
Short cantilever midload      0.225
Multiload cantilever          0.175
Plate with hole endload       0.20
Plate with hole midload       0.20
L-beam                        0.225
T-beam                        0.2
T-beam with hole              0.2

Although the additional routine works, it comes at the cost of not achieving the target volume fraction. If the target volume fraction is too low, this routine will not provide satisfactory results.


Another limitation of the proposed methodology is that it only works with 2D data, and the output is also in terms of 2D boundary coordinates. The 3D geometries presented in the results need to be extruded in a CAD software, which is an additional, but simple, step.

When dealing with geometries involving fixed void regions, such as a hole, the dimensions of this hole can change after the smoothing and extraction process. Although the variation in the dimensions is small, it is easily fixed when converting the boundaries to a 3D model, by adding the desired geometry of the intended dimension and using a Boolean operation to obtain the right size. This is another limitation of the methodology that can be fixed with simple steps.

CHAPTER 7

CONCLUSION

An effort was made in this thesis to post-process 2D geometry data obtained as a result of topology optimization and convert it into manufacturable 3D geometry. Major issues such as eliminating gray values and smoothing jagged and rough edges were addressed. A generalized workflow was presented which extracts the TO geometry with minimal changes to the volume fraction and compliance. This workflow can be modified and adapted to any TO algorithm to bridge the gap from 2D results to a solid 3D CAD model, reducing the need for additional post-processing.

The TO solver adopted here is built on the MATLAB platform and uses the SIMP method and standard FE analysis to obtain results. These results are computed as a 2D density matrix, then visualized as an image and manipulated to obtain the final result. Parameters such as the penalty value for the SIMP method, the filter radius and the type of filtering play an important role in obtaining a good geometry output.

A method of geometry smoothing using image processing techniques such as interpolation has been showcased. The smoothed geometry is validated by computing the compliance and volume fraction and comparing them with those from the TO code. Smoothing and extraction of the boundaries of the geometry to coordinate data was demonstrated.
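As an illustration of the interpolation step, a minimal sketch using bicubic upsampling of the density matrix with interp2 is given below; the upsampling factor and variable names are illustrative assumptions.

% Upsample the density field to smooth jagged element boundaries.
% interp2 with 'cubic' interpolation yields a smooth density surface
% that can then be thresholded and its boundary extracted.
xPhys = rand(50, 120);   % placeholder for the density matrix (assumed)
[ny, nx] = size(xPhys);
scale = 4;               % upsampling factor (assumed)
[X,  Y ] = meshgrid(1:nx, 1:ny);
[Xq, Yq] = meshgrid(linspace(1, nx, scale*nx), linspace(1, ny, scale*ny));
xSmooth = interp2(X, Y, xPhys, Xq, Yq, 'cubic');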

The extracted geometry is as true to the smoothed 2D geometry as possible, and it is fair to say that the model is FE-ready in just a few clicks. The smoothed geometry was validated against the results from the TO solver and was within the acceptable margin of error, as expected.

Automation is also demonstrated: the geometry smoothing and extraction process runs in a single click, with the exception of importing the coordinate data into a CAD software. This stands out as one of the limitations of the code. Using this workflow shortens the process of turning a geometry into a production-ready model, enabling quick prototyping and fast production. Since the smoothed geometries are in 2D and of high quality, they can be manufactured using processes such as milling, laser or water-jet cutting, or even by extruding the part and cutting it to form multiple parts.

The report concludes with some of the limitations of the code as well as examples that show the effectiveness of the workflow. As Robert le Ricolais said, "The art of structure is where to put the holes". There are many algorithms which can produce perfectly smooth solutions, but all of them involve complex mathematics. For a simpler solution with equivalent results, image processing offers a very good way to handle these complex results, as shown in this thesis.

7.1 Outlook

The workflow developed in this thesis can be further improved in many ways. Since it is a generalized methodology, the workflow can be coded in languages such as FORTRAN or C++ to make it run faster and more efficiently. Processing large amounts of data has long been a challenge in computational science, and with modern advancements in computer science and data processing, computation times are reduced every year. Methods such as parallel computing and GPU processing can be incorporated for much shorter calculation times.

With a few modifications, the workflow can be adapted to smoothing and extracting 3D geometry. Since this would increase the size of the matrices substantially, the methods discussed above would come in handy. This would greatly reduce the post-processing time for 3D TO geometry.

The method shown here can be used in industry to prototype or even manufacture components which only need extrusion, in a very short period of time. This workflow would massively reduce the overall lead time of development, thus having an overall positive economic impact on the companies that use it.


REFERENCES

[1] Adam A. Converting a 3D logical array into an STL surface mesh. url: https://se.mathworks.com/matlabcentral/fileexchange/27733-converting-a-3d-logical-array-into-an-stl-surface-mesh (visited on 2020).

[2] Oded Amir, Ole Sigmund, Boyan S Lazarov, and Mattias Schevenels. "Efficient reanalysis techniques for robust topology optimization". In: Computer Methods in Applied Mechanics and Engineering 245 (2012), pp. 217–231.

[3] Erik Andreassen, Anders Clausen, Mattias Schevenels, Boyan S Lazarov, and Ole Sigmund. "Efficient topology optimization in MATLAB using 88 lines of code". In: Structural and Multidisciplinary Optimization 43.1 (2011), pp. 1–16.

[4] Peter W Christensen and Anders Klarbring. An introduction to structural optimization. Vol. 153. Springer Science & Business Media, 2008.

[5] Robert D Cook et al. Concepts and applications of finite element analysis. John Wiley & Sons, 2007.

[6] David H Douglas and Thomas K Peucker. "Algorithms for the reduction of the number of points required to represent a digitized line or its caricature". In: Cartographica: The International Journal for Geographic Information and Geovisualization 10.2 (1973), pp. 112–122.

[7] Erik Holmberg, Carl-Johan Thore, and Anders Klarbring. "Worst-case topology optimization of self-weight loaded structures using semi-definite programming". In: Structural and Multidisciplinary Optimization 52.5 (2015), pp. 915–928.

[8] Zaizhuo Jiang, Li Zhang, Yingbing Wan, and Cong Jin. "An Improved Method for Decreasing Intermediate Density Elements in Topology Optimization". In: 2nd International Conference on Computer Engineering, Information Science & Application Technology (ICCIA 2017). Atlantis Press. 2016, pp. 968–971.

[9] Xin Lei, Chang Liu, Zongliang Du, Weisheng Zhang, and Xu Guo. "Machine learning-driven real-time topology optimization under moving morphable component-based framework". In: Journal of Applied Mechanics 86.1 (2019).

[10] K. Maute and E. Ramm. "Adaptive topology optimization". In: Structural Optimization 10 (1995), pp. 100–112.

[11] Bill McDonald. surf2stl. url: https://se.mathworks.com/matlabcentral/fileexchange/4512-surf2stl (visited on 2020).

[12] PY Papalambros and M Chirehdast. "Integrated structural optimization system". In: Topology Design of Structures. Springer, 1993, pp. 501–514.

[13] Amir Safari. Make STL of 3D array (Optimal for 3d printing). url: https://ww2.mathworks.cn/matlabcentral/fileexchange/68794-make-stl-of-3d-array-optimal-for-3d-printing (visited on 2020).

[14] Abraham Savitzky and M J E Golay. "Smoothing and Differentiation of Data by Simplified Least Squares Procedures." In: Analytical Chemistry 36.8 (1964), pp. 1627–1639. doi: 10.1021/ac60214a047.

[15] Ronald W Schafer. "What is a Savitzky-Golay filter? [lecture notes]". In: IEEE Signal Processing Magazine 28.4 (2011), pp. 111–117.

[16] Ole Sigmund. "Morphology-based black and white filters for topology optimization". In: Structural and Multidisciplinary Optimization 33.4-5 (2007), pp. 401–424.

[17] Ole Sigmund and Kurt Maute. "Topology optimization approaches". In: Structural and Multidisciplinary Optimization 48.6 (2013), pp. 1031–1055.

[18] Ole Sigmund and Joakim Petersson. "Numerical instabilities in topology optimization: a survey on procedures dealing with checkerboards, mesh-dependencies and local minima". In: Structural Optimization 16.1 (1998), pp. 68–75.

[19] Marco K. Swierstra. "Post-Processing of Topology Optimized Results: A method for retrieving smooth and crisp geometries". In: (2017).

[20] Poh-Soong Tang and Kuang-Hua Chang. "Integration of topology and shape optimization for design of structural components". In: Structural and Multidisciplinary Optimization 22.1 (2001), pp. 65–82.

[21] C.M. Thompson and L. Shure. Image Processing Toolbox: For Use with MATLAB [User's Guide]. MathWorks, 1995.

[22] Fengwen Wang, Boyan Stefanov Lazarov, and Ole Sigmund. "On projection methods, convergence and robust formulations in topology optimization". In: Structural and Multidisciplinary Optimization 43.6 (2011), pp. 767–784.

[23] Emiel van de Ven, Robert Maas, Can Ayas, Matthijs Langelaar, and Fred van Keulen. "Continuous front propagation-based overhang control for topology optimization with additive manufacturing". In: Structural and Multidisciplinary Optimization 57.5 (2018), pp. 2075–2091.

[24] Anton Wiberg. Towards Design Automation for Additive Manufacturing: A Multidisciplinary Optimization approach. Vol. 1854. Linköping University Electronic Press, 2019.

[25] Wikipedia. Bilinear interpolation. url: https://en.wikipedia.org/wiki/Bilinear_interpolation.

[26] Guilian Yi and Nam H Kim. "Identifying boundaries of topology optimization results using basic parametric features". In: Structural and Multidisciplinary Optimization 55.5 (2017), pp. 1641–1654.


APPENDIX A

ADDITIONAL CALCULATIONS

A.1 Stress Computation

The 88-line code does not compute stress by default. But since it performs a 2D finite element analysis with bilinear square elements of unit length, a simple setup of a few lines of code is implemented to calculate the equivalent von Mises stress. The procedure is detailed below and follows the theory from Cook et al. [5].

For a 2D case, the stress can be calculated using equation (A.1.1), where σ is the 2D stress tensor in vector form, C is the constitutive matrix, B is the strain-displacement matrix and u is the vector of nodal displacements.

σ = C B u (A.1.1)

For the 2D case, the stress tensor σ in vector form is

\sigma = \begin{bmatrix} \sigma_x \\ \sigma_y \\ \sigma_{xy} \end{bmatrix} \quad (A.1.2)

where σx is the stress in the x direction, σy the stress in the y direction and σxy the shear stress. The constitutive matrix C, here for plane stress, is

C = \frac{E}{1-\nu^2} \begin{bmatrix} 1 & \nu & 0 \\ \nu & 1 & 0 \\ 0 & 0 & \frac{1-\nu}{2} \end{bmatrix} \quad (A.1.3)

where E is the Young's modulus and ν is the Poisson's ratio. The strain-displacement matrix B is given by

B = \frac{1}{2L} \begin{bmatrix} -1 & 0 & 1 & 0 & 1 & 0 & -1 & 0 \\ 0 & -1 & 0 & -1 & 0 & 1 & 0 & 1 \\ -1 & -1 & -1 & 1 & 1 & 1 & 1 & -1 \end{bmatrix} \quad (A.1.4)

where L is the element length, here L = 1. The nodal displacement vector u is

u = \begin{bmatrix} u_{x1} & u_{y1} & u_{x2} & u_{y2} & u_{x3} & u_{y3} & u_{x4} & u_{y4} \end{bmatrix}^T \quad (A.1.5)

where the subscripts x and y correspond to the coordinate directions and the numbers 1, 2, 3, 4 correspond to the node numbers.

Using the stress tensor from equation (A.1.1), the equivalent von Mises stress can be calculated using equation (A.1.6),

\sigma_{vMS} = \sqrt{\sigma_x^2 + \sigma_y^2 - \sigma_x \sigma_y + 3\sigma_{xy}^2} \quad (A.1.6)
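The computation above translates into a few lines of MATLAB. The following sketch evaluates one element; the example displacement vector is an assumed placeholder, and looping or vectorizing over all elements (e.g. via U(edofMat) in the conventions of the 88-line code [3]) is left out.

% Von Mises stress for one bilinear square element of unit length.
E = 1; nu = 0.3; L = 1;                                 % material data (assumed)
C = E/(1 - nu^2) * [1 nu 0; nu 1 0; 0 0 (1 - nu)/2];    % eqn (A.1.3)
B = 1/(2*L) * [-1  0  1  0  1  0 -1  0;                 % eqn (A.1.4)
                0 -1  0 -1  0  1  0  1;
               -1 -1 -1  1  1  1  1 -1];
u = [0 0 0.1 0 0.1 0.05 0 0.05]';   % example nodal displacements (assumed)
sigma = C * B * u;                  % eqn (A.1.1): [sx; sy; sxy]
svm = sqrt(sigma(1)^2 + sigma(2)^2 - sigma(1)*sigma(2) ...
           + 3*sigma(3)^2);         % eqn (A.1.6)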


APPENDIX B

ADDITIONAL EXAMPLES

Multiple load cases were added to the code as explained in the previous chapters. The solutions for these load cases can be seen in this appendix. Each load case presented here has figures showing the geometry with its initial boundary conditions and the loads acting on it, the geometry obtained from the 88-line code, the geometry after the smoothing process, and the extracted geometry. The compliance values along with the volume fractions before and after the smoothing process are documented in Chapter 6: Discussion and analysis, in Table 6.2. The inputs to the 88-line code are provided as well.

B.1 Half-Wheel

nelx = 120; nely = 50; volfrac = 0.5; penal = 3; rmin = 0.3*nelx
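For reference, these inputs map onto the call signature of the published 88-line code [3], top88(nelx, nely, volfrac, penal, rmin, ft); the filter flag ft is not listed above, so its value below is an assumption, and the boundary conditions must first be modified for the half-wheel case as described in the thesis.

% Run the 88-line code [3] with the inputs above (ft = 2: density filter, assumed).
top88(120, 50, 0.5, 3, 0.3*120, 2);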

Figure: Half-wheel example. (a) Boundary conditions; (b) extracted geometry; (c) top-88 geometry; (d) smoothed geometry.
