

Toward predictive maintenance in surface treatment processes

A DMAIC case study at Seco Tools

Martin Berg Albin Eriksson

Industrial and Management Engineering, master's level 2021

Luleå University of Technology

Department of Social Sciences, Technology and Arts


Preface

This master’s thesis has been the concluding effort of our five years of study in Industrial Engineering and Management at the Luleå University of Technology. The thesis had a focus on quality development with a mission from Seco Tools in Fagersta, Sweden.

The master’s thesis was finalized during the spring of 2021 in the shadow of the ravaging pandemic. The workflow was, however, not affected by the prevailing pandemic. Our work has been successful largely due to the assistance we received from Seco Tools, for which we are eternally grateful. The thesis was educative in many ways, and a large part of the learning would not have been possible without the assistance from Seco Tools. These experiences will be valuable in our upcoming working life, for which we are thankful.

A huge thanks goes to everyone at Seco Tools for their support and aid during the thesis work. Many people have been helpful during the thesis, which has been essential for the work’s success. A special thanks goes to our supervisor Mats Johansson Jöesaar, who has repeatedly helped us with his expertise and interest. We would also like to thank our peer reviewers, Joel Törnblom Johansson, Mani Mostafaee, Nathalie Jonsson, and Nikolaj König, for their interest and valuable feedback. Our instructor from Luleå University of Technology, Erik Vanhatalo, deserves special thanks for always showing his interest and helping us at the slightest need. We appreciate all your commitment, as we would not have reached as far without any of you.

Luleå, 28th May 2021

Martin Berg Albin Eriksson


Thesis outline

The thesis outline is adjusted to match the phases in the Six Sigma project approach, a systematic improvement concept based on a uniform and fact-based workflow in five steps (DMAIC). Compared to the conventional scientific structure, the data collection, analysis, and recommendations are converted into a chapter called Case Study. The two later steps in DMAIC, Improve and Control, contain the practical recommendations found in a traditional scientific thesis. A comparison between the traditional thesis structure and this thesis structure is illustrated in Figure 1 below.

[Figure 1 compares three outlines. Traditional report outline: 1. Introduction; 2. Theoretical Background; 3. Methodology; 4. Data Collection; 5. Analysis; 6. Discussion; 7. Conclusions and recommendations. Six Sigma report outline: 1. Define; 2. Measure; 3. Analyze; 4. Improve; 5. Control. This thesis: 1. Introduction; 2. Methodology; 3. Theoretical Background; 4. Case Study - DMAIC; 5. Conclusions; 6. Discussion.]

Figure 1: Illustration of the converted report structure from a traditional report structure.


Abstract

Surface treatments are often used in the manufacturing industry to change the surface of a product, including its related properties and functions. The occurrence of degradation and corrosion in surface treatment processes can lead to critical breakdowns over time. Critical breakdowns may impair the properties of the products and shorten their service life, which causes increased lead times or additional costs in the form of rework or scrapping.

Prevention of critical breakdowns due to machine component failure requires a carefully selected maintenance policy. Predictive maintenance is used to anticipate equipment failures to allow for maintenance scheduling before component failure. Developing predictive maintenance policies for surface treatment processes is problematic due to the vast number of attributes to consider in modern surface treatment processes. The emergence of smart sensors and big data has led companies to pursue predictive maintenance. A company that strives for predictive maintenance of its surface treatment processes is Seco Tools in Fagersta.

The purpose of this master’s thesis has been to investigate the occurrence of critical breakdowns and failures in the machine components of the chemical vapor deposition (CVD) and post-treatment wet blasting processes by mapping the interaction between their respective process variables and their impact on critical breakdowns. The work has been conducted as a Six Sigma project utilizing the problem-solving methodology DMAIC.

Critical breakdowns were investigated by combining principal component analysis (PCA), computational fluid dynamics (CFD), and statistical process control (SPC) to create an understanding of the failures in both processes.

For both processes, two predictive solutions were created: one short-term solution utilizing existing dashboards and one long-term solution utilizing a PCA model and an Orthogonal Partial Least Squares (OPLS) regression model for batch statistical process control (BSPC).

The short-term solutions were verified and implemented during the master’s thesis at Seco Tools. Recommendations were given for future implementation of the long-term solutions. In this thesis, insights are shared regarding the applicability of OPLS and Partial Least Squares (PLS) regression models for batch monitoring of the CVD process. We also demonstrate that the prediction of a certain critical breakdown, clogging of the aluminum generator in the CVD process, can be accomplished through the use of SPC. For the wet blasting process, a PCA methodology is suggested to be effective for visualizing breakdowns.


Sammanfattning (Summary)

Surface treatments are often used in the manufacturing industry to change the surface of a product, including its related properties and functions. The occurrence of degradation and corrosion in surface treatment processes can lead to critical breakdowns over time. Critical breakdowns may impair the properties of the products and shorten their service life, which causes increased lead times or additional costs in the form of rework or scrapping.

Preventing critical breakdowns caused by machine component failure requires a carefully selected maintenance policy. Predictive maintenance is used to anticipate equipment failures so that maintenance can be scheduled before components fail. Developing predictive maintenance strategies for surface treatment processes is problematic due to the vast number of attributes to consider in modern surface treatment processes. The emergence of smart sensors and big data has led companies to pursue predictive maintenance. One company that strives for predictive maintenance of its surface treatment processes is Seco Tools in Fagersta.

The purpose of this master’s thesis has been to investigate the occurrence of critical breakdowns and failures in the machine components of the chemical vapor deposition (CVD) and post-treatment wet blasting processes by mapping the interaction between their respective process variables and their impact on critical breakdowns. The work has been conducted as a Six Sigma project using the problem-solving methodology DMAIC.

Critical breakdowns were investigated by combining principal component analysis (PCA), computational fluid dynamics (CFD), and statistical process control (SPC) to create an understanding of the failures in both processes. For both processes, two predictive solutions were created: one short-term solution using existing dashboards and one long-term solution using a PCA model and an orthogonal partial least squares (OPLS) regression model for batch statistical process control (BSPC).

The short-term solutions were verified and implemented at Seco Tools during the thesis work. Recommendations were given for future implementation of the long-term solutions. This thesis also shares insights into the applicability of OPLS and partial least squares (PLS) regression models for monitoring the CVD process. We also show that the prediction of a certain critical breakdown, clogging of the aluminum generator in the CVD process, can be accomplished through the use of SPC. For the wet blasting process, a PCA methodology is suggested to be effective for visualizing breakdowns.


Table of Contents

1 Introduction
  1.1 Background
  1.2 Problem discussion
  1.3 Purpose
  1.4 Delimitations
2 Methodology
  2.1 Research approach and methodological choices
  2.2 Define
  2.3 Measure
  2.4 Analyze
  2.5 Improve
  2.6 Control
  2.7 Validity and reliability
3 Theoretical Background
  3.1 Predictive maintenance
  3.2 Applications of predictive maintenance
  3.3 Batch statistical process control
  3.4 Degradation of machine components in surface treatment
    3.4.1 Influential variables in CVD
    3.4.2 Influential variables in wet blasting
4 Case Study - DMAIC at Seco Tools
  4.1 Define
    4.1.1 Potential savings
    4.1.2 SIPOC diagram
    4.1.3 CVD
    4.1.4 Wet blasting
  4.2 Measure
    4.2.1 Measurement system analysis
  4.3 Analyze
    4.3.1 CVD
    4.3.2 Wet blasting
  4.4 Improve
    4.4.1 Short-term solution: Shewhart charts for predicting aluminium generator maintenance
    4.4.2 Long-term solution: OPLS for batch monitoring of the CVD process
    4.4.3 Short-term solution: Three sigma control limits for wet blasting maintenance
    4.4.4 Long-term solution: Multivariate statistical process control using PCA
  4.5 Control
5 Conclusion
6 Discussion
  6.1 Method discussion
  6.2 Validity and reliability of results
  6.3 Key takeaways
  6.4 Future studies
References
A Compilation of journals and their CiteScores
B Process flowchart
C Pearson Product-Moment correlation matrix for CVD variables
D Strongest linear correlations between CVD variables
E Pearson Product-Moment correlation matrix for wet blasting variables
F Strongest linear correlations between wet blasting variables
G Example of a breakdown in the wet blasting process
H Batch Evaluation Model (BEM) based on 28 recipe steps
I Revision plan for maintenance


List of Terms

This list of terms provides a brief explanation of the more technical terms used in the report.

Chemical Vapor Deposition (CVD)

A vacuum deposition method, used as a coating process, that uses thermally induced chemical reactions at the surface of a heated substrate.

Inserts

Inserts are exchangeable attachments for cutting tools, inserted into the body of the cutting tool.

Orthogonal Partial Least Squares (OPLS) Regression

A variant of PLS regression, which utilizes orthogonal signal correction to maximize the covariance between X and Y on the first latent variable.

Partial Least Squares (PLS) Regression

A regression technique used for reducing the number of predictors to a smaller set of components and performing least squares regression on the components.

Principal Component Analysis (PCA)

A technique used for dimensionality reduction that increases interpretability and reduces noise in large datasets. New uncorrelated variables are created from the dataset that successively maximize the explained variance.

Recipe

A combination of input variables, e.g. gases, chemical compounds, pressures, temperatures, and time, which results in a specific surface layer.

Slurry

An abrasive media, often in the form of sand with a certain grain size, in combination with water.

Substrate

A term used to describe the base material on which a new film or layer of material is added through processing, e.g. deposited coatings.


1 Introduction

This chapter presents a background to the application of surface treatment processes in the manufacturing of inserts. The background is synthesized with a problem discussion. Then, the purpose and aims of the master’s thesis are described and, ultimately, delimitations are presented.

1.1 Background

Surface treatments are often used in the manufacturing industry to change the surface, including its related properties and functions, for a wide range of products. For inserts, surface treatment processes are used for strengthening the wear properties (Caliskan et al., 2017; Thakur et al., 2014). Surface treatments provide an addition of a hard and wear-resistant thin film coating, often combined with dedicated pre- and post-treatment of the tool and coating surfaces. Uncoated inserts lack the additional strengthening properties, resulting in a shorter service life (Venkatesh et al., 1991). Inserts, whose contact surfaces are continually in motion, are particularly suitable for surface treatment as they are exposed to large thermal and mechanical stresses such as heat generation and wear (Klocke & Krieg, 1999). For inserts, two commonly used surface treatment methods are chemical vapor deposition (CVD) and post-treatment wet blasting.

CVD is a manufacturing process used to synthesize hard and wear-protective surface layer coatings onto, for example, inserts. The manufacturing process is carried out in a vacuum furnace at a high temperature, to which gases and other materials are added through various machine components such as gas tubes, valves, and flow meters (Crowell, 2003). Overall, CVD is used to produce large quantities of high-quality materials that increase both the performance and service life of coated products compared to uncoated ones (Choy, 2003). CVD is a time-consuming process, generally taking around 12 to 30 hours, where an interruption during the process could lead to the scrapping of the products. In addition to the growth of protective layers, the gases and their reaction products, in combination with high temperatures, commonly cause the constituent CVD machine components to degrade through corrosion or oxidation. Also, the long process time makes it problematic to predict whether machine components will fail during an ongoing deposition, leading to possible scrapping of faulty products.

The post-treatment method of wet blasting is a versatile technique used for cleaning, shaping, and modifying surface properties such as roughness and strain or stress state (Schalk et al., 2013; Tkadletz et al., 2015). The investigated wet blasting application falls into the latter category, improving the service life of the inserts (Teppernegg et al., 2014). Wet blasting makes use of a mixture of an abrasive media with water, a slurry, that in combination with compressed air delivers a powerful mechanical surface treatment. The abrasive media, commonly in the form of sand with certain grain sizes, in combination with water and compressed air, is problematic for the constituent machine components of the blasting system. Comparable to the CVD process, the performance of the wet blasting system degrades over time without proper maintenance. This degradation makes it difficult to schedule maintenance of components to avoid component failure during the ongoing process (Deng et al., 2006).

Prevention of machine component failure in manufacturing requires a carefully selected maintenance strategy. Failing machine components have a direct effect on the final product, since failures result in rework or scrapping of the products. Minimizing rework and scrapping through correct maintenance scheduling may contribute to more sustainable manufacturing. Therefore, a proper maintenance strategy is required that increases the availability and the reliability of the production facility (L. Wang et al., 2007). The development and selection of suitable maintenance strategies based on preventative elements is challenging due to the vast number of attributes considered (Bashiri et al., 2011). Wrongful selection of a maintenance strategy may cause high losses, such as lost production time or volume, indicating a relation between maintenance and profitability (Salonen & Deleryd, 2011).

1.2 Problem discussion

Due to the complexity of interactions between production activities within manufacturing ecosystems, maintenance has reached a critical point in the manufacturing industry (Sezer et al., 2018). The related maintenance costs represent 15 to 40 percent of the total operating costs for all manufacturing (Han & Yang, 2006).

Within the manufacturing industry, machine reliability is of particular importance, which creates a need to be able to predict maintenance to uphold a high level of reliability (Selcuk, 2017). A company that experiences a need for predictive maintenance is Seco Tools, which manufactures inserts for metal cutting solutions. The need is particularly evident in two of their processes: the CVD and the wet blasting process. The CVD process handles thousands of inserts per batch and the wet blasting process handles hundreds of inserts per batch, where the presence of unplanned maintenance creates downtime that can increase the lead time by up to 70 percent (M. Nyström & M. Näsman, personal communication, January 25, 2021). Increased lead times endanger service reliability, which hampers service quality (Parasuraman et al., 1988). Not only do the breakdowns induce increased lead times, but also internal costs of poor quality such as reworking or scrapping of inserts. An appropriate maintenance strategy will prevent increased lead times and hence reduce internal costs of poor quality.

Predicting maintenance requires data access, which is now possible through the Internet of Things (IoT) (Civerchia et al., 2017; Xiaoli et al., 2011). Data is the key to generating information that can support or enable predictive decisions (Zonta et al., 2020). In contrast to conventional preventive maintenance, predictive maintenance is based on historical data to predict trends, behavioural patterns, and correlations through advanced analytics, e.g., statistical modeling or machine learning approaches (Carvalho et al., 2019). Statistical modeling provides a new tool for conventional preventative maintenance and converts it into a predictive strategy for scheduling maintenance. Statistical models can be used for identifying non-conforming patterns, i.e. anomaly detection, which are patterns that do not correspond to the conventional behaviour of the process (Zenati et al., 2018). Predictive maintenance establishes conditions that benefit the decision-making process for maintenance activities and minimizes equipment downtime (J. Lee et al., 2006). Predictive maintenance has been researched thoroughly and is a popular area in the movement towards Industry 4.0 (Zonta et al., 2020). Studies on the CVD and wet blasting processes have focused on optimizing input variables through Design of Experiments (Ahmed et al., 2006; Papon et al., 2017), but few studies have investigated which process variables cause a gradual degradation of machine components within the processes. By degradation, we refer to the mechanisms that exhibit the effects of wear and tear over time; critical breakdowns are breakdowns that cause an interrupted process. Predicting the need for maintenance in both processes requires an understanding of the interaction between process variables during critical breakdowns, which few have studied in the CVD (Yoo et al., 2019) and wet blasting contexts. We argue that the understanding of which variables systematically affect the degradation of the machine components of the CVD and wet blasting processes is incomplete and needs further investigation.

1.3 Purpose

The purpose of this master’s thesis is to develop models that aid in reducing the number of critical breakdowns in the CVD and wet blasting processes.

Three aims have been formulated to achieve the purpose:

1. Investigate the root causes of critical breakdowns in the CVD and wet blasting processes.

2. Increase the understanding of how the process variables change during critical breakdowns in the CVD and wet blasting processes.

3. Develop and verify models for anomaly detection.

1.4 Delimitations

In order to achieve the purpose within the given time span of approximately 20 weeks, two delimitations have been made. The thesis work will only investigate one CVD machine and one wet blasting machine. The reason for the delimitations is that each CVD and wet blasting machine has its own specific wear and usage pattern.

The machines utilize different recipes, which consist of varying attributes for pressure, temperature, flow, and cycle time, resulting in unequal machine component wear. The chosen wet blasting machine recently had all its machine components replaced in order to create the necessary conditions for predicting machine maintenance. Two or more machines of the same type rarely work under the same conditions, and a comparison between them could be misleading (M. Nyström, personal communication, January 28, 2021). Because of the interaction between process variables, having unequal machine component wear could lead to misleading results. Conducting the analysis on components with an equal amount of wear could result in less overall variance.


2 Methodology

This chapter presents the methodology to provide the readers with a detailed description of the workflow of the case study. Then each phase in the DMAIC cycle, with a corresponding course of action, is presented. Conclusively, a discussion about the study’s quality will take place.

2.1 Research approach and methodological choices

The thesis work was conducted as a case study at Seco Tools. The case study had an exploratory approach with a focus on investigating causal relations to develop a greater understanding of the problem area. An exploratory approach was deemed necessary since no underlying assumptions or predictions could be made regarding the occurrence of the critical breakdowns. One part of the study’s aim was to increase the understanding of the manufacturing processes, i.e., understanding why the deviations occur, which provided an excellent avenue for exploratory research (David & Sutton, 2011).

The problem-solving methodology for the case study was based on the improvement procedure commonly known under the acronym DMAIC, which stands for Define, Measure, Analyze, Improve, and Control (Prashar, 2014). DMAIC is common in Six Sigma projects as a method of reducing variation but has been applied in practice as a generic problem-solving and improvement strategy (McAdam & Lafferty, 2004). DMAIC is particularly applicable in problem-solving where the problem must be defined and categorized into subcategories (De Mast & Lokkerbol, 2012), which is the reasoning for using the DMAIC methodology. According to Park (2003), the five steps in DMAIC can be divided into two main areas. The first area contains Define, Measure, and Analyze and constitutes the characterization of the problem. The optimization and follow-up of the achieved result are then dealt with in the second area, relating to Improve and Control. The case study has been influenced by these two main areas, with the extension of an additional area that concerns the understanding of the problem. Park (2003) sees the last area as optimization, but this study recognizes it as an improvement stage. These three areas are referred to as Phases, and their connection to DMAIC is illustrated in Figure 2. A schematic overview of the various activities and the phase in which they were carried out is illustrated in Figure 3.


[Figure 2 plots the improvement strategy against time: Phase 1 (understanding the problem) corresponds to Define, Phase 2 (characterization) to Measure and Analyze, and Phase 3 (improvement) to Improve and Control.]

Figure 2: The three phases of DMAIC, adapted from Park (2003).

Phase 2 Measure & Analyze

March Phase 1

Define January & February

Phase 3 Improve & Control

April & May

Literature study Unstructured interviews with process engineers

Development of problem description

Identification of essential

information Data retrieval Data smoothing START

END

Verification of models

Exploratory analysis

Model evaluation

Suggestions and recommendations

for future work Identification of

analysis tools Development of

models

Figure 3: Schematic illustration of the project phases and their content.

The purpose of each phase was to:

• Phase 1: Investigate and create an understanding of the problems associated with the CVD and wet blasting processes.

• Phase 2: Identify and increase the understanding of process variables and the data to use, and how to retrieve and analyze the data for the CVD and wet blasting processes.

• Phase 3: Verify results, create models for anomaly detection, and implement the models connected to the CVD and wet blasting processes.


Based on the knowledge generated and the critical breakdowns identified during Phase 1, the authors realized that the structure of the project could not mimic a more traditional DMAIC project with, e.g., factor trials in Phase 2. Factor trials for critical breakdowns were considered costly and unsustainable, as historical data could provide equivalent information. It was identified that model building through statistical software was suitable for clarifying trends and deviations over time in the CVD and wet blasting processes. By developing a model for each process, in which alarms were raised for deviating values, the models could be verified against real-time data in Phase 3. The systematic and fact-based approach in DMAIC allowed the study to utilize a rigorous framework that enabled a continuous workflow in a gated manner. The five steps of the DMAIC study were performed sequentially, and the course of action in each step is described in detail below.

2.2 Define

The Define phase consisted of developing an understanding of the complex CVD and wet blasting processes. Field observations and unstructured interviews were used to develop knowledge about the CVD and wet blasting processes and the associated breakdowns. Field observations were conducted by observing the processes to gain insights into their behavior and purpose. Unstructured interviews were performed with process engineers to learn about the processes from their experience. Notes were compiled during the interviews and controlled by asking the interviewees for their approval. After creating an understanding of the processes and the critical breakdowns, a problem description was developed, bundled with the study’s purpose and delimitations. The project’s savings potential was established to frame the problem from an economic point of view. A theoretical background was composed to gain even more knowledge about the CVD and wet blasting processes and how to develop a suitable predictive maintenance strategy. Other activities that were conducted were the development of detailed process descriptions for the CVD and wet blasting processes, a SIPOC diagram, a process flowchart for the CVD and wet blasting, and a review of previous internal studies. All these activities were conducted to develop a greater understanding of the complex manufacturing processes. The detailed process descriptions, the SIPOC diagram, and the process flowchart were developed with the help of internal documents, interviews, and literature.

Collection of theory and knowledge was done primarily during the Define phase. The primary purpose of the theoretical background was to increase the understanding of maintenance strategies and the problems that occur in the CVD and wet blasting processes. Another purpose of the theoretical background was to find predictive maintenance practices suitable for the CVD and wet blasting processes. The collection of theory and literature was mainly done through the databases Google Scholar, Web of Science, and Scopus. Google Scholar was included as an inclusive and automated approach for finding any scholarly document, while Scopus and Web of Science served as more indexed searches for articles. The articles used have been peer reviewed and published in scientific journals. Books have been utilized in some cases to gain a holistic perspective on matters where journal entries lacked background information. In the cases where conference proceedings were used, a thorough examination of their contribution and scientific level was conducted before inclusion in the thesis. CiteScore was used to assess the relevance of each scientific journal, where it was decided to only use journals with a CiteScore greater than two. CiteScore is calculated as the number of citations to the journal over the last four years divided by the number of publications in the journal over the last four years. In Table 1, an overview of the eight highest-scoring journals is presented.

Table 1: Compilation of the highest-ranked CiteScores used.

Scientific journal                               CiteScore (2020)
Progress in Materials Science                    47.1
Science                                          45.3
Journal of Statistical Software                  18.2
IEEE Transactions on Industrial Electronics      16.4
Journal of Industrial Information Integration    14.7
IEEE Transactions on Industrial Informatics      13.9
Expert Systems with Applications                 13.9
Automatica                                       12.4

Mean (all articles): 7.907

In Appendix A, a table containing every journal entry and its CiteScore is presented. Keywords that were used in the theoretical background were Chemical vapor deposition, Wet blasting, Equipment condition model, Remaining useful life prediction, and Predictive maintenance.

2.3 Measure

Based on the problem description established in the Define phase, the initial part of the Measure phase was used to investigate and identify available data and how it could be used. The collection of real-time data had already started before the master’s thesis work began. Because of this, real-time data could be obtained directly from online cloud storage without the need for further measuring. Before moving on to the Analyze phase, the real-time data had to be smoothed, i.e., statistical outliers and missing data points had to be eliminated, and the data had to be structured.
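To make the smoothing step concrete, here is a minimal Python sketch of such cleaning with pandas. It is illustrative only, not the pipeline used in the thesis; the column names and the simple three-sigma outlier rule are assumptions made for the example.

```python
import pandas as pd

def smooth_sensor_data(df: pd.DataFrame, value_cols: list[str]) -> pd.DataFrame:
    """Drop rows with missing sensor values and remove statistical outliers.

    A row is treated as an outlier if any sensor value lies more than three
    standard deviations from that sensor's mean (an assumed rule; in the
    thesis, outlier elimination was done in consultation with process
    engineers).
    """
    # Eliminate time frames with missing data points
    df = df.dropna(subset=value_cols)

    # Flag values outside mean +/- 3 standard deviations, per sensor
    z = (df[value_cols] - df[value_cols].mean()) / df[value_cols].std()
    keep = (z.abs() <= 3).all(axis=1)
    return df[keep].sort_index()

# Hypothetical usage with invented sensor columns:
# raw = pd.read_csv("cvd_batch.csv", parse_dates=["timestamp"], index_col="timestamp")
# clean = smooth_sensor_data(raw, ["pressure", "temperature", "gas_flow"])
```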

Delimitations were made to limit the amount of data, which was done by restricting the data collection to selected recipes, specific time frames, and a single machine for each process. The investigated recipes were selected for two reasons: they were the most frequently run recipes in the production facility, and they had the most critical breakdowns associated with them. Data collection for the CVD process was limited to 52 batches, reaching half a year back in time. For the wet blasting, data from 2021 was used. The variables of interest were decided with the help of process engineers, an R&D scientist, and literature regarding both the CVD and wet blasting processes. A simple measurement system analysis was conducted to understand the components of variation in the measurement system. The measurement system analysis focused more on the wet blasting process since this process had the majority of deviating sensors.


The data used for this master’s thesis has been a combination of qualitative and quantitative data. The thesis has collected both primary and secondary data. Primary data has been real-time data from the online data storage and interviews. Data from the online data storage is considered primary data as it was raw data, i.e., data that has not been processed for use. Secondary data in the form of previous production data has been used to estimate the potential savings. A detailed description of the methods used for the data collection and the reasoning behind them is given below.

Qualitative data

Unstructured interviews were used to increase the understanding of the two processes and the process variables linked to them. The unstructured interviews were conducted when deemed necessary to obtain a clear and correct understanding of the CVD and wet blasting processes. The focus was on understanding the processes, which was obtained by physically being on-site and questioning the process engineers about how the processes work. With the help of unstructured interviews with engineers and scientists, even greater knowledge and understanding of the processes could be obtained. Due to the authors’ previous lack of knowledge in CVD and wet blasting, unstructured interviews were seen as the best way to increase knowledge. Unstructured interviews emphasize the depth and validity of each interview; they allow the interviewee to tell their story and the interviewers to determine the flow of the dialogue (David & Sutton, 2011). Discussions with the interviewees led to valuable insights regarding the critical breakdowns and both processes. All interviews were performed virtually for safety reasons due to the COVID-19 pandemic. A compilation of the unstructured interviews with their purpose and results can be seen below in Table 2.

Table 2: Compilation of interviews during the master’s thesis.

Position: Manufacturing Engineer, Coating CVD
Date: 2021-01-27
Purpose of interview: Review of the CVD process and the data connected to this process.
Results: Increased understanding of the CVD process and the data of interest.
Duration & location: 1 h, Microsoft Teams

Position: Manufacturing Engineer, Wet blasting
Date: 2021-01-29
Purpose of interview: Review of the wet blasting process and the data and variables connected to the wet blasting process.
Results: Better understanding of the wet blasting process and the various variables and data that were to be used.
Duration & location: 1 h, Microsoft Teams

Position: Senior R&D Scientist, CVD
Date: 2021-02-04
Purpose of interview: Review of the current variables for the CVD process and the development of new variables.
Results: Clarification of which variables are of most interest and the development of new variables.
Duration & location: 1 h, Microsoft Teams


Quantitative data

The collection of quantitative data was done using historical data from both manufacturing processes. The CVD and wet blasting processes have been collecting and storing data in an online data storage. New data are continuously recorded and, therefore, there was no need to collect new data. For the CVD process, data collection was limited to batch runs during 2020 and 2021 to avoid considerable differences in component wear or measurement instruments. During Phase 2, 59 variables of interest were used for assessing the individual importance of factors for describing a clogging situation in the CVD process. In Phase 3, 54 variables were used to develop the Orthogonal Partial Least Squares (OPLS) regression model in SIMCA 15 (Sartorius Stedim Data Analytics AB, 2017). SIMCA 15 is a multivariate data analysis software developed by Sartorius that can perform multiple statistical methods such as cluster analysis, PLS regression, and modeling techniques with orthogonal projections. For the wet blasting process, data collection was limited to 2021 and to data after January 11, when the machine last had maintenance, to avoid considerable differences in component wear or measurement instruments. The data between January 30 and February 18 could not be used due to problems with the data acquisition system. During Phase 2, 16 variables of interest were used during the analysis of the breakdowns in the wet blasting process. These 16 variables were also used in Phase 3 to develop a process monitoring model.

2.4 Analyze

The analysis of the primary data focused on establishing knowledge about the variables that are influenced by the critical breakdowns in both processes. An illustration of the workflow of the Analyze phase for each process can be seen in Figure 4.


[Figure 4 outlines the Analyze workflow for each process. CVD: a correlation matrix (studying the correlation structure among variables), a PCA model (which variables have the highest variance?), a CFD simulation (developing a greater understanding of a clogging situation), summary statistics and normality tests (can statistical trends predict clogging?), and control charts (utilizing statistical trends to predict clogging), leading into SPC for clogging prediction and batch monitoring using OPLS in the Improve phase. Wet blasting: a correlation matrix, a PCA model, normalization of regular and deviating conditions, a case-by-case comparison between regular and deviating conditions, and contribution plots (which variables change as a result of the breakdowns?), leading into SPC for wet blasting maintenance and batch monitoring using PCA in the Improve phase.]

Figure 4: Workflow of the Analyze phase.

For the CVD process, a correlation matrix was first used to study the correlation structure among variables. This was followed by two Principal Component Analysis (PCA) models: one PCA model on a well-performing batch and one PCA model on a batch with a breakdown, to make a comparison between the batches. This was done using multivariate data analysis (MVDA), such as PCA, in the software SIMCA. The variables of interest for predicting a certain breakdown, clogging of the aluminium generator, were selected during this time. Computational Fluid Dynamics (CFD) and statistical analysis were used after establishing an understanding of the most impactful variables. The software COMSOL Multiphysics 5.6 (COMSOL Inc., 2020) was used for developing a CFD model. Statistical analyses were conducted using Statgraphics Centurion 19 (Statgraphics Technologies Inc., 2020). After evaluating the variables of interest, statistical process control (SPC) and MVDA modeling were used to create models for predicting maintenance in the Improve phase for the CVD process.

For the wet blasting process, the analysis procedure mimicked the CVD process up until after the PCA model. From the analysis, it was realized that variables follow a certain pattern over time, which made us explore the opportunity to normalize each recipe with its own mean and standard deviation to create a case-by-case comparison between regular and deviating conditions. This was followed by using contribution plots to see which variables change as a result of the breakdowns. After creating an understanding of the variables of interest, SPC and PCA were used for model building for predicting maintenance in the Improve phase for the wet blasting process.

Principal component analysis

Principal component analysis (PCA) is a multivariate statistical method used as a dimensionality reduction technique (Jolliffe, 1986). PCA projects high-dimensional data onto direction vectors. The direction vectors constitute an orthonormal basis in which the projected data points are linearly uncorrelated (Jolliffe, 1986). The number of dimensions corresponds to the number of principal components, where removal of principal components results in fewer dimensions but also less overall explained variance. The principal components are constructed as linear combinations of all measured variables, and their weights are determined by the eigenvectors of the data’s covariance or correlation matrix. The covariance matrix is typically used when the variable scales are similar, and the correlation matrix is generally used when the variables are on different scales. Principal components based on the correlation matrix do not depend on the absolute values of correlations, but on the ratios between correlations.

With principal components derived from the correlation matrix, the principal components can be defined as

z = α′x    (1)

where α′ is the transposed column of eigenvectors of the correlation matrix, and x consists of the standardized variables (Jolliffe, 1986). The first principal component shows the most variance of the data, the second shows the second-most, and so on. Figure 5 illustrates how direction vectors are built based on a collection of points in a two-dimensional space.

[Figure 5 shows a point cloud over two variables, x1 and x2, with Principal Component #1 along the direction of most variance and Principal Component #2 along the direction of second-most variance.]

Figure 5: Illustration of PCA.

PCA can be used to obtain lower-dimensional data while preserving as much of the variance as possible (Jolliffe, 1986). Dimensionality reduction can also be useful to minimize noise within datasets. By including the loading vectors with the largest eigenvalues, one can retain the most useful information related to the problem and minimize the information related to noise (Wold et al., 1987). If there is a strong correlation between explanatory variables, PCA can be useful to remove linear correlations (Jolliffe, 1986). PCA was applied in this project for these reasons, and it proved useful for the high-dimensional data. Readers curious about the mathematical definition of PCA can find an extensive interpretation and definition in Jolliffe (1986).

2.5 Improve

The Improve phase was spent developing solutions for predicting maintenance based on the knowledge obtained in the Analyze phase. Two solutions were developed for each process: one short-term solution with SPC and one long-term solution with process monitoring. The short-term solutions were validated and then implemented. Time was also spent developing process monitoring models that could be used to monitor process performance and hence decide whether the process is drifting away from its normal operating range. One PCA model and one OPLS regression model were developed for monitoring batch performance. The process monitoring models were built following a batch statistical process control (BSPC) methodology, where "good" performing batches are used to model the trajectory of the process and to define control limits. After selecting a sample of good performing batches, variables are preprocessed by scaling and transformation. A more detailed explanation of BSPC is given in chapter 3.3.
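As a sketch of the short-term SPC idea, the example below computes classic three-sigma Shewhart limits from in-control reference data and flags observations outside them. It illustrates the general technique only; the actual charts, variables, and limits used at Seco Tools are described in chapter 4.4, and the data here is invented.

```python
import numpy as np

def shewhart_limits(reference: np.ndarray) -> tuple[float, float, float]:
    """Center line and three-sigma control limits from in-control data."""
    center = reference.mean()
    sigma = reference.std(ddof=1)
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(values: np.ndarray, lcl: float, ucl: float) -> np.ndarray:
    """Indices of observations breaching the control limits (alarm points)."""
    return np.where((values < lcl) | (values > ucl))[0]

# Hypothetical usage: monitor a sensor signal against limits from good runs
good_runs = np.random.default_rng(1).normal(loc=50.0, scale=2.0, size=200)
lcl, center, ucl = shewhart_limits(good_runs)
new_batch = np.array([49.8, 51.2, 57.5, 50.3])   # invented readings
print(out_of_control(new_batch, lcl, ucl))        # expected: index 2 is flagged
```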

Partial Least Squares Regression

Partial Least Squares (PLS) regression is a statistical method used to find a linear regression model by projecting the predicted variables and the observable variables to a new space. A PLS model is also known as a bilinear factor model, where the original data (X) is projected onto latent variables (LV) for predicting the Y variable (Geladi & Kowalski, 1986). This makes PLS bear some relation to PCA, and the underlying model of multivariate PLS can be described as:

X = TP^T + E    (2)

Y = UQ^T + F    (3)

where X is a matrix of predictors and Y is a matrix of responses; T and U are matrices containing the projections of X and Y, respectively; P and Q are orthogonal loading matrices; and E and F are error terms (Geladi & Kowalski, 1986). For a more detailed description of the PLS algorithm, a thorough explanation is provided by Geladi and Kowalski (1986). The goodness of fit of a model can generally be described by three quantities: R2X, R2Y, and Q2 (Souihi et al., 2015). R2X is the fraction of the variation in X explained by the extracted components. R2Y is the explained cumulative variation in Y, i.e., the correlation between the observed and the predicted values of the studied response (i.e., how much the variables change over time in that phase). Q2 is the fraction of the total variation of X or Y that can be predicted by a component, estimated through cross-validation. The objective is to maximize the R2Y and Q2 scores, as the higher the values, the better the response can be described and predicted by the independent variables (Souihi et al., 2013).

PLS regression is particularly applicable in two cases: when the matrix of predictors has more variables than observations, or when there is a strong correlation among variables (Wehrens & Mevik, 2007). In this project, a strong correlation among variables was present. PLS was also strongly suggested by the literature for batch statistical control. A three-dimensional illustration of a PLS model can be seen in Figure 6.

[Figure 6 shows three predictor axes X1, X2, and X3 with a two-dimensional subspace spanned by the first and second latent variables (LVs), oriented in the direction that gives the best correlation with Y.]

Figure 6: Three-dimensional illustration of a PLS model (Geladi & Kowalski, 1986).

2.6 Control

The Control phase was used to develop management plans and revision plans for both maintenance solutions. This phase consisted of establishing a revision protocol that goes in depth into how the maintenance solutions should evolve to increase their robustness. The revision protocol describes how alarm limits should be systematically re-evaluated to ensure that the mean and standard deviation are representative of the current process state. The protocol also covers how the solutions should develop over time in the event of considerable process changes, such as recipe changes. The description explains how the operators should control and monitor the processes to ensure that maintenance is carried out at the right time. The phase describes how Seco Tools should act in the near future to enable the short-term predictive solutions, and a future agenda is presented concerning how the long-term solutions should be incorporated to enable a predictive maintenance strategy combined with a root cause analysis tool.


2.7 Validity and reliability

In quantitative research, two key concepts are generally used to discuss a study’s quality, namely validity and reliability. Validity refers to whether the results measure what is intended to be measured (Bell et al., 2007). Reliability refers to the extent to which the results can be reproduced when the research is repeated under the same conditions (Bell et al., 2007).

The data collection was limited to a single recipe in each process to attain a consistent way of collecting data. The purpose of this was to evaluate batch runs that had the same parameter settings, which provided consistency in the data. Outlier elimination was conducted with help from process engineers to incorporate their valuable experience in determining which outliers were measurement errors and which were statistical anomalies. Missing values resulted in the elimination of all data points in the same time frame to ensure a consistent result. The choice to investigate only one CVD and one wet blasting machine was made to avoid variations between machines, such as age, the number of batches produced, and differing machine components. This creates machine-specific solutions, which limits generalizability. Creating a model fit for several machines with differing machine components could yield a model with lack of fit, because of the variance between machines.

During the work, it was noted that two variables from the wet blasting process did not show accurate values. These two variables therefore had to be excluded from the entire analysis, which may have affected the results. This was, however, considered acceptable, as the correlation matrix showed that these two variables had a strong correlation with other variables. The correlation suggests that the other variables represent the two missing variables to some extent, meaning that the results might not have been affected as much.

The selection of variables was made with the help of literature, process engineers, and an R&D scientist. This selection was accompanied by subjectivity as it relied on the expertise and experience of the process engineers and the R&D scientist. This was not possible in any other way due to the lack of pre-existing literature concerning the critical breakdowns. In the absence of literature, the authors had to trust the first-hand experience of the process engineers for variable selection. The selection may have affected the results as influential variables may have been excluded. The subjectivity was counteracted by including all available variables for the wet blasting process. For the CVD process, variables were decided with help from a process engineer with 16 years of experience in CVD and an R&D scientist with 13 years of experience in CVD.

The authors have, to the greatest possible extent, tried to exclude potential biases in the analysis procedures. Progress was evaluated in multiple iterations in consultation with process engineers and supervisors to limit the researchers’ biases. Meetings were held weekly with supervisors to ensure progress and to get a holistic perspective on the analytical progress.

The theoretical background has only used journals with CiteScores greater than two. CiteScore resembles the Impact Factor to some extent, as both are used for rating journal quality. For simplicity’s sake, if CiteScore were equal to Impact Factor, using Impact Factors higher than two would yield the top 40% of roughly 13,000 selected scientific journals (SCI Journal, 2018). The authors decided that a CiteScore greater than two was the restraint that enabled the theoretical background to show relevance and influence on the subject. CiteScore served as a performance metric, where journal entries were evaluated solely on their CiteScore before being included in the report. The authors do understand that there are numerous articles with relevance and low CiteScores. However, in a subject that evolves through technological advancement, it is important to only use articles that show prevalence in academia.


3 Theoretical Background

This chapter presents a background on maintenance practices to develop an understanding of the most common ones. Then, a selection of studies linked to data-driven predictive maintenance of simple and more complex components is presented. Conclusively, findings regarding impactful variables in the CVD and wet blasting processes are presented to give insight into which variables are of interest for developing a maintenance solution.

3.1 Predictive maintenance

The concept of predictive maintenance (PdM) has in recent years been adopted by many manufacturing companies (Z. Li et al., 2017; Selcuk, 2017; Van Horenbeek & Pintelon, 2013). PdM can be described as the practice of anticipating failures, based on historical data, to optimize maintenance efforts (Selcuk, 2017). Component failure can mean many things, from total failure to degraded performance; here we refer to noticing the degraded performance before total failure is reached. Historical data can be used for both diagnostics and prognostics, i.e., providing information about machine health, indicating potential failure, and providing guidance for maintenance scheduling (Jardine et al., 2006). Maintenance scheduling becomes proactive rather than reactive with the use of PdM, and thus effective and efficient (Selcuk, 2017). This ability to plan maintenance scheduling in advance minimizes equipment downtime and costs (J. Lee et al., 2006; Susto et al., 2014).

PdM was first introduced in the late 1940s (Prajapati et al., 2012), when experienced operators analyzed the working area using their senses of sight, hearing, smell, and touch to detect a faulting component (Selcuk, 2017). This method of maintenance is still valued, but the emergence of intelligent sensors has replaced the need for "seeing, hearing, smelling, and touching". Through the use of intelligent sensors, the collected data provides new opportunities that can be utilized to predict the remaining useful life (RUL) of machine components (Yan et al., 2017). In principle, PdM utilizes the sensor data to predict faults or failures in equipment, machines, or components (Bekar et al., 2020). The predictions are used to generate inspection intervals based on component performance or condition.


Other commonly known techniques for maintenance policies, in order of complexity, can be categorized into the following strategies:

1. Run to Failure (R2F) Maintenance is the classic type of reactive maintenance where there is no underlying plan of when to perform maintenance. Commonly known as corrective maintenance or unplanned main- tenance, it is the simplest of maintenance strategies where it is performed only when the equipment has failed. R2F Maintenance could result in high equipment downtime and costs, which could lead to a large number of defective products in production (Susto et al., 2012).

2. Preventative Maintenance (PvM) refers to the type of maintenance that is carried out periodically or on a planned schedule to anticipate the point of failure in processes. PvM usually prevents most failures but could lead to unnecessary corrective action which results in inefficient use of resources (Susto et al., 2014).

3. Condition-Based Maintenance (CBM) is based on constant monitoring of machine or equipment health to recognize when maintenance is required. CBM is tied to the degradation of the process and usually, maintenance cannot be planned (Selcuk, 2017).

4. Predictive Maintenance (PdM) is when maintenance is performed based on an estimation of the health status of components from historical data (Susto et al., 2014). Maintenance schedules are created based on continuous monitoring of components, utilizing prediction tools to determine when maintenance is necessary. This helps identify functional breakdowns before they happen, though not as early as potential breakdowns (Munirathinam & Ramadoss, 2014). PdM and equipment monitoring are becoming more prevalent as manufacturers try to minimize downtime and maximize productivity (Selcuk, 2017).

Each maintenance policy has its place, but each comes with its drawbacks. R2F delays maintenance actions and comes with a huge risk of unavailability of production assets. PvM prevents most failures but comes with an extra cost due to possibly unnecessary corrective action. A suitable maintenance strategy should, therefore, reduce failure rates amongst equipment and minimize maintenance costs. With this in mind, PdM becomes particularly appealing to most manufacturers in the era of Industry 4.0 due to the increasing amount of data from production equipment (Kumar et al., 2019).


3.2 Applications of predictive maintenance

There are many applications of predictive maintenance, but most are related to the three classifications of the approach used for prediction:

• Physical model-based, i.e., mathematical models that correspond to the condition of the component, need a precise indicator of failure, and utilize statistical methods to predict the condition (D. Wu et al., 2018).

• Knowledge-based, i.e., mathematical models closely related to the physical type, but utilizing expert systems or fuzzy logic to reduce the complexity of the models (Ayad et al., 2018; D. Wu et al., 2016).

• Data-driven, i.e., mathematical models based on statistics, pattern recognition, artificial intelligence, or machine learning (ML) algorithms (Zonta et al., 2020).

Based on the literature review by Zonta et al. (2020), most PdM solutions focus on the data-driven classification of prediction. The range of methods used is not very large, and solutions that involve larger mathematical volumes typically require the use of ML algorithms or artificial intelligence to handle the computational complexity.

Many studies have been conducted in an attempt to monitor the life span of simple components. A common approach for equipment or process monitoring is control charts, where García-Escudero et al. (2011) utilized multivariate control charts for condition monitoring and fault detection in induction motors. By using a fast Fourier transform to extract features, they developed an efficient algorithm for fault detection. In recent years, machine learning approaches such as random forest feature selection (RFFS), support vector machines (SVM), and artificial neural networks (ANN) have seen increased use for equipment maintenance because of their predictive capabilities (Tian, 2012; Widodo & Yang, 2007; Yang et al., 2009). B. Wu et al. (2013) used ANN algorithms to optimize a CBM maintenance strategy by predicting the lifetime of pump bearings and calculating the maintenance cost per unit, determining whether to perform PvM or component replacement. Other researchers have used a health index to predict the health state of components. Kim et al. (2012) estimated the health state of pumps by predicting the RUL with the use of SVM. By using an age-dependent variable on a semi-Markov model, Peng and Dong (2011) assigned different health states (baseline, contamination 1, contamination 2, and failure) and used the model to predict the health state of a pump based on vibration signals. Similarly, Liu et al. (2015) utilized an adaptive semi-Markov model with four health states based on 36 sensor readings on a pump. They found the model effective after introducing dimension reductions, which considerably decreased the space and computation complexity.
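To make the multivariate control chart idea concrete, below is a hedged numpy/scipy sketch of a Hotelling's T2 check (the multivariate statistic listed in Figure 7). It is a generic illustration, not the method of García-Escudero et al. (2011); the reference data and the flagged observation are invented.

```python
import numpy as np
from scipy import stats

def hotelling_t2(reference: np.ndarray, new_obs: np.ndarray, alpha: float = 0.01):
    """Hotelling's T2 statistic for new observations against reference data.

    Returns the T2 value per observation and an upper control limit based on
    the F-distribution for individual future observations.
    """
    n, p = reference.shape
    mean = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    diff = new_obs - mean
    t2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # (x-m)' S^-1 (x-m) per row
    # UCL for future observations: scaled F(p, n-p) quantile
    f_crit = stats.f.ppf(1 - alpha, p, n - p)
    ucl = p * (n + 1) * (n - 1) / (n * (n - p)) * f_crit
    return t2, ucl

# Hypothetical usage: 100 in-control observations of 4 sensors
ref = np.random.default_rng(4).normal(size=(100, 4))
t2, ucl = hotelling_t2(ref, np.atleast_2d([0.2, -0.1, 5.0, 0.0]))
print(t2 > ucl)  # the far-out third coordinate should trigger an alarm
```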

As shown, many studies have been conducted to predict the health condition of components and have provided excellent results. On the other hand, most of these studies focus on simple components with fewer sensors, i.e. pumps or bearings on motors. This means that these approaches may not apply to the same extent in a high-tech industry such as the machining industry. Highly complex manufacturing processes, such as CVD, consist of hundreds of process steps that generate large amounts of real-time data from hundreds of sensors. Yoo et al. (2019) tackled this by proposing an algorithm that analyzes sensor data for equipment health condition monitoring in semiconductor manufacturing. They used clustering algorithms to group sensors with similar characteristics and extract key sensors correlated with the equipment's health condition. After identifying these sensors, they utilized a multiple regression model to predict the equipment's health condition with great results.
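As a hedged illustration of this general idea, and not the authors' actual algorithm, the following Python sketch groups correlated sensors with a clustering algorithm, selects one representative sensor per group, and fits a multiple regression model from the selected sensors to a synthetic health index. All data, dimensions, and names are assumptions for illustration:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_obs = 500

# Thirty synthetic sensors driven by five underlying process factors,
# so sensors within a group are strongly correlated with each other.
latent = rng.normal(size=(n_obs, 5))
mixing = np.repeat(np.eye(5), 6, axis=0)          # 6 sensors per factor
X = latent @ mixing.T + 0.1 * rng.normal(size=(n_obs, 30))

# A synthetic health index that depends on two of the underlying factors.
health = latent[:, 0] - 0.5 * latent[:, 2] + rng.normal(0.0, 0.1, n_obs)

# Cluster the sensors (not the observations) by their correlation profiles.
corr = np.corrcoef(X.T)                           # shape (30, 30)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(corr)

# Keep the first sensor of each cluster as a simple representative.
reps = [int(np.where(labels == k)[0][0]) for k in range(5)]

# Multiple regression from the representative sensors to the health index.
reg = LinearRegression().fit(X[:, reps], health)
print("selected sensors:", reps, "R^2:", round(reg.score(X[:, reps], health), 3))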

An important step in developing a robust data-driven model is selecting the monitoring signals that are correlated with system health. This becomes problematic in manufacturing processes with hundreds of sensors due to possible correlation between independent variables, which obscures which predictors are redundant with respect to others. This phenomenon is referred to as multicollinearity, which PCA can eliminate (C.-Y. Lee et al., 2019). PCA transforms the original variables into several orthogonal principal components (PCs), which reduces dimensionality. To combine the clarified benefits of PCA with regression modelling, Partial Least Squares (PLS) regression can be used to extract a small number of latent variables by projecting the independent and dependent variables into a new space simultaneously (Tenenhaus et al., 2005). PLS has become popular for fault diagnosis and predictive maintenance (G. Li et al., 2010; D. Wang et al., 2009; Zhang et al., 2009), largely because of its ability to tackle large numbers of highly correlated variables (Yin et al., 2014). An alternative to PLS regression that has seen use in the literature is the orthogonal partial least squares (OPLS) method (Souihi et al., 2015). In comparison to PLS, OPLS utilizes orthogonal signal correction, which removes the systematic variation in X that is orthogonal to Y. The study by Souihi et al. (2015) used OPLS to model a batch chemical hydrogenation process with great success; OPLS proved superior to both PLS and PCA in that particular study owing to its advantage in the preprocessing of data. This superior preprocessing can be efficient in situations where structured noise dominates, such as when modelling batches with similar characteristics.
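A minimal sketch of PLS regression on highly correlated predictors, assuming scikit-learn and synthetic data, may clarify why the method is attractive when multicollinearity would undermine ordinary least squares:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n = 300

# Fifty predictors generated from only two latent factors, so the columns
# of X are highly correlated (multicollinear).
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, 50)) + 0.05 * rng.normal(size=(n, 50))
y = latent[:, 0] - 0.5 * latent[:, 1] + rng.normal(0.0, 0.1, n)

# PLS projects X and y into a common low-dimensional latent space and
# regresses in that space, sidestepping the multicollinearity in X.
pls = PLSRegression(n_components=2)
pls.fit(X, y)
print("R^2:", round(pls.score(X, y), 3))

The number of latent components is a key design choice and would normally be selected by cross-validation.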

Figure 7 summarizes this chapter by providing an overview of applicable statistical methods to use when developing a predictive maintenance solution.

[Figure 7 content: machine learning approaches (Random Forest Feature Selection (RFFS), Support Vector Machine (SVM), Artificial Neural Network (ANN)); control charts (Hotelling's T²); statistical modeling (semi-Markov model, Principal Component Analysis (PCA), Partial Least Squares (PLS) regression); clustering algorithms; all grouped under applications of predictive maintenance.]

Figure 7: Some methods used for developing a predictive maintenance solution.


3.3 Batch statistical process control

Batch-wise manufacturing processes have seen application in several industries, such as biochemical, food, high-quality chemicals, or semiconductor (González-Martínez et al., 2011; Souihi et al., 2015; Wold et al., 1998). Developing monitoring schemes that can detect process failures and malfunctions, followed by pinpointing of root causes, can lead to meaningful improvements in product quality (Kourti, 2003). As a result, several techniques have been developed for online monitoring of batch processes, for example, multivariate batch modeling schemes based on PCA and PLS methods (González-Martínez et al., 2011). Generally, this way of working has been named batch statistical process control (BSPC), where "good" batches are used to model the trajectory of the process and to define the control limits, against which new batches are then predicted. Souihi et al. (2015) states that PCA and PLS have been the state-of-the-art methods used for BSPC.

Developing a monitoring system for BSPC can generally be achieved through three steps: modeling (model training), prediction (model testing), and online deployment (model execution). Model building is first done from a set of normal batches, with no deviations, to create a model based on "good" performing batches. During the modeling step, several steps may need to be performed, including normality tests, preprocessing, scaling, and arrangement of the three-way data matrix into a two-way matrix (Souihi et al., 2015). Normality tests are done to assure that variables follow a normal distribution, which is needed to utilize three-sigma control limits. Scaling of data, or preprocessing, is a general approach for optimizing data by decreasing the influence of factors that disturb the data in statistical models (Zhu et al., 2019). Arrangement of the three-way data matrix into a two-way matrix can generally be done through batch-wise unfolding (BWU) (Souihi et al., 2015) or observation-wise unfolding (OWU) (Wold et al., 1998). The fundamental difference between BWU and OWU is that with BWU the PCA or PLS model will find principal components that best explain the deviation among batches, while with OWU it will find principal components that best approximate the raw trajectory (Souihi et al., 2015). The structure of BWU and OWU is illustrated in Figure 8.


[Figure 8 content: a three-way batch data array (batch × time × variables) unfolded either batch-wise (BWU) or observation-wise (OWU).]

Figure 8: Arrangement of the three-way data matrix through either BWU or OWU (Jeong & Lee, 2020).
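A minimal numpy sketch, with an assumed array shape of (batches, time points, variables), shows how the two unfolding schemes in Figure 8 differ in practice:

import numpy as np

# Assumed three-way array: 20 batches, 100 time points, 8 process variables.
data = np.random.default_rng(3).normal(size=(20, 100, 8))

# Batch-wise unfolding (BWU): one row per batch; a PCA/PLS model on this
# matrix explains deviations among batches.
bwu = data.reshape(20, 100 * 8)            # shape (20, 800)

# Observation-wise unfolding (OWU): one row per time point of every batch;
# a model on this matrix approximates the raw process trajectory.
owu = data.reshape(20 * 100, 8)            # shape (2000, 8)

print(bwu.shape, owu.shape)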

It is of particular importance to recognize the difference between PCA and PLS for BSPC. PCA will maximize the variance of the projected data, while PLS maximizes the covariance between the independent variables and the dependent variable. Any PLS model could, however, be misleading, as there could be systematic variation in the independent data that is not related to the dependent variable under study (Tauler et al., 1995). This means that there is always a risk that PLS might catch systematic variation not correlated to batch time or batch maturity in BSPC (Souihi et al., 2015). A way to deal with this is to apply orthogonal projections to latent structures (OPLS) to OWU data matrices (Trygg & Wold, 2002). OPLS can differentiate between systematic variation that is correlated to the dependent variable and systematic variation that is not. This allows OPLS to be used for BSPC by reducing systematic variation (Silva et al., 2017; Souihi et al., 2015).
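The following hedged Python sketch illustrates the modeling and prediction steps of BSPC described above: a PCA model is fitted to batch-wise unfolded "good" batches, and new batches are flagged when their Hotelling's T² statistic exceeds an empirical control limit. The data, component count, and limit are illustrative assumptions, not a validated monitoring scheme:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

# Training data: 40 "good" batches after batch-wise unfolding (BWU).
good = rng.normal(size=(40, 800))

# Model training: fit PCA and compute Hotelling's T^2 for each batch.
pca = PCA(n_components=3).fit(good)
scores = pca.transform(good)
t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)

# Empirical 99th-percentile control limit from the training batches
# (an F-distribution based limit is the textbook alternative).
limit = np.percentile(t2, 99)

# Prediction: project a new, deliberately deviating batch and check T^2.
new_batch = 3.0 * rng.normal(size=(1, 800))
t2_new = np.sum(pca.transform(new_batch)**2 / pca.explained_variance_, axis=1)
print("out of control:", bool(t2_new[0] > limit))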

3.4 Degradation of machine components in surface treatment

Deviations that arise in surface treatment machines can generally be linked to the unpredictable nature of the process variables. The self-destructing nature of the process variables degrades the machine components over time, which creates a need for continuous monitoring of the process variables. Sensors continuously monitor the process variables in surface treatment processes to keep the process in balance. With the sensors, irregular changes and patterns of changes in the machine equipment can be observed (Yoo et al., 2019). However, in highly complex manufacturing processes, identifying the most impactful sensors can be challenging due to the vast number of sensors in the processes.


3.4.1 Influential variables in CVD

CVD is a manufacturing process that utilizes thermally induced chemical reactions to produce materials. The reagents, which are supplied in gaseous and liquid forms, generally require different temperatures, pressures, and flow rates to produce the new surface layer on the substrate. Yoo et al. (2019) proposes that temperature, flow rate, pressure, and the concentration of chemicals are all variables with a considerable impact on the CVD process.

Yarbrough and Messier (1990) mention that temperature, pressure, and gas composition are all active components of the region of parameter space that predicts the growth of surface layers in CVD processes. This is in line with observations by Choy (2003), who found that temperature, pressure, reactant gas concentration, and gas flow all require adequate control and monitoring. Choy (2003) further states that the gas flow in CVD reactors is affected by the following variables:

• Geometry of the reactor

• Temperature of the reactor

• Pressure of the reactor

• Temperature distribution of the reactor

• Flow rates of gas

• Density of gas

Crowell (2003) states that temperatures and pressures could affect the occurrence of clogging and oxidation in the CVD process. He further states that controllers for the flow and valves need to have temperatures between 180 °C and 200 °C to work correctly. Clogging often results in a reduction of flow in pipes. Clogging in the CVD process can, therefore, generally be described by Bernoulli's flow equation:

P_1 + \frac{1}{2}\rho v_1^2 + \rho g h_1 = P_2 + \frac{1}{2}\rho v_2^2 + \rho g h_2 \qquad (4)

where v is fluid velocity, ρ is fluid density, h is relative height, and P is pressure. Bernoulli's principle states that for an ideal fluid (with constant density, steady flow, and zero viscosity) the sum of its thermal, kinetic, and potential energy must not change (Johnson, 2016, p. 26). In the case of clogging, this means that the fluid's speed must increase as the pressure drops so that the sum of energy remains unchanged.

Figure 9 gives an example of this. The behavior may also be affected by other factors such as the location of sensors, the size of the pressure differences, the location of the clogging, the length of the pipes, and the time taken for opening.
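As a small numeric illustration of Equation (4), assuming a horizontal pipe (h_1 = h_2) and purely illustrative values, the velocity downstream of a restriction can be computed in Python as follows:

import math

# Assumed, purely illustrative values for a horizontal pipe (h1 = h2).
rho = 1.2                        # fluid density [kg/m^3]
p1, p2 = 101_325.0, 101_125.0    # pressures up- and downstream [Pa]
v1 = 5.0                         # upstream velocity [m/s]

# Rearranging Equation (4) with the height terms cancelled:
# p1 + 0.5*rho*v1^2 = p2 + 0.5*rho*v2^2  =>  v2 = sqrt(v1^2 + 2*(p1 - p2)/rho)
v2 = math.sqrt(v1**2 + 2.0 * (p1 - p2) / rho)
print(f"v2 = {v2:.1f} m/s")      # about 18.9 m/s for a 200 Pa pressure drop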
