
Bachelor Thesis/Degree Thesis

HALMSTAD UNIVERSITY

Computer Science and Engineering, 300 credits
Computer Engineering, 180 credits

Validation of a real-time automated production-monitoring system

Computer Science and Engineering & Computer Engineering, 15 credits

Halmstad 2021-06-06

David Dimovski, Johan Hammargren Andersson


Abstract

In today’s industry, companies are, to an increasing degree, beginning to embrace the concept of industry 4.0 [1]. One of these companies is Diab, which has a factory in Laholm where they manufacture composite material. Some of the machines at the factory are older, with outdated control systems, and need a way to log data in real time.

The goal of the project is to create a working prototype system that can monitor the production flow in real time by using sensors to collect data about the work efficiency of the machine, measure the idle time between work cycles, and store this data in a database accessible through a Graphical User Interface (GUI). The purpose is to investigate what is required to build and maintain a fully operational system, in order to judge whether the system should be developed by the company itself or bought/licensed from a third party.

The system was built using a NodeMCU ESP32, a Raspberry Pi 4B, and a SparkFun Distance Sensor Breakout VL53L1X. The NodeMCU ESP32 was programmed with the Arduino IDE, while Java was used to develop the server on the Raspberry Pi, together with MariaDB to store the data.

The tests that were conducted showed that the data could be displayed within a second in the created GUI, but the system could not guarantee a reading of every passing block; it did, however, give a good overview of the workflow of the machine. An improvement of the system using visual-based object detection, as in [2], is suggested. A real-time overview of the production opens future possibilities for optimising the production flow and, with an improved system, for increasing the automation of the production, which can bring the company closer to the concept of industry 4.0.

Sammanfattning

(Translated from Swedish.) In today’s industry, companies are, to an increasing degree, adapting to the concept of industry 4.0 [1]. One of these companies is Diab, which has a factory in Laholm where they manufacture composite material. Some of the machines at the factory are older, with outdated control systems, and need a way to log data in real time.

The goal of the project is to create a working prototype system that can monitor the production flow in real time by using sensors to collect data about the machine’s work efficiency, measure the idle time, and store this data in a database accessible through a Graphical User Interface (GUI). The purpose is to investigate what is required to develop and maintain a fully working system, in order to judge whether the system should be developed by the company itself or bought from a third party.

The system consists of a NodeMCU ESP32, a Raspberry Pi 4B, and a SparkFun Distance Sensor Breakout VL53L1X. The NodeMCU ESP32 was programmed with the Arduino IDE, Java was used to develop the server on the Raspberry Pi, and MariaDB was used as the database system.

The tests performed showed that the collected data could be displayed within a second in the created GUI, but a reading of every passing block could not be guaranteed. However, it gave a good overview of the machine’s workflow. An improvement of the system using visual-based object detection, as in [2], is suggested. A real-time overview of the production gives future possibilities for optimising the production flow and, with an improved system, for increasing the automation of the production, which can bring the company closer to the concept of industry 4.0.


Contents

1. Introduction ... 1

1.1. Purpose and Goal ... 1

1.2. Requirements ... 1

1.3. Limitations ... 2

2. Background ... 3

2.1. Object detection with object classification ... 3

2.2. Object detection without object classification ... 4

2.3. Activity Recognition ... 6

2.4. Predictive Maintenance ... 6

2.5. Processor ... 7

2.6. Graphical User Interface ... 7

2.6.1. Graphical User Interface using Java and JavaFX... 7

2.6.2. Graphical User Interface using Python and PySimpleGUI ... 9

2.6.3. Graphical User Interface using Javascript and Node.js ... 9

2.7. Data storage ... 9

2.8. System Communication ... 10

2.8.1. Wi-Fi ... 10

2.9. Sandaren (machine) ... 11

2.10. Hema (machine) ... 12

2.11. Related Work ... 13

3. Method ... 15

3.1. Specification of assignment ... 15

3.2. Hardware and software choices ... 15

3.1.1. Choice of Sensors... 15

3.1.2. Choice of computing-hardware ... 15

3.1.3. Choice of the Database type ... 15

3.1.4. Choice of Communication ... 16

3.1.5. Choice of GUI ... 16

3.2. Method description ... 16

3.2.1. Sensor & Microcontroller ... 17

3.2.2. Microcontroller & Server ... 18

3.2.3. Database ... 19

3.2.4. GUI ... 20

3.3. Testing... 21

4. Result ... 23

4.1. GUI ... 23


4.2. Block Registration ... 24

4.3. Collect data and show data in real-time ... 25

4.4. Memory storage in ESP32 if the connection is lost ... 26

5. Discussion ... 27

5.1. Method ... 27

5.2. Result ... 27

5.3. Goal & Requirement and Related Work ... 28

5.3.1. Economics ... 29

5.3.2. Safety ... 29

5.3.3. Integrity ... 29

6. Conclusion ... 31

7. References ... 32


1. Introduction

In today’s industry, companies are, to an increasing degree, beginning to embrace the concept of industry 4.0 [1]. Industry 4.0 refers to the fourth industrial revolution and describes the current and future development of industry, emphasising automation, interconnectivity, machine learning, and real-time data. To achieve automation within factories, object detection, activity recognition, and predictive maintenance play a considerable role. The goal is to increase efficiency, productivity, and flexibility, leading to an increase in profit.

One of these companies is Diab, a Swedish company that manufactures composite materials for various applications, such as inside wind turbines’ rotor blades. During the last couple of years, Diab has started to automate parts of their manufacturing using robots and automated trucks, replacing manual labour. One of their next steps is to enable the collection of real-time data from their production flow.

Diab’s manufacturing begins with processing the raw materials, which are chemicals that get mixed and heated to produce rectangular blocks of composite material. During the creation process, the blocks get an unwanted surface on all six sides; a machine cuts these away, and the blocks then pass through a manual operator’s quality control. The blocks then get cut height-wise to a specific thickness, depending on the application or customer. If there are minor defects in the cut blocks, they can run through another machine that grinds down the block’s thickness to remove the defect. Once the blocks have their required thickness, they can either be sold or processed into ready-made kits for customers.

Today, all the machines’ production is reported into a system manually by the machine operators. Diab aims to automate this process and allow real-time data collection to analyse the production flow, such as when a machine is working or idle and its production rate in real time.

1.1. Purpose and Goal

The goal of the project is to create a working prototype system that can monitor the production flow in real time by using sensors to collect data about the work efficiency of the machine and the downtime between work cycles, and to store this data in a database accessible by a GUI. The purpose is to investigate what is required to build a fully operational system and what it takes to maintain it, to get an idea of whether the system should be developed by the company itself or bought/licensed from a third party.

1.2. Requirements

The requirements of the system:

• Collect data in real-time by registering each passing block.

• Ensure that the system only registers a block and no other objects.

• Show data in real-time in a GUI by displaying the data before the next data reading.

• Avoid data loss if the connection to the server fails.


1.3. Limitations

The limitations of the system:

• The system is created to be implemented on Sandaren (see section 2.9).

• Protection of the hardware against environmental disturbances, such as dust, will not be handled.

• The system is limited to be sensor-based, using laser or ultrasonic sensors, without visual features.


2. Background

This section gives an overview of how the different parts of the developed system work, as seen in Figure 1, and the various alternatives.

Each subsection handles one part of the system seen in Figure 1 by proposing different technologies, in the form of what they are and how they work. Furthermore, this section also covers object detection, activity recognition, and the type of machine on which the system is implemented and tested.

Figure 1: System overview of the developed system

2.1. Object detection with object classification

The technical term object detection is often associated with the identification and classification of an object using sensors. A sensor is defined as a device used to detect changes in something or whether something is present [3]. There are many different sensors with different kinds of applications for detecting and monitoring objects.

Today’s state of the art uses cameras (image sensors) to allow a system to gain vision, much like humans, to collect data. An image sensor can detect and extract the information needed to create an image by converting light into electrical signals. The two most common types are the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS), used in most camera technologies today. Both CCD and CMOS use a 2D array of electrodes, where the main difference is how the information from each electrode is collected. For CCD, the electrodes’ charges are shifted horizontally to one single row, in which they can be electrically amplified and the data collected [4]. In contrast to CCD, a CMOS allows direct reading between an electrode and the output, since each electrode is connected to its own amplifier [5].

Once the image information is gathered from the sensor, it can then be processed to extract useful information for the system, such as detecting and classifying objects. The data collected from an image sensor is processed through machine/deep learning algorithms to identify and classify the objects within the visual field. The downside with this approach is that it requires a lot of computing power, which can be a problem when fast responses based on the data are needed.


For these situations, Light Detection And Ranging (LiDAR) can be used instead. A LiDAR is an optical instrument that behaves like a radar but uses light instead of radio waves. The way it works is shown in Figure 2: light is emitted in the facing direction, and attributes of the reflected light are interpreted to calculate the distance to objects.

Figure 2: The light source of the LiDAR is reflected to the photoelectric sensor to determine the distance between the LiDAR and the object.

In the transition to Industry 4.0 within factories, LiDAR can be used on autonomously navigating robots to help detect objects and obstacles [6]. The data points collected from a LiDAR can be processed using a Point Cloud [7] to gain a 2D/3D representation of the collected data. A Point Cloud is a set of data containing points representing 2D or 3D space. To know where in space each point belongs, each point holds its x, y, and z values. More attributes can be given to each point, like RGB values. A Point Cloud can be rendered directly or converted into polygon models using various methods, like polygon meshes or triangle meshes. Different approaches to detecting boundaries and reconstructing edges are discussed in [8], where the authors propose a novel method for boundary detection and edge reconstruction for Point Clouds. Using a LiDAR and a Point Cloud allows for a much faster response, since barely any calculations are required. However, without further processing of the gathered data, using the same kind of algorithms required to process the information captured by a camera, the type of object cannot be determined. One reason to classify the detected object is that this information can be used when determining the future actions of robots.
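As a small illustration of the Point Cloud representation described above, each point is just a coordinate triple with optional extra attributes such as an RGB value. The sketch below is our own and does not come from any particular Point Cloud library; the names Point and distance are invented for illustration:

```java
public class PointCloudSketch {
    // One point in 3D space; the RGB value is an optional extra attribute.
    record Point(double x, double y, double z, int rgb) {}

    // Euclidean distance between two points in the cloud.
    static double distance(Point a, Point b) {
        double dx = a.x() - b.x(), dy = a.y() - b.y(), dz = a.z() - b.z();
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static void main(String[] args) {
        Point p = new Point(0, 0, 0, 0xFFFFFF);
        Point q = new Point(3, 4, 0, 0xFF0000);
        System.out.println(distance(p, q)); // prints 5.0
    }
}
```

A real Point Cloud is simply a large collection of such points, which rendering methods like polygon meshes then connect into surfaces.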

2.2. Object detection without object classification

In section 2.1, it is mentioned how LiDAR and image sensors can detect and classify objects. If one would like to detect an object without classifying it, both LiDAR and image sensors could be used, but the cost of these technologies would not be justified. Instead, one can use a more straightforward sensor, such as a laser or ultrasonic sensor.

A photoelectric sensor (laser sensor) uses a light transmitter to detect an object’s absence, presence, or distance. There are mainly three different types used for this purpose: through-beam, reflective, and diffuse.

Through-beam – This type uses a transmitter and a receiver. The transmitter emits a beam of light at the receiver. If anything blocks the entire beam of light from reaching the receiver, an input is registered. The through-beam sensor requires a two-point installation. Because the input registration occurs when the beam breaks and does not rely on a reflection, as shown in Figure 3, this type of sensor can cover an extended range [9]. The main difference between a LiDAR and a through-beam sensor is that the latter has its receiver facing towards the light source rather than next to it. The through-beam sensor also does not rely on the light being reflected, as the LiDAR does.


Figure 3: In the left image, the object does not trigger an input. In the right image, the object triggers an input.

Reflective – This type has the transmitter and receiver next to each other and uses a reflective surface to reflect the transmitter’s light back to the receiver. If the beam of light fails to get back to the receiver, an input is registered, as shown in Figure 4. Like the through-beam sensor, the reflective sensor is reliable over an extended range and requires a two-point installation. This type can have problems with objects that reflect light well [10-11].

Figure 4: In the left image, the object does not trigger an input. In the right image, the object triggers an input.

Diffuse – Also called proximity-sensing. This type emits light continuously; when an object enters its field of vision, the light is reflected back to the receiver and an input is registered, as shown in Figure 5. The benefit of this type of laser sensor is that it only requires a one-point installation and often costs less than the other types. The downside is that it can only detect objects within its range, making the correct placement of this sensor more crucial [10, 12].

Figure 5: In the left image, the object is within range and reflects the light, which triggers an input, whereas in the right image, the object is out of range and cannot be detected.

An ultrasonic sensor is used to detect objects and measure distance. These sensors are used in applications ranging from reading fluid/gas levels within chemical processes [13] and cars [14] to counting objects on a production band. The ultrasonic sensor, as the name suggests, takes advantage of ultrasound, which operates in the 20 kHz to 1 GHz range, well above an adult human’s upper hearing limit. An ultrasonic sensor emits a sound wave in the direction it is facing. If the wave hits an object, the reflection is registered by the receiver, as shown in Figure 6. This allows the sensor to detect most dense objects of different sizes and colours, but not foam-like objects that absorb sound.

Figure 6: An Ultrasonic sensor detects an object (green wave) and registers input from the reflection (purple wave).

One can calculate the distance to the object with the equation L = (1/2) × T × C, where L is the distance in metres, T is the time in seconds between emission and reception, and C is the speed of sound in air.
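The equation above can be sketched in a few lines of Java (the language used for the server in this project). The method name and the sample round-trip time are our own illustrative choices:

```java
public class UltrasonicDistance {
    // Speed of sound in air at roughly 20 °C, in metres per second.
    static final double SPEED_OF_SOUND = 343.0;

    // L = (1/2) × T × C: the wave travels to the object and back,
    // so only half the round-trip time contributes to the distance.
    static double distanceMetres(double roundTripSeconds) {
        return 0.5 * roundTripSeconds * SPEED_OF_SOUND;
    }

    public static void main(String[] args) {
        // A 10 ms round trip corresponds to an object about 1.7 m away.
        System.out.println(distanceMetres(0.010));
    }
}
```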

2.3. Activity Recognition

Activity recognition aims to detect and recognise different actions and activities performed by one or many entities. Commonly activity recognition is used to track human activity but can also be used in different fields. Today, activity recognition is used in a wide variety of areas such as security and surveillance, autonomous driving, human-robot interactions, and health care [15].

Data collection in activity recognition can be classified into two main approaches [16]: vision-based or sensor-based systems. A vision-based system collects data by using visual systems, e.g. cameras or closed-circuit television, and relies on different computer vision techniques to process the pictures and videos into data used for recognition. In contrast, a sensor-based system relies on various sensors such as wearable sensors, accelerometers, gyroscopes, and global positioning systems for localisation [16].

Different approaches can be used to process and interpret the data to determine what activity is or was being performed, such as machine learning algorithms that predict patterns, like Bayesian networks, Markov models, and deep learning [16].

2.4. Predictive Maintenance

Predictive maintenance aims to reduce overall downtime and fully utilise the machine by replacing parts only close to when they are about to break, thus avoiding the unnecessary cost of replacing something that can still function. This is in contrast to reactive maintenance, where parts are replaced only when they break, and preventive maintenance, where parts are replaced at fixed intervals to prevent downtime even though they may still have Remaining Useful Life [17], which leads to increased losses from replacing parts that have not been fully utilised.

This is done by trying to predict when a failure is going to occur by analysing incoming data from sensors located at the machine. There are different techniques for processing the sensor data and predicting when a failure is about to occur, such as physical model-based, knowledge-based, and data-driven [17] approaches.
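As a toy illustration of the data-driven approach, a part could be flagged for replacement when a rolling average of a sensor reading drifts past a threshold. This is our own simplification, not a method taken from [17]; the window size, limit, and vibration values are all hypothetical:

```java
public class MaintenanceSketch {
    // True when the mean of the last `window` readings exceeds the limit,
    // suggesting the part should be replaced soon rather than run to failure.
    static boolean needsMaintenance(double[] readings, int window, double limit) {
        if (readings.length < window) return false;
        double sum = 0;
        for (int i = readings.length - window; i < readings.length; i++) {
            sum += readings[i];
        }
        return sum / window > limit;
    }

    public static void main(String[] args) {
        // Hypothetical vibration readings that drift upwards over time.
        double[] vibration = {0.2, 0.2, 0.3, 0.8, 0.9, 1.1};
        System.out.println(needsMaintenance(vibration, 3, 0.7)); // prints true
    }
}
```

Real data-driven techniques replace this fixed threshold with a model learned from historical failure data.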


2.5. Processor

There are two main parts of the system that require processing power: the server, containing the database and GUI, and the sensors, whose signals need to be handled and sent to the database. The two applications are different and therefore have different requirements in processing power and peripherals. A single-board computer (SBC) or a microcontroller are two common types of board that can handle one or both tasks.

A microcontroller is a single integrated circuit dedicated to performing a specific task. It contains a central processing unit (CPU) with integrated memory and peripheral devices. Most microcontrollers are designed to be small, to fit into embedded systems, and to consume little energy, which means a microcontroller has less processing power than other boards, such as SBCs. As fewer external components are needed, microcontrollers tend to be widely used in embedded systems.

Common development boards that utilise microcontrollers are the Arduino [18] and ESP32 development boards; these have different peripherals and support for additional parts, like Wi-Fi and analogue-to-digital converters to receive and handle signals from sensors. They also support many interconnection standards, like I2C [19] and SPI [20].

An SBC is a complete computer containing a microprocessor, memory, and I/O built on a single board, and it usually runs an operating system, making it better suited for multitasking as an OS generally supports multi-threading. SBCs also often contain other peripherals, such as Wi-Fi, Bluetooth, and general-purpose input/output (GPIO), to further increase connectivity and offer a way to develop smaller systems. SBCs provide more processing power than microcontrollers. The most popular one is the Raspberry Pi, currently in version 4.

2.6. Graphical User Interface

A GUI is used to enable a good user experience while dealing with data collection or manipulation. With today’s computers, this is something that the user expects, and it has become the standard way of presenting a computer program. Compared to a text-based user interface, where commands are written to navigate, a GUI allows the user to interact with the program using more conventional means such as pointing devices and symbols [21-22].

When building a GUI application, different kinds of programming languages are available depending on the targeted platform. One can choose a platform-independent language or a language compatible to run on multiple platforms.

The upcoming subsections describe how to create GUIs using some popular platform-independent programming languages: Java [23], Python [24], and JavaScript [25]. These three languages are easy to learn and widely accessible.

2.6.1. Graphical User Interface using Java and JavaFX

Java is a platform-independent object-oriented programming language able to create desktop and web applications. When making a GUI, the Java Development Kit (JDK) offers different built-in toolkits, such as its original API for creating GUIs, the Abstract Window Toolkit (AWT) [26]. AWT allows users to create windows, buttons, and other useful widgets to develop GUIs. The problem with AWT is that it uses platform-specific code for the graphics, which means an application created for a Windows computer would look different on a Macintosh [26]. The Swing library [27], which also comes with the JDK, avoids this. Swing is based on AWT and allows the user to create applications that look almost the same independently of the operating system [27].

In 2008, Sun Microsystems released JavaFX, which can be considered the Swing library’s successor. JavaFX is now an open-source graphics library based on Java that enables the programmer to create desktop, web, and mobile applications [28]. Just like AWT and Swing, JavaFX comes with the JDK.

The architecture of JavaFX is shown in Figure 7.

Figure 7: The relation between different layers of the JavaFX Architecture

At the top layer, the Scene Graph is located. It is responsible for the relations between the components of the application and uses a tree structure. The starting point of the graph is called the root node, which has no parent. The root node can have children in the form of leaf nodes or branch nodes. A leaf node is an object that cannot have child nodes, while a branch node can. In Figure 8, an example would be a Group type acting as the root node, which can hold a list of leaves and branches. This Group can then have a Circle as a leaf and a Region type as a branch. The Circle type cannot have any children, while a Region is the base class for most of the UI controls and can, for example, contain children like a Text type or an ImageView [29].

Figure 8: The structure of the elements of a JavaFX application
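The leaf/branch structure described above can be modelled with a small tree sketch. The code below is plain Java, not the actual JavaFX API: the classes Branch and Leaf are invented here to mirror how a Group or Region (branch) may hold children while a Circle, Text, or ImageView (leaf) may not.

```java
import java.util.ArrayList;
import java.util.List;

public class SceneGraphSketch {
    // A branch node (like Group or Region) may hold child nodes.
    static class Branch {
        final String name;
        final List<Object> children = new ArrayList<>();
        Branch(String name) { this.name = name; }
        void add(Object child) { children.add(child); }
    }

    // A leaf node (like Circle, Text, or ImageView) has no children.
    static class Leaf {
        final String name;
        Leaf(String name) { this.name = name; }
    }

    // Count every node reachable from the given node.
    static int countNodes(Object node) {
        if (node instanceof Leaf) return 1;
        int sum = 1;
        for (Object child : ((Branch) node).children) sum += countNodes(child);
        return sum;
    }

    // Build the example graph from Figure 8: a Group root with a
    // Circle leaf and a Region branch holding a Text and an ImageView.
    static Branch exampleGraph() {
        Branch root = new Branch("Group");
        root.add(new Leaf("Circle"));
        Branch region = new Branch("Region");
        region.add(new Leaf("Text"));
        region.add(new Leaf("ImageView"));
        root.add(region);
        return root;
    }

    public static void main(String[] args) {
        System.out.println(countNodes(exampleGraph())); // prints 5
    }
}
```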

The third layer of Figure 7 describes the different types of graphics libraries JavaFX uses to handle 2D and 3D rendering and web and media implementation. Prism, a hardware-accelerated graphics pipeline [30], takes advantage of the operating system’s graphics API to manage the graphics rendering for the first layer. For Windows, that means DirectX [31]; for Linux, Macintosh, and embedded systems, OpenGL [32] is used. To provide native operating services, the Glass Windowing Toolkit (GWT) [30] is responsible for handling surfaces, timers, and windows. GWT is the part of the JavaFX stack that serves as a bridge between the native operating system and the JavaFX platform.

Quantum Toolkit, which can be found in the second layer of Figure 7, is responsible for tying Prism and GWT together and making them available to the JavaFX API. It also takes care of the threading rules and prioritises rendering and event handling [30, 33].

2.6.2. Graphical User Interface using Python and PySimpleGUI

Like Java, Python is a platform-independent programming language used to create both desktop and web applications. Unlike Java, Python supports several programming paradigms, such as object-oriented, functional, procedural, and imperative programming [24]. The Python language offers many packages for creating GUIs for both the web and desktop. One of these packages is PySimpleGUI, which allows for the creation of windows using multiple frameworks [34]. The frameworks supported by PySimpleGUI are Tkinter, Qt, WxPython (a cross-platform GUI toolkit), and Remi (an HTML renderer).

2.6.3. Graphical User Interface using Javascript and Node.js

In contrast to Python and Java, JavaScript is a client-side scripting language, which means that JavaScript executes on the client side instead of the server side. JavaScript code is executed within web browsers, embedded in HTML [25]. Node.js, an open-source, cross-platform JavaScript runtime environment, makes it possible to run JavaScript code outside web browsers [35].

Taking advantage of Node.js, one can build a GUI application with JavaScript. One library for this is NodeGui, which allows for creating native cross-platform desktop applications [36].

2.7. Data storage

The data the sensors collect needs to be stored. Two types of database structures are considered: a Not Only SQL (NoSQL) database and a Relational Database that uses the Structured Query Language (SQL). A Relational Database is based on the relational model for storing digital data. The most common software system to handle a Relational Database is a Relational Database Management System (RDBMS).

The RDBMS divides the data into multiple normalised tables, also known as relations. A table consists of rows and columns.

Figure 9: Shows a table/relation (ANIMAL) with five attributes(columns) and three rows.

Each column corresponds to a unique attribute, whereas each row of the relation has one value per attribute, as shown in Figure 9. Defining a database containing relations requires a language. The most used one for a relational database is SQL, which consists of a Data Definition Language (DDL) [37] and a Data Manipulation Language (DML) [37]. DDL is responsible for controlling and defining the structure of the database: a table can be created via the CREATE TABLE command and deleted by the DROP command. These are only a few among many commands to control the database. After creating a table, DML commands like INSERT, DELETE, and UPDATE can manipulate the stored data [37]. A few examples of when to consider using an RDBMS are when the data is of constant volume, when the data can be predicted, if there are relations between the data, or when there are plans on using complex queries [38]. A few standard RDBMS software packages free to use are MariaDB, PostgreSQL, and SQLite. The latter is an embedded type of database that coexists within the application using it rather than running as a separate process. Another notable difference between SQLite and the other two is that it can run without any network connection, serving as both client and server; this makes it more convenient for developers, especially when creating smaller databases [39]. MariaDB and Oracle Database are both widely used within small as well as big companies. From an economic perspective, a vital difference is that MariaDB is under the GNU licence, making it accessible even for commercial use, unlike Oracle Database [40].
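The DDL and DML commands mentioned above can be illustrated with a short SQL sketch. The column names below are hypothetical and only echo the idea of the ANIMAL relation in Figure 9, whose actual attributes are not reproduced here:

```sql
-- DDL: define the structure (five hypothetical columns, as in Figure 9).
CREATE TABLE ANIMAL (
    id      INT PRIMARY KEY,
    name    VARCHAR(30),
    species VARCHAR(30),
    weight  DECIMAL(6,2),
    age     INT
);

-- DML: manipulate the stored rows.
INSERT INTO ANIMAL (id, name, species, weight, age)
VALUES (1, 'Ludde', 'Dog', 12.50, 4);

UPDATE ANIMAL SET age = 5 WHERE id = 1;
DELETE FROM ANIMAL WHERE id = 1;

-- DDL again: remove the whole table.
DROP TABLE ANIMAL;
```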

In contrast to an RDBMS, storing data in a NoSQL database is done by one of four different methods. Key-Value Store is one of them, where the data is represented as a collection of key-value pairs. The second method is called Graph Store and stores the data in graph structures of different kinds. Document Store is the third approach and uses a JSON-based format to store data hierarchically. Lastly, the Wide-Column Store method uses nested key-value pairs to store related data within a single column. One downside of using NoSQL is that the syntax for controlling and manipulating the database can differ depending on what software is used. NoSQL databases are often used when the collected data is dynamic and changes a lot, can be expressed without relationships, and is easy to retrieve without complex queries [38, 41].
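The first NoSQL method above, the key-value store, can be mimicked in a few lines. This sketch uses a plain Java map and is only meant to show the idea of keys mapping to arbitrary values without a fixed schema; it is not a real NoSQL engine, and the key format is invented:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyValueSketch {
    // The entire "database": a flat mapping from key strings to values.
    private final Map<String, String> store = new HashMap<>();

    void put(String key, String value) { store.put(key, value); }
    String get(String key) { return store.get(key); }

    public static void main(String[] args) {
        KeyValueSketch db = new KeyValueSketch();
        // Dynamic data without a fixed schema: each key maps to one value.
        db.put("machine:1:status", "working");
        db.put("machine:1:lastBlock", "2021-05-17T10:32:00");
        System.out.println(db.get("machine:1:status")); // prints working
    }
}
```

Retrieval is a single lookup by key, which is why this model suits data that is easy to fetch without complex queries.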

2.8. System Communication

There are different ways to let devices communicate, either physically or wirelessly, depending on the system’s structure. A system with its various parts close to each other may benefit from physical wiring, whereas a system spread out over a larger area can benefit from wireless communication if the infrastructure allows it. As the server will be in one place, and the sensors with their respective microcontrollers can be placed on different machines at various locations, wireless communication, such as Wi-Fi, is considered.

2.8.1. Wi-Fi

Wi-Fi is a trademark name owned by the Wi-Fi Alliance for a series of wireless network protocols based on the IEEE 802.11 standards [42]. It utilises unlicensed frequencies in the ISM band for communication, the most common being the 2.4 GHz and 5 GHz frequency bands. Wi-Fi provides communication between devices through wireless local area networks (WLAN) and can also offer devices connected to Wi-Fi networks internet access.

WLANs utilising Wi-Fi standards have two basic modes, infrastructure mode or ad hoc mode. In the infrastructure mode, see Figure 10, devices are connected to a wireless access point (WAP), forming a network through which they can communicate.

Figure 10: Infrastructure mode WLAN


Several WAPs can, by using a wireless distribution system (WDS), connect and widen the network range, allowing devices to connect to the network further away.

In ad hoc mode, one device, the network owner, acts as an access point and allows the other devices connected to the network to communicate peer-to-peer.

2.9. Sandaren (machine)

This section covers the specific machine on which the system was implemented. Sandaren is a machine that grinds each block to a depth set by the machine operator. The machine operator starts by placing a pallet of blocks in position before activating the system. When the system is activated, each block is placed on the production band, shown in the left image of Figure 11. The block goes through the machine onto a pallet, as shown in the right image of Figure 11.

Figure 11: Left image shows the front side of Sandaren, and the image to the right shows the backside.

Depending on the length of the block, each grinding session lasts around 3 to 20 seconds.

When the entire pallet of blocks has been ground to the desired depth, the machine operator has to remove the pallet of finished blocks and then repeat the session. This describes how an ideal session occurs. An example of a non-ideal session is when the operator has to intervene while a block is being fed into the machine, interfering with the expected process pattern. When the machine can operate without the operator having to interfere, a good pattern is formed; see Figure 12, where the blocks are registered at close to constant time intervals.


Figure 12: A good pattern for a pallet containing 8 blocks.

If the operator has to interfere with the process, an irregularity in the pattern can occur, as shown in Figure 13, where the last block does not follow the interval at which the rest have been produced. The longer the gap between the blocks, the longer the machine has been at a standstill. Besides these two types of behaviour, an exception can occur when the operator has to redo single blocks that need further grinding, which could mean a single block gets registered several times.

Figure 13: Irregularity in the pattern of a pallet of 9 blocks.
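The distinction between a good pattern (Figure 12) and an irregular one (Figure 13) can also be expressed programmatically. The sketch below is purely illustrative and is not part of the thesis system; it flags a gap as irregular when it is much longer than the typical (median) interval between registered blocks:

```java
import java.util.Arrays;

// Illustrative check: flag gaps between block registrations that are much longer
// than the typical (median) interval, indicating a standstill or operator intervention.
public class PatternCheck {

    /** Returns the indices of gaps that exceed factor times the median gap. */
    public static int[] irregularGaps(long[] timestampsSec, double factor) {
        long[] gaps = new long[timestampsSec.length - 1];
        for (int i = 1; i < timestampsSec.length; i++) {
            gaps[i - 1] = timestampsSec[i] - timestampsSec[i - 1];
        }
        long[] sorted = gaps.clone();
        Arrays.sort(sorted);
        long median = sorted[sorted.length / 2];
        return java.util.stream.IntStream.range(0, gaps.length)
                .filter(i -> gaps[i] > factor * median)
                .toArray();
    }
}
```

With registrations at 0, 14, 28, 42 and 120 seconds and a factor of 3, only the last gap (78 s against a 14 s median) would be flagged.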

The machine has no control system that saves data, nor is it connected to the company’s business system, which makes it the ideal machine to implement and test the system on.

2.10. Hema (machine)

This section covers a specific machine that is part of a production line controlled by a common control system, and on which the monitoring system was initially supposed to be implemented.

The Hema machines are controlled by a control system that contains a database of the logged data from all the sensors on the machines. The entire system is fully automated.

One part of the production line consists of two subparts, a moving production table and a splitting saw machine.

Figure 14: The table where the brick of composite material is located during a “cutting-session”.

The composite material is placed on the table, shown in Figure 14, during the placement phase. When the material has been correctly positioned on the table and is ready to be cut, the cutting phase starts.

Each cycle of this phase, one cut to the block, consists of three stages: the table moves towards the saw, the saw cuts the block, and the table moves back into the starting position. The cutting phase lasts until the correct number of cycles has been completed. The number of cycles is determined beforehand by the machine’s control system.


Each cycle of the cutting phase is responsible for cutting the block into a pre-determined size. The splitting-saw is shown in Figure 15.

Figure 15: The saw of the machine is shown within the red marking.

The splitting-saw is positioned so that the minimum distance between it and the product table is 20 mm, which means the maximum cut of a block can leave 20 mm plus the saw-blade thickness as a remainder.

2.11. Related Work

In 2017, Shih-Hsiung Lee and Chu-Sing Yang did work similar to this thesis [2]. Their system was built to recognise and count objects in real-time within a fast-speed environment. For this, their system used an NVidia Tegra TX1 platform with 256 CUDA GPU cores and a quad-core ARM Cortex-A57 processor, together with a Basler USB 3.0 industrial camera, to simulate a smart industrial camera.

Using a GPU, they could process the massive amount of data needed to differentiate between the objects in their tests. Due to the number of objects that needed to be detected in real-time, they solved the problem with a smart camera using speeded-up robust features (SURF), a local feature detector and descriptor. The test consisted of a small controlled environment where two model cars were driving around a model track. With this setup, they managed to detect and recognise the model cars on each lap around the track with a rate of 100%.

In [43], they simulate a garment industry’s production line by measuring the time it takes for each machine to finish its task. The data is collected by measuring the workers’ power consumption and then running it through an algorithm to extract accurate task times. Doing this allows input to a discrete-event simulator to gain overall productivity, individual productivity, and other valuable information. Like the data collected in this thesis, their collected data can help optimise the production flow.

In [6], the authors discuss the robot automation that comes with Industry 4.0 and the concept of dynamic object detection using a stereo camera and LiDAR, each supplementing the other where one method struggles. When testing the camera and LiDAR separately, the accuracy in detecting an object favoured the LiDAR. The experiment testing whether a fusion of both detection methods would improve the result was positive, as the camera kept track of the object in situations where the LiDAR could not, such as when being too close to a target.


3. Method

The method section describes how the system is constructed and how each part of its functionality is implemented. It covers which microcontroller, sensors, communication system, programming languages, GUI libraries and database type were used, the motivation behind them, and the different tests conducted to verify them.

The project was divided into three phases according to the LIPS model [44], where the first phase covers defining the project’s purpose and goal, researching each part of the system for the background, creating a time plan, and similar work. Based on the background research, different solutions for each part of the system were compared, and a suitable solution for each part was chosen.

The students supplied the hardware, i.e. the sensors and SBC required to set up the system. Computer hardware to develop the GUI and database was also provided by the students, and the software used for programming the different parts is open source.

3.1. Specification of assignment

The system was divided into four parts, each given a specific time frame in which it would be completed and tested, to enable a clear work structure. The specification of the first subsystem was issued by the students together with the company; this included which machine to implement the system on and what data to collect. The rest of the system was specified by the students themselves in consultation with the company.

3.2. Hardware and software choices

3.1.1. Choice of Sensors

It was considered that the system should be independent of the factory systems and that the sensors’ placement must not affect the block and the machines in any way. The choice of sensors was narrowed down to a SparkFun Distance Sensor Breakout, VL53L1X, which contains a LiDAR sensor, due to its ability to measure the change in distance between the blocks and the board. However, due to a late change in which machine the company allowed us to implement the system on, the sensor used was not necessarily optimal for the task, since now only the presence of the blocks needs to be registered by the sensor, which can be achieved by most of the sensors covered in section 2.2.

3.1.2. Choice of computing-hardware

The choice was either to use microcontrollers or SBCs that offer some GPIO and fit into the machines’ environment. For the server, a Raspberry Pi 4B was chosen: the Raspberry Pi is a very common SBC with extensive documentation available, and the company already has technical competence with the Raspberry Pi. A breakout board containing an ESP32 microcontroller was chosen to control the sensor, as the sensor part did not need an operating system, and this microcontroller comes with support for both Wi-Fi and Bluetooth communication.

3.1.3. Choice of the Database type

For this project, the amount of data to collect was small; therefore, one can argue that any type of database could be used. Nevertheless, it is worth considering a future where relations between the data collected from different machines might be a reality and structured data with relations becomes important. For this reason, the database type chosen for this system is an RDBMS with SQL as the language. When choosing the RDBMS software, all three alternatives, SQLite, MariaDB and Oracle Database, could be used. For this system, where a server hosting the database is going to be implemented, it makes more sense to use MariaDB or Oracle Database, and the choice between the two comes down to the fact that MariaDB is under the GNU license, i.e. free to use without fee, which is required since the system might be used for commercial purposes.

3.1.4. Choice of Communication

The choice of communication protocol landed on Wi-Fi because the factory already has a Wi-Fi network covering the entire factory. Using Wi-Fi makes it possible to connect sensors and their corresponding microcontrollers from different parts of the factory wirelessly.

3.1.5. Choice of GUI

Java was chosen as the programming language because JavaFX was selected for the creation of the GUI. All three languages and their respective GUI libraries could have been used for this project due to the GUI’s simplicity; all of them offer simple ways to implement a GUI as a platform-independent application. The reason behind choosing JavaFX is that it offers more detailed documentation than PyCharmGUI and NodeGUI, which can be crucial for further and more complex implementations of the GUI.

3.2. Method description

As explained in section 3.2, the final choices of hardware and software for the system were a NodeMCU ESP32, a Raspberry Pi 4B and a SparkFun Distance Sensor Breakout VL53L1X, which can be seen in Figure 16. The Arduino IDE was used as the software to program the NodeMCU ESP32; Java was used to develop the server on the Raspberry Pi and, together with MariaDB, to store the data. To develop the GUI client, Java was used together with the JavaFX library. This section describes how all the parts of the system were integrated.

Figure 16: Picture of the system components: ESP32 bottom left, distance sensor top left and Raspberry Pi right.

3.2.1. Sensor & Microcontroller

The LiDAR sensor and the ESP32 are wired together and communicate via I2C. To keep out most of the dust from the factory, they are contained within a 3D-printed box. The language used for programming the ESP32 was C++ with the Arduino IDE. The box containing the ESP32 and the LiDAR sensor was placed under the production band of the machine (see section 2.9) to register a block passing over it by a specific change in distance.

In order to determine when a block has passed the sensor, the algorithm shown in Figure 18 was used and run on the ESP32.

Figure 17: Flow chart of ESP32 workings

The flow chart in Figure 17 describes what the program structure of the ESP32 looks like; it is based on polling. When the ESP32 is turned on, it tries to connect to the specified Wi-Fi before connecting to the Raspberry Pi server. When the connection has been established, it tries to update its clock to the clock of the Raspberry Pi. If the ESP32 went offline during measurement, the collected data is stored and then sent after the clock update. The clock in the ESP32 is a real-time clock (RTC) that updates its time from an NTP server, and the time is synced at a given interval of 3 hours, as the RTC in the ESP32 is somewhat inaccurate. After these steps, the ESP32 starts reading the state of the sensor and registers blocks according to the algorithm shown below:

Figure 18: Algorithm to detect block

This algorithm is made to fit the machine described in 2.9. It starts by reading the state of the sensor until the set minimum height is registered, which means that a block has covered the sensor. To ensure that a block is correctly detected, the sensor is required to be blocked for a minimum period of time (see 2.9). If that period has been met and the distance has significantly increased again, the ESP32 logs a timestamp, which is then sent to the server along with the total time it took to register the block (to help detect faulty block registrations); these actions are repeated.
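As a rough illustration of this detection logic (the actual implementation runs as C++ on the ESP32; the threshold and minimum covered time below are assumed placeholder values, not the ones used on the machine):

```java
// Sketch of the block-detection state machine, expressed in Java for illustration.
// COVER_THRESHOLD_MM and MIN_COVER_MS are assumed placeholder values.
public class BlockDetector {
    private static final int COVER_THRESHOLD_MM = 100; // distance below this => sensor covered
    private static final long MIN_COVER_MS = 1000;     // minimum covered time to count a block

    private boolean covered = false;
    private long coverStartMs = 0;

    /**
     * Feed one distance reading. Returns the total registration time in milliseconds
     * when a block has just passed, or -1 while no complete block has been detected.
     */
    public long onReading(int distanceMm, long nowMs) {
        if (!covered && distanceMm < COVER_THRESHOLD_MM) {
            covered = true;          // a block has moved in above the sensor
            coverStartMs = nowMs;
        } else if (covered && distanceMm >= COVER_THRESHOLD_MM) {
            covered = false;         // distance increased again: the block has passed
            long duration = nowMs - coverStartMs;
            if (duration >= MIN_COVER_MS) {
                return duration;     // caller timestamps this and sends it to the server
            }
            // too short to be a real block: treated as a faulty reading and discarded
        }
        return -1;
    }
}
```

A covered reading followed 5 seconds later by an uncovered one is reported as one block with a 5000 ms registration time, while a 200 ms dip is discarded.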

3.2.2. Microcontroller & Server

The ESP32 connects to the Raspberry Pi, the server, via Wi-Fi and uses the Transmission Control Protocol (TCP) to send packets, giving reliable communication. The programming language used to write the server was Java. The server is implemented according to the flowchart shown in Figure 19.

The server is based on a multi-threaded server-client construction. The server thread handles new client connections, in this case from the ESP32. When a new device connects to the server, it identifies itself, and the server creates a new worker thread that handles the sending and receiving of data. A new thread is created for every device and lives until the connection is lost, i.e. the device closes its socket. When the connection is closed, the server closes the thread.


Figure 19: Server structure
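This construction can be sketched as follows (the port number and message format are assumptions for illustration; the real server also identifies devices and writes the data to the database):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the multi-threaded accept loop: the main thread accepts connections
// and hands each connected device (ESP32) its own worker thread.
public class SensorServer {

    public static void serve(ServerSocket server) throws Exception {
        while (true) {
            Socket device = server.accept();          // blocks until an ESP32 connects
            new Thread(() -> handle(device)).start(); // one worker thread per device
        }
    }

    private static void handle(Socket device) {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(device.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // assumed line format: "<timestamp>,<registration time in ms>";
                // the real server stores this in the MariaDB database
                System.out.println("received: " + line);
            }
        } catch (Exception ignored) {
            // reading fails when the device closes its socket; the thread then ends
        }
    }

    public static void main(String[] args) throws Exception {
        serve(new ServerSocket(5000)); // the port number is an assumption
    }
}
```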

3.2.3. Database

The database is hosted on the Raspberry Pi and uses MariaDB as the database manager. The database is constructed to hold the data of each block. Each row contains an id, a timestamp of when the block was registered by the sensor, a timestamp of when it entered the database, and the time it took to register that block, as shown in Figure 20.


Figure 20: Describing the Block Column
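A table matching this description might be declared as below; the table and column names here are assumptions, and the actual definition used is the one shown in Figure 20:

```sql
-- Hypothetical declaration of the block table described above; names and types
-- are illustrative, not the exact ones used in the thesis system.
CREATE TABLE Block (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    sensorTime TIMESTAMP NOT NULL,  -- when the block was registered by the sensor
    dbTime     TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, -- when it entered the database
    regTimeMs  INT NOT NULL         -- time it took to register the block
);
```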

3.2.4. GUI

The structure of the GUI was implemented as shown in Figure 21.

Figure 21: The design of the application. Each square represents one of each class used for the application. The arrows indicate a direct association.

The GUI application was created by taking advantage of the LineChart class from the JavaFX API, which allowed the display of data in a graphical format. A separate class, DataBaseConnector, was used to connect to the database and handle all calls to it. Two vital functions were implemented, getData and getDataUpdate. The first loads old data already stored in the database, whereas the latter returns newly added data from the database. All data fetched from the database is stored in Block objects, which are passed on to the GUI.
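The split between the two calls can be illustrated with an in-memory list standing in for the database table (in the real DataBaseConnector the rows come from MariaDB over JDBC; class and field names here are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrates the roles of getData and getDataUpdate with a list in place of the
// MariaDB table; class and field names are assumptions, not the thesis code.
public class BlockStore {

    public static class Block {
        final long id;
        final long sensorTime;     // timestamp from the sensor
        final long registrationMs; // time it took to register the block
        Block(long id, long sensorTime, long registrationMs) {
            this.id = id;
            this.sensorTime = sensorTime;
            this.registrationMs = registrationMs;
        }
    }

    private final List<Block> rows = new ArrayList<>();

    public void insert(Block b) { rows.add(b); }

    /** getData: load everything stored so far (used when the GUI opens a date). */
    public List<Block> getData() { return new ArrayList<>(rows); }

    /** getDataUpdate: only rows newer than the last id the GUI has seen. */
    public List<Block> getDataUpdate(long lastSeenId) {
        List<Block> fresh = new ArrayList<>();
        for (Block b : rows) {
            if (b.id > lastSeenId) fresh.add(b);
        }
        return fresh;
    }
}
```

The polling thread only needs to remember the highest id it has displayed and ask for everything above it.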

3.3. Testing

To test the entire system, each part was tested individually before being tested together as a whole system to verify the system requirements.

The sensor and the ESP32’s algorithm, described in 3.2.1, were tested by simulating the movement pattern of the machine. The connection between the ESP32 and the Raspberry Pi was tested by simulating a loss of connection between the two: the connection to the network was dropped on the ESP32 to see whether a reconnection was established when available, according to the algorithm described in Figure 17. Another test was made to see how much data could be stored in the ESP32 before the entire flash memory was filled. The GUI was tested by reading data from the database to check that the values were represented and displayed correctly.

When all the parts had been tested, the entire system was deployed on the machine (see 2.9) at the company. After that, the entire system was tested to ensure the requirements, see section 1.2, were met.

To test whether the system registers, stores and displays the data in real-time, each block reading was timestamped when registered by the sensor, when entering the database and when read by the GUI, to measure the time between the events. To ensure correct data was stored in the database, the data was compared to data acquired by counting blocks by hand for two hours. To verify whether a block is there or not, the algorithm shown in Figure 18 was tested, together with the other parts, by counting passing blocks. This was done at the factory, on the machine, whilst supervised, to spot errors associated with unpredictable events.

Testing whether the ESP32 saves data when not connected to the server was done by simply turning off the connection to the Raspberry Pi and comparing the stored data with the expected data during the test time on the machine.

The placement of the sensor during the test at the company is shown in Figure 22. Together with the ESP32, the sensor was placed so that it pointed upwards at the production band.

Figure 22: The sensor placement during the tests at the factory.


4. Result

This section covers the result of the GUI and the results obtained from the tests to verify the system’s requirements (see section 1.2).

4.1. GUI

The GUI shown in Figure 23 allows the user to display collected data for a chosen date. If the current date is chosen, the GUI reads already stored values from the database and activates a new thread to check for new incoming values in the database. As shown in Figure 23, the graph’s y-axis shows each hour of the day, while the x-axis shows each minute of the hour. Hovering over a node in the graph displays the information of that block; see section 3.2.4. Each circle in the graph represents a registration of a block. The colour indicates a unique hour, and the different intervals the amount of time between registrations; see section 2.9.

Figure 23: The GUI for the prototype system, displaying hours on the y-axis and minutes on the x-axis, with each colour representing a unique y-value. Each circle corresponds to a registered block, and the line in between is the time taken before the next block.


4.2. Block Registration

Figure 24: Collected data and the time taken for each block to be processed.

The histogram in Figure 24 shows that of 718 test blocks, 715 are within the registration interval and three exceed it. The data were collected during 18 hours at the factory. The registration pattern of the data, shown in Figure 25, indicates that even though almost all data is within the accepted registration interval, more than three blocks could be considered questionable registrations, as discussed in section 2.9, due to scattered patterns.

Figure 25: Shows the data from Figure 24 as patterns in the GUI. A regular pattern with even time between the data points indicates correct readings. The irregular patterns show downtime or an abnormal registration time, which can be seen in most of the y-values (hours) between data points.

In Figure 26, as an example, we can see more clearly that the pattern shown at hour 21 (purple) is irregular for the first 6 data points before stabilising like the pattern at hour 22 (red).

[Figure 24 histogram data (registration time in ms: number of blocks): < 8000: 1; (8000, 10000]: 2; (10000, 12000]: 52; (12000, 14000]: 559; (14000, 16000]: 95; (16000, 18000]: 4; (18000, 20000]: 2; > 20000: 3]


Figure 26: Shows a pattern (purple) at hour 22 that starts with inconsistent readings, with uneven time between the data points, before continuing with regular readings, with even time between data points.

4.3. Collect data and show data in real-time

The test measures the time it takes for a data point sent by the ESP32 to be displayed by the GUI. 360 data points were sent from the ESP32.

Figure 27: Time taken in milliseconds for the GUI to display a data point, from when it was registered in the microcontroller until it was displayed in the GUI.

In Figure 27, it can be noted that a majority, 88.9%, of the data points took less than 605 milliseconds before the GUI displayed them, and no data point took over 1 second before it was displayed; in the context of displaying the data, this can be said to be real-time.


4.4. Memory storage in ESP32 if the connection is lost

Figure 28 displays the number of data points that could be stored in the ESP32 while not connected to the server before the memory runs out and a crash of the ESP32 becomes possible. The minimum number of data points from the tests that could be stored with the system still running was 2574.

The fastest block type that the machine on which the system was tested (see 2.9) could process took 3 seconds; this means that if such blocks were processed constantly, the ESP32 could run for about 2.145 hours before the memory runs out and a connection to the server is required for storage, or else the data recorded afterwards would be lost. With another, more common block type that took 14 seconds each to process, the system could instead run for around 10 hours.

Figure 28: Shows the number of data points stored in the microcontroller’s list before memory runs out and data points are lost.
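The estimate above follows directly from the measured minimum capacity:

```java
// Reproduces the buffer-time estimate: how long the ESP32 can collect data offline
// before its list of 2574 storable data points fills up, at a given block rate.
public class BufferEstimate {

    /** Hours of production the buffer covers at a given seconds-per-block rate. */
    public static double hoursUntilFull(int capacity, double secondsPerBlock) {
        return capacity * secondsPerBlock / 3600.0;
    }

    public static void main(String[] args) {
        System.out.println(hoursUntilFull(2574, 3));  // fastest block type: ~2.145 h
        System.out.println(hoursUntilFull(2574, 14)); // common block type: ~10 h
    }
}
```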



5. Discussion

5.1. Method

The LIPS model [44] made the planning and structuring of the project easy and fit what we had in mind from the start of the project. However, the idea that the background part would be finished before the second phase was hard to manage, as new issues came up once we started implementing the system; the research therefore ran in parallel with the rest of the project instead of only during phase one, and the same happened with the testing. We had planned to test all the system parts individually as they were implemented, independently from the system, but found it easier to test them all once the entire system was assembled.

The choices of software for the system were not crucial to the result; in retrospect, most of the discussed software could have been used. When it came to the choice of database, both an RDBMS and NoSQL could have been used, due to the low amount of data saved and the fact that the blocks did not have relations with other objects. For future implementations of the system with more sensors, relations between each sensor’s data are essential, as with the Hema machines (see section 2.10), where one needs to know which block each cut belongs to. The usage of JavaFX made it easy to implement graphs to display the data in the GUI and allows for future expansion of the application, where the developer can implement different graphs to display the data.

The hardware choices made the implementation of the system straightforward since all the components were compatible with each other. Due to the late change of machine on which to deploy the system, a specific sensor type was not crucial. The microcontroller, the ESP32, was well suited to work with the Raspberry Pi 4B, ensuring access to a local database to test the system. The communication within the system worked well using Wi-Fi, making it possible to implement this system, with the Raspberry Pi, within a local network.

5.2. Result

The results shown in section 4 give a good overview of how the machine worked during those hours. However, comparing the result that only 3 of the 718 registered blocks exceeded the registration interval with the pattern shown in Figure 25, one can suspect more than 3 faulty readings due to the many irregular patterns. As mentioned in section 2.9, irregularity relative to the expected patterns shows up a lot amongst the collected data. Even though a good overview of the data was given, it is hard to guarantee a block reading. In [2], where the authors used image sensors to identify the object (see 2.1), they managed to identify every object with a 100% rate. Even though their test was conducted in a controlled environment, their method not only detected the object but also classified it, which could help avoid situations where the operators of the machine have to interfere (see section 2.9) and faulty readings occur due to relying on patterns (see section 2.3) instead of seeing.

Moving over to the result shown in 4.3, the time taken is sufficient for this machine (see 2.9), as each reading can be shown before the next reading occurs. The result also shows that the system could monitor the production flow of a machine with a higher rate of production and still show each reading before the next one occurs.

Looking at the results from the test where the microcontroller has lost its connection, the system can run for around 2 hours if the blocks that take the shortest time are processed continuously. Suppose the system were to completely replace the operator as the system reporting on production; in that case, increased storage reliability might be needed, for example by adding additional memory capacity to the ESP32, such as an SD card, or by changing to another device such as a Raspberry Pi 4 or a Raspberry Pi Zero, which support SD cards of different sizes for increased storage capability.


5.3. Goal & Requirement and Related Work

When collecting data, it is essential to know whether the collected data is reliable or not, i.e. whether a collected data point corresponds to a block. If reliable data cannot be guaranteed, there must at least be some way to distinguish good data from bad data to know whether it can be used. The goal of this project was to create a prototype system that could monitor and register the production flow of machines in Diab’s factories. The goal itself was reached, but how good was the system? The requirements set for the system were at the low end, with the exception of being able to guarantee that a block has passed the sensor. As the tests show, this cannot be guaranteed 100% with the chosen sensor and the implemented algorithms. However, by looking at simple patterns in the data and what to expect, at least most of the bad data was noticeable.

Even though the proposal in [6] is for autonomous robots, something similar could be used to improve this thesis’s system. If the work of [6] and [2] were combined, this system could improve its ability to distinguish between good and bad readings. The problem of interruptions or stuck blocks would no longer be an issue if the system could detect and identify objects; all it would have to be trained to detect is the specific object it is supposed to track. However, the hardware and software required would increase the cost of the system, and an increase in power to run the system is also to be expected. The one guaranteed result of the LiDAR sensor used is that it always shows whether something is within the chosen threshold. This could be taken advantage of by combining it with a camera sensor: the LiDAR would detect the object, and the camera would then determine what it is. As mentioned in section 2.1, this can be done with a simple camera and a GPU.

Implementing these improvements would make the system more reliable and could open up possibilities for Diab to implement it on more of their machines. With these changes, the system could replace the operator as a source of information on block production; this might give more accurate reporting of how many blocks are being produced, as the system would be "unbiased" compared to an operator who might have some reason to over-report production.

The real-time aspect of the system would also allow better planning at the management level by directly indicating when production is lagging behind, rather than waiting for an operator to report a produced batch against the predicted result. This allows a more dynamic use of the workforce, for example by reallocating operators from a machine that is ahead to run machines that are behind, for instance during breaks.

Having a system that measures the time it takes to produce every individual block gives more reliable data for predictions of the production and more accurate goals; it may also give a more accurate picture of what the current machines can produce. Measuring the downtime for every machine, with the exact times at which the downtime occurred, may also make it easier to identify bottlenecks in the production and the reasons behind them, making the production flow smoother.

The company today has a varied machinery of older and newer machines. Implementing the system on the older machines would be a good option since they lack integrated databases to collect information. The system would then act as a further digitalisation and "modernisation" of the machines, following the company’s Industry 4.0 strategy. Further development of the system could also offer the possibility to implement newer and more cutting-edge techniques to further this strategy, such as predictive maintenance (see 2.4) and sensor-based activity recognition (see 2.3). Predictive maintenance could offer better machinery reliability with smaller downtimes and push newer, more expensive investments a little further into the future, until demand increases. Activity recognition could also provide a solution when machine automation increases, by providing information on where in the process a specific machine is. This could decrease the need for individuals monitoring the machines on site, allowing it to be done from a control room instead.

References
