
Design and implementation of an AI-based Face Recognition model in Docker Container on IoT Platform.


Academic year: 2021



Master of Science in Telecommunication Systems May 2020

Design and implementation of an AI-based Face Recognition model in Docker

Container on IoT Platform.

Adil Shaik

Uma Vidyadhari Chetlur


This thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in partial fulfilment of the requirements for the degree of Master of Science in Telecommunication Systems. The thesis is equivalent to 20 weeks of full-time studies.

The authors of this thesis grant Blekinge Institute of Technology a non-exclusive right to publish the work electronically and, for non-commercial purposes, make it accessible on the internet. The authors warrant that the work does not contain any text, pictures, references, or materials that violate copyright laws.

Contact Information:

Author(s):
Adil Shaik
E-mail: adsh17@student.bth.se

Uma Vidyadhari Chetlur
E-mail: umch17@student.bth.se

University advisor:
Prof. Kurt Tutschku
Dept. of Computer Science

Faculty of Computing
Blekinge Institute of Technology
SE-371 79 Karlskrona, Sweden
Internet: www.bth.se
Phone: +46 455 38 50 00
Fax: +46 455 38 50 57


Abstract

Context: Our thesis aims to develop and implement an AI-based model for face recognition in a Docker container, such that it is transferable to any IoT platform.

Objectives: The main objective of the thesis is to develop an AI-based face recognition model (implemented with a deep learning algorithm) for a security system that decides whether to lock or unlock a door, and to deploy the developed AI model in a Docker container on an IoT platform. The broader aim is to realize the edge computing concept of bringing Artificial Intelligence (through our AI model) to low-power Internet of Things (IoT) devices with the help of containerization. Containerization is similar to virtualization: Docker containers are easy to port to various IoT devices (e.g., the Firefly RK-3399), and along with this portability, Docker bundles all the dependencies and modules required to run the application inside the container.

Method: Our research work comprises the methodology of developing the containerised AI model. We train the algorithm to detect the faces captured by our camera, which is connected via a CSI connector. The algorithm builds on deep learning, a subset of Artificial Intelligence, and consists of several steps: the deep learning algorithm detects the faces in the image, the image is converted to a set of gradients, and these gradients are mapped to landmarks that capture the focal points of the image. The training step is then performed using a Support Vector Machine classifier. Finally, the authorised user is recognised.

Conclusion: Our research work comprises the methodology of developing the containerised AI model and deploying the containerised application on the Raspberry Pi (an IoT device), which has an ARM processor. We conclude that the containerised application runs with high efficiency, is portable and transferable between multiple platforms, and is compatible with multiple architectures (ARM, x86, amd64).

Keywords: Artificial Intelligence, Docker, IoT, Firefly-RK3399, Face Recognition.


Acknowledgement

We would like to express our deepest and sincerest gratitude to our supervisor, Prof. Dr. Kurt Tutschku, for his valuable guidance, encouragement, and continuous support throughout our thesis. It would not have been possible to accomplish this work without his supervision and support.

Special thanks to Ms. Vida Ahmadi Mehri for her kind help during this thesis work. It has been a great opportunity to work under their supervision.

Finally, a huge thanks to our parents and friends for extreme love and support during this thesis work.

Adil Shaik.

Uma Vidyadhari Chetlur.


Table of Contents

Chapter 1: Introduction
1.1 Contribution of the Thesis
1.2 Research Questions
1.3 Thesis Outline
1.4 Problem Statement and Motivation

Chapter 2: General Concepts for Implementation of Face Recognition
2.1 Machine Learning
2.1.1 Machine Learning Methods
2.1.2 Deep Learning
2.2 Face Recognition Techniques [5]
2.2.1 Feature-based approach
2.2.2 Face recognition based on video sequences [5]
2.2.3 Face recognition from sensory data
2.2.4 Neural Network approach [29]
2.3 System Architecture
2.4 IoT Cloud-Based Approach [47]
Camera (RPi camera [31])
Ubuntu (Host Operating System)
AI face recognition model

Chapter 3: Face Recognition Model
Overview
3.1 Methodology
3.1.1 Identifying all the faces
3.1.2 Analysing and projecting faces
3.1.3 Concealing Faces
3.1.4 Finding the individual's name from the encoding

Chapter 4: Related Work

Chapter 5: Hardware and Software Tools
5.1 Modules
5.1.1 OpenCV (Open Source Computer Vision)
5.1.2 NumPy
5.1.3 Face Recognition Module
5.2 Docker
5.3 Firefly-RK3399
5.4 Raspberry Pi

Chapter 6: Implementation of Face Recognition System
6.1 Single Triplet Training
6.2 Finding the name of the person from the encodings
6.3 Generation of Docker image of AI model
6.4 Steps to create a Dockerfile and build a Docker image
6.5 Multi-Arch Image
6.6 Performance of AI model

Chapter 7: Results
7.1 Comparison of Running an AI Model with and without Docker on IoT device
7.1.1 Face recognition success rate of AI model
7.1.2 Confusion Matrix Results
7.1.3 Execution time and Statistics
7.2 Lessons learned
7.2.1 Easy Challenges
7.2.2 Difficult Challenges

Chapter 8: Discussion
8.1 Answers to the Research Questions
8.2 Challenges and Future Works

References


List of Figures

Figure 1 Difference between containers and virtual machines [22]
Figure 2 Process of Face Recognition
Figure 3 System Level Architecture
Figure 4 Application Level Architecture
Figure 5 IoT cloud-based Approach [46]
Figure 6 Raspi Camera [30]
Figure 7 Docker Architecture
Figure 8 HoG representation of Face [6]
Figure 9 HoG of Face Pattern [6]
Figure 10 Firefly Rockchip RK3399 [34]
Figure 11 Raspberry Pi 3 [33]
Figure 12 Detected Face with Docker
Figure 13 Detected Face without Docker
Figure 14 AI model Recognising faces
Figure 15 AI model Recognising faces
Figure 16 AI model Recognising faces
Figure 17 AI model Recognising faces
Figure 18 AI model Recognising faces
Figure 19 AI model Recognising faces
Figure 20 Confusion Matrix of AI model
Figure 21 Execution times of known image with Docker
Figure 22 Execution times of known image without Docker
Figure 23 Execution time graph for known face
Figure 24 Execution times of unknown image without Docker
Figure 25 Execution times of unknown image with Docker
Figure 26 Execution time graph for unknown face


Chapter 1: Introduction

Artificial Intelligence (AI) is a concept that tries to understand the essence of human intelligence and produce machines that mimic it. It provides a solution for managing the enormous data flows and storage in the Internet of Things (IoT). The IoT is a sort of "general worldwide neural network" in the cloud that connects the different objects in the system [39].

In the broadest sense, the Internet of Things consists of objects connected to the web. The IoT is expected to proliferate due to the spread of communication technology, the availability of diverse devices, and computational frameworks. Self-driving vehicles (SDVs) for vehicular systems [40], microgrids for distributed distribution systems [40], and smart-city drones for security systems can be considered a few examples of IoT systems [40]. IoT engineering relies on a three-level/layer framework comprising a hardware layer, a communication/network layer, and a layer of interfaces/services [40]. Prerequisites that must be considered are:

I) Universal accessibility and availability, and connectivity of the heterogeneous devices/services and large numbers of users, including mobility, through generally agreed APIs

II) Dynamic administration/organization of users, billions of devices, and the massive amount of data produced by those connected devices

III) Maximum resource utilization and access to shared IoT assets (objects, applications, platforms)

IV) Personalization of users and services, providing services according to the requirements of users. All the above functionalities must be reliable (e.g., tolerating setting/policy changes and earning the trust of users) and scalable.

Since IoT applications demand portability and ease of deployment, techniques like containerization are used to achieve them. Containerization is a technology for virtualizing applications in a lightweight way that has brought about a significant change in cloud applications. A container holds a bundled, independent, ready-to-deploy part of an application.

Conceptually, containers are lightweight and similar to virtual machines (VMs). Each container has its own process with its own virtual resources and file system (for example, memory, CPU, disk space, etc.) and is isolated from other applications, services, and running containers, as shown in figure 1 [22].

Figure 1 Difference between containers and virtual Machines [22]

The critical difference is that containers run on the host OS in user space, instead of in an entirely separate environment like VMs. As a result, containers are lighter in weight, which makes them considerably smaller than a virtual machine. They can run alongside various applications in user space, coexist with virtual environments, and even run inside one or more VMs [26].

Containers are vital to pushing computation to the edge. Many edge devices are built with processing capacity to do considerably more than traditional forwarding of data. They can analyse incoming data streams using trained machine learning models. In AI applications, for example, cameras and visual gateways can identify situations quickly and alert operators instead of sending data to a central location.

In the current implementation, we have developed a containerized AI-based Face Recognition model using deep learning techniques.

Deep Learning (DL) plays a crucial role in Artificial Intelligence. DL is a subset of AI and a challenging, popular research area of machine learning. For facial recognition, deep learning enables greater accuracy than traditional AI strategies: traditional methods require hand-coded feature extraction for image detection, while deep learning does not.

Deep learning mostly uses neural network architectures, which is why such models are also known as deep neural networks [37]. Generally, deep learning models are trained using large data sets and neural network architectures [37]. Various deep learning architectures, such as deep neural networks, convolutional neural networks, deep belief networks, and recurrent neural networks, have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have produced state-of-the-art results on various tasks [38].

1.1 Contribution of the Thesis

This thesis is part of a group of three theses which together implement and investigate a BTH demonstrator. The collective aim is to analyse and demonstrate the concept of bringing Artificial Intelligence (AI) to Internet of Things (IoT) devices.

To demonstrate this, a model is developed that performs face recognition and can be transferred to the IoT platform using a container-based approach. Our contributions are:

• AI-based Face Recognition Model: implementation and training of an AI-based face recognition model.

• Building and running a Docker image of the AI face recognition application: converting the face recognition application into a container image with the help of Docker.

• Multi-Architecture Image: building and deploying multi-architecture Docker images so that they are compatible with different processors (ARM, x86, aarch64, etc.).

1.2 Research Questions

RQ 1: How can an AI model be implemented such that it can be transferred to and executed on an arbitrary but capable IoT platform using a Docker container?

RQ 2: How can an AI model that is implemented using Docker be trained to recognize various faces, i.e., to decide whether a face belongs to an authorized user or not?

1.3 Thesis Outline

The document is structured as follows:

In Chapter 2, we cover the general concepts for the implementation of face recognition. This chapter covers the basic concepts required to understand and implement the AI-based face recognition system. In addition, the system architecture section includes the system setup and descriptions of the software and hardware, such as the camera and Docker containers. Chapter 3 describes the steps of the proposed algorithm. Chapter 4 summarises the work related to the thesis. In Chapter 5, the software and hardware tools for implementing AI-based face recognition on embedded IoT devices are outlined, including a description of the modules involved, like OpenCV and the face recognition module, and details about the Docker container. Chapter 6 describes the implementation, including the commands used for building a Docker image. Chapter 7 provides details about the results. Finally, Chapter 8 gives the conclusion and work that could follow this thesis.

1.4 Problem statement and Motivation

The main goal of Artificial Intelligence is to create intelligent machines that can think and solve tasks without human guidance. Security applications must have a learning capability, so that they can learn from previous insights.

With container-based virtualisation, multiple instances can be run on a single operating system kernel, which improves application performance. Since all the applications run on the same kernel, this approach is resource-efficient and easy to migrate. The driving motivation for this thesis is to investigate the ability and feasibility of deploying our containerised AI-based face recognition model on the Firefly-RK3399 and Raspberry Pi (IoT devices). The thesis is divided into two parts. The first part is to design the containerised AI-based face recognition application for recognising the authorised user; there are multiple techniques for algorithm development. The second part is to check the possibility of developing the model such that it is compatible with multiple architectures, for example ARM (the Firefly's architecture).

Division of Thesis Work (Implementation)

Adil Shaik:
1. Docker environment setup for x86-64 architecture (together with Uma).
2. Implementation of the face recognition model using Docker on x86-64 architecture.
3. Dockerfile implementation for x86-64 architecture and image building.
4. Containerization of the AI model for x86-64 architecture.
5. Solving compatibility issues and binary errors on arm64 architecture (IoT devices).
6. Docker environment setup on arm64-based IoT devices.
7. Dockerfile implementation for ARM-based devices and container image building.
8. Enabling root privileges for the video input device and adding it to the relevant Linux user groups.
9. Allowing the root user to access the running X server, with the $DISPLAY environment variable.
10. Loading the bcm2835-v4l2 kernel module into the Linux kernel for automatic camera detection.
11. Running the container on an ARM-based device with the required privileges and enabling the X server with the DISPLAY environment variable in the container.

Uma:
1. IoT device setup.
2. Connectivity of the device (Wi-Fi, Ethernet).
3. Raspbian operating system installation.
4. Platform and environment setup for the Docker container on the IoT device.
5. Creating a repository and pushing the image to Docker Hub.

Division of Thesis Document

Adil Shaik:
Chapter 4: Related Work
Chapter 6: Implementation of Face Recognition System
Chapter 7: Results
Chapter 8: Discussion

Uma:
Chapter 1: Introduction
Chapter 2: General Concepts for Implementation of Face Recognition
Chapter 3: Face Recognition Model
Chapter 5: Hardware and Software Tools
Chapter 7: Results


Chapter 2: General Concepts for Implementation of Face Recognition

2.1 Machine Learning

As a subset of Artificial Intelligence, Machine Learning enables a system to acquire and apply knowledge from previous events. Extrapolating from existing data and predicting future values based on sample inputs is a major part of machine learning [27]. It also focuses on the development of computer programs that can access data and use it to learn for themselves [27]. Machine learning makes processing vast amounts of data more accessible.

It usually delivers faster, more precise results for recognizing profitable changes or dangerous risks, but it may also require extra time and resources to train properly [27]. Fusing machine learning with cognitive technologies (products of the field of artificial intelligence, which perform tasks that previously only humans could) has allowed even larger amounts of information to be processed [27].

The first goal is for machines to acquire knowledge on their own, without assistance from the programmer, and to adapt to changing situations and actions.

AI and machine learning are often used interchangeably, particularly in the realm of big data, but they are not the same thing, and it is important to understand how each can be applied differently.

Artificial intelligence is a broader concept than machine learning and addresses the use of computers to mimic the cognitive functions of humans. When machines perform tasks based on algorithms in an intelligent way, that is AI. Machine learning is a subset of artificial intelligence and focuses on the ability of machines to receive a set of data and learn for themselves, often adjusting their algorithms as they learn more about the data they are processing [27].

2.1.1 Machine Learning Methods

Machine learning can be done in multiple ways, according to the user's or programmer's requirements. The most commonly used methods are supervised learning, followed by unsupervised learning, whereas reinforcement and semi-supervised learning are used on rarer occasions.


Supervised machine learning [27]: these algorithms apply what has been learned in the past to new data, using examples, and can thereby predict future events. Past data, commonly known as examples, is used as input; the desired output is known; and these are the starting points for the algorithm to learn. Analysis of these labelled examples leads the learning algorithm to produce a function that can successfully make predictions about future outputs. After sufficient training, the system can provide targets for any new input [27]. A supervised algorithm can also infer its errors and mistakes by comparing its output to the intended output.

Unsupervised machine learning algorithms [27] are used when the given input data is not bound by any structure or labelled examples, as in supervised machine learning. These algorithms are developed to discover, without guidance, a structure or labels within the given unlabelled data.
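The supervised workflow described above (labelled examples in, a predictive function out) can be illustrated with a deliberately small sketch: a nearest-centroid classifier in plain Python. The data, labels, and function names below are illustrative assumptions, not code or data from this thesis.

```python
# Minimal illustration of supervised learning: a nearest-centroid
# classifier. Labelled examples are used to compute one "centroid"
# (mean feature vector) per class; a new input is assigned the label
# of the closest centroid. Toy sketch only, not the thesis code.
from collections import defaultdict
import math

def train(examples):
    """examples: list of (feature_vector, label) pairs."""
    sums = defaultdict(lambda: None)
    counts = defaultdict(int)
    for x, y in examples:
        if sums[y] is None:
            sums[y] = [0.0] * len(x)
        sums[y] = [s + v for s, v in zip(sums[y], x)]
        counts[y] += 1
    # Centroid = per-class mean of the training vectors.
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(centroids, x):
    """Return the label whose centroid is nearest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return min(centroids, key=lambda y: dist(centroids[y], x))

model = train([([1.0, 1.0], "authorised"), ([1.2, 0.8], "authorised"),
               ([5.0, 5.0], "unknown"), ([4.8, 5.2], "unknown")])
print(predict(model, [1.1, 0.9]))  # prints "authorised"
```

The "labelled examples produce a predictive function" step is exactly the `train` call; comparing the prediction against the known label is how a supervised algorithm measures its own error.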

2.1.2 Deep Learning

Like ML, deep learning is a technique that extracts features or characteristics from raw data sets. The central point of difference is that DL does this using a multi-layer artificial neural network with many hidden layers stacked one after another. DL also has somewhat more sophisticated algorithms and requires more powerful computational resources, typically machines with high-performance CPUs or GPUs. Most deep learning techniques use neural network designs, which is why deep learning models are often referred to as deep neural networks. The term "deep" refers to the number of hidden layers in the neural network: conventional neural networks contain only 2-3 hidden layers, while deep neural networks can have as many as 150 [28].

Artificial neural networks (ANNs) are computing systems inspired by biological neural networks [28]. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. Typically, neurons are organized in layers. Over time, research focused on matching specific mental abilities, leading to deviations from biology, such as back-propagation, in which information is passed in the reverse direction and the network is adjusted to reflect that information.

The goal of a neural network is to find solutions in the same way that a human brain would [28]. The input can lead to the output through either a linear or a non-linear relationship.

As an example, a deep neural network trained to recognize dog breeds would examine a picture and estimate the probability that the dog in the image belongs to a particular breed.
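To make the "stacked hidden layers" idea concrete, the following sketch runs a forward pass through a tiny fully connected network with two hidden layers. The weights are arbitrary illustrative numbers, not a trained model and not part of this thesis's implementation.

```python
# Toy forward pass through a small fully connected network, to make
# the "stacked hidden layers" idea concrete. Weights are arbitrary
# illustrative numbers, not a trained model.
def relu(v):
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    """One dense layer: out[j] = sum_i inputs[i] * weights[i][j] + biases[j]."""
    return [sum(i * w[j] for i, w in zip(inputs, weights)) + b
            for j, b in enumerate(biases)]

def forward(x, layers):
    """Apply each (weights, biases) pair in turn, with ReLU between layers."""
    for k, (w, b) in enumerate(layers):
        x = layer(x, w, b)
        if k < len(layers) - 1:  # no activation on the output layer
            x = relu(x)
    return x

# 2 inputs -> hidden(3) -> hidden(3) -> 1 output: two hidden layers.
net = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1, 0.0]),
    ([[0.2, 0.4, -0.1], [-0.3, 0.6, 0.2], [0.5, -0.5, 0.3]], [0.1, 0.0, 0.0]),
    ([[1.0], [-1.0], [0.5]], [0.0]),
]
score = forward([1.0, 2.0], net)[0]  # a single raw output score
```

A "deep" network simply has many more `(weights, biases)` pairs in the `layers` list; the forward pass is otherwise the same.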

2.2 Face Recognition Techniques [5]

The face recognition techniques can be divided into three categories:

• Techniques that operate on intensity image

• Techniques that deal with video sequences.

• Techniques that require other sensory data such as 3D information or infra-red imagery.

Considering intensity images as the criterion, these techniques can be divided into two categories:

• Feature-based approach

• Holistic-based approach

2.2.1 Feature-based approach

The feature-based methodology processes the input picture to detect and extract facial features, such as the eyes, mouth, and nose, and then computes the geometric relationships among those facial points, thereby reducing the input facial image to a vector of geometric features. This methodology is sub-divided into:

• Geometric feature-based matching [5]: these techniques are based on computing a set of geometric features from the image of a face. The complete set can be represented as a vector capturing the position and size of the primary facial features, such as the nose, eyes, eyebrows, mouth, and chin, together with the outline of the face. The advantages of the approach are that it overcomes the problem of occlusion and does not require extensive computational time. The disadvantage is that it does not provide a high degree of precision.

• Elastic bunch graph [5]: this approach is based on dynamic link structures. A graph for an individual face is generated using a set of fiducial points on the face; each fiducial point is a node of a fully connected graph and is labelled with the Gabor filter responses.

A representative set of such graphs is combined into a stack-like structure called a face bunch graph. A new face image is recognized by comparing its image graph with those of all the known face images, and the one with the highest similarity value is selected as the closest match.

Advantages:

• By focusing only on the bounded areas, they do not modify or damage any information in the images.

• This approach generates better recognition outcomes than the geometric feature-based approach.


Disadvantages:

• The approach requires a huge amount of interaction between the test and training images.

• When there is a massive difference in pose, scale, and illumination, this technique does not perform productively.

• Hybrid approach [5]: the hybrid approach synthesizes multiple techniques to produce better outputs. Adopting several technologies enables the cons of one approach to be compensated by the pros of another.

2.2.2 Face recognition based on video sequences [5]

It deals with the real-time recognition of faces in the sequence of images recorded by a video camera. The most significant use of this kind of face recognition is for security surveillance. It comprises detecting, tracking, and recognizing faces [5].

Advantages:

• The ample amount of information enables the system to choose the frame with the most suitable image and eliminate inadequate ones; hence, dynamic recognition gains an advantage over static recognition.

• It offers temporal continuity, which makes recognition more efficient.

• It permits the recording of face images so that changes in facial expressions and body posture can be accounted for, bringing better recognition outcomes.

2.2.3 Face recognition from sensory data

Abundant effort has been put into face recognition from 2-dimensional (2D) intensity images. Yet identifying an individual through other modes of sensing, such as 3-dimensional (3D) range data or infrared (IR) imagery, has not been studied as thoroughly.

3D model-based techniques [5]: this method helps capture characteristics that depend on the shape of the face, such as the outline of the cheeks, without being disturbed by the changes in lighting, orientation, and background clutter that influence a 2D system.

Examples of 3D techniques include scanning systems, stereo vision systems, structured light systems, reverse rendering/shape from shading, etc.

Advantages:

• It captures characteristics by describing the shape of the face.

Disadvantages:

• This technique is difficult to implement and highly expensive.

Infrared-based techniques: thermal infrared imagery is not sensitive to lighting, so the images can be used for the recognition and detection of faces. Infrared images give good outcomes for face recognition because the thermal features of the human face and the structure of each individual differ from one another.

Advantages:

• It enhances face recognition characteristics.

Disadvantages:

• Thermal sensors of high quality are necessary, which are expensive.

• It is prone to degraded resolution and a high amount of unwanted noise in the images.

• There are few widely accessible infrared data sets.

2.2.4 Neural Network approach [29]

There are different methods to perform feature extraction using neural networks. For example, Intrator et al. [1] proposed a hybrid or semi-supervised method. They combined unsupervised techniques for feature extraction with supervised techniques for finding features able to reduce classification error. For classification, feed-forward neural networks (FFNNs) [29] can be used. It was observed that error rates could be reduced by training several neural networks and averaging their outputs, although this consumed more time than the standard method.

Lawrence et al. [2] used self-organizing map neural networks and convolutional networks. Their classification performance is bounded above by that of the Eigenface approach, yet it is more expensive to execute in practice.

Zhang and Fulcher proposed a tree model for translation-invariant face recognition in 1996 [3][29]. The idea was to implement an airport surveillance system; the input to the algorithm consisted of passport-size photographs. This kind of network was also applied to develop a face detection, feature detection, and recognition algorithm [4]. The system deployed one subnet locally for every class, and the inclusion of probability constraints lowered the false acceptance and false rejection rates.

2.3 System Architecture

The input of a face recognition system is a picture or video stream [29]. The output is an identification or verification of the subject or subjects that appear in the picture or video. Several methodologies characterize a face recognition framework as the three-stage process shown in figure 2. From this point of view, the face detection and feature extraction stages could run at the same time.

Figure 2 Process of Face Recognition
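The three-stage process above can be sketched as a pipeline of composable stages. Each function below is a stub standing in for the real component (detector, feature extractor, matcher); the names, data layout, and matching threshold are illustrative assumptions, not the thesis code:

```python
# The three-stage face recognition process as a pipeline of stages.
# Each stage is a stub standing in for the real component; names and
# data layout are illustrative only.
def detect_faces(frame):
    """Stage 1: return the regions of the frame that contain faces."""
    return [region for region in frame if region.get("is_face")]

def extract_features(face):
    """Stage 2: reduce a face region to a small feature vector."""
    return (face["width"] / face["height"], face["brightness"])

def recognize(features, known):
    """Stage 3: match the feature vector against known identities."""
    for name, ref in known.items():
        if all(abs(a - b) < 0.1 for a, b in zip(features, ref)):
            return name
    return "unknown"

known = {"adil": (1.0, 0.5)}
frame = [{"is_face": True, "width": 100, "height": 100, "brightness": 0.52},
         {"is_face": False}]
labels = [recognize(extract_features(f), known) for f in detect_faces(frame)]
```

Because each stage only consumes the previous stage's output, detection and feature extraction can indeed overlap in time, as the text notes: extraction can start on one detected face while the detector scans the rest of the frame.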

Figures 3 and 4 below describe the system-level architecture and the application-level architecture.

Figure 3 System Level Architecture

The above figure represents the system-level architecture. The device used for the implementation of the project is a Raspberry Pi 3 Model B+, and the operating system is Raspbian, which divides into user space and kernel space. The Docker engine is installed on the operating system with ARM architecture binaries so that it can host containers. The libraries, modules, and dependencies required to run the AI face recognition application on the IoT platform are bundled inside the container, providing isolation from other dependencies.

The application decides the authenticity of a person's face depending on the confidence level of the recognized face: the higher the confidence level, the higher the authenticity of the person.
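The confidence-based lock/unlock decision can be sketched as follows. The 0.6 threshold and the names are illustrative assumptions, not values taken from this thesis:

```python
# Sketch of the confidence-based authenticity decision. The threshold
# and names are illustrative assumptions, not thesis values.
THRESHOLD = 0.6

def door_action(recognized_name, confidence, authorised):
    """Unlock only for an authorised name recognized with high confidence."""
    if recognized_name in authorised and confidence >= THRESHOLD:
        return "unlock"
    return "lock"

authorised = {"adil", "uma"}
print(door_action("adil", 0.92, authorised))     # unlock
print(door_action("adil", 0.41, authorised))     # lock: confidence too low
print(door_action("mallory", 0.95, authorised))  # lock: not authorised
```

In practice the threshold trades off false acceptances against false rejections, which is exactly what the confusion matrix in Chapter 7 measures.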


Figure 4 Application Level Architecture

The above figure represents the application-level architecture, in which multiple blocks are used to implement the face recognition model. In the first block, video feeds are given as input to the application, and a HoG-based detector is used to draw a pattern of the face and landmark the faces. The image is then rotated, scaled, and sheared to address tilted faces in the live feeds. These faces are then encoded and fed to the SVM classifier for training and evaluation. Classifiers and face recognition are discussed further in Chapter 3.
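The rotate/scale/shear step above is a 2D affine transform applied to the landmark coordinates: each point (x, y) is multiplied by a 2x2 matrix and shifted by a translation vector. A minimal pure-Python sketch with made-up landmark points, not the thesis code:

```python
# The "rotate, scale, shear" step on face landmarks is a 2D affine
# transform. Landmark coordinates here are made-up illustrative values.
import math

def affine(points, a, b, c, d, tx, ty):
    """Apply the matrix [[a, b], [c, d]] then translate by (tx, ty)."""
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

def rotate(points, degrees):
    """Pure rotation about the origin, a special case of affine()."""
    r = math.radians(degrees)
    return affine(points, math.cos(r), -math.sin(r),
                  math.sin(r), math.cos(r), 0.0, 0.0)

# Two "eye" landmarks of a face tilted 90 degrees; rotating by -90
# brings them back level so the encoder sees an upright face.
tilted_eyes = [(0.0, 1.0), (0.0, 2.0)]
level_eyes = rotate(tilted_eyes, -90)
```

Scaling and shearing use the same `affine()` call with different matrix entries (e.g., a = d = s for uniform scaling, or b != 0 for a horizontal shear), which is why the pipeline can fold all three corrections into a single transform.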

2.4 IoT Cloud-Based Approach [47]:

The cloud-based approach is one of the common approaches in the IoT world, where the application's computation and data are analysed and monitored in the cloud. This approach has both advantages and disadvantages. One of the goals of our approach is to bring computation close to the IoT device: in some cases, computation needs to be done on the IoT device itself rather than sending all the data to the cloud for computation and data analysis.

Figure 5 IoT cloud-based Approach [46]


The IoT cloud-based approach is a homogeneous approach, in which the platform that processes and analyses the data has standardized hardware, mostly with specific CPU architectures. This model has homogeneous hardware, similar standard communication protocols, and standardized service management, although the application layer may differ from one system to another. This makes it easy for general applications in which data from IoT devices is streamed to cloud engines for computation, data analysis, processing, and monitoring. In this approach, IoT devices simply stream the data collected from sensors and other peripheral devices, communicating over standard IoT protocols like MQTT, CoAP, etc., to send the data to the cloud engines. The main advantages and disadvantages are listed below.

Advantages

- Monitoring services
- Highly available cloud platform
- Device connectivity platform
- Fast data processing and analysis

Disadvantages

- Latency in transferring data
- Less IoT hardware utilization
- Security risk while transferring data over the internet
- Expensive approach
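To make the "devices just stream the collected data" idea concrete, the sketch below builds the kind of small JSON payload a device might publish over MQTT. The field names and topic are illustrative assumptions, not a standard schema, and the actual publish call (using the paho-mqtt client) is shown only as a comment since it needs a live broker.

```python
import json
import time

def make_sensor_payload(device_id, reading):
    # Build a small JSON payload for an MQTT publish.
    # Field names ("device", "timestamp", "value") are illustrative.
    return json.dumps({
        "device": device_id,
        "timestamp": reading.get("ts", int(time.time())),
        "value": reading["value"],
    })

payload = make_sensor_payload("firefly-01", {"value": 22.5, "ts": 1590000000})

# With an MQTT client such as paho-mqtt, publishing would then look like:
#   client = mqtt.Client()
#   client.connect("broker.example.com", 1883)   # hypothetical broker
#   client.publish("sensors/firefly-01/temperature", payload)
```

The cloud engine subscribing to the topic would receive the JSON string and do the heavy computation, which is exactly the division of labour this approach describes.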

Camera (RPi camera [31]):

The camera used in our experiment is the Raspberry Pi camera v1 module [31], as shown in Figure 6. The Raspberry Pi Camera Board plugs directly into the CSI connector on the Raspberry Pi [31]. It can deliver a fully clear 5MP resolution picture or 1080p HD video recording. An image can be saved in jpg, bmp, gif, or png format; the advantage of the jpg format is that the image can be saved in less time.

Figure 6 Raspi Camera [30]


Docker Container [21]

Docker is an open-source tool that automates the deployment and running of applications inside containers [21]. An image is a file that comprises code, config files, and other parameters. [21]

Figure 7 Docker Architecture

By utilizing Docker's techniques for testing and deploying code rapidly, you can significantly reduce the duration between developing code and running it on a live server [21]. Docker can be described as a client-server application, as shown in Figure 7 above. The components of the Docker architecture include:

Docker Server: This process runs as a daemon in an operating system. [21]

Docker Client: The command-line client communicates with the server using the Representational State Transfer (REST) Application Programming Interface (API). [21]

Docker Images [21]:

A Docker image comprises the files and dependencies required to run the application [23]. There are two techniques to build an image. The first technique uses a read-only format: the foundation of each image is a base image. Operating system images, such as Ubuntu 14.04 LTS or Fedora 20, are the typical base images; they create a container with the capabilities of a complete running OS. A base image can also be made from scratch. Required applications can be added to the base image by altering it, but then it is necessary to build a new image; this is called "committing a change." The alternative technique is to write a Dockerfile. The Dockerfile contains a set of commands; when "docker build" is run from the terminal, it executes all the instructions given in the Dockerfile and builds an image.
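As a concrete illustration of the Dockerfile technique, a minimal Dockerfile for a Python-based face recognition application might look like the sketch below. The base image, file names (`requirements.txt`, `recognize.py`), and dependency layout are assumptions for illustration, not the thesis's actual files.

```dockerfile
# Hypothetical Dockerfile for a face recognition app (names are assumed).
FROM python:3.7-slim

WORKDIR /app

# Install Python dependencies first so they are cached as a layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and set the default command.
COPY . .
CMD ["python", "recognize.py"]
```

Running `docker build -t face-rec .` in the directory containing this file executes the instructions top to bottom and produces an image named face-rec, which can then be started with `docker run face-rec`.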

Docker registry [21]:

A registry is a store of images. There are public and private registries: images can be accessed over the web, or you can create your own images for internal purposes. Docker Hub is a popular registry [21], which is discussed later in the document.

Ubuntu (Host Operating System)

Ubuntu is a free and open-source Linux distribution based on Debian [35]. Its architecture, in particular the kernel and the file system, makes it well suited to container workloads. By combining Debian's philosophy, the GNU tools, the Linux kernel, and other significant free software, a unique software distribution called Debian GNU/Linux is formed. Ubuntu runs on architectures such as x86, x86-64, ARMv7, and ARM64 [35]. Containers take advantage of process isolation in Linux, alongside namespaces, to create isolated processes; this applies across the container ecosystem, from Docker to Kubernetes. Ubuntu can run containers at scale.

AI face recognition model

The Artificial Intelligence face recognition model is designed to detect, capture, and recognize a face from an image. The model is implemented such that it runs on multiple processors, such as ARM, AMD, x86, and Intel. The main objective is to develop a face recognition AI model suitable for making decisions to lock or unlock a door system as a use case, and to deploy the developed AI model in a Docker container. The model makes decisions based on training and its dataset: multiple faces and names are mapped and used as training sets for the system. This recognition model uses deep learning and image processing techniques to detect faces in a live video stream and process the frames against matching faces from the datasets. Further details are provided in Chapter 3.


Chapter 3: Face Recognition Model

Overview

At a human level, we perform face recognition consistently without any effort.

Although it sounds like a fundamental task for us, it is a difficult task for computers, since numerous factors can hinder the precision of the methods, for instance low resolution, poor lighting, and other conditions.

In software engineering, face recognition fundamentally means identifying a person from a facial picture. Face recognition works on facial pictures that have already been extracted, cropped, resized, and usually converted to gray-scale; the face recognition algorithm then detects the characteristics that best describe the picture. Face recognition can be done as follows:

• Initially, consider a photo and discover each person’s Face in it.

• Consider each face and be able to understand that even if a face is turned in an unusual way or photographed in bad lighting, it is still the same individual.

• Be able to differentiate persons based on the unique features of the face, such as the size of the eyes, the length of the face, and so forth.

• At last, analyse the features of that Face and compare it to every individual’s Face you know to decide the individual’s name.

As a human, your brain is capable of doing the greater part of this naturally and in a flash. [6] Computers, mobiles, and ARM devices are not capable of this sort of generalization, so we need to teach them each step of this procedure independently.

Our Face recognition model and reasons for choosing this model: [36]

• The AI model for face recognition we have chosen is based on a deep learning algorithm.

• The model can recognize and manipulate faces and is compatible with both Python versions (2 and 3).

• It is recognized as one of the world's lightest and simplest face recognition libraries, giving an accuracy of 99.38% on the Labeled Faces in the Wild benchmark. [36]

• The model is also equipped with command-line tools that let the user perform face recognition, manipulation, and matching tasks on the training data sets.


• Moreover, this model can be used for comparing faces and can also detect faces turned sideways with the same accuracy.

• The model is open source. Among the many benefits of choosing an open-source model, we have full access to the model, and there are no API call limitations that can prevent us from reaching the development targets. Open-source models also have cost benefits.

• The model is active and supported by a community of developers, and it integrates with OpenCV [36]. Other face recognition API providers exist, from vendors such as Amazon, Google, Microsoft, and IBM, but all the commercial API providers have usage limits and restrict modifications.

3.1 Methodology

The steps involved in the face recognition task are as follows [6]:

• Identifying all the faces: In this step, face detection is performed, and the face is converted into gradients (described below in 3.1.1). The goal at this step is to detect only the faces in the image.

• Analysing and projecting faces: This step includes finding landmarks on the detected Face.

• Encoding faces [6]: This step extracts a few measurements from the detected face and performs a comparison between known faces and the unknown (input) faces.

• Finding the individual's name from the encoding: Finally, at this stage we train an SVM classifier [6] and obtain the name of the person as output.
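The matching done in the final step can be sketched as a nearest-neighbour lookup over stored encodings. This is a simplified stand-in for the SVM classifier described above: the encodings are short toy lists rather than real 128-measurement embeddings, and the 0.6 distance tolerance is an assumed illustrative threshold.

```python
def euclidean(a, b):
    # Euclidean distance between two encodings (plain lists of floats).
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def closest_match(known, test_encoding, tolerance=0.6):
    # known: mapping of person name -> stored encoding.
    # Return the name with the nearest stored encoding, or "unknown"
    # if nothing is within `tolerance` (threshold is an assumption).
    name = min(known, key=lambda n: euclidean(known[n], test_encoding))
    if euclidean(known[name], test_encoding) <= tolerance:
        return name
    return "unknown"

# Toy database of known people (short vectors standing in for embeddings).
known = {"alice": [0.1, 0.2, 0.3], "bob": [0.9, 0.8, 0.7]}
```

An encoding close to a stored one returns that person's name; an encoding far from everything returns "unknown", which is how the door-lock use case would reject strangers.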

3.1.1 Identifying all the faces

Face Detection is the initial step in the process [6]. To discover human faces in a picture, we will begin by making our pictures black and white as the colour data is not needed to find faces. Then we will consider a single pixel in the image at a time. For every single pixel, we would like to look at the pixels that surround it.

Above all, when a frame of the video is fed into the system, the system should first identify the faces and the number of faces present in the frame [6]. Secondly, for each face that has been identified, the system should be able to recognise it as the same person's face even if the face is turned in different directions.

Third are the lighting conditions, which depend on the type of camera used to capture the feed [6]. The next step is choosing the unique features of a particular face in the frame that can be used to differentiate it from other faces, for example eye size, facial expressions, and other features. Finally, the unique features of the chosen face are compared and validated against all the other faces.


Generally, humans tend to recognise faces they see every day, whereas computers are not capable of such high-level generalization. In such cases, machine learning techniques and training datasets are used to teach the system each step of the process, with several machine learning algorithms chained together to get the best results. The first step is to detect the faces in a frame or picture: the system automatically picks out faces, ignoring backgrounds and colours, and makes sure the identified face is in good focus before identifying it. To start finding faces in a frame, the frame is converted into a black-and-white image, which reduces the size of the frame and increases processing speed [6]. Then every single pixel in the frame is considered, one at a time, together with the pixels directly surrounding it.

The goal is to work out how dark the current pixel is compared with the pixels around it, and then to draw an arrow showing the direction in which the picture is getting darker. If this process is repeated for every single pixel in the image, each pixel is replaced by an arrow. These arrows are known as gradients, and they show the flow from light to dark across the entire image. [6]

This may seem like an arbitrary step, but there is a good reason for replacing the pixels with gradients. If we analyse pixels directly, extremely dark pictures and extremely light pictures of the same individual will have very different pixel values. However, by considering only the direction in which brightness changes, both extremely dark and extremely bright pictures end up with the exact same representation, which makes the problem much easier to solve. But the gradient for every single pixel gives too much detail; it is better to observe the essential flow of lightness/darkness at a higher level, so we can look at the basic patterns.

To do this, we divide the picture into small squares of 16x16 pixels each. In each square, we count the number of gradients in each major direction, and then replace the square with an arrow pointing in the strongest direction. The result is that the original image is turned into a very simple representation that captures the basic structure of the face, as shown in Figure 8.


Figure 8 HoG representation of Face [6]
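The gradient-counting idea above can be sketched in a few lines of plain Python. This is a deliberately simplified HoG-style histogram: it works on one tiny "image" rather than on 16x16 cells, and quantizes gradient directions into 8 bins.

```python
import math

def gradient_directions(gray):
    # gray: 2-D list of pixel intensities. For each interior pixel,
    # take horizontal and vertical central differences and quantize
    # the gradient direction into one of 8 bins (HoG-style, simplified).
    bins = [0] * 8
    for y in range(1, len(gray) - 1):
        for x in range(1, len(gray[0]) - 1):
            dx = gray[y][x + 1] - gray[y][x - 1]
            dy = gray[y + 1][x] - gray[y - 1][x]
            angle = math.atan2(dy, dx) % (2 * math.pi)
            bins[int(angle / (2 * math.pi / 8)) % 8] += 1
    return bins

# A tiny image that gets brighter from left to right: every gradient
# points the same way, so a single direction bin dominates.
ramp = [[x * 10 for x in range(4)] for _ in range(4)]
hist = gradient_directions(ramp)
```

On the left-to-right ramp, all four interior pixels produce the same direction, illustrating why the dominant-direction summary per square is such a compact description of structure.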

3.1.2 Analysing and projecting faces

Humans can recognise faces posed in different directions, but for a computer a face turned in a different direction looks completely different [6]. To address this, we will try to warp each photo so that the eyes and lips always sit in a fixed place in the picture.

This makes it simpler to compare faces in the later steps.

The fundamental idea is that we will identify 68 specific points (called landmarks [6]) that exist on every face: the chin, the outside edge of each eye, the inner edge of each eyebrow, and so forth. We then train a machine learning algorithm to be able to find these 68 points on any face. Once we have located the eyes and mouth, we simply rotate, scale, and shear the picture so that the eyes and mouth are centred as well as possible. We do not apply any fancy 3D warps, since those would introduce distortions into the picture.

Figure 9 HoG of Face Pattern [6]

3.1.3 Encoding Faces

Researchers have found that the most accurate approach is to let the computer figure out for itself which measurements to collect. Deep learning does a better job of working out which parts of a face are important to measure. The best approach is to train a deep convolutional neural network [6]. Rather than training the network to recognize picture objects, we train it to produce 128 measurements for each face [6].

The training procedure works by considering 3 images at a time:

• The training image of a known face is loaded.

• Consider another image of the same known person.

• Consider an image of a different, unknown person.

The algorithm then looks at the measurements it is currently producing for each of those three pictures and adjusts the neural network slightly, so that the measurements it creates for images 1 and 2 become slightly closer, while ensuring the measurements for images 2 and 3 move slightly further apart.


After repeating this stage many times for pictures of thousands of different individuals, the neural network learns to reliably generate 128 measurements [6] for each individual.

Any ten different photos of the same individual should give roughly the same measurements.

The 128 measurements of each face are termed an embedding. [6] Reducing raw data such as a photo to a list of computer-generated numbers comes up a great deal in machine learning.

This procedure of training a convolutional neural network to yield face embeddings requires a considerable amount of data and computing power. So all we must do ourselves is run our face pictures through a pre-trained network to get the 128 measurements for each face. These 128 real-valued measurements are then given as input to the SVM classifier, training the model for classification on the real-valued numbers extracted by this procedure.

3.1.4 Finding the individual’s name from the encoding

We simply find the individual in our database of known individuals who has the closest measurements to our test picture. This can be done with any basic machine learning algorithm; no fancy deep learning techniques are required. A linear SVM classifier completes the task: we train a classifier that takes the measurements from a new test image and tells which known individual is the closest match [6], giving the name of the person as output. This can be done in two ways: one is attaching the person's name to the image file name saved in the datasets; the other is listing all the names and mapping them to the corresponding encodings.

The following chapter describes the related work around face recognition models and the different scenarios and use cases where these face recognition models are trained and put into practice. It also describes IoT devices and related work on running AI applications on different IoT platforms.


Chapter 4: Related Work

The Internet of Things (IoT) is a combined network of various devices, for example vehicles and small computation devices like the Raspberry Pi, connected and communicating together, along with sensors mounted on the devices that enable the system to work together and exchange data through different IoT protocols and gateways. When IoT is extended with sensors and actuators, connected things can communicate and act on events; such cyber-physical systems also drive the advancement of technologies such as smart homes, smart cities, and smart robots and machines. [42]

The main focus of this paper is the architectures of IoT applications and their challenges [24]. The survey describes IoT functional blocks such as the communication block, services, and the management block [24], and covers IoT-supporting technologies, hardware platforms, and wireless communication standards. The IoT application domains also include device management, heterogeneity management, and visualisation. The advantages of combining AI with IoT include avoiding unplanned downtime and increasing operational efficiency, which helps to maintain ideal outcomes.

The principal aim of this paper is to analyse the performance of various ARM-based IoT development boards, such as the Raspberry Pi, Orange Pi, Dragonboard 410c, and Firefly, and to check their CPU benchmarks and performance [43]. The paper states that Docker containers and Linux containers are a preferable, uniform method to deploy applications on System-on-Chip (SoC) IoT boards [43]. The authors conclude that running containers on IoT boards does not affect performance; it remains close to bare-metal performance. Security can be enabled by using Linux containers and security policies similar to those of VMs when deploying various applications on the devices.

This paper summarises how technologies like the Internet of Things (IoT), Artificial Intelligence (AI), and cloud computing are being promoted for implementation in smart factories. This industrial transformation will bring significant gains, resource savings, and reduced costs, as machines will have the data to work more productively, adaptively, and in response to demand changes. The paper discusses the use of supervised machine learning methods, aligned with Artificial Intelligence, to implement an intelligent and collaborative robotic station [44] that performs quality control of Human Machine Interfaces (HMI) equipped with push buttons and LCD displays [44]. For this purpose, the authors reused machine learning techniques to perform recognition of the operator's face, which motivated us to reuse and train the same algorithm. The recognition in the system is done by applying machine learning (ML) techniques to the user's picture.

The combination of multiple machine learning methods, such as convolutional neural networks, Histogram of Gradients, and nearest-neighbour algorithms, is the critical focus of the recognition. The main aim of the paper was to describe the combination of image processing and a deep learning algorithm for quick inspection.


This paper focuses on developing face recognition in a fully automated cloud-based attendance system [9]. The aim is to create an online student attendance database, interfaced with a face recognition system based on the Raspberry Pi 3 model [9]. A graphical user interface (GUI) provides usability to investigate the data stored in the attendance system. The work used the open computer vision (OpenCV) library and Python for the face detection system, combined with the SSH file transfer protocol (SFTP) for connectivity over the internet. The live feeds are transferred using SFTP and supporting Python libraries to the backend server, where the processing and analysis of the data is done; the output is shown on a graphical user interface. The paper concludes that by interfacing a face recognition system with a server, a real-time attendance system can be implemented and monitored remotely [9].

This research focuses on reducing human error in various security applications where authentication is needed to access the privileges of the respective system. Such a system can help in recognizing the different roles involved in a business. The face recognition algorithm can be improved to make better use of resources, so that the system can progressively recognize a larger number of faces at a time, making the framework better; a particular student in an organisation can also be tracked quickly with its assistance. The authors observe that there is a broad assortment of alternative strategies, for instance biometric and RFID-based ones, which are tedious and unproductive; the proposed system is a better and more reliable solution from the perspective of both time and security. Here, the authors tried to capture a new face on their camera: if this individual had his face captured and trained previously, the recognizer would make a "prediction", returning its id and an index indicating how confident the recognizer is with this match [45].

One study proposes the architecture and implementation of a collaborative rich-text editor that makes use of microservices to enable and enhance its scalable co-editing functionality. This includes microservices for synchronizing unstructured text using operational transformations, for chat functionality, and for detecting and recognizing faces in images added to the editor. The architecture makes use of Docker containers to allow the development and testing of individual services as separate containers, enabling seamless deployment across the available network of computers and other computing devices. The system is demonstrated by showing how microservices make collaboration possible for multiple users. [8]


Chapter 5: Hardware and Software Tools

5.1 MODULES

5.1.1 OpenCV (Open Source Computer Vision)

OpenCV (Open Source Computer Vision) is a library used for computer vision [18]. OpenCV comes with a trainer as well as a detector [18]. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products.

It has C++, Python, and Java interfaces and supports Windows, Linux, macOS, iOS, and Android. OpenCV was designed for computational efficiency with a strong focus on real-time applications. It is written in optimized C/C++, and multi-core processing can be performed using this library [18]. When enabled with OpenCL, it can utilize the hardware acceleration of the underlying heterogeneous platform.

The library has more than 2500 optimized algorithms, including a comprehensive set of machine learning algorithms [18]. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high-resolution image of an entire scene, find similar images in an image database, remove red eyes from pictures taken using flash, follow eye movements, and recognize scenery and set up markers to overlay it with augmented reality, among others. OpenCV has a user community of more than 47 thousand people [18] and an estimated number of downloads exceeding 14 million. The library is used extensively in companies, research groups, and by governmental bodies.

OpenCV comes with a face detector called the Haar Cascade classifier [32]. To train a classifier to recognize faces, two sets of images are considered: one set consisting of pictures with faces, and the other without faces. These two sets are used to generate the classifier models; feature extraction is performed using the negative and positive pictures [32]. The Haar classifier uses an object detection framework. [32]

Advantages of using OpenCV module [18]:

• It is a large library, accessible free of cost.

• Since the OpenCV library is written in C/C++, it is very fast.

• An image is fundamentally an array, for which Python's NumPy and SciPy modules can be utilized.


• The array operations in the NumPy module are highly optimized for speed. Face detection can be performed at 15 frames per second for 384x288-pixel images [18], which is fast compared to other modules.

• OpenCV provides the algorithmic efficiency fundamental for real-time projects. It has also been designed in a way that permits it to take advantage of hardware acceleration.

• In large-scale projects, not everyone is comfortable with Python and some developers code in C++; since OpenCV supports multiple languages, teams can conform to a unified coding style.

For the installation, we used the command below.

Installation command: $ pip install opencv-python

5.1.2 NumPy

NumPy is the fundamental package for scientific computing with Python. Among other things, it contains:

• A powerful N-dimensional array object.

• Tools for integrating C/C++ code.

Besides its obvious scientific uses, NumPy can also be utilized as an efficient multi-dimensional container of generic data. This enables NumPy to seamlessly and quickly integrate with a wide range of databases.
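The N-dimensional array is also how images are handled throughout this work: a grayscale frame is just a 2-D array of intensities, so whole-image operations become one-line vectorized expressions. A minimal illustration:

```python
import numpy as np

# A grayscale "image" is simply a 2-D NumPy array of pixel intensities.
img = np.array([[0, 50], [100, 150]], dtype=np.uint8)

# Vectorized operations apply to every pixel at once:
brighter = img + 50          # element-wise brightening
mean_intensity = img.mean()  # average pixel value over the whole image
```

This per-pixel-without-loops style is what makes the OpenCV/NumPy combination fast enough for real-time frames.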

5.1.3 Face Recognition Module

The face recognition module is used to recognise and manipulate faces from Python or from the command line, using the world's simplest face recognition library. The model has an accuracy of about 99.38%. It also provides a simple command-line tool that lets you perform face recognition on a folder of images from the command line. [36]

Installation:

The following are the requirements.

– Python 3.3+ or Python 2.7

– macOS or Linux

Install the face_recognition module using pip3 (for python 3) as given below


pip3 install face_recognition

When you install the face recognition module, the following command-line programs become available:

• face_recognition - Recognizes the faces within a folder consisting of photographs.

• face_detection - Finds the faces within a folder consisting of photographs.

5.2 Docker

Docker is a container technology which makes it simple to bundle and distribute software alongside its dependencies. Containerization is a lightweight alternative to full machine virtualization that involves encapsulating an application in a container with its own operating environment [10]. This gives many of the advantages of loading an application onto a virtual machine, as the application can be run on any suitable physical machine. Docker is written in Go, an open-source programming language created in 2007 at Google by Robert Griesemer, Rob Pike, and Ken Thompson [10].

Benefits of Docker Container:

A. Application Portability

Docker puts all the dependencies of an application into a container, which is portable across various platforms. Distributed applications can be built, moved, and run using containers. Engineers and developers can run the same application on laptops, VMs, or the cloud by automating deployment inside containers [10].

B. Docker is lightweight and quick in performance

Containers are usually faster than virtual machines: a virtual machine boots an entire operating system to start and consumes the resources of a full OS instance, whereas starting a container is much the same as starting a process. [10]

C. Ideal resource utilization

Docker permits allocating and controlling CPU, memory, network, and disk resources for all processes using Linux control groups (cgroups) [10]. It guarantees that one process does not take over most of the operating system resources and hinder other processes.


Docker relies on Linux Containers (LXC), cgroups, and namespace capabilities, which do not exist in Windows [3]. Microsoft has its own container technology on Windows, and trials are underway to enable Docker containers to run on Windows Server.

D. Advantages of Container based Virtualization [10]

Container-based virtualization utilizes a single kernel to run numerous instances of an operating system without duplicating functionality. Every container instance runs in a totally isolated, secure environment. [10]

Container-based virtualization is more resource-efficient, since all applications run on top of a single kernel and instances are quicker to create or migrate. This means a single system can potentially host a larger number of containers than VMs, but it limits the flexibility and choice of operating systems. The single operating system also creates a single point of failure for all the containers: for instance, a malicious attack on, or crash of, the host OS can affect most of the containers.

Containers are viewed as more efficient compared to VMs because the extra resources required for each OS are eliminated, and instances can be created quickly and are easy to migrate. Cloud organizations are increasingly interested in containers because considerably more container instances can be deployed on the same hardware.

5.2.1 Docker CE Installation

To install Docker, you require the 64-bit version of one of the following Ubuntu releases.

• Bionic 18.04 (LTS) [11]

• Artful 17.10. [11]

• Xenial 16.04 (LTS) [11]

• Trusty 14.04 (LTS) [11]

This is the simpler and recommended approach. Some users instead download the DEB package and manually perform the installation and manage upgrades; this is useful in circumstances such as installing Docker on air-gapped systems with no internet access. For testing and development environments, users can install Docker using the automated convenience scripts. [11]

We performed the installation from the repository using the commands below:

• Update the apt package index:

- sudo apt-get update


• Install the required packages using the command below:

- sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common [11]

• Add Docker's official GPG key:

- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add –

• Check the key with the fingerprint 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88, by looking for the last 8 characters of the fingerprint:

- sudo apt-key fingerprint 0EBFCD88

• To set up the stable repository, run the command below (recommended for the x86_64 / amd64 architecture):

- sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" [11]

• As the Firefly and Raspberry Pi run on the ARM architecture, the command below is recommended for armhf:

- sudo add-apt-repository "deb [arch=armhf] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" [11]

• Check the Docker CE installation:

- sudo docker run hello-world

This command downloads a test image and runs it in a container.

If the installation is successful, a message is printed on the terminal and the container exits. Further configuration can be done by running commands with sudo. [11]

To avoid typing sudo and the password for every Docker command, we can create a docker group and add the user to it:

- sudo groupadd docker

- sudo usermod -aG docker firefly

Now reboot the system.


5.2.2 Docker Hub

Docker Hub is a cloud-based repository in which Docker users create, test, and distribute container images [21]. Through Docker Hub, a user can access public, open-source image repositories, and additionally use the space to create their own private repositories and work groups. [21]

A Docker client can pick Docker Registry, which is a stateless, open source and versatile server-side application if they want to keep up the capacity and distribution of Docker images as opposed to depending on Docker service. Image repositories are spaces inside Docker Hub wherein clients transfer and store container images. Pubic repositories empower clients to share and team up on container images. [21] Private repositories shield any touchy or exclusive information from unapproved people.

To push an image to Docker Hub, a user must do the following [21]:

• Set the environment variable in the Docker terminal to the Docker ID, which is the username shared between Docker Hub and Docker Cloud.

• Sign in to Docker Cloud with the docker login command.

• Tag the image with the docker tag command.

• Push the tagged image to Docker Hub with the docker push command.

• Check Docker Cloud to verify that the image has been uploaded to the repository.

• The image can then be pulled with the docker pull command.
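The push workflow above can be sketched as a short shell session. The repository name myuser/face-model is a hypothetical placeholder, and a dry-run wrapper prints each command instead of executing it, since the real commands require a running Docker daemon and a Docker Hub account:

```shell
#!/bin/sh
# Sketch of the Docker Hub push/pull workflow.
# "myuser/face-model" is a hypothetical repository name.
REPO="myuser/face-model"

run() { echo "+ $*"; }   # dry run: print the command instead of executing it

run docker login                                  # authenticate with the Docker ID
run docker tag face-model:latest "$REPO:latest"   # tag the local image
run docker push "$REPO:latest"                    # upload the image to Docker Hub
run docker pull "$REPO:latest"                    # retrieve it on another device
```

In this thesis, the pull step is what moves the face recognition image onto the IoT device: the image is built and pushed once, then pulled on the Firefly board.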

5.3 Firefly-RK3399

Firefly-RK3399 is an open-source single-board computer developed by the Firefly team and LoveRPi. It is powered by the Rockchip RK3399 SoC, fitted with six cores, up to 4 GB of DDR3 RAM, and on-board eMMC storage of up to 128 GB, giving this single-board computer performance in a class of its own. Driven by the ARM Mali-T864 GPU, the RK3399 can render 4K Ultra-HD video streams and games via USB Type-C DisplayPort 1.2 and HDMI 2.0. On-board Gigabit Ethernet, dual-band 802.11ac Wi-Fi, and Bluetooth 4.1 make connectivity easy. It is designed for virtual reality, 4K and panoramic photo and video capture, computer vision, 3D rendering, gaming, low-power servers, and many other cutting-edge applications. [20]
