Night Setback Identification of District Heating Substations

Second-cycle

Author: Kassaye Gerima

Supervisors: Hasan Fleyeh & Fan Zhang
Examiner: Siril Yella

Subject/main field of study: Microdata analysis
Course code: MI4002

Higher education credits: 15 cr
Date of examination: 01/20/2021

At Dalarna University it is possible to publish the student thesis in full text in DiVA.

The publishing is open access, which means the work will be freely accessible to read and download on the internet. This will significantly increase the dissemination and visibility of the student thesis.

Open access is becoming the standard route for spreading scientific and academic information on the internet. Dalarna University recommends that both researchers and students publish their work open access.

I give my/we give our consent for full-text publishing (freely accessible on the internet, open access):

Yes ☒ No ☐

Dalarna University – SE-791 88 Falun – Phone +4623-77 80 00

Abstract

Energy efficiency of district heating systems is of great interest to energy stakeholders. However, it is not uncommon for district heating systems to fail to achieve the expected performance due to inappropriate operations. Night setback is one control strategy that has been shown to be unsuitable for well-insulated modern buildings in terms of both economy and energy efficiency. Identification of night setback control is therefore vital for district heating companies to smoothly manage the distribution of heat to their customers. This study aims to automate this identification process. The method used in this thesis is a Convolutional Neural Network (CNN) approach based on transfer learning. 133 substations in Oslo are used in this case study to design a machine learning model that classifies a substation as a night setback or non-night setback series. The results show that the proposed method can classify the substations with approximately 97% accuracy and a 91% F1-score. This indicates that the proposed method has high potential to be deployed in practice to identify night setback control in district heating substations.

Keywords: District heating, Night setback, CNN, Transfer learning, Pre-trained model

Table of Contents

Abstract
Table of Figures
List of Tables
1. Introduction
   1.1. Background
   1.2. Problem Formulation
   1.3. Research Question
   1.4. Research Purpose
   1.5. Proposed Solution
2. Literature Review
3. Theoretical Background
   3.1. Overview of CNN
   3.2. Overview of Transfer Learning
   3.3. Overview of CNN Pre-trained Models
      3.3.1. AlexNet
      3.3.2. VGG
      3.3.3. ResNet
      3.3.4. DenseNet
      3.3.5. MobileNet
      3.3.6. SqueezeNet
4. Methodology
   4.1. Data Description
   4.2. The Proposed Method
5. Results and Discussion
   5.1. Results
      5.1.1. Results of ResNet
      5.1.2. Results of VGG
      5.1.3. Results of AlexNet and SqueezeNet
      5.1.4. Results of DenseNet and MobileNet
   5.2. Discussion
6. Future Work and Conclusion
7. References

Table of Figures

Figure 3.1 Architecture of CNN
Figure 3.2 The overall architecture of the Convolutional Neural Network (CNN)
Figure 3.3 Three ways in which transfer might improve learning
Figure 4.1 General workflow of the proposed approach
Figure 4.2 Normalized monthly energy usage of a night setback substation sample
Figure 4.3 The corresponding heat map image of the time series shown in Figure 4.2
Figure 4.4 Normalized monthly energy usage of a non-night setback substation sample
Figure 4.5 The corresponding heat map image of the time series shown in Figure 4.4
Figure 4.6 Original data and the augmented image after adding Gaussian noise and shifting
Figure 5.1.1 Confusion matrix of ResNet18, ResNet34, and ResNet50
Figure 5.1.2 Confusion matrix of VGG11, VGG16, and VGG19
Figure 5.1.3 Confusion matrix of AlexNet and SqueezeNet
Figure 5.1.4 Confusion matrix of MobileNet and DenseNet
Figure 5.2.1 Confusion matrix for binary classification

List of Tables

Table 4.1 Summary of the filtered data for colder months
Table 4.2 Summary of the data partition
Table 5.1 Summary of models’ prediction performance


1. Introduction

1.1. Background

The district heating industry has a long tradition and today plays a key role in heat distribution in several countries in Northern and Eastern Europe, with growing markets in Asia and North America. In Sweden, more than 50% of all heating demand is met by district heating [1]. The district heating concept continues to grow and expand, as the technology at its core is financially advantageous and environmentally sound [1].

In Sweden, the use of district heating varies for diverse building types. For apartment buildings and premises, district heating is the main heating method, at 82% and 68% respectively [2]. For single-family houses, electricity and biofuels are the major heating sources, and district heating accounts for only 12% [2]. District heating is therefore a vital part of the Swedish energy system.

Therefore, energy efficiency of district heating systems is of great interest to energy stakeholders.

To increase the efficiency of district heating substations, different types of control mechanisms are in use. These energy-saving measures set by customers (such as night setback settings) affect the pattern of consumption. Night setback is one of these control mechanisms.

Purposely lowering the room temperature at night is known as night setback. The night setback strategy is used in different types of buildings, including residential and commercial buildings. Its main purpose is to use energy efficiently by setting back the thermostat at night when the building is not in use. Several studies have shown that night setback saves energy.

Despite its advantages in energy savings, night setback is problematic for district heating and utility companies because it leads to a sudden morning peak. This peak arises because the building must be warm enough for its occupants before they start their workday [3]. Besides, the night setback control strategy is not a suitable setting for well-insulated modern buildings in terms of either economy or energy efficiency. Studies by Basciotti and Schmidt [3] and Noussan et al. [4] concluded that night setback settings cause problems at the energy-system level by producing higher daily morning peaks. The hourly and daily alterations in heat demand cause challenges, for example extra start-ups of heat-only boilers, which escalate emissions from, and the cost of, heat production in district heating systems.

As a result, district heating companies are interested in knowing whether a substation uses a night setback control strategy, in order to plan their energy distribution and enhance their demand management. Identifying this will help them meet the energy demand of their customers with their current energy supply.

This study will focus on understanding district heating load patterns to design a machine learning model that will identify night setback and non-night setback district heating substations.

1.2. Problem Formulation

Various studies have been conducted on the energy-efficiency advantages of night setback in district heating substations. Some scholars have also researched the drawbacks of night temperature setback for district heating substations and other energy stakeholders. In the literature, identification of heat load patterns of district heating substations has been done manually. This manual identification is time-consuming and usually needs to be checked by domain experts, which makes it even more laborious. Therefore, this thesis focuses on designing a model that identifies night setback and non-night setback series of district heating substations from a relatively small dataset. The developed model should then be tested against a large number of datasets from new substations.


1.3. Research Question

The thesis investigates the following question.

a. How can a night setback and non-night setback series of district heating substations be identified, given the daily heat load patterns of district heating substations for each month?

1.4. Research Purpose

The purpose of this study is to design a machine learning model that can identify night setback and non-night setback series of district heating substations with a relatively small amount of training data, in order to reduce future manual effort.

1.5. Proposed Solution

Once designed, the model will automatically identify night setback and non-night setback series of district heating substations. The proposed solution is to apply CNN architectures together with transfer learning. CNNs are a special kind of multi-layer neural network, designed to recognize visual patterns directly from pixel images with minimal preprocessing. Transfer learning leverages previous learning to build accurate models in a time-saving way; in image classification, it is mostly realized by employing pre-trained models. Images of monthly heat load patterns, derived from hourly-resolution data of district heating substations, will be used as input to train the model. This solution will help energy stakeholders easily identify substations with night setback control.


2. Literature Review

District heating can be defined as a framework in which water is heated in one or several larger units and then distributed via pipes to residential, commercial, or industrial customers' premises, where the heat is extracted for space heating or to prepare hot tap water [5]. Night-time building setback is a widely used control strategy in which the system is cycled off during unoccupied hours and the temperature in each zone is allowed to drift away from the occupied setpoint. A night setback system is often integrated into an office-type building management system (BMS) or an equivalent electronic control system with a built-in timer. The building must be warmed up again before the users arrive the next day. It is therefore important that the BMS programming returns the setpoint to the normal day position early enough to heat the building before they arrive [6].

Different studies related to district heating substations, load pattern analysis, and the energy savings and problems that emanate from the night setback control strategy have been conducted. Some researchers analyzed the load patterns of district heating substations and classified them based on the control strategy applied. Others have studied the energy savings that can be obtained from using night setback controls. Moreover, there are also scholars who have studied the problems arising from night setback control that make it problematic for energy stakeholders.

Even though several studies related to load pattern analysis and the advantages and disadvantages of night setback control have been done, no study has automated the identification of night setback and non-night setback district heating substation load patterns. The goal of this literature review is, primarily, to compare and understand the load pattern analysis of district heating substations.

Secondly, it is to understand the energy-saving advantages and disadvantages of a night-time setback control strategy for district heating and utility companies.

Due to the shortage of high-resolution smart metering installations, studies on the analysis of heat load patterns of district heating substations are scarce in the literature. In 2010, Goia et al. [7], analyzing hourly heat usage data of district heating substations in Turin, identified four clusters of daily load curves using a functional clustering-based approach. The two clusters with relatively high heat consumption represented heat load curves of the winter season, while the third cluster represented low heat demand, mainly in October and April. The fourth cluster consisted of heat load curves with medium heat consumption from the autumn and spring seasons. Functional linear regression models were used to forecast short-term peak load. As the main factor in classifying the substations was heat consumption level, and the shapes of daily curves in different clusters were similar, different load patterns were not identified by the clustering. Besides, the purpose of that study was not load pattern analysis.

In 2013, Gadd and Werner [8], after analyzing weekly heat load plots of 141 buildings in southwestern Sweden, identified load patterns of substations with four different control strategies: time clock operation control 5 days a week, time clock operation control 7 days a week, night setback control, and continuous operation control. In 2015, the same authors identified three types of faults in their fault detection study by manually analyzing meter readings of substations [9]. The three groups discovered were unsuitable heat load patterns, poor substation control, and low average annual temperature difference. The analysis was difficult as it needed different hypothetical rules to classify the faults. The requirement to know the type of activities in the buildings was one of the challenges.

In 2017, Noussan et al. [10], in a study focused on heat load patterns of district heating substations, showed that outdoor temperature and various temperature control strategies are the key factors in heat consumption variation. Night setback control and night switch-off are among these operation control strategies. Energy signatures and visualization of heat load curves were used as methods. In addition to identifying the key drivers of heat consumption, their analysis showed that the night setback control strategy results in early morning energy peaks. Their case study used daily and hourly readings of heat loads from October 2011 to April 2012 for district heating substations in Turin.

In 2019, Yakai et al. [11] applied Gaussian Mixture Model (GMM) clustering to extract heat load sub-patterns caused by the behavior of users/managers and by seasonal variation, using data from an energy station in Tianjin as a case study. The method identified four clusters of operation patterns, defined as a working pattern, an on-duty pattern, a daytime-nighttime pattern, and a nighttime-daytime pattern. In addition, the researchers designed a load pattern prediction model and identified that the behavior of managers (night setbacks) is the reason for morning peak loads.

In 2019, Calikus et al. [12] discovered typical patterns of heat load profiles and the control mechanisms applied in a set of buildings. The k-shape clustering method was used to understand the heat load patterns in the district heating substations, and the clustering results were checked manually by domain experts. The proposed method demonstrated encouraging results. However, no further effort was made to computerize or automate the identification of the control strategies and minimize possible manual effort in the future.

Studies specifically on night setback identification in district heating substations are nonexistent. However, there are some studies on the advantages and disadvantages of night setback. Nelson et al. [13], in their study of night setback energy savings, found that night setback always results in energy savings. They used a hybrid computer model to determine the energy-saving effects of day, night, and day-and-night thermostat setback.

Szydlowski et al. [14] gauged the energy savings resulting from night temperature setback in light-construction wooden office buildings. The researchers installed control equipment in six two-story buildings at Fort Devens, Massachusetts. Data taken during both single-setting and night-setback operating modes were used to build models of each building's heat intake as a function of the difference between inside and outside temperature. These models were used to estimate the savings attainable from night-setback thermostat control. Accordingly, they found that using an automatic setback thermostat saves $780 per year per building.

Moon et al. [15] studied the impact of thermostat strategies on energy consumption in residential buildings. Their analysis revealed that heating and cooling systems were significant energy-consuming components in cold and humid climate zones. Accordingly, heating energy in cold climate zones and cooling energy in hot-humid climate zones have potential for significant savings. Different thermostat strategies (adjustments of setback period, set-point, and setback temperature) showed clear effects on such savings. The heating system showed the most considerable energy-saving effect through proper thermostat strategies, particularly in a cold climate. A proper setback period, set point, and setback temperature need to be established to achieve energy efficiency in residential buildings.

In 2017, Basciotti et al. [3] studied demand-side management in district heating systems, and Noussan et al. [4] analyzed real operation data on district heating load patterns. Both groups of researchers concluded that night setback settings cause problems at the energy-system level by producing higher daily morning peaks.

In 2019, Moon et al. [16] developed an artificial neural network algorithm to predict the optimal start moment of the setback temperature to enhance indoor thermal comfort and building energy efficiency. The development and tests disclosed that indoor temperature, outdoor temperature, and the temperature difference from the setback temperature were the three major variables predicting the optimal start moment.

Current research has studied the load pattern analysis of district heating substations and identified seasonal and daily variations. The seasonal variations emanate from variations in outdoor temperature, while the daily variations are induced by social patterns due to customers' behavior and by control strategies like night setback. In addition, current research has studied the energy savings obtainable from night setback control and the problems that emanate from it.

Outcome of the literature review

Several studies conducted in earlier years showed that the night setback control strategy is important for energy savings in old, poorly insulated buildings, as it reduces energy use at night when buildings are unoccupied. In more recent studies, researchers found that night setback control is problematic in modern, well-insulated buildings, as it creates early morning peaks in load patterns when comfort is restored for occupants. Over time, the outcome of studies and researchers' opinions on night setback have thus shifted, mainly because of the emergence of modern, well-insulated buildings. In these buildings, the application of night temperature setback has proved problematic in terms of both economy and energy efficiency.

Using night temperature setback in modern, well-insulated buildings lowers the delivery quality for all customers during the morning peak hours. It is therefore crucial for energy stakeholders to identify substations that use a night setback control strategy. However, no work addresses how to automate night setback identification of district heating substations to reduce future manual effort. Therefore, this thesis focuses on designing a machine learning model that can identify night setback and non-night setback series of district heating substations by analyzing their monthly heat load patterns. To achieve this, CNN architectures are used to train and classify the substations based on their heat load patterns.

3. Theoretical Background

3.1. Overview of CNN

Interest in deep learning has surged in recent years [17]. The most well-known algorithm among deep learning models is the CNN, a class of artificial neural networks that has been the leading technique in computer vision since its striking results in the object recognition competition known as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012 [18,19].

A CNN is a deep learning model for processing data with a grid pattern, such as images. It is motivated by the organization of the animal visual cortex [20,21] and designed to automatically learn spatial hierarchies of features, from low- to high-level patterns. It typically consists of three types of building blocks or layers: convolution, pooling, and fully connected layers (Figure 3.1) [19]. The first two perform feature extraction, whereas the fully connected layer maps the extracted features into the final output, such as a classification.

A convolution layer performs a special type of linear operation that is crucial for feature extraction. In digital images, pixel values are stored as an array of numbers in a two-dimensional grid. A kernel, a small grid of parameters, is applied at each image position, since a feature may occur in various parts of the image. This weight sharing makes convolutional neural networks very efficient for image processing.

Extracted features become hierarchically and progressively more complex as one layer feeds its results into the subsequent layer. Training minimizes the difference between outputs and ground truth labels by optimizing the parameters with algorithms such as backpropagation and gradient descent.
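The kernel operation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis's implementation: the image and kernel values are arbitrary, and real CNN layers add channels, biases, and learned weights.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over every image position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product of the kernel with the patch under it
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
edge = np.array([[1.0, -1.0]])                    # horizontal gradient kernel
fmap = conv2d(image, edge)                        # 4x3 feature map
```

Because the same kernel parameters are reused at every position, the number of learned weights is independent of the image size, which is the efficiency argument made above.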


A CNN requires relatively little pre-processing compared with other image classification algorithms. Besides, independence from prior knowledge and manual effort in feature design is a major advantage of CNNs [22]. Processing images was a big problem for artificial intelligence before CNNs came into existence; their advent solved much of it. The great features of a CNN are, primarily, that it can effectively reduce large data to a smaller representation and, secondly, that it can effectively retain image features and recognize similar images when an image is flipped, rotated, or transformed [22].

Figure 3.1 Architecture of CNN

Source: https://insightsimaging.springeropen.com/articles/10.1007/s13244-018-0639-9


Figure 3.2 The overall architecture of the CNN, including an input layer, multiple alternating convolution and max-pooling layers, one fully connected layer, and one classification layer.

Source: https://www.researchgate.net/publication/331540139_A_State-of-the-Art_Survey_on_Deep_Learning_Theory_and_Architectures/figures?lo=1

3.2. Overview of Transfer Learning

Transfer learning is a method whereby a neural network model is first trained on a problem similar or related to the problem to be solved; some layers from the trained model are then reused in a new model trained on the problem of interest [23]. It is a common and popular approach in deep learning, where pre-trained models are used as the starting point for computer vision and natural language processing tasks, given the enormous computation and resources required to train deep neural networks and the large, challenging datasets on which they are trained [24]. The objective of transfer learning is to enhance learning in the target task by leveraging knowledge from the source task.

In transfer learning, after training a base network on a base dataset and task, the learned features are transferred to a target network trained on a target dataset and task. This process works well when the features are general, i.e., suitable for both the base and target tasks, rather than specific to the base task [23].


In 2009, Torrey et al. [25] specified three common measures by which transfer might improve learning (Figure 3.3). First is the initial performance achievable in the target task using only the transferred knowledge, before any further learning, compared to the initial performance of an ignorant agent. Second is the time taken to fully learn the target task given the transferred knowledge, compared to the time taken to learn it from scratch. Third is the final performance level achievable in the target task compared to the final level without transfer.

Figure 3.3 Three ways in which transfer might improve learning

Source:https://www.researchgate.net/figure/Three-ways-in-which-transfer-might-improve-the-learning-performance

Transfer learning can be used to enhance learning by offsetting the difficulties posed by tasks involving unsupervised learning, semi-supervised learning, or small datasets, in addition to standard supervised learning tasks [25]. That is, in the absence of a large amount of data or class labels for a task, treating it as a target task and performing an inductive transfer from a related source task can lead to more accurate models.

In 2008, Shi et al. [26] looked at transfer learning in semi-supervised and unsupervised situations. They presume that a reasonably sized dataset exists for the target task but that it is mostly unlabeled, due to the high expense of having a domain expert assign labels. To solve this problem, they proposed an active learning approach in which the target-task learner requests labels only when necessary. They build a classifier with labeled examples, mostly from the source task, and estimate the confidence with which this classifier can label the unknown examples. When the confidence is very low, they ask for an expert label.

For image classification problems, it is common to use a deep learning model pre-trained on a large and challenging task such as the ImageNet 1000-class photograph classification competition. The research and development organizations that design models for this contest often release their final models under a permissive license for reuse. The following CNN architectures are among the state-of-the-art models from this competition at different times.

3.3. Overview of CNN pre-trained Models

3.3.1. AlexNet

In 2012, Alex Krizhevsky [27] developed a model that won the image classification competition by a large margin. AlexNet was a novel architecture composed of eight layers: the first five are convolutional layers, and the rest are fully connected. It used the ReLU activation function, which showed better training performance than tanh and sigmoid [27]. AlexNet competed in the ImageNet large-scale visual recognition challenge on September 30, 2012, and achieved a top-5 error of 15.3%, 10.8 percentage points lower than the second-place top-5 error rate of 26.2%. The depth of the model was the key component of its high performance. Though computationally expensive, training was made feasible by using graphics processing units (GPUs) [27].

3.3.2. VGG

VGG was the next step after AlexNet. It is an innovative object recognition model that supports up to 19 layers. Where AlexNet focused on window size and stride in its first convolutional layer, VGG addresses another crucial aspect of CNNs: depth [28]. VGG takes 224 x 224-pixel images as input. Its convolutional layers use a very small receptive field (3x3), and it also uses 1x1 convolution filters, which act as a linear transformation of the input, each followed by a ReLU unit [29]. Moreover, VGG has three fully connected layers: the first two have 4096 channels each, and the third has 1000 channels, one for each class. All hidden layers use ReLU, which cuts training time. Though VGG builds on AlexNet, its 3x3 receptive fields with stride 1 are very small compared to AlexNet's 11x11 fields with stride 4. Besides, the decision function is more discriminative as it uses three ReLU units instead of one, and the small convolution filters allow VGG to have many weight layers, which leads to better performance. In the ImageNet classification competition, it achieved 25.5% top-1 error and 8.0% top-5 error on a single test scale, and 24.8% top-1 and 7.5% top-5 error with multiple test scales.
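The rationale for stacking small filters can be checked with a quick parameter count: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution but with fewer weights. The channel width of 64 below is an arbitrary assumption for illustration, not a figure from VGG or this thesis.

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution layer (biases ignored)."""
    return k * k * c_in * c_out

C = 64  # assumed channel width, in and out

# Two stacked 3x3 layers: same 5x5 receptive field, plus an extra ReLU in between
stacked_3x3 = 2 * conv_params(3, C, C)  # 2 * 9 * 64 * 64 = 73728

# One 5x5 layer covering the same receptive field directly
single_5x5 = conv_params(5, C, C)       # 25 * 64 * 64 = 102400
```

The stack uses roughly 28% fewer parameters while inserting an additional non-linearity, which matches the "more discriminative decision function" argument above.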

3.3.3. ResNet

ResNet presents a residual learning framework that eases the training of substantially deeper networks. Instead of learning unreferenced functions, layers are reformulated as learning residual functions with reference to the layer inputs. ResNet is shown to be easier to optimize and to gain accuracy from considerably increased depth: it was evaluated with a depth of 152 layers on the ImageNet dataset, at lower complexity, and won first place in the 2015 ILSVRC classification task with a 3.57% error on the ImageNet test set [30].

ResNet stacks identity mappings, layers that initially do little, with skip connections over them, reusing the activations from previous layers. Faster learning is achieved because the skips effectively compress the network into fewer layers. As the network trains, all layers contribute, and the residual parts of the network explore more and more of the feature space of the source image.
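The residual reformulation can be sketched as a small PyTorch module. This is a generic illustration of the idea, not the exact block used in the torchvision ResNets (which also handle stride and channel changes); the channel count and input size are arbitrary.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + x), with an identity skip."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The skip connection: the layers only learn the residual F(x),
        # since the input x is added back unchanged.
        return self.relu(out + x)

block = ResidualBlock(8)
y = block(torch.randn(2, 8, 16, 16))  # shape is preserved by the identity skip
```

Because the block learns only a correction to the identity, gradients can flow through the skip path even in very deep stacks, which is what makes 152-layer training feasible.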

3.3.4. DenseNet

DenseNet (Dense Convolutional Network) and ResNet are similar, except that in DenseNet inputs are concatenated while in ResNet they are summed [31]. As a result of the concatenation, subsequent layers can access the feature maps learned by earlier layers. This feature reuse throughout the network leads to a compact model. DenseNet achieved higher accuracy than ResNet while using dense connections and fewer parameters [31].

The main motivation for DenseNet was to counter the accuracy decline caused by vanishing gradients in very deep networks: dense connections shorten the path between the input layer and the output layer, reducing the problem [32].
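The sum-versus-concatenation distinction is easy to see on toy tensors. The shapes below are arbitrary; the point is only how the channel dimension behaves in each scheme.

```python
import torch

x = torch.randn(1, 8, 16, 16)  # feature maps entering a block
f = torch.randn(1, 8, 16, 16)  # output of the block's transformation F(x)

# ResNet combines by element-wise sum: channel count stays at 8,
# and earlier features are merged into the result.
res_out = x + f

# DenseNet combines by concatenation along the channel axis: channels grow to 16,
# so later layers can still read the original feature maps unchanged.
dense_out = torch.cat([x, f], dim=1)
```

The growing channel count is why DenseNet layers can be narrow (few filters each) yet the network as a whole stays expressive, giving the parameter savings noted above.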

3.3.5. MobileNetV2

MobileNet is a neural network architecture designed for mobile and resource-constrained environments [33]. The architecture significantly decreases the number of operations and the memory needed while retaining accuracy [33,34]. It takes a low-dimensional compressed representation as input, which is first expanded to a higher dimension and filtered with a lightweight depthwise convolution [33]. The core layer of MobileNet is the depthwise separable convolution [33]. This model is a good fit for mobile devices, embedded systems, and computers without a GPU [34].
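The savings from a depthwise separable convolution can be demonstrated with PyTorch's `groups` argument, which implements the per-channel (depthwise) stage. The channel counts are arbitrary illustrations, not MobileNet's actual layer widths.

```python
import torch.nn as nn

c_in, c_out = 32, 64

# A standard 3x3 convolution mixes all input channels at every position.
standard = nn.Conv2d(c_in, c_out, 3, padding=1, bias=False)

# Depthwise separable: a per-channel 3x3 filter (groups=c_in gives one filter
# per input channel) followed by a 1x1 pointwise convolution that mixes channels.
depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False)
pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)

def n_params(*mods):
    return sum(p.numel() for m in mods for p in m.parameters())

standard_params = n_params(standard)               # 3*3*32*64 = 18432
separable_params = n_params(depthwise, pointwise)  # 3*3*32 + 32*64 = 2336
```

Here the separable pair needs roughly an eighth of the weights of the standard layer, which is the operation-and-memory reduction the architecture relies on.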

3.3.6. SqueezeNet

In 2016, Iandola et al. [35] created the SqueezeNet model with the aim of identifying CNN architectures with few parameters but competitive accuracy. To achieve this, they employed three strategies: replacing 3x3 filters with 1x1 filters, decreasing the number of input channels to the 3x3 filters, and downsampling late in the network so that convolution layers have large activation maps. The first two strategies carefully reduce the number of parameters in a CNN while attempting to preserve accuracy, while the third maximizes accuracy on a limited parameter budget. To implement these strategies, they used a Fire module as the building block of their architecture. The resulting CNN has 50x fewer parameters than AlexNet while maintaining AlexNet-level accuracy on ImageNet.
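A Fire module can be sketched as follows. This is an illustrative reconstruction of the idea from the description above, not SqueezeNet's reference code, and the channel sizes in the example call are arbitrary.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Sketch of a Fire module: a 1x1 'squeeze' layer feeding parallel
    1x1 and 3x3 'expand' layers whose outputs are concatenated."""
    def __init__(self, c_in, squeeze, expand):
        super().__init__()
        self.squeeze = nn.Conv2d(c_in, squeeze, 1)    # strategy 2: few inputs to 3x3
        self.expand1 = nn.Conv2d(squeeze, expand, 1)  # strategy 1: favor 1x1 filters
        self.expand3 = nn.Conv2d(squeeze, expand, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        s = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1(s)),
                          self.relu(self.expand3(s))], dim=1)

fire = Fire(96, squeeze=16, expand=64)
y = fire(torch.randn(1, 96, 32, 32))  # output has 64 + 64 = 128 channels
```

Because the 3x3 filters see only the 16 squeezed channels instead of all 96 inputs, the module keeps its parameter count small while still producing a wide output.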

In this work, all the above pre-trained models are applied to classify our target dataset, and the model with the best performance metrics is then recommended.


4. Methodology

4.1. Data Description

The data used in this case study are hourly readings from district heating substations of different buildings in Oslo, Norway, for the year 2017. The dataset is provided by a utility company, Utilifeed [36], through their web API. The API provides four parameters at hourly resolution: energy (sum for each hour), volume flow (sum for each hour), supply temperature (instantaneous value at each full hour), and return temperature (instantaneous value at each full hour).

This research used the hourly heat load data of 133 district heating substations, which were manually verified by domain professionals. Based on the experts’ opinion, 25 of the substations are grouped under night setback while the remaining 108 show a non-night setback pattern. To simplify training, the yearly heat consumption data of each substation is sliced into months and scaled to a range from 0 (minimum value) to 1 (maximum value). In this case study, only the data of comparatively colder months, i.e., October, November, December, January, February, and March, are considered, because the night setback control strategy only affects the space heating load of buildings. The other months' data are excluded, as night temperature setback has almost no effect during warmer months.

Each month’s heat load series is then converted into an equivalent heat map image. Substations with night setback characteristics are labeled “1” and substations with non-night setback characteristics are labeled “0”. Table 4.1 summarizes the filtered data.


Table 4.1. Summary of the filtered data for colder months

Label                   No. of substations   No. of images
1 (night setback)       25                   150
0 (non-night setback)   108                  648
Total                   133                  798
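The slicing, scaling, and heat-map conversion described above can be sketched as follows (a minimal NumPy sketch; the function name and the dummy month of data are illustrative, not the thesis's actual code):

```python
import numpy as np

def month_to_heatmap(hourly_load, days):
    """Scale one month of hourly heat load to [0, 1] and reshape it into a
    (24 hours x days) array, i.e. the heat map form used for labeling."""
    x = np.asarray(hourly_load, dtype=float)
    x = (x - x.min()) / (x.max() - x.min())   # min-max scaling to the range 0..1
    return x.reshape(days, 24).T              # rows: hour of day, columns: day

november = np.random.rand(30 * 24)            # dummy month of hourly readings
heatmap = month_to_heatmap(november, days=30) # array of shape (24, 30)
```

Each such array can then be rendered with a blue-to-red color map to produce the images shown later in Figures 4.3 and 4.5.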

The labeled substations are then divided into 40% training and 60% testing in a stratified manner. Table 4.2 shows the summary of the data partition.

Table 4.2. Summary of the data partition.

Data                 Training           Testing
No. of substations   53                 80
No. of images        318                480
Percentage           40% of the total   60% of the total

4.2. The proposed method

Deep convolutional neural networks (DCNNs) based on transfer learning, as discussed in section 3.1, are the main building blocks of the proposed method. This approach was selected because, primarily, DCNNs have proved to be the best performers in image classification problems, as their built-in dimensionality reduction suits the huge number of parameters in an image. In addition, the amount of training data available here is small, so pre-trained models are applied to exploit the knowledge they have learned from millions of images. Besides, transfer learning is crucial for building a model with lower computational resources. The overall process of the proposed approach is shown in Figure 4.1.

Figure 4.1 General workflow of the proposed approach

First, the yearly heat load pattern of each district heating substation is sliced into monthly series. Then, to make the neural network training easier, each monthly series is normalized to the range 0 to 1. Next, the normalized monthly heat load patterns are converted into corresponding heat map images for ease of labeling by experts. The heat map images are then manually checked and labeled as either 'night setback' or 'non-night setback'. Labeling requires a lot of manual work, but it is an essential step in machine learning, as the quality of a model depends on the quality of its training data. In this study MakeSense.AI, an open-source online annotation tool, is used to label the images, as its user interface is simple and easy to use. Figure 4.2 shows a sample normalized time series of the heat load (energy usage) of a night setback substation and Figure 4.3 shows its corresponding heat map image; Figures 4.4 and 4.5 show the same for a non-night setback substation.

[Workflow blocks of Figure 4.1: heat energy time-series data → normalization → heat map images → heat map image labeling → image resizing → data partition → data augmentation → loading and training models → model testing → classification results]


Figure 4.2. Normalized monthly energy usage of a night setback substation sample

Figure 4.3. The corresponding heat map image of the time series shown in Figure 4.2


Figure 4.4. Normalized monthly energy usage of a non-night setback substation sample

Figure 4.5. The corresponding heat map image of the time series shown in Figure 4.4


From Figures 4.2 – 4.5 above, it is clear that the proposed approach of converting the time series energy data into heat map images is crucial in simplifying the industry experts' labeling work, as the heat map images are much clearer than the time series plots. The heat map image of a night setback substation in Figure 4.3 shows a recurring pattern of a morning peak at around 4:00 and an evening dip at around 20:00 for several days in a row within the month. In contrast, Figure 4.5 shows the arbitrary pattern of a non-night setback series. These patterns, which differentiate night setback from non-night setback series, are not easily observable in the original time series plots in Figures 4.2 and 4.4. Therefore, in addition to making the labeling process easier and faster, the heat map images allow a quick manual check in case the model fails to classify a substation correctly. The x-axis of the heat map images represents the days in a week while the y-axis represents the hours in a day. The colors represent the normalized amount of heat consumed by the building, ranging from blue (lowest) to red (highest).

After the data are labeled, the heat map image of each labeled substation is resized to the same size as the images the pre-trained models were trained on. The labeled data are then partitioned into 40% training and 60% testing.

Applying data augmentation is a proven technique for reducing overfitting in deep neural networks [37]. It refers to creating random variations of the input training data so that the inputs appear different without altering their meaning. Brightness changes, rotation, and flipping are among the commonly used image augmentation techniques. When applying augmentation to time-series data, however, due care is needed, as random flipping or rotation may destroy the sequence of the original time series.

In this thesis, two data augmentation techniques are applied to the heat map image data:


• Gaussian noise with zero mean and a standard deviation of 0.01 is added to the time-series data. Adding such a small amount of noise varies the input data while the resolution of each hour's peaks and valleys is still well preserved.

• A sliding window of 168 hours is applied, which shifts the original time series by one day. This preserves the sequential meaning of the original time series, and the corresponding label is not changed by such shifting.

The original image before augmentation and after adding Gaussian noise and shifting is shown in Figure 4.6. A relatively small noise weight of 0.2 is chosen in the proposed approach to retain the majority of the information in the original images.
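The two augmentation steps above can be sketched as follows (a minimal NumPy sketch; the 0.2 noise weight, 0.01 standard deviation, and one-day shift follow the text, while the function and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(series, noise_std=0.01, noise_weight=0.2, window=168, step=24):
    """Return a noisy copy of an hourly series (values in [0, 1]) and its
    one-week sliding windows shifted by one day at a time."""
    noisy = series + noise_weight * rng.normal(0.0, noise_std, size=series.shape)
    noisy = np.clip(noisy, 0.0, 1.0)  # keep values in the normalized range
    # 168-hour (one-week) windows, each shifted by 24 hours (one day).
    windows = [series[s:s + window] for s in range(0, len(series) - window + 1, step)]
    return noisy, windows

month = rng.random(30 * 24)           # one month of normalized hourly loads
noisy, windows = augment(month)
```

Each noisy copy and each shifted window keeps the label of the original series, as described above.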

Figure 4.6. Original data and the augmented image after adding Gaussian noise and shifting. Left: sample image before augmentation. Right: sample image after augmentation.

After the training data are augmented, the next step is loading pre-trained models trained on the ImageNet [38] dataset, which comprises 1.3 million images across 1000 different categories. The deep neural network models used in this thesis are VGG, AlexNet, ResNet, SqueezeNet, MobileNetV2, and DenseNet. The models are deployed and trained in Google Colaboratory (Google Colab), and the PyTorch library, which includes all the pre-trained models used in this experiment, is used as a tool. To accommodate our specific classification task (night setback or non-night setback), the final classification layers of the pre-trained models are substituted with new classification layers, specific to the new dataset, for binary classification.

The training procedure used in this study is a standard two-phase transfer learning approach. Initially, the layers before the newly added classification layer are frozen. Freezing these layers prevents the carefully pre-trained weights from being modified during training; the pre-trained layers are already well trained and efficient at capturing general concepts such as the gradients and edges of an image. In the second phase, all layers are unfrozen and trained. The basic features such as gradients and edges learned by the early layers of the pre-trained models are crucial for almost any task, while the later layers learn features that are more task-specific, such as a bird's feathers or the eyeball of a cat [39]. These later features are not valuable for our specific task of night setback identification. Hence, the discriminative learning rate proposed by Howard and Ruder [40] is applied in the phase-two training. The fundamental idea behind discriminative learning rates is to use a lower learning rate for the initial layers of the pre-trained model and a higher learning rate for the top layers. As a result, the weights of the initial layers change less and more slowly than those of the later layers. Having trained the models, model testing is done using the testing set.
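The two-phase procedure with discriminative learning rates can be sketched in PyTorch as follows (a sketch with a tiny stand-in network rather than a torchvision model; the layer sizes and learning rates are illustrative, not the thesis's actual settings):

```python
import torch
import torch.nn as nn

# Stand-in "pre-trained" feature extractor ("body") plus a new binary
# classification head, as described in the text. In the thesis the body would
# be a model such as ResNet50; this tiny Sequential is only illustrative.
body = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(8, 2)  # new layer: night setback vs non-night setback

# Phase 1: freeze the pre-trained body and train only the new head.
for p in body.parameters():
    p.requires_grad = False
phase1_opt = torch.optim.Adam(head.parameters(), lr=1e-3)

# Phase 2: unfreeze everything and apply discriminative learning rates --
# a small rate for the early (pre-trained) layers, a larger one for the head.
for p in body.parameters():
    p.requires_grad = True
phase2_opt = torch.optim.Adam([
    {"params": body.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
])
```

Per-parameter-group learning rates are PyTorch's built-in mechanism for the discriminative learning rate idea of Howard and Ruder [40].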


5. Results and Discussion

5.1 Results

Generally, it is challenging to select the right metrics to evaluate a machine learning classifier, and it becomes even more difficult when working with unbalanced data. Four evaluation metrics are reported for each model on the test set: precision, recall, F1-score, and accuracy. These results are summarized in Table 5.1. With unbalanced data, accuracy alone is not enough to evaluate a model's performance, so a confusion matrix is also presented for each model to give a detailed insight into the predictions.

Table 5.1. Summary of models’ prediction performance

Models        Precision   Recall     F1         Accuracy
ResNet18      0.838384    0.922222   0.878307   0.952083
ResNet34      0.911111    0.911111   0.911111   0.966667
ResNet50      0.951807    0.877778   0.913295   0.968750
VGG11         0.840000    0.933333   0.884211   0.954167
VGG16         0.818182    0.900000   0.857143   0.943750
VGG19         0.885057    0.855556   0.870056   0.952083
AlexNet       0.950617    0.855556   0.900585   0.964583
SqueezeNet    0.873684    0.922222   0.897297   0.960417
MobileNetV2   0.918605    0.877778   0.897727   0.962500
DenseNet      0.917647    0.866667   0.891429   0.960417

Based on these results, the models generally show excellent performance. The best performer is ResNet50, with an accuracy of 96.87% and an F1-score of 91.3%; the weakest is VGG16, with an accuracy of 94.37% and an F1-score of 85.7%. The other models' accuracies and precisions fall between these figures. In general, the models reach roughly human-level performance on this task.


In addition to the above performance measures, the confusion matrix for each model is presented below. A confusion matrix is a performance measure for classification models and is especially useful when the data in each class is unbalanced. Since our data is unbalanced, we used confusion matrices to gain a clearer understanding of the predictions. The numbers of correct and incorrect predictions are summarized with count values, broken down by class.

5.1.1 Results of ResNet


Figure 5.1 Confusion Matrix of ResNet18, ResNet34 and ResNet50

Figure 5.1 shows that the pre-trained CNN model ResNet50 has 386 true positives, 4 false positives, 11 false negatives, and 79 true negatives. This means that 386 nns (non-night setback) substations and 79 ns (night setback) substations are classified correctly as nns and ns, respectively, while 4 nns substations and 11 ns substations are classified incorrectly as ns and nns, respectively. ResNet18 has 374 true positives, 16 false positives, 7 false negatives, and 83 true negatives. ResNet34 classified 464 substations correctly and the remaining 16 incorrectly.

5.1.2 Results of VGG

Figure 5.2 Confusion Matrix of VGG11, VGG16, and VGG19

While VGG11 classified 458 substations correctly, it failed to classify the remaining 22 in their respective classes. VGG16 predicted 453 substations correctly and 27 incorrectly, and VGG19 classified 457 substations correctly and 23 incorrectly.


5.1.3 Results of AlexNet and SqueezeNet

Figure 5.3 Confusion Matrix of AlexNet and SqueezeNet

Pre-trained models AlexNet and SqueezeNet correctly classified 463 and 461 substations, respectively. However, they wrongly predicted 17 and 19 substations, respectively.

5.1.4 Results of MobileNetV2 and DenseNet

Figure 5.4 Confusion Matrix of MobileNetV2 and DenseNet


Pre-trained models MobileNet and DenseNet correctly classified 462 and 461 substations, respectively. However, they wrongly predicted 18 and 19 substations, respectively.

Designing a machine learning model is not the final stage in preparing a problem-solving algorithm; before it is put into production, the model should be analyzed. Analysis of an algorithm is the process of finding its computational complexity [41]. Computational complexity is the amount of time, storage, or other resources needed to execute the algorithm [43]. Time complexity is a computational complexity that measures how fast or slow an algorithm performs for a given input size [41]; it describes the amount of time it takes to train and test a model. It is commonly calculated by counting the number of steps or operations performed by the algorithm during its training and testing stages [42]. The time complexity of an algorithm can determine whether the algorithm is practical to use in a real environment or not [43].

An algorithm's running time depends on the size of the input data, the hardware, the programming language, and even the compiler version. The efficiency of an algorithm can be determined through analytical or empirical approaches [42]. Empirically, the time complexity of an algorithm is estimated by measuring how much time it takes to train and test the algorithm and comparing it with other algorithms. Most programming languages have a timing function that can be used to measure how long a piece of code takes to run. However, this empirical method has drawbacks, as it depends heavily on the size of the data, the type of hardware, the programming language, and other factors [42]. To avoid this problem, models are analyzed analytically by looking at what an algorithm actually does and how it scales to large datasets, counting the number of operations performed instead of measuring running time [42].
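Empirical timing of the kind described above can be sketched as follows (a minimal Python sketch; the timed function is a placeholder, not the thesis's actual training loop):

```python
import time

def train_and_test():
    # Placeholder workload standing in for a model's training/testing run.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

start = time.perf_counter()   # high-resolution monotonic clock
train_and_test()
elapsed = time.perf_counter() - start
print(f"training + testing took {elapsed:.3f} s")
```

`time.perf_counter` is preferred over `time.time` for such measurements because it is monotonic and has higher resolution.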


To consider the time complexity of CNN models, all the iterations performed during forward and backward propagation must be counted [43]. The deeper the model, the more iterations. For example, the time complexity of the models used in this thesis can be compared: the main difference between ResNet50, ResNet34, and ResNet18 is the number of layers, i.e., 50, 34, and 18, respectively. Each input passes through these layers and is finally classified as night setback or non-night setback, and each operation is multiplied by the number of epochs. As the number of layers increases, the number of operations, and thus the time complexity, increases. Measured empirically, ResNet50 takes 24.2 seconds while ResNet34 and ResNet18 take 16.74 and 16.65 seconds, respectively. Similarly, the VGG models differ only in the number of layers, and each model's training and testing time increases as the depth grows from 11 to 19 layers: VGG11 takes 15.4 seconds while VGG16 and VGG19 take 15.6 and 15.9 seconds, respectively. Among all the models, AlexNet has the shortest training and testing time, 13.7 seconds, mainly because it has the fewest layers (only eight); as the number of layers decreases, the number of operations decreases. The differences between the models are almost negligible here because our dataset is small and we use pre-trained models that do not need much training. The differences in time complexity would appear more clearly when training a model from scratch on a large amount of data without pre-trained models.


5.2. Discussion

This work was necessitated by the need to automatically identify substations with night setback control, to help energy stakeholders. During problem formulation, it was realized that no previous study had been conducted on this particular topic. A confusion matrix is used to describe the performance of the models in predicting the class of the substations.

                    Predicted: Class NNS   Predicted: Class NS
Actual: Class NNS   True Positive          False Negative
Actual: Class NS    False Positive         True Negative

Figure 5.2.1 Confusion matrix for binary classification

The true positive and true negative values are observations that are correctly predicted, while false positives and false negatives occur when the actual class contradicts the predicted class. From these four counts (true positives, true negatives, false positives, and false negatives), accuracy, precision, recall, and F1-score are calculated for each model. The results for each model are presented in Table 5.1.

Among all the pre-trained models used in this study, ResNet50 showed the best performance, with the fewest false negatives and false positives. This model classified the test data with 97% accuracy, 95% precision, 91% F1-score, and 88% recall. Accuracy is the sum of true positives and true negatives divided by the total number of test samples, while precision is true positives divided by the sum of true positives and false positives. Recall is true positives divided by the sum of true positives and false negatives, while the F1-score is the harmonic mean of precision and recall. Precision indicates how many positive predictions are true, while recall reveals how many of the positive class the model can find. While accuracy is a good measure for balanced data, F1 is recommended for unbalanced data [44].
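Using the ResNet50 confusion-matrix counts reported in section 5.1, the metrics above can be computed directly; note that taking night setback as the positive class reproduces the precision, recall, and F1 figures of Table 5.1:

```python
# ResNet50 confusion-matrix counts with "night setback" as the positive class.
tp, fp, fn, tn = 79, 4, 11, 386

accuracy  = (tp + tn) / (tp + tn + fp + fn)        # 465 / 480
precision = tp / (tp + fp)                         # 79 / 83
recall    = tp / (tp + fn)                         # 79 / 90
f1        = 2 * precision * recall / (precision + recall)

print(round(accuracy, 6), round(precision, 6), round(recall, 6), round(f1, 6))
# -> 0.96875 0.951807 0.877778 0.913295
```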


These results are very important, as they are close to human-based classification. Most importantly, using this model it is possible to identify whether a given district heating substation is using night setback control or not. Such identification is monotonous and extremely time-consuming when done manually, and expensive, since it needs domain experts to label and classify the substations. A machine learning classification model at this accuracy level is a great help to energy stakeholders in easily identifying substations with a night setback control strategy.

Even though the classification models showed good performance in general, there are some wrongly predicted substations, so domain experts may still need to check the incorrectly predicted ones. A larger training set would likely improve accuracy by slightly increasing the number of true positives and true negatives, thereby enhancing the models' prediction performance.

The purpose of building a machine learning model is to solve a problem, and a model can only do so once it is in production and actively used by consumers. Hence, model deployment is as valuable as model building. The designed model can easily be deployed on the personal computers and other devices of district heating companies and other heat energy stakeholders. To deploy the model, the stakeholders need the appropriate tools and frameworks: specifically, software like Python and machine learning libraries like TensorFlow or PyTorch, which provide frameworks, libraries, and components for launching and monitoring machine learning models in production. A machine learning engineer or other related professional may be needed for the deployment.

According to VentureBeat’s 2019 report, close to 90% of machine learning models never make it into production. The reasons fall into two categories: organizational reasons and reasons inherent to the machine learning models. One organizational reason is a lack of leadership support for machine learning developers; to succeed in deploying machine learning models, leaders need to understand the basics of how the models work, and allocating resources alone is not enough. Moreover, lack of access to uniform, structured data from the different departments of a company is another challenge. In addition, the communication gap between the data science, IT, and engineering departments of a company has a negative impact on the deployment of a machine learning model.

If these different sections of a company work in a coordinated manner it will be easier to put the models into production.

In addition to the above organizational challenges, machine learning models have their own set of challenges. One is scalability: a model might work well in a small environment, but this does not imply it will succeed everywhere, because the cloud storage space and hardware required to handle bigger datasets might not be available. Moreover, since machine learning tooling is still in its infancy, there are considerable gaps between different software languages and frameworks; as each language and framework comes with its unique libraries and dependencies, projects become difficult to keep track of.


6. Future Work and Conclusion

Several studies have been done on the advantages and disadvantages of night setback control. Earlier studies, which focused on the characteristics of night setback control, found that it can save a significant amount of energy. In contrast, recent studies have shown that night setback control is problematic in modern, well-insulated buildings, as it causes an early morning peak in the heat load pattern, which affects the quality of the heat distribution by energy stakeholders. In this study, identification of night setback control of district heating substations using a machine learning model is conducted.

Based on the results found, the machine learning models designed in this study classified the district heating substations as night setback or non-night setback with 94% – 97% accuracy and 86% – 91% F1-score. The highest prediction performance, with an accuracy of 97%, is scored by the pre-trained CNN model ResNet50, followed by ResNet34 and AlexNet with accuracies of 96.7% and 96.5%, respectively.

One limitation of this study is that the data used is mixed data from different types of residential, public, and commercial buildings. Future studies may conduct a similar study using a large training dataset of a particular building type; we believe this would help build a more robust model with higher accuracy. There are different control strategies used in buildings to use heat energy effectively and efficiently, so other researchers may design models that can identify those control strategies as well. In addition, future research can investigate other methods for this particular problem and compare them with our results.


In conclusion, this study automated the identification of night setback control in a set of district heating substations by designing a machine learning model. The designed model has a prediction accuracy of almost 97% and an F1 score of 91.3%. This model can be used by different energy stakeholders to instantly identify whether a certain substation is using a night setback control or not. This will help them to manage their heat energy distribution and enhance their customer service.


References

[1] C. Johansson, “Towards Intelligent district heating.” 2010.

[2] Swedish Energy Agency, “Energy in Sweden: Facts and Figures 2010.” 2010.

[3] D. Basciotti and R. Schmidt, “Demand-side management in district heating systems- Simulation case study on load shifting.”, 2013.

[4] M. Noussan, M. Jarre, and P. Alberto, “Real operation data analysis on district heating load patterns.” 2017.

[5] Euro heat and power, “Euro heat and power magazine 2015/16.” 2016.

[6] Grundfos.com. [Online]. Available: https://www.grundfos.com/ww/learn/research-and- insights/night-setback. [Accessed: 28-Aug-2020].

[7] A. Goia, C. May, and G. Fusai, “Functional clustering and linear regression for peak load forecasting.” 2010.

[8] H. Gadd and S. Werner, “Heat load patterns in district heating substations.” 2013.

[9] H. Gadd and S. Werner, “Fault detection in district heating substations.” 2015.

[10] M. Noussan, M. Jarre, and P. Alberto, “Real operation data analysis on district heating load patterns.” 2017.

[11] Y. Lu, Z. Tian, P. Peng, J. Niu, W. Li, and H. Zhang, “GMM clustering for heating load patterns in-depth identification and prediction model accuracy improvement of district heating system.” 2019.


[12] E. Calikus, S. Nowaczyk, A. S. Anna, H. Gadd, and S. Werner, "A data-driven approach for discovering heat load patterns in district heating." 2019.

[13] L. Nelson and J. MacArthur “Energy savings through thermostat setback.” 2018.

[14] R. Szydlowski, L. Wrench, P. O`Neill, and J. Paton. “Measured energy savings from using night temperature setback.” 1993.

[15] J. Moon and S. Hoon-Han “Thermostat strategies impact on energy consumption in residential buildings.” 2011.

[16] J. Moon and S. Jung “ Algorithm for optimal application of the setback moment in the heating season using an artificial neural network model.” 2016.

[17] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning.” 2015.

[18] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge.” 2015.

[19] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks.” 2012.

[20] H. Hubel, and N. Wiesel “Receptive fields and functional architecture of monkey striate cortex.” 1968.

[21] K. Fukushima, “Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position.” 1980.


[22] Easyai.tech. [Online]. Available: https://easyai.tech/en/ai-definition/cnn/. [Accessed: 05- Sep-2020].

[23] K. G. Kim, “deep learning.” Biomedical Engineering Branch, Division of Precision Medicine and Cancer Informatics, National Cancer Center, Goyang, Korea,, 2016.

[24] J. Brownlee, “A gentle introduction to transfer learning for deep learning,” Machinelearningmastery.com, 19-Dec-2017. [Online]. Available: https://machinelearningmastery.com/transfer-learning-for-deep-learning/. [Accessed: 10-Sep-2020].

[25] L. Torrey and J. Shavlik “Transfer learning”, 2009.

[26] X. Shi, W. Fan, and J. Ren. “Actively transfer domain knowledge. In European Conference on Machine Learning,” 2008.

[27] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks.” 2012.

[28] J. Wei, “VGG Neural Networks: The Next Step After AlexNet,” Towardsdatascience.com. [Online]. Available: https://towardsdatascience.com/vgg-neural-networks-the-next-step-after-alexnet-3f91fa9ffe2c. [Accessed: 01-Nov-2020].

[29] K. Simonyan, and A. Zisserman “Very Deep Convolutional Networks for Large-Scale Image Recognition”, 2019.

[30] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition”, 2015.


[31] G. Huang, Z. Liu, L. Van der Maaten, and K. Weinberger “Densely Connected Convolutional Networks”, 2016.

[32] G. Singhal, “Introduction to densenet with TensorFlow,” Pluralsight.com. [Online]. Available: https://www.pluralsight.com/guides/introduction-to-densenet-with-tensorflow. [Accessed: 07-Nov-2020].

[33] A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.” 2017.

[34] M. Pradhan, “Image Classification using MobileNet in the browser,” Analytics Vidhya, 22-Nov-2019. [Online]. Available: https://medium.com/analytics-vidhya/image-classification-using-mobilenet-in-the-browser-b69f2f57abf. [Accessed: 07-Dec-2020].

[35] F. Iandola, S. Han, M. Moskewicz, K. Ashraf, W. Dally, and K. Keutzer, “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and less than 0.5MB model size.” 2016.

[36] “Utilifeed.” [Online]. Available: https://www.utilifeed.com/.

[37] C. Shorten and T. Khoshgoftaar, “A survey on Image Data Augmentation for Deep Learning,” 2019.

[38] “ImageNet.” [Online]. Available: http://www.image-net.org.

[39] M. Zeiler and R. Fergus, “Visualizing and Understanding Convolutional Networks.” 2014.

[40] J. Howard and S. Ruder, “Universal language model fine-tuning for text classification.” 2018.


[41] R. Sedgewick and K. Wayne, “Analysis of Algorithms,” Princeton.edu. [Online]. Available: https://introcs.cs.princeton.edu/java/41analysis/. [Accessed: 06-Feb-2021].

[42] K. Fredenslund, “Just how fast is your algorithm?” Kasperfred.com. [Online]. Available: https://kasperfred.com/series/computational-complexity/just-how-fast-is-your-algorithm. [Accessed: 09-Feb-2021].

[43] M. Ahmed, “Computational complexity of Neural Networks.” [Online]. Available: https://medium.com/swlh/computational-complexity-of-neural-networks-38c01e7e566a. [Accessed: 06-Feb-2021].

[44] K. Miyasato, “Classification report: Precision, recall, F1-score, accuracy,” Medium, 05- Apr-2020. [Online]. Available: https://medium.com/@kennymiyasato/classification-report- precision-recall-f1-score-accuracy-16a245a437a5. [Accessed: 25-Dec-2020].
