
Artificial Intelligence in the Solar PV value

chain: current applications and future

prospects

Alexander Tived


Master of Science Thesis TRITA-ITM-EX 2020:420 2020 KTH Industrial Engineering and Management


Master of Science Thesis TRITA-ITM-EX 2020:420 2020

Applications of Artificial Intelligence in

the Solar PV value chain, current and

future prospects

Alexander Tived

Examiner: Björn Laumert

Supervisors: Rafael Guedez & Silvia Trevisan

Contact person: Alexander Tived

Abstract


FOREWORD

Throughout the course of this project I have been assisted by several academics, industry professionals and organizations, which have provided the tools I needed to perform this study. In light of that, I want to thank everyone involved.

Further, I would like to acknowledge my supervisors at KTH, Rafael Eduardo Guedez Mata and Silvia Trevisan, who introduced me to the subject and helped guide me through the course of this project.

Moreover, I would like to thank everyone active in the industry who helped me gain insight into the subject and in effect complete this study.

Lastly, I would like to direct a special thanks to family and friends who have supported and aided me throughout this project.


NOMENCLATURE

Abbreviations

ABC Artificial Bee Colony

ABSO Advanced Bee Swarm Optimization

AC Alternating Current

AI Artificial Intelligence

ANFIS Adaptive Neuro Fuzzy Inference System

ANN Artificial Neural Network

AR Autoregression

ARMA Autoregressive Moving Average

ARIMA Autoregressive Integrated Moving Average

BA Bat Algorithm

CNN Convolutional Neural Network

DC Direct Current

DE Differential Evolution

DNI Direct Normal Irradiance

ESCE-OBL Evolution algorithm Enhanced by the Opposition-Based Learning strategy

FL Fuzzy Logic

GA Genetic Algorithm

GBRT Gradient Boosted Regression Trees

GHI Global Horizontal Irradiance

GRNN General Regression Neural Network

HS Harmony Search

kNN k-Nearest Neighbors

LCOE Levelized Cost Of Electricity

LOLP Loss Of Load Probability

LSTM Long Short Term Memory

MA Moving Average

MAE Mean Absolute Error

MAPE Mean Absolute Percentage Error

MLP Multi-Layer Perceptron

MPP Maximum Power Point

MPPT Maximum Power Point Tracking

MGS Metallurgical Grade Silicon

NARX Nonlinear Autoregressive network with Exogenous Inputs

nRMSE normalized Root Mean Square Error

NWP Numerical Weather Prediction

PS Pattern Search

PSO Particle Swarm Optimization

PV Photovoltaic

RMSE Root Mean Square Error

RNN Recurrent Neural Network

SA Simulated Annealing

SGS Solar Grade Silicon

SVM Support Vector Machine


TABLE OF CONTENTS

LIST OF FIGURES 1

LIST OF TABLES 3

1 INTRODUCTION 4

1.1 Background 4

1.2 Objectives and scope 6

2 FRAME OF REFERENCE 7

2.1 Solar PV 7

2.1.1 Solar resource 7

2.1.2 PV fundamentals 8

2.1.3 PV panel equivalent electric models 10

2.1.4 Current-voltage characteristics 12

2.2 Solar PV value chain 13

2.2.1 The solar panel components 13

2.2.2 Silicon production 14

2.2.3 Module manufacturing 16

2.2.4 Project development and operations 17

2.2.4.1 Feasibility assessment 17

2.2.4.2 System design 18

2.2.4.3 System operations 20

2.2.5 Post operation 20

2.3 Artificial intelligence 21

2.3.1 Machine learning 21

2.3.1.1 Parametric and non-parametric models 22

2.3.1.2 Supervised, unsupervised and reinforced learning 23

2.3.1.3 Concepts in machine learning 25

2.3.1.4 Neural networks 26

2.3.1.4.1 Feed-forward and Backpropagation 28

2.3.1.4.2 Convolutional neural networks 28

2.3.1.4.3 Recurrent neural networks 29

2.3.1.4.3.1 LSTM 30

2.3.1.5 Fuzzy logic 31

2.3.1.5.1 ANFIS 33

2.3.1.6 Optimization algorithms 33


2.3.1.6.2 Particle swarm optimization 35

2.3.1.6.3 Ant colony optimization 35

2.3.1.6.4 Simulated annealing 35

2.3.1.7 Support vector machines & support vector regression 35

2.4 Previous work 36

3 METHODOLOGY 39

3.1 Review of AI state of the art technologies in solar PV 41

3.2 Factors impacted by AI 41

3.2.1 Levelized Cost of Energy (LCOE) 41

3.2.2 Social impact 42

3.2.3 Impact on emissions 42

3.3 Input from the industry 42

3.4 LCOE analysis 43

3.4.1 System and land cost 43

3.4.2 Insurance, operation and maintenance cost 43

3.4.3 System performance ratio 44

3.4.4 Degradation rate 44

3.4.5 Discount rate 44

3.4.6 Assumptions summary 44

3.5 Case study 45

3.6 Limitations 45

4 RESULTS 46

4.1 Current applicable AI methods in solar PV 46

4.1.1 Solar irradiance prediction 46

4.1.2 System design and optimization 49

4.1.2.1 Solar panel parameter identification 49

4.1.2.2 Solar PV system sizing 50

4.1.3 System operations 52

4.1.4 AI technique summary 57

4.1.4.1 Commercial AI tools 59

4.2 LCOE analysis 59

4.3 Input from industry 64

4.3.1 Summary of input from the industry 64

5 CASE STUDY 65


5.2 Technical data 66

5.2.1 Solar resource assessment 66

5.2.2 I-V curve approximation and system modelling 68

5.2.3 System operations 68

5.3 Social data and project timeline 69

6 DISCUSSION/ANALYSIS AND CASE STUDY RESULTS 71

6.1 Case study results 73

6.1.1 Quantification of LCOE and job impact due to supply chain automation 76

6.1.2 Visual summary of case study 78

7 CONCLUSIONS 80

8 RECOMMENDATIONS AND FUTURE WORK 81

9 REFERENCES 82

10 APPENDIX 89

A. Module type 89


LIST OF FIGURES

Figure 1. Visualisation of the solar PV value chain ... 5

Figure 2. Solar irradiance over California over a five day period in January [6, figure 4.2] ... 7

Figure 3. Definition of angles at a specific location. L is the latitude at that site, 𝛿 is the declination angle, 𝛽𝑛 is the altitude angle and 𝜃 is the zenith angle [7], fig 7.9 ... 8

Figure 4. The solar position at a specific site at any given moment in terms of the altitude and azimuth angle. Here, 𝛽 is the altitude angle and 𝜙𝑠 is the solar azimuth angle [7], fig 7.10 ... 8

Figure 5. Decomposition of the incoming extraterrestrial radiation [8], fig 2.1 ... 9

Figure 6. Solar irradiance in Norrköping, Sweden, by constituents during one day, [9] ... 10

Figure 7. The single diode solar cell equivalent circuit [12]... 11

Figure 8. The double diode solar cell equivalent circuit ... 12

Figure 9. Series and parallel configuration of solar panels [14], fig. 15.2-15.3... 12

Figure 10. I-V curve of a solar cell [43]... 13

Figure 11. The different components of a solar panel, [15] ... 14

Figure 12. Production steps in the metallic route of solar grade silicon [19], fig. 5 ... 15

Figure 13. Production steps in the chemical route of solar grade silicon [19], fig. 6 ... 15

Figure 14. Solar grade silicon production in a fluidized bed reactor [20], fig 6 ... 16

Figure 15. PV panel to string to array ... 19

Figure 16. The subsets of artificial intelligence ... 21

Figure 17. Illustration of the decomposition of the dataset [26] ... 24

Figure 18. Schematic of supervised learning [100]... 24

Figure 19. Schematic of unsupervised learning [100]... 25

Figure 20. Schematic of reinforced learning [100] ... 25

Figure 21. Representation of the prediction error [29]... 26

Figure 22. Illustration of the bias-variance tradeoff [31] ... 26

Figure 23. McCulloch-Pitts neuron model [32] ... 27

Figure 24. Representation of a neural network (perceptron) ... 27

Figure 25. Representation of error backpropagation [34] ... 28

Figure 26. Working principles of a CNN ... 29

Figure 27. RNN architecture vs. Feed-Forward Neural Network architecture ... 30

Figure 28. Regular unit (neuron) compared to a LSTM unit ... 31

Figure 29. Membership of data in the water temperature example [44] ... 31

Figure 30. Fuzzification of data vs boolean (crisp) logic ... 32

Figure 31. Fuzzy logic components and architecture ... 32

Figure 32. ANFIS architecture ... 33

Figure 33. The travelling salesman problem ... 34

Figure 34. Support vector machine working principle ... 36

Figure 35. Comparison of systematic review and meta-analysis [39] ... 39

Figure 36. Visualization of the workflow ... 40

Figure 37. Limitations within the solar PV value chain... 40

Figure 38. Illustration of applicable AI methods in the solar PV value chain ... 58

Figure 39. Task distribution of data scientists, based on the amount of time spent in each area [99] ... 59

Figure 40. LCOE parameters affected by current available AI techniques ... 61

Figure 41. LCOE fluctuations in solar irradiance prediction using SVM, MLP and Persistence method ... 61

Figure 42. LCOE fluctuations in parameter identification using a pattern-search, analytical numerical and ABC method derived from [103] ... 62

Figure 43. LCOE impact on increased system yield by tracking ... 63

LIST OF TABLES


Table 1. Summary of advantages and disadvantages of parametric and non-parametric algorithms ... 22

Table 3. Review of AI techniques from published articles on the topic of solar irradiance prediction ... 47

Table 4. Review of AI techniques from published articles on the topic of solar cell parameter identification ... 49

Table 5. Review of AI techniques from published articles on the topic of solar PV system sizing ... 51

Table 6. Review of AI techniques from published articles on the topic of solar power forecasting, MPPT optimization and predictive maintenance ... 52

Table 7. LCOE simulation results by implementing AI driven irradiance prediction vs. SolarGIS and onsite measurements ... 73

Table 8. LCOE simulation results by implementing AI driven parameter identification ... 74

Table 9. LCOE simulation results by implementing AI driven solar tracking and MPPT


1 INTRODUCTION

This chapter provides an introduction to the thesis along with its objective, scope and purpose. The method is defined later in the report, since the knowledge required to understand certain concepts is first provided in the literature review chapter.

1.1 Background

Solar photovoltaic (PV) systems are among the most mature renewable energy sources and will have a great impact on the transition towards a higher share of renewable energy in the global energy mix, due to their modularity and relative ease of installation. In order to increase the knowledge and possibilities of solar PV, an understanding of the different steps in the value chain must be developed.

The solar PV value chain can be interpreted in different ways. The basic definition of a product's value chain is "the activities, or processes, by which an actor adds value to a product". From that formal definition, the following entries cover the value chain of a solar PV system:

• Research and Development: Continuous development of the technologies used within solar PV systems.

• Capital Equipment Production: Auxiliary equipment needed for the production of the solar cells, including equipment for cutting the wafers, suppliers of chemicals and gas for silicon purification etc.

• Silicon production: The conversion of metallurgical-grade silicon to solar-grade silicon.

• Module manufacturing: The assembly line from solar-grade silicon to complete solar modules.

• Balance of System components: All system components except the solar cells, together with the electrical components that help constitute a solar PV power plant, such as mounting equipment, wiring, solar tracking systems, cabling etc.

• PV deployment: The last section, PV deployment, refers to the several services which add value to the PV panels. This step regards the integration of the PV panels into the power plant as well as the delivery of electricity to customers; thus this step of the value chain includes system design, applying for different permits, installation etc. [1].


Figure 1. Visualisation of the solar PV value chain

Artificial intelligence has had a huge impact on society, from self-driving cars and automation within industry to smart home applications like Siri or Alexa. As the penetration of this technology has reached most of society, it has also reached the energy sector and has been well researched within the renewable energy sector. Artificial intelligence is widely used for irradiance prediction and estimated power output for solar power plants, as well as wind speed prediction and estimated power output from wind farms [2].


1.2 Objectives and scope

Given that AI has become a very powerful and popular tool, which has led to extensive research in many fields, including renewable energy sources, it is important to understand to what extent this tool can be used within the value chain.

Intuitively, usage of AI should increase efficiency and precision where it is implemented due to the fact that computers can process more data than a human during a given time span. But, what would be the effects of implementing AI? When discussing the effects of using a certain method it is important that the term “effects” is properly defined. In this thesis, effects refers to the difference between using AI to solve a specific problem compared to using conventional methods of solving the same problem. In order to answer this question, the main deliverables of this thesis are:

• What are the components of the solar PV value chain and how does it operate today?

• What are the different artificial intelligence technologies currently in use in the solar PV value chain?

• What are potential future exploits of artificial intelligence within the solar PV value chain?

• What would be the effects on the solar PV system if AI were to be implemented through the value chain in terms of:

o Job creation/destruction
o LCOE reduction or increase

The scope of this thesis will encompass only the solar PV value chain and the focus is mainly on a system level. Here, the term “system level” refers to the systematic thinking of the value chain, i.e. the Balance of system components and the project deployment steps.

Even though many papers have suggested different kinds of usage of AI within the field of solar PV, they mainly focus on a specific task in a specific phase of the value chain, such as solar irradiance prediction [3], solar energy prediction [4], solar power output prediction [5] etc. However, there has been no investigation of how AI impacts the value chain as a whole. In order to answer the questions listed above, the objectives of this thesis are:

• To assess the current state of Artificial Intelligence in terms of use cases and technology application in solar PV, in terms of:

o In which situations a specific AI technique is used
o Which technique shows the best performance

• To evaluate current AI techniques used in the industry by survey-style interviews as well as evaluation of published articles

• To analyze future possible AI development paths in the field of solar PV

• To extract research results from existing papers, including:

o Quantitative results related to the system level thinking of the value chain, where available
o What AI technologies may be applied in different parts of the value chain

• To define KPIs related to economic and social aspects in order to assess the influence of AI throughout the system level of the solar PV value chain


2 FRAME OF REFERENCE

This chapter presents the theoretical framework needed to conduct and understand the objectives of the thesis: the solar resource, the fundamentals of photovoltaics and the characteristics of solar cells, as well as a description of the steps in the photovoltaics value chain. Further, an in-depth explanation of important artificial intelligence concepts and algorithms is presented.

2.1 Solar PV

2.1.1 Solar resource

There are two quantities to describe the solar resource, namely:

• Irradiance (W/m²), which describes the rate at which incoming solar radiant energy is incident on a surface. Irradiance is often denoted G.

• Insolation (J/m² or kWh/m²), which is the time integral of the irradiance and thus describes the incident radiant energy on a surface per unit area. This quantity is often denoted H.
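As a numerical aside (not from the thesis), insolation can be recovered from an irradiance time series by integration; the sketch below assumes a hypothetical half-sine clear-day profile:

```python
import numpy as np

def insolation_kwh_per_m2(irradiance_w_m2, time_h):
    """Insolation H = time integral of irradiance G (trapezoidal rule).
    G in W/m^2, time in hours; dividing by 1000 converts Wh/m^2 to kWh/m^2."""
    g = np.asarray(irradiance_w_m2, dtype=float)
    t = np.asarray(time_h, dtype=float)
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t)) / 1000.0)

# Hypothetical clear day: half-sine between 06:00 and 18:00 peaking at 1000 W/m^2.
hours = np.linspace(6.0, 18.0, 49)
G = 1000.0 * np.sin(np.pi * (hours - 6.0) / 12.0)

H = insolation_kwh_per_m2(G, hours)  # analytic value is 24/pi ~ 7.64 kWh/m^2
```

A cloudy day would simply lower parts of the irradiance curve, and the integral (insolation) shrinks accordingly.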

Given that the sun does not maintain a fixed position, and that clouds disturb the irradiance, both the irradiance and the insolation differ throughout each day, or even by the hour, of the year. Figure 2 below depicts the solar irradiance and insolation over a period of five days on a horizontal surface in California, showing how much impact a cloudy day may have.

Figure 2. Solar irradiance over California over a five day period in January [6, figure 4.2]


the light as it passes through the atmosphere; the power lost is due to absorption and scattering by dust and air particles. The air mass is defined as the path length the light travels through the atmosphere normalized to the shortest possible path (when the sun is at its highest point). This is also equivalent to the inverse of the cosine of the zenith angle [6].

Figure 3. Definition of angles at a specific location. L is the latitude at that site, 𝛿 is the declination angle, 𝛽𝑛 is the

altitude angle and 𝜃 is the zenith angle [7], fig 7.9

The location of the sun at any time of the day may now be described in terms of the altitude angle and the solar azimuth angle. The solar azimuth angle determines the sun's position relative to true south (or true north when calculations are conducted for the southern hemisphere) at a specific location. Here, yet another important quantity must be introduced, namely the hour angle. The hour angle is the number of degrees that the earth must rotate in order to place the sun directly over the local meridian [7].

Figure 4. The solar position at a specific site at any given moment in terms of the altitude and azimuth angle. Here, 𝛽 is the altitude angle and 𝜙𝑠 is the solar azimuth angle [7], fig 7.10
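The relations above (declination, hour angle, altitude and air mass) can be sketched numerically; this is a generic implementation of the standard textbook formulas, not code from the thesis, and the sample inputs are illustrative:

```python
import math

def solar_altitude_deg(latitude_deg, day_of_year, solar_time_h):
    """Altitude angle from the common textbook relations."""
    # Declination (degrees): delta = 23.45 * sin(360/365 * (n - 81))
    delta = 23.45 * math.sin(math.radians(360.0 / 365.0 * (day_of_year - 81)))
    # Hour angle (degrees): 15 degrees of earth rotation per hour before solar noon
    hour_angle = 15.0 * (12.0 - solar_time_h)
    L, d, h = (math.radians(x) for x in (latitude_deg, delta, hour_angle))
    sin_beta = math.cos(L) * math.cos(d) * math.cos(h) + math.sin(L) * math.sin(d)
    return math.degrees(math.asin(sin_beta))

def air_mass(altitude_deg):
    """AM = 1/cos(zenith) = 1/sin(altitude); valid away from the horizon."""
    return 1.0 / math.sin(math.radians(altitude_deg))

# Spring equinox (n = 81, declination 0) at latitude 40 N, solar noon:
# the altitude is simply 90 - latitude = 50 degrees.
beta = solar_altitude_deg(40.0, 81, 12.0)
```

The simple 1/cos(zenith) air-mass formula diverges near the horizon; refined empirical corrections exist but are omitted here.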

2.1.2 PV fundamentals


the current through the electrical grid. Of course, the amount of current that may be extracted then depends on the sun's position and, with that, the available solar radiation.

The radiation emitted by the sun is also known as extraterrestrial radiation (ETR), and when the ETR passes through the atmosphere, its spectral distribution is separated into different components. Due to the presence of the atmosphere and different reflecting objects on the earth, the incoming radiation that contributes to the energy collected on a surface consists of three main parts. Direct-beam radiation passes through the atmosphere in a straight line from the sun down to earth without being scattered or absorbed by the atmosphere and/or particles; on a horizontal surface, its contribution is the projection of the direct-beam radiation onto the surface, using the zenith angle (𝜃) at that location. The diffuse radiation is the part of the solar irradiance which has changed direction due to the atmosphere and/or air particles. Further, light reflected from the ground or other sources (especially if the surface is tilted, which is the case in PV solar systems) may also contribute to the energy collected by the surface [7].

Figure 5. Decomposition of the incoming extraterrestrial radiation [8], fig 2.1

Due to this separation of the ETR, the global irradiance on a surface at a specific location is the sum of these components. On a horizontal surface, the total global irradiance (G) may be defined as:

𝐺 = 𝐺𝑏 cos(𝜃) + 𝐺𝑑 + 𝐺𝑟 (1)

where 𝐺𝑏 is the direct-beam irradiance, 𝜃 is the zenith angle, 𝐺𝑑 is the diffuse irradiance and 𝐺𝑟 is the reflected irradiance.


Figure 6. Solar irradiance in Norrköping, Sweden, by constituents during one day, [9]
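The summation of beam, diffuse and reflected components can be sketched as follows (hypothetical component values; on a horizontal surface the reflected term is usually negligible):

```python
import math

def global_irradiance(beam_normal, diffuse, reflected, zenith_deg):
    """G = G_b*cos(theta) + G_d + G_r: the direct beam is projected onto the
    horizontal surface with the zenith angle, then the diffuse and
    ground-reflected contributions are added (all values in W/m^2)."""
    return beam_normal * math.cos(math.radians(zenith_deg)) + diffuse + reflected

# Hypothetical clear-sky sample: 800 W/m^2 beam at a 30 degree zenith angle,
# 100 W/m^2 diffuse, no reflected contribution on a horizontal surface.
G = global_irradiance(800.0, 100.0, 0.0, 30.0)
```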

As the magnitude of the global radiation decreases, so does the accumulated energy on the surface subjected to the radiation. This causes lower power output of solar panels and especially affects concentrating technologies, such as concentrated solar power systems, as the only contribution for these types of systems is the direct-beam radiation [10]. These effects cannot be mitigated, as the clouds' positions and movements cannot be altered by human intervention, but being able to predict or estimate, for example, variability in the power output of a solar system due to cloud movement could be useful.

Climate models are digital simulations of the Earth's climate system and can be used to estimate climatic behavior. These models run on supercomputers and estimate climate patterns by dividing the Earth into a 3D grid; within each box of the grid, several parameters such as humidity and temperature are calculated. However, the resolution of the grid is far too low for cloud dynamics and physics to be modeled, and increasing the resolution to the required level would not yield any results in a useful amount of time [11]. As cloud movement and positions are difficult to simulate, research has been done to quantify cloudiness through an index. There are different ways to quantify cloudiness, but a commonly used tool is the clearness index, which is the quotient of the radiation striking the surface by the extraterrestrial radiation incoming to earth.
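A small sketch of the clearness index under this definition; the eccentricity correction formula and the sample values are generic assumptions, not taken from the thesis:

```python
import math

def clearness_index(ghi_w_m2, day_of_year, zenith_deg, solar_constant=1361.0):
    """k_T = measured global horizontal irradiance / extraterrestrial
    horizontal irradiance at the same time and place."""
    # Simple cosine eccentricity correction for the varying sun-earth distance.
    e0 = 1.0 + 0.033 * math.cos(math.radians(360.0 * day_of_year / 365.0))
    etr_horizontal = solar_constant * e0 * math.cos(math.radians(zenith_deg))
    return ghi_w_m2 / etr_horizontal

# Hypothetical overcast sample: 250 W/m^2 measured at a 40 degree zenith
# angle around the summer solstice (day 172) -> low clearness index.
kT = clearness_index(250.0, 172, 40.0)
```

Values of k_T near 1 indicate a clear sky; heavily overcast conditions push it towards 0.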

2.1.3 PV panel equivalent electric models


Figure 7. The single diode solar cell equivalent circuit [12]

Using this representation of the solar cell, the output current 𝐼 can be expressed as:

𝐼 = 𝐼𝐿 − 𝐼𝐷 − 𝐼𝑠ℎ (2)

Where 𝐼𝐿 is the photo-current, proportional to the irradiance, 𝐼𝐷 is the current through the diode and 𝐼𝑠ℎ is the leakage current. The current through the diode may further be represented by Shockley's diode equation as:

𝐼𝐷 = 𝐼0[𝑒^(𝑞(𝑉+𝑅𝑠𝐼)/(𝑛𝑘𝑇)) − 1] (3)

Where 𝐼0 is the reverse saturation current, 𝑞 is the electron charge, 𝑇 is the absolute temperature of the cell in Kelvin, 𝑉 is the voltage across the diode, 𝑛 is an ideality factor and represents the recombination of holes and electrons in the p-n junction region and 𝑘 is the Boltzmann constant. The leakage current 𝐼𝑠ℎ may be expressed using Ohm’s law to yield:

𝐼𝑠ℎ = (𝑉 + 𝑅𝑠𝐼)/𝑅𝑠ℎ (4)

Combining equation (3)-(4) and substituting these expressions into equation (2) yields the electrical characteristics of the solar cell as [13]:

𝐼 = 𝐼𝐿 − 𝐼𝐷 − 𝐼𝑠ℎ = 𝐼𝐿 − 𝐼0[𝑒^(𝑞(𝑉+𝑅𝑠𝐼)/(𝑛𝑘𝑇)) − 1] − (𝑉 + 𝑅𝑠𝐼)/𝑅𝑠ℎ (5)
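Equation (5) is implicit in 𝐼, since the current appears inside the exponential; a common approach is to solve it numerically for each voltage. The sketch below uses simple bisection with hypothetical cell parameters (not values from the thesis) and also locates the maximum power point on the resulting I-V curve:

```python
import math

# Hypothetical single-diode parameters for one cell at 25 C.
q, k, T = 1.602e-19, 1.381e-23, 298.15
IL, I0, n, Rs, Rsh = 5.0, 1e-9, 1.3, 0.02, 100.0
Vt = n * k * T / q  # ideality-scaled thermal voltage, ~0.0334 V

def cell_current(V):
    """Solve eq. (5) for I at a given V by bisection.
    f(I) below is strictly decreasing in I, so sign bracketing is robust."""
    def f(I):
        return IL - I0 * (math.exp((V + Rs * I) / Vt) - 1.0) \
               - (V + Rs * I) / Rsh - I
    lo, hi = -1.0, IL + 1.0  # f(lo) > 0 and f(hi) < 0 over the sweep below
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sweep the I-V curve and locate the maximum power point numerically.
volts = [v / 100.0 for v in range(0, 75)]
powers = [v * cell_current(v) for v in volts]
v_mpp = volts[powers.index(max(powers))]
```

The same sweep underlies the I-V and P-V curves discussed in section 2.1.4: short-circuit current at V = 0, open-circuit voltage where the current crosses zero, and the maximum power point between them.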


Figure 8. The double diode solar cell equivalent circuit

Following the same derivation as for the single diode model, the output current is given as:

𝐼 = 𝐼𝐿 − 𝐼𝐷1 − 𝐼𝐷2 − 𝐼𝑠ℎ = 𝐼𝐿 − 𝐼01[𝑒^(𝑞(𝑉+𝑅𝑠𝐼)/(𝑛1𝑘𝑇)) − 1] − 𝐼02[𝑒^(𝑞(𝑉+𝑅𝑠𝐼)/(𝑛2𝑘𝑇)) − 1] − (𝑉 + 𝑅𝑠𝐼)/𝑅𝑠ℎ (6)

Where 𝑛1 = 1 and 𝑛2 = 2.

The electrical output of a single solar cell is not sufficient to power most applications, and certainly not sufficient to have an impact on the electrical grid; neither is a single solar panel, which is comprised of several cells. Solar panels, much like batteries, can be coupled in different arrangements to yield different output characteristics. Solar panels in a series connection increase the voltage output in proportion to the number of panels connected, while if the panels are connected in parallel, the output current is proportional to the number of panels connected. Any linked collection of solar panels is also called an array [14].

Figure 9. Series and parallel configuration of solar panels [14], fig. 15.2-15.3
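The idealized series/parallel scaling described above can be sketched as follows (hypothetical panel ratings; real arrays lose some output to mismatch and wiring):

```python
def array_output(v_panel, i_panel, n_series, n_parallel):
    """Idealized scaling for identical panels: a series string multiplies the
    voltage, parallel strings multiply the current."""
    return v_panel * n_series, i_panel * n_parallel

# Hypothetical 40 V / 9 A panel: strings of 20 in series, 3 strings in parallel.
v_array, i_array = array_output(40.0, 9.0, 20, 3)
p_array_kw = v_array * i_array / 1000.0  # 800 V * 27 A = 21.6 kW
```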

2.1.4 Current-voltage characteristics


flowing through it, and can thus be used to identify at which point a solar cell with specific parameters outputs the highest amount of power.

Figure 10. I-V curve of a solar cell [43]

The performance of PV panels is usually derived under a set of standardized conditions, known as standard test conditions (STC), in which the irradiance level is 1000 W/m², the cell temperature 25 ℃ and the air mass 1.5.

2.2 Solar PV value chain

This section will break down the different steps in the solar PV value chain in order to further understand the development of solar cells, modules and systems. The solar PV value chain contains numerous steps with each of these steps containing advanced and demanding processes.

2.2.1 The solar panel components

The solar PV module is much more than just solar cells. It is a construction which has to withstand outdoor usage and harsh weather while having a reasonably long service life. Apart from the photovoltaic solar cells, several other components are needed in order to construct a solar PV panel:

• The front glass, whose main function is to offer protection and provide robustness to the module. Another very important property of the front glass is high transparency, in order to allow as much incident light as possible to reach the solar cells.

• A back-sheet, made out of plastic and positioned at the back of the structure, just behind the PV cells; it electrically isolates the PV cells from the armature and protects against dirt and moisture.

• The encapsulant material, a vital component of the solar panel which acts as a binder between the different layers within the panel. The encapsulant is typically placed above and below the solar cells and, besides binding the components together, it increases the robustness and service life of the panel and the cells. The material used for the encapsulant is usually a polymer, most commonly EVA – ethylene vinyl acetate.


Figure 11. The different components of a solar panel, [15]

As can be seen, a solar PV panel contains many materials other than the silicon; these are outside the scope of this project and are merely presented here to give an overview of the components of the panel.

2.2.2 Silicon production

Solar cells are made out of solar-grade silicon (SGS), which in turn is obtained from metallurgical-grade silicon (MGS), produced through the reduction of sand in the presence of carbon. Solar-grade silicon is a very high-purity form of silicon, and in order to obtain it, the silicon must go through various refining stages. SGS can be produced in two different ways: through a chemical approach or a metallurgical approach [16].


Figure 12. Production steps in the metallic route of solar grade silicon [19], fig. 5

The chemical approach uses pyrolysis and reduction in order to obtain SGS from MGS. Through these processes, the volatile compounds that reside in the silicon from the production of MGS are removed. These volatile compounds include SiH4, SiCl4 and SiHCl3. There are mainly two methods of producing SGS through the chemical route, namely the Siemens process and the fluidized bed reactor [19].

Figure 13. Production steps in the chemical route of solar grade silicon [19], fig. 6


decomposing this purified molecule in a “Siemens reactor”. The trichlorosilane, which is of very high purity after the refining steps, is fed into the Siemens reactor together with hydrogen and reacts with heated silicon seed rods. The rods are heated to around 1100 ℃, which causes high-purity (solar-grade) silicon to be deposited onto them. With the solar-grade silicon deposited on the rods, the silicon is broken into chunks for further transport to the microelectronic and solar industries.

Figure 14. Solar grade silicon production in a fluidized bed reactor [20], fig 6

The second main method of chemical production of solar-grade silicon is by processing silane gas in a fluidized bed reactor. The reactor is fed small particles (seeds) of silicon from the top, while silane gas and hydrogen gas are introduced at the bottom. The reactor is heated, and the continuous flow of gas from the bottom causes the particles in the reactor to behave like a fluid, which in turn yields very high temperature homogeneity. The silane gas decomposes in the reactor and either grows on existing particles from the feed or consumes them. When the seeds grow too large for the fluidization to be maintained, they are extracted at the bottom of the reactor [20].

2.2.3 Module manufacturing

The production steps from solar-grade silicon to complete solar modules are accomplished through four steps:

• Ingot casting

• Wafer cutting

• Etching, cleaning, polishing and antireflective coating

• Soldering cells to form a module


seed occurs, the rod is extracted and rotated simultaneously. The movement of the rod, if controlled appropriately, causes the melted SGS to solidify in such a way that the result is a monocrystalline silicon ingot [21]. Polycrystalline silicon, on the other hand, is allowed to solidify on its own, which causes grains of silicon to form.

From the ingots, wafers are cut into thin slices using a wire saw. The wafers are then treated to remove any damage caused by the saw, and antireflective coatings are added. Further, controlled amounts of phosphorus are introduced to the wafer to create the p-n junction [22]. A complete solar panel can contain anywhere from 36 to 144 cells. These are soldered together, and cabling is added between the positive and negative sides of the solar module to enable electron transfer.

2.2.4 Project development and operations

The project development step in the solar PV value chain, by the definition given in the Introduction, includes every step of solar energy extraction from manufactured module to electricity delivery. This chapter aims to describe how a solar PV power plant is created, from initial planning, which includes site recognition, load demand, available solar resource etc., to operation, which in turn includes efficient operation of the solar panels such as solar tracking and energy storage. For each stage, the conventional and traditional methods used to determine metrics related to the operations of the power plant are covered.

2.2.4.1 Feasibility assessment

In order to develop a profitable solar PV system, the possible sites at which the proposed system may be installed must be identified. Assessing a site suitable for a solar PV system installation involves several factors, some of which are summarized in the bullet list below:

• Cost of land, which could increase if large amounts of service infrastructure such as roads and/or electricity transfer infrastructure have to be built in addition to the plant

• Land building permissions

• Real estate market conditions

• Proximity to infrastructure such as roads for construction and maintenance and, in the case of grid-connected systems, proximity to cables for grid connection

• Solar resource, which could fluctuate not only due to the amount of irradiance but also due to variations in the weather such as rain, clouds etc.

• In the case of standalone systems, the local load demand must be investigated, which could also include energy storage solutions

Depending on the country, the cost of land and permissions needed are more or less fixed and cannot be influenced, although the site-specific parameters (solar resource and proximity to grid and service roads) can be. In terms of energy yield, the only parameter listed above which could be influenced is of course the solar resource. Solar resource estimation can be accomplished using either statistical approaches or numerical approaches.


are constant through time. The ARIMA model includes a backshift operator in order to handle non-stationary time series. In the prediction of any parameter there is usually a benchmark method; in solar irradiance forecasting this is the persistence method, which assumes that the global irradiance at the next time step is best predicted by the irradiance value at time t. This method is highly inaccurate for time horizons beyond one hour, except when weather conditions are constant and no cloud coverage is present. Thus, the persistence method is usually used as a benchmark method [111].
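A minimal sketch of the persistence benchmark and an error metric against which other forecasters are compared (the irradiance samples are invented for illustration):

```python
import numpy as np

def persistence_forecast(series):
    """Predict G(t+1) = G(t): the standard persistence benchmark."""
    return series[:-1]

def rmse(pred, actual):
    """Root mean square error between forecast and observation."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(actual)) ** 2)))

# Hypothetical hourly GHI samples (W/m^2) with a cloud dip around midday.
ghi = np.array([0, 120, 340, 560, 700, 420, 650, 540, 300, 90], dtype=float)
pred = persistence_forecast(ghi)
error = rmse(pred, ghi[1:])
```

Any statistical or AI model is only useful if it beats this trivial baseline on metrics such as RMSE, nRMSE or MAPE.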

Numerical approaches involve satellite and cloud imagery models as well as numerical weather prediction (NWP). Satellite and cloud imagery methods are based on tracking cloud movements, extrapolating them and applying radiative transfer models (RTM) in order to derive the irradiance. Satellite imagery, however, often suffers from low spatial and temporal resolution, which is why ground based imagery may also be used. This is accomplished using a total sky imager, a device which captures the local conditions very accurately at high spatial resolution but may only be used for short deterministic predictions. Satellite-based irradiance data is usually supplied as a service; SolarGIS is an example of this and a common tool for solar resource assessment. NWP uses a set of differential equations derived to describe the physics of the weather. NWP is used to forecast the state of the weather up to 15 days ahead by agencies such as the US National Oceanic and Atmospheric Administration and the European Centre for Medium-Range Weather Forecasts [111].

2.2.4.2 System design

When the initial site has been established and the energy yield, costs and permits of the site are known, the design and construction of the system can commence. Knowledge of how the system will operate once deployed is of great value to the design, which is why simulations are often carried out. The simulations are based on the equivalent circuit models for the solar panels, as described above, and help describe the operational performance of the system. The model parameters may be found using analytical or numerical approaches. The issue that arises with analytical methods is that several assumptions and approximations are made, which introduces model errors. Numerical methods, on the other hand, have proven to be a better solution. These methods include the Newton-Raphson method, non-linear least squares optimization and pattern search, although they are computationally demanding. Further, parameter identification has also been accomplished using Markov chains [112].
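As an illustration of the numerical approach, the sketch below applies the Newton-Raphson method to the standard single-diode equivalent circuit equation, solving for the module current at a given terminal voltage. All parameter values (photocurrent, saturation current, resistances, modified ideality factor) are illustrative assumptions, not values from any particular module:

```python
import math

def diode_current(V, Iph=8.0, I0=1e-9, Rs=0.2, Rsh=300.0, a=2.33):
    """Newton-Raphson solution of the implicit single-diode equation
        f(I) = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh - I = 0
    for the module current I at terminal voltage V. Iph (photocurrent),
    I0 (saturation current), Rs/Rsh (series/shunt resistance) and a
    (modified ideality factor) are illustrative module-level values."""
    I = Iph  # the short-circuit current is a reasonable initial guess
    for _ in range(100):
        e = math.exp((V + I * Rs) / a)
        f = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I
        df = -I0 * Rs / a * e - Rs / Rsh - 1.0   # df/dI
        step = f / df
        I -= step
        if abs(step) < 1e-10:  # converged
            break
    return I
```

Sweeping V from zero to the open-circuit voltage with this routine traces out the simulated I-V curve of the module.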


Figure 15. PV panel to string to array

Further design considerations for PV systems are the possibility to include storage (more common in off-grid systems) as well as the tilt angle of the array, which defines the angle at which the arrays are elevated relative to the ground. In the case of a fixed system, the tilt angle is usually set equal to the latitude of the location. Alternatively, several solar tracking techniques may be employed, which enable the panels to follow the sun and effectively allow the panels to operate at MPP.


2.2.4.3 System operations

During operations, it is of great interest to keep track of the power output generated by the system, both for grid operators, who must be able to balance the grid in the case of intermittency, and in standalone mode, where auxiliary systems can help meet the demand. As in the case of solar irradiance, time series data is the key to power forecasting, and power forecasting for solar PV is highly dependent on weather forecasting, which is why the same methods apply here.

As mentioned above, the solar PV system can be equipped with tracking equipment which attempts to follow the motion of the sun in order to allow the PV panels to operate at MPP. This is obtained by continuously adjusting the resistance seen between the inverter's input stage and the PV panel in order to maximize the product of module voltage and current, which effectively means more power output. MPPT can be combined with single or double axis tracking, which requires additional hardware in terms of moving parts for the axes but also the controller which adjusts the axis positions. Thus tracking is a rather expensive investment, increasing the capital cost as well as the O&M costs, but it will provide more power. The tracking in the controller is achieved using different strategies, a common one being the Perturb and Observe (PO) algorithm. The PO algorithm introduces a perturbation to the voltage applied to the PV module and calculates the new power; if the new power is greater than the old, the perturbation is continued in the same direction, and if it is smaller, the direction of the perturbation is reversed. By this process, the PO algorithm finds the MPP of the I-V curve and the controller can thus adjust the operating point to achieve MPP [114].
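A minimal sketch of the PO strategy, using a made-up single-peak P-V curve in place of a real module:

```python
def pv_power(v):
    """Toy P-V curve with a single maximum, standing in for the real
    module; it peaks at v = 30 V (purely illustrative)."""
    return max(0.0, 200.0 - 0.22 * (v - 30.0) ** 2)

def perturb_and_observe(v=20.0, step=0.5, iterations=200):
    """Perturb & Observe: perturb the operating voltage; if power rose,
    keep perturbing in the same direction, otherwise reverse direction."""
    direction = 1.0
    p_old = pv_power(v)
    for _ in range(iterations):
        v += direction * step
        p_new = pv_power(v)
        if p_new < p_old:      # power dropped -> reverse the perturbation
            direction = -direction
        p_old = p_new
    return v
```

Note that the algorithm never settles exactly on the MPP but oscillates around it within one step size, which is the classic drawback of PO.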

2.2.5 Post operation


2.3 Artificial intelligence

Artificial intelligence (AI) is a broad term and is often used interchangeably with deep learning and machine learning. By definition, AI is the ability of a computer system to perform a task that usually requires human intelligence, such as decision making. Of course, this definition could contain anything that can be perceived as “requiring human intellect”, which is why the term AI is broken down into subsets, namely narrow AI, general AI and super AI. General AI refers to mimicking human intelligence and behavior and has the ability to learn how to solve any given problem, while a super AI would not only be able to learn to solve any problem but would also outperform any human and be self-aware. Neither general nor super AI is yet available; there was an attempt by Fujitsu to create a general AI, but simulating a single neural activity took the machine 40 minutes, so general AI, as well as super AI, are still merely concepts [23]. Narrow AI gets its name from having a narrow range of abilities, but it can be trained to perform these tasks very quickly and accurately (compared to human activity). Narrow AI is the type that is currently available and is thus the type of interest here. This chapter aims to break down the concept of AI and its subsets, as well as the most commonly used AI techniques within solar power systems.

2.3.1 Machine learning

As can be seen from figure 16 below, machine learning is a central part of artificial intelligence and some form of machine learning is used in all types of AI.

Figure 16. The subsets of artificial intelligence

Machine learning aims to find a relationship between a response or dependent variable (𝑌) and one or several input variables (𝑋1, 𝑋2, … , 𝑋𝑛). Thus, this relationship may be defined as:

𝑌 = 𝑓(𝑋) + 𝜀 (6)


Machine learning refers to different approaches to estimate the function f in the equation above, and there are two main reasons for doing this:

Prediction: Given a new point of measurement, previous data points are used to choose the correct identifier from a set of different outcomes. This means that the function 𝑓 is treated as a black box and the aim is to make accurate predictions of some event based on some input.

Inference: Given a dataset, the aim is to infer how the output of the model is generated as a function of the data. Inference can also be seen as trying to understand the relationship between the input and the output, and from that understanding being able to draw conclusions about the output based on the input [24].

2.3.1.1 Parametric and non-parametric models

Machine learning algorithms are classified as one of two types, namely parametric and non-parametric algorithms. A parametric algorithm assumes the functional form or shape of the function 𝑓, with a fixed number of parameters. The function is assumed to follow some form of relation between the predictors and the value of the function. For example, a parametric model of some function might be linear, which means that instead of having to estimate a completely arbitrary function 𝑓(𝑥) of dimension n, the fitting reduces to estimating the 𝑛 + 1 coefficients:

𝑓(𝑋) = 𝛽0 + 𝛽1𝑋1 + 𝛽2𝑋2 + ⋯ + 𝛽𝑛𝑋𝑛 (7)

When the functional form of 𝑓 has been assumed, the data can be fitted to this estimated form. This can be done using several different techniques; one of the most common methods to determine these coefficients is regression.
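For the simplest case of equation (7), a single predictor, the coefficients can be determined with ordinary least squares. A small sketch with illustrative, noise-free data:

```python
def fit_simple_linear(xs, ys):
    """Ordinary least squares for f(X) = b0 + b1*X, the simplest
    instance of the parametric form in equation (7)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b1 = sxy / sxx
    b0 = mean_y - b1 * mean_x
    return b0, b1

# Data generated from y = 2 + 3x, so the fit should recover b0 = 2, b1 = 3
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0, 5.0, 8.0, 11.0]
b0, b1 = fit_simple_linear(xs, ys)
```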

A non-parametric algorithm, on the other hand, does not make any assumptions about what the function looks like. Instead, a non-parametric algorithm tries to find a pattern of the function from the given dataset that matches the data points in the set as closely as possible. This means that a non-parametric algorithm has more possibilities to fit the data to the correct function than a parametric algorithm.

Reading the definitions of parametric and non-parametric algorithms above, it could seem obvious that a non-parametric algorithm should be the choice in any given situation, but there are advantages and disadvantages to both; these differences are summarized in the table below [24].

Table 1. Summary of advantages and disadvantages of parametric and non-parametric algorithms

Parametric
Advantages: Simple to understand; fast to train; requires less data than non-parametric algorithms.
Disadvantages: Constrained to the assumed functional form; limits the complexity of the model; can lead to a poor fit.

Non-parametric
Advantages: Very flexible in finding patterns within the dataset, which means that it can fit simple as well as complex forms; the resulting models often display high performance after training.
Disadvantages: Requires substantially more data than parametric algorithms; the training process is very slow compared to parametric algorithms; may suffer from overfitting, and as the functional form is not known, it is harder to understand how this overfitting arises.

2.3.1.2 Supervised, unsupervised and reinforcement learning

In machine learning, two fundamental types of problems are solved: regression and classification problems. To solve these, the model is subjected to different training regimes depending on what type of problem is to be solved as well as what kind of data is available. These training regimes are supervised, unsupervised and reinforcement learning.

Supervised learning is the most commonly used type of machine learning. The principle of supervised learning is that the input variables (predictors) are labelled with the corresponding output; supervised learning is thus used for two types of problems: regression and classification problems.

Regression problems are concerned with predicting a continuous quantity, meaning that the output variable can assume infinitely many possible values. As an example, regression can be used to predict the price of a house based on its location, size and proximity to stores, schools etc. In regression, the output variables are classified as quantitative. Classification, on the other hand, deals with assigning the output to a specific class. A classification problem could for example be to identify a person’s gender (female or male) or to determine whether there is a tumor in an x-ray image (cancer or no cancer). For a classification problem, the output variables are classified as qualitative.

Besides telling the machine the correct answer to a given input, supervised learning also includes supervision of the training process, in the sense that training is monitored in order to ensure that the input variables produce the correct output variables. This type of learning is a closed-loop feedback system where the error between the correct output and the estimated output is the feedback signal (see figure 18). The error, as shown in equation (8), guides the learning process, and the learning is deemed complete when the error is sufficiently small or some other criterion is met. In supervised learning, this error term, or error function, is usually taken as the mean squared error (MSE), given by [25]:

𝐸 = (1/𝑁) ∑_{𝑝=1}^{𝑁} ||𝑦𝑝 − 𝑦̂𝑝||² (8)

where N is the number of input/output pairs, 𝑦𝑝 is the correct output of the 𝑝th input/output pair and 𝑦̂𝑝 is the estimated output of the 𝑝th input/output pair.
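Equation (8) translates directly into code; a minimal sketch with illustrative values:

```python
def mean_squared_error(y_true, y_pred):
    """Equation (8): the mean of the squared deviations between the
    correct outputs y_p and the estimated outputs y_hat_p."""
    return sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred)) / len(y_true)

# Small illustrative example: a nearly perfect set of predictions
error = mean_squared_error([1.0, 2.0, 3.0], [1.1, 1.9, 3.0])
```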

During training, the dataset used to train in a supervised way is divided into three sets:
 Training set
 Validation set
 Test set


The training set is what the model uses to learn to produce the correct output. The model is assigned a set of hyperparameters and is then fitted to the training data.

Figure 17. Illustration of the decomposition of the dataset [26]

After training, the model is evaluated on the validation set with the same hyperparameters as were used for training; based on the performance of the model on the validation set, the hyperparameters can be adjusted. This process of completing a training procedure, also known as an epoch, is repeated until the highest performance on the validation set is recorded, measured for example using the MSE. The final model is then trained using both the validation and training sets, and the performance of the model is finally evaluated on the test set [26].
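A minimal sketch of the three-way split described above (the 70/15/15 fractions are an illustrative choice, not from the text):

```python
import random

def split_dataset(data, train_frac=0.7, val_frac=0.15, seed=0):
    """Shuffle and split a dataset into training, validation and test
    sets; whatever remains after the first two becomes the test set."""
    rng = random.Random(seed)
    shuffled = data[:]           # copy so the original order is untouched
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(list(range(100)))
```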

Figure 18. Schematic of supervised learning [100]

Unsupervised learning, on the other hand, does not specify what output the input variables should produce, i.e. the output is not labelled. In unsupervised learning, the model is handed an unlabeled dataset which may not even have a desired outcome or correct answer. This is up to the model to decide, and the model thus aims to find patterns, known or unknown, in the dataset. Unsupervised learning is often used for clustering problems but can also be used in anomaly detection, association and autoencoders. These are just different ways to organize the data.


Clustering groups the data points into clusters in which a new data point could be sorted. Clustering can also be seen as unlabeled classification.

Figure 19. Schematic of unsupervised learning [100]

Reinforcement learning differs vastly from supervised and unsupervised learning in that no dataset is fed to the model. Instead of learning from a dataset, the algorithm makes use of an agent, a piece of software, which is programmed to adapt to its surroundings. The agent is rewarded if it makes progress, as defined by the programmer, and is punished if progress is not made.

An example of reinforcement learning could be an algorithm that is programmed to play chess. The algorithm would be rewarded if it makes a move that captures an opposing piece, with more points for higher-valued pieces, but it would be punished if it loses one of its own. With enough training, it would become a very high-performing chess player, making the correct move in every situation [27].

Figure 20. Schematic of reinforcement learning [100]

2.3.1.3 Concepts in machine learning

In machine learning, there are several concepts one should be aware of. When building a model which is used to predict some event based on some input, some error in the output almost always arises; a perfect prediction is very rare. This error is usually referred to as the prediction error and is composed of two different types of errors, namely reducible and irreducible errors.

Reducible errors are errors related to the difference between the output prediction and the true value. They are errors which may be reduced by, for example, increasing the order of the estimated function from linear to quadratic in the case where the model used was oversimplified.


Figure 21. Representation of the prediction error [29]

As can be seen from the figure above, reducible errors are further divided into two subfields: bias and variance errors. Bias in the model refers to the model being too simple and biased towards certain outcomes in the training set. This definition is rather similar to that of the reducible error, and such an error is also referred to as underfitting. Variance errors are the opposite of bias and cause the model to identify relations in the training dataset that might not actually be relations, such as relating noise in the dataset to the output. In other words, a model with high variance finds every possible pattern within the dataset without regard to faulty inputs and/or noise, a phenomenon called overfitting. Trying to decrease one of these will lead to an increase in the other; the balance between the two phenomena is known as the bias-variance tradeoff. Finding the optimum balance between bias and variance, given that the irreducible errors are relatively low, will also cause the model to behave well outside of the training set; the model is then deemed to have good generalization [29][30].

Figure 22. Illustration of the bias-variance tradeoff [31]
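A toy illustration of the tradeoff, on purely illustrative data: a model that ignores the input underfits (high bias), while a model that memorizes the noisy training points reaches zero training error but still errs on unseen points (high variance):

```python
import random

random.seed(1)

# Toy data: y = x^2 plus a little noise (purely illustrative)
train = [(x / 10.0, (x / 10.0) ** 2 + random.gauss(0.0, 0.02)) for x in range(10)]
test = [(x / 10.0 + 0.05, (x / 10.0 + 0.05) ** 2) for x in range(10)]

def mse(points, predict):
    return sum((y - predict(x)) ** 2 for x, y in points) / len(points)

# High bias (underfitting): always predict the training mean, ignoring x
mean_y = sum(y for _, y in train) / len(train)
def biased(x):
    return mean_y

# High variance (overfitting): memorize the training set; a new point is
# answered with the output of the nearest memorized training point
def memorize(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

train_mse_biased = mse(train, biased)   # poor even on the training data
train_mse_memo = mse(train, memorize)   # exactly zero: the noise is "fitted" too
test_mse_memo = mse(test, memorize)     # nonzero on unseen points
```

The memorizing model looks perfect on the training set, but its zero training error is exactly the overfitting symptom described above.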

2.3.1.4 Neural networks


The basic building block of a neural network is the artificial neuron, and the functionality of the artificial neuron is meant to mimic the behavior of a biological neuron, which appears in the human brain.

Figure 23. McCulloch-Pitts neuron model [32]

The image above depicts how information is processed in each neuron, where 𝑥𝑖 are the inputs to the neuron, 𝑤𝑖 are the weights of each input, 𝑦 is the output from the neuron, 𝜃 is the bias of the neuron and 𝜙(∙) is the activation function. The weighted inputs are summed at the neuron and yield a corresponding output. The bias, typically initialized with a random value between 0 and 1, is introduced to the neurons to help with fitting the model to the data. As the inputs, weights and bias are combined at the neuron, the output from that neuron is given by a linear function. By applying this output to the activation function, non-linearity is introduced to the neuron, which makes it possible to model data that is not linear. The activation function can assume several different forms; two common types of activation function are the sigmoid and tanh functions [32]. Combining several of these neurons creates a neural network. Each column of neurons within the network makes up a layer, and the neurons within a layer may be connected in different ways. Often, each neuron or input is connected to each neuron in the next layer without any intra-layer connections; these types of layers are called fully-connected layers. Any layer that is not directly related to the input or output of the network is called a hidden layer [33].

Figure 24. Representation of a neural network (perceptron)
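A single artificial neuron as described above can be sketched in a few lines; the weights, inputs and bias are arbitrary illustrative values:

```python
import math

def sigmoid(z):
    """Common activation function: squashes any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """One artificial neuron: the weighted sum of the inputs plus the bias
    (a linear function), passed through a non-linear activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Illustrative inputs, weights and bias
y = neuron([0.5, -1.0, 2.0], [0.8, 0.4, 0.1], bias=0.2)
```

Stacking many such neurons into layers, with the outputs of one layer feeding the next, gives the network of figure 24.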


2.3.1.4.1 Feed-forward and Backpropagation

Feed-forward is a common term in neural networks and refers to feeding the information between the neurons forward in the network, with forward meaning from the input layer to the output layer. With this definition, every ANN is a feed-forward network. However, feed-forward passes alone cannot update the weights and may thus produce significant errors in the output. To solve this, ANNs utilize some form of backpropagation. Backpropagation is the method used to feed the error from the output layer back through the network in order to adjust the weights. The error found at the output is based on some loss or error function (for example, as described in “Supervised learning”, the MSE). As the error between the desired output and the output from the network must be known, backpropagation only applies to supervised learning.

Figure 25. Representation of error backpropagation [34]

A backpropagation algorithm calculates the partial derivatives of the error function with respect to each individual weight in the network. The algorithm applies the chain rule through each individual path in the network and thus finds the error contribution in every layer; the key behind using the chain rule is that any ANN is in reality only a nested composite function. This is a stepwise process which is repeated until the minimum error is found; backpropagation thus applies gradient descent to minimize the error in the output [34][35].
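For a network consisting of a single sigmoid neuron, the chain rule computation above reduces to a few lines; the sketch below trains the neuron by gradient descent on the squared error (input, target, initial weights and learning rate are illustrative values):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One sigmoid neuron trained by backpropagation
x, y_target = 1.5, 0.8
w, b, lr = 0.1, 0.0, 0.5

for _ in range(2000):
    y_hat = sigmoid(w * x + b)                 # forward pass
    dE_dyhat = 2.0 * (y_hat - y_target)        # dE/dy_hat for E = (y_hat - y)^2
    dyhat_dz = y_hat * (1.0 - y_hat)           # derivative of the sigmoid
    # Chain rule: dE/dw = dE/dy_hat * dy_hat/dz * dz/dw, with dz/dw = x
    w -= lr * dE_dyhat * dyhat_dz * x
    b -= lr * dE_dyhat * dyhat_dz              # dz/db = 1

final_error = (sigmoid(w * x + b) - y_target) ** 2
```

In a multi-layer network, the same chain-rule product is simply extended with one factor per layer on the path from the weight to the output.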

When initializing an NN, a learning rate must be specified. The learning rate tells the network how much each update step will affect the neuron weights. If the learning rate is too high throughout the training process, the value of the error function will start to oscillate and the training might get stuck in local minima or maxima. In order to combat this, the learning rate can be incrementally decreased throughout the training. This can be done in several ways, such as stepwise or exponentially.

Another method to stabilize the training process is regularization. Regularization is the process of limiting overfitting of the model by penalizing complexity, which as seen before may cause the model to perform at a high level on training data but poorly on unseen data. Regularization is basically implemented by adding a term to the loss function in such a way that large weights are penalized.
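Minimal sketches of the two stabilization techniques just described, stepwise/exponential learning-rate decay and an L2-style weight penalty; the decay factors and penalty strength are illustrative choices:

```python
import math

def step_decay(lr0, epoch, drop=0.5, every=10):
    """Stepwise decay: multiply the learning rate by `drop` every `every` epochs."""
    return lr0 * drop ** (epoch // every)

def exponential_decay(lr0, epoch, k=0.05):
    """Exponential decay: smoothly shrink the learning rate over the epochs."""
    return lr0 * math.exp(-k * epoch)

def l2_regularized_loss(data_loss, weights, lam=0.01):
    """Regularization: penalize large weights by adding their squared sum,
    scaled by lam, to the ordinary loss."""
    return data_loss + lam * sum(w * w for w in weights)
```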

2.3.1.4.2 Convolutional neural networks


The filters of a convolutional layer have learnable weights and are often of spatial dimension 3x3. Each layer in a CNN produces several output features because the filters in each layer are not identical. Each layer outputs a set of new images containing some features of the original input, based on the filters that were applied to the input. As usual, the output from a layer is used as the input to the succeeding layer. The filters within a convolutional layer are analogous to the neurons in MLPs.

Figure 26. Working principles of a CNN

As each convolution is applied, information about the spatial location of the extracted features is lost, but increased performance in object detection is achieved. The first set of convolutions produces an output that contains information about low-level concepts such as edges, and progressively, as the image moves down the network, the output from successive layers contains more and more high-level information about concepts such as hands or feet. Eventually, at the bottom-most layer, a global representation of the input has been extracted. The network can use that global representation to make a prediction (classification) of what the input was; as in the figure above, is this a stickman or not [33].
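The working principle of a convolutional layer can be sketched with a hand-written valid convolution; the 3x3 vertical-edge kernel below is an illustrative hand-picked filter (in a CNN it would be learned), and it responds only at the boundary in a small binary image:

```python
def convolve2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries):
    slide the kernel over the image and sum the elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            )
    return out

# A dark-to-bright vertical boundary in a tiny binary "image"
image = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
]
edge_kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
feature_map = convolve2d(image, edge_kernel)   # strong response at the edge only
```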

2.3.1.4.3 Recurrent neural networks


Figure 27. RNN architecture vs. Feed-Forward Neural Network architecture

As an example, assume that one is interested in predicting what type of scene appears at every point in a movie. A regular feed-forward network does not employ any reasoning that would enable it to predict such an output. Of course, given the genre of the movie, a feed-forward network could be trained on a huge set of movies and might be able to predict the next scene, but the training time and the amount of data required would be huge. Thus, feed-forward networks do not use information about previous events to inform coming events.

RNNs, on the other hand, use the information from previous events in order to inform the next ones, although, as mentioned previously, employing RNNs requires that the dataset used on the network follows a timeline [36].

2.3.1.4.3.1 LSTM


Figure 28. Regular unit (neuron) compared to a LSTM unit

An LSTM introduces several gates within the cell to manage the dataflow. These gates have weights of their own and allow the cell to learn, through the learning process, when to let data enter, leave or be erased. By applying this technique, the LSTM carousels gradients within the cell, preventing them from vanishing as described above, until those gradients are deleted within the cell [37].

2.3.1.5 Fuzzy logic

FL was first presented back in 1965 by Lotfi Zadeh and allows a system to assign data partial memberships rather than forcing it to belong to a set or not. This allows for the handling of noisy, imprecise, missing or vague input data while still reaching a correct conclusion. FL is a form of “multi-valued logic” which is derived from fuzzy sets, a fuzzy set being a set which assigns each of its members a degree of membership of a specific property [45]. The opposite of this type of logic is “crisp logic”, which is often used in classification tasks as described above; a result of crisp logic could for example be whether the input data, in the form of an image, represents a dog or not. As FL accepts inconsistency in the input, it is widely used in control theory and thus serves a good purpose, among others, in solar radiation prediction and solar tracking mechanisms [46].

Thus, implementing FL in a system is a way to introduce human-like reasoning into it. A classic example of FL is the control of the temperature knob of a water tap: the temperature of the water does not suddenly go from cold to hot and vice versa but has several steps in between, cold, rather cold, warm, warmer etc.


The figure above illustrates how the data is assigned membership depending on the temperature; it could for example have a membership of 0.4 in “warm” and 0.6 in “cold”, while in crisp logic it is either cold or hot. This part of the implementation is called “fuzzification” and is the assignment of data to these partial memberships through a membership function.

Figure 30. Fuzzification of data vs boolean (crisp) logic

When the data has been fuzzified it is fed into the inference engine. Here, the matching degree of the current fuzzy input is determined against the rule base (the definition of how the memberships of the inputs are decided), which determines which of these rules are to be fired. In order to provide a concluding output, the data is then fed from the inference engine into the “defuzzifier”, which converts the fuzzy input to a crisp value. Even if the system allows malformed data, it is still expected to provide a concluding output [46].

Figure 31. Fuzzy logic components and architecture
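A minimal sketch of the fuzzification/defuzzification pipeline for the temperature example, with triangular membership functions and a simple membership-weighted (centroid-style) defuzzifier; all set boundaries and peak values are illustrative assumptions:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative fuzzy sets for water temperature (boundaries are assumptions)
SETS = {"cold": (-10.0, 0.0, 20.0), "warm": (10.0, 22.0, 35.0), "hot": (25.0, 40.0, 55.0)}
CENTERS = {"cold": 0.0, "warm": 22.0, "hot": 40.0}

def fuzzify(temp):
    """Fuzzification: assign the input partial memberships in each set."""
    return {name: tri(temp, *abc) for name, abc in SETS.items()}

def defuzzify(memberships):
    """Defuzzifier: membership-weighted average of each set's peak value,
    converting the fuzzy memberships back into one crisp output."""
    total = sum(memberships.values())
    return sum(memberships[k] * CENTERS[k] for k in memberships) / total

m = fuzzify(15.0)     # partly "cold", partly "warm", not "hot" at all
crisp = defuzzify(m)
```

A full controller would apply the rule base between the two steps; here the memberships are passed straight to the defuzzifier for brevity.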


2.3.1.5.1 ANFIS

Fuzzy systems do not possess any learning capabilities and have no memory of previous events. In order to create a scalable system which incorporates previous knowledge into the decision making, FL is often used in hybrid systems. A common system which incorporates learning and FL is ANFIS. In ANFIS, the NN is not composed of several layers of nodes with weights attached to them; instead it is structured according to the fuzzy system architecture described in figure 31 above. The input is fed into a fuzzification layer which generates the grade of membership of each input as nodes. The output of the fuzzification layer is then fed into the product or rule layer, which determines the strength with which each rule will fire. The rule firing strength is then normalized in the normalization layer, and the result of the normalization is defuzzified in the defuzzification layer. The nodes in the defuzzification layer are also adaptive, meaning that they calculate the rule output based on consequent parameters; they can memorize the desired output for a given input. The output is then summarized in the output layer, which generates a crisp value [48].

Figure 32. ANFIS architecture

2.3.1.6 Optimization algorithms


Optimization algorithms aim to maximize or minimize some function (the objective function) based on algebraically derived relations. This chapter will thus aim to present some of these optimization algorithms.

2.3.1.6.1 Genetic algorithms

Genetic algorithms, like neural networks, are inspired by biology and constitute an optimization procedure. A GA consists of several individuals in a population which carry their genes to the next population in order to find the optimal solution for a given problem. When all the individuals eventually converge to the same solution, the algorithm has found the optimal solution for the problem. A common problem in computational science is the “travelling salesman problem”, an optimization problem which asks the following question: given a set of cities and the lengths between each pair of cities, what is the shortest possible path that must be travelled in order to visit every city?

Figure 33. The travelling salesman problem

Using GAs to solve this problem, a population of individuals is generated (usually randomly) in which each individual has a set of routes which will take it to each city. The length of a route between two cities is illustrated with a weighted value; for example, the route between city 2 and 4 is of length 3. Depending on their performance, i.e. how far they had to travel to reach all cities, the individuals will be more or less likely to pass their genes (the set of routes) to the next generation. This is usually known as selection and is based on a fitness function which basically determines how well the individual performed; the fitness function is often fairly simple. The algorithm has solved the problem when the individuals in a generation have converged to the same route. In order to ensure convergence, the genes must also be manipulated when passed on to the next generation. This is usually accomplished using crossover and mutation [49].
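The GA loop described above can be sketched for a small travelling salesman instance. The distance matrix is made up, and ordered crossover with swap mutation is one simple choice of operators (the text only specifies crossover and mutation in general):

```python
import random

# Symmetric distance matrix for 5 hypothetical cities
DIST = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def tour_length(tour):
    """Fitness basis: total length of a closed tour visiting every city once."""
    return sum(DIST[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def crossover(p1, p2):
    """Ordered crossover: copy a slice from one parent, then fill in the
    remaining cities in the order they appear in the other parent."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = p1[a:b]
    child += [c for c in p2 if c not in child]
    return child

def mutate(tour, rate=0.2):
    """Swap mutation: exchange two cities with a small probability."""
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

def genetic_tsp(pop_size=30, generations=100, seed=0):
    random.seed(seed)
    n = len(DIST)
    population = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the shorter half of the population survives and breeds
        population.sort(key=tour_length)
        parents = population[: pop_size // 2]
        children = [
            mutate(crossover(*random.sample(parents, 2)))
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return min(population, key=tour_length)

best = genetic_tsp()
```

Keeping the best parents unchanged in every generation (elitism) guarantees that the best tour found never gets worse.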
