
APPLICATION OF SIMULATED ANNEALING INVERSION TO AUTOMATED MODEL CALIBRATION FOR RESIDENTIAL BUILDING ENERGY SIMULATION

by

Joseph J. Robertson

A thesis submitted to the Faculty and the Board of Trustees of the Colorado School of Mines in partial fulfillment of the requirements for the degree of Master of Science (Mathematical and Computer Sciences).

Golden, Colorado
Date

Signed: Joseph J. Robertson

Signed: Dr. Jon M. Collis
Thesis Advisor

Golden, Colorado
Date

Signed: Dr. Willy Hereman
Professor and Head
Department of Applied Mathematics and Statistics


ABSTRACT

Building energy simulation programs are often used in the residential sector to model home thermal performance and estimate energy savings for changes to building parameters (retrofit measures). Accurate model predictions are not guaranteed, since uncertainty in model inputs leads to uncertainty in model output. Analysts may calibrate model inputs to reduce disagreement between simulation-predicted and measured energy uses; however, no widely accepted guidelines exist for systematic and cost-effective residential building model calibration. This study uses a building energy optimization program developed by the National Renewable Energy Laboratory, BEoptTM/DOE-2.2, to evaluate three automated residential calibration techniques in the context of monthly, daily, and hourly synthetic utility data for a 1960s-era existing home. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility data are generated using the explicit input values. The calibration techniques search for the explicit input values, starting from an uncalibrated model consisting of nominal input values (best-guess values estimated by home audit), using an automated simulated annealing parameter inversion algorithm. Each technique employs this simulated annealing algorithm to recover calibrated model input values (i.e., approximations of the explicit input values) by minimizing residuals between the simulation-predicted energy uses and the synthetic utility data. The three calibration techniques evaluated in this study are a direct simulated annealing approach, a regression metamodeling approach, and an approach based on ASHRAE 1051-RP. Various retrofit measures are applied to the calibrated models, and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation. Results show that more computationally expensive techniques generally produced more accurate energy savings predictions than less expensive techniques, although the accuracy improvements were modest and may not have warranted the additional expense. Calibrations may benefit from additional informational content in utility billing data, since calibrations to higher frequency data generally provided more accurate energy savings predictions.


TABLE OF CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF TABLES
NOMENCLATURE
LIST OF ABBREVIATIONS
ACKNOWLEDGMENTS
CHAPTER 1 INTRODUCTION
1.1 Background
1.2 Problem statement
CHAPTER 2 LITERATURE REVIEW
2.1 Calibration techniques
2.1.1 Commercial
2.1.2 Residential
2.2 Sensitivity analysis methods
2.2.1 The χ² statistic
2.2.2 The coefficient of variation σ/µ
2.3 Simulated annealing
2.3.1 Gradient calculations
2.3.2 Iterative search
CHAPTER 3 OBJECTIVES

CHAPTER 4 COMPARING CALIBRATION TECHNIQUES
4.1 Define the house
4.2 Define approximate inputs
4.3 Generate synthetic utility data
4.4 Identify the most influential inputs
4.5 Perform calibrations
4.6 Apply retrofit measures
4.7 Compare calibration techniques
CHAPTER 5 MODEL CALIBRATION TECHNIQUES AND RESULTS
5.1 Direct simulated annealing
5.1.1 Sensitivity analysis
5.1.2 Inversion
5.2 Regression metamodeling
5.2.1 Sensitivity analysis
5.2.2 Central composite design and regression
5.2.3 Inversion
5.3 ASHRAE 1051-RP-based approach
5.3.1 Identify influential input parameters
5.3.2 Perform coarse grid search
5.3.3 Perform refined grid calibration
5.3.4 Predict ranges of energy savings for retrofit measures
CHAPTER 6 SUMMARY AND DISCUSSION
6.1 Accuracy of predicted energy savings
6.3 Automation
6.4 Repeatability
6.5 Ease of implementation
CHAPTER 7 CONCLUSIONS
CHAPTER 8 FUTURE WORK
REFERENCES CITED
APPENDIX A - APPROXIMATE INPUTS AND UNCERTAINTY RANGES
APPENDIX B - EXPLICIT INPUTS FOR UTILITY BILL SCENARIOS
APPENDIX C - SENSITIVITY ANALYSIS RESULTS
APPENDIX D - GOODNESS-OF-FIT INDICES
APPENDIX E - CENTRAL COMPOSITE DESIGN
APPENDIX F - UNCALIBRATED AND CALIBRATED MODEL ERRORS
APPENDIX G - BENEFIT OF CALIBRATION RESULTS


LIST OF FIGURES

Figure 4.1 Triangular probability distribution.
Figure 4.2 The 24 most influential inputs. Influential parameters are sorted in descending order of ξi.
Figure 5.1 Flowchart for calibration technique implementations. 'Adj. Inputs' refers to adjustable inputs, 'Cal. Inputs' to calibrated inputs, and 'M', 'D', 'H' to monthly, daily, and hourly utility data frequencies.
Figure 5.2 Sensitivity analysis results for the direct simulated annealing calibration technique. Strong parameters are sorted in descending order of ξi.
Figure 5.3 Inversion results for the direct simulated annealing calibration technique. Left and right bounds are the parameters' minimum and maximum values.
Figure 5.4 Convergence processes for the direct simulated annealing calibration technique using monthly data, over-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
Figure 5.5 Convergence processes for the direct simulated annealing calibration technique using daily data, over-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
Figure 5.6 Convergence processes for the direct simulated annealing calibration technique using hourly data, over-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
Figure 5.7 Convergence processes for the direct simulated annealing calibration technique using monthly data, under-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
Figure 5.8 Convergence processes for the direct simulated annealing calibration technique using daily data, under-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
Figure 5.9 Convergence processes for the direct simulated annealing calibration technique using hourly data, under-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
Figure 5.10 Plots illustrating agreement between calibrated model predictions and reference utility data for the direct simulated annealing calibration technique, over-prediction scenario.
Figure 5.11 Plots illustrating agreement between calibrated model predictions and reference utility data for the direct simulated annealing calibration technique, under-prediction scenario.
Figure 5.12 Benefit of calibration for the direct simulated annealing calibration technique.
Figure 5.13 Inversion results for the regression metamodeling calibration technique. Left and right bounds are the parameters' minimum and maximum values.
Figure 5.14 Convergence processes for the regression metamodeling calibration technique using monthly data, over-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
Figure 5.15 Convergence processes for the regression metamodeling calibration technique using daily data, over-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
Figure 5.16 Convergence processes for the regression metamodeling calibration technique using hourly data, over-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
Figure 5.17 Convergence processes for the regression metamodeling calibration technique using monthly data, under-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
Figure 5.18 Convergence processes for the regression metamodeling calibration technique using daily data, under-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
Figure 5.19 Convergence processes for the regression metamodeling calibration technique using hourly data, under-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
Figure 5.20 Plots illustrating agreement between calibrated model predictions and reference utility data for the regression metamodeling calibration technique, over-prediction scenario.
Figure 5.21 Plots illustrating agreement between calibrated model predictions and reference utility data for the regression metamodeling calibration technique, under-prediction scenario.
Figure 5.22 Benefit of calibration for the regression metamodeling calibration technique.
Figure 5.23 Sensitivity analysis results for the ASHRAE 1051-RP-based calibration technique, over-prediction scenario. Strong parameters are sorted (top to bottom) in descending order of χ²i.
Figure 5.24 Sensitivity analysis results for the ASHRAE 1051-RP-based calibration technique, under-prediction scenario. Strong parameters are sorted (top to bottom) in descending order of χ²i.
Figure 5.25 Refined grid results using the ASHRAE 1051-RP-based calibration technique, over-prediction scenario. Recovered values are the averages across the 10 refined calibration solutions. Left and right bounds are the parameters' minimum and maximum values.
Figure 5.26 Refined grid results using the ASHRAE 1051-RP-based calibration technique, under-prediction scenario. Recovered values are the averages across the 10 refined calibration solutions. Left and right bounds are the parameters' minimum and maximum values.
Figure 5.27 Benefit of calibration for the ASHRAE 1051-RP-based calibration technique.
Figure 5.28 Energy savings predictions for Combined using the ASHRAE 1051-RP-based calibration technique.
Figure 5.29 t-based confidence intervals (α = 0.05) for energy savings predictions using the ASHRAE 1051-RP-based calibration technique. Units are annual savings percent of reference utility data.
Figure 6.1 Percent error in savings predictions relative to reference savings for each retrofit measure. The vertical line at zero represents perfect agreement with reference savings.
Figure H.1 Energy savings predictions for Air-Seal using the ASHRAE 1051-RP-based calibration technique.
Figure H.2 Energy savings predictions for Attic Insulation using the ASHRAE 1051-RP-based calibration technique.
Figure H.3 Energy savings predictions for Wall Insulation using the ASHRAE 1051-RP-based calibration technique.
Figure H.4 Energy savings predictions for Programmable Thermostat using the ASHRAE 1051-RP-based calibration technique.
Figure H.5 Energy savings predictions for Low-e Windows using the ASHRAE 1051-RP-based calibration technique.
Figure H.6 Energy savings predictions for Duct Sealing and Insulation using the ASHRAE 1051-RP-based calibration technique.
Figure H.7 Energy savings predictions for AC Replacement using the ASHRAE 1051-RP-based calibration technique.

LIST OF TABLES

Table 4.1 Characteristics of the modeled house.
Table 4.2 Reference utility data.
Table 4.3 BESTEST-EX-based retrofit measures.
Table 6.1 Total benefit of calibration, over-prediction scenario.
Table 6.2 Total benefit of calibration, under-prediction scenario.
Table 6.3 Computational cost.
Table A.1 Operational inputs and uncertainty ranges.
Table A.2 Asset-related inputs and uncertainty ranges.
Table B.1 Explicit input values for the over-prediction scenario.
Table B.2 Explicit input values for the under-prediction scenario.
Table C.1 The 24 influential inputs identified from the preliminary sensitivity analysis described in Section 4.4.
Table C.2 Sensitivity analysis results for the direct simulated annealing and regression metamodeling calibration techniques.
Table C.3 Sensitivity analysis results for the ASHRAE 1051-RP-based calibration technique, over-prediction scenario.
Table C.4 Sensitivity analysis results for the ASHRAE 1051-RP-based calibration technique, under-prediction scenario.
Table F.1 Uncalibrated and calibrated model errors for strong parameters identified from the direct simulated annealing approach, over-prediction scenario.
Table F.2 Uncalibrated and calibrated model errors for strong parameters identified from the regression metamodeling approach, over-prediction scenario.
Table F.3 Uncalibrated and calibrated model errors for strong parameters identified from the ASHRAE 1051-RP-based approach, over-prediction scenario.
Table F.4 Uncalibrated and calibrated model errors for strong parameters identified from the direct simulated annealing approach, under-prediction scenario.
Table F.5 Uncalibrated and calibrated model errors for strong parameters identified from the regression metamodeling approach, under-prediction scenario.
Table F.6 Uncalibrated and calibrated model errors for strong parameters identified from the ASHRAE 1051-RP-based approach, under-prediction scenario.
Table G.1 Benefit of calibration for the direct simulated annealing approach, over-prediction scenario.
Table G.2 Benefit of calibration for the regression metamodeling approach, over-prediction scenario.
Table G.3 Benefit of calibration for the ASHRAE 1051-RP-based approach, over-prediction scenario.
Table G.4 Benefit of calibration for the direct simulated annealing approach, under-prediction scenario.
Table G.5 Benefit of calibration for the regression metamodeling approach, under-prediction scenario.
Table G.6 Benefit of calibration for the ASHRAE 1051-RP-based approach, under-prediction scenario.
Table H.1 t-based confidence intervals (α = 0.05) for energy savings predictions using the ASHRAE 1051-RP-based calibration procedure, over-prediction scenario. Units are annual savings percent of reference utility data.
Table H.2 t-based confidence intervals (α = 0.05) for energy savings predictions using the ASHRAE 1051-RP-based calibration procedure, under-prediction scenario. Units are annual savings percent of reference utility data.


NOMENCLATURE

absolute error in savings predictions for the calibrated model: ε(ψ_c)
absolute error in savings predictions for the uncalibrated model: ε(ψ_u)
annual calibrated model predicted savings as a percent of the pre-retrofit reference utility data: φ_c
annual reference savings as a percent of the pre-retrofit reference utility data: φ_r
arithmetic mean of the reference utility data: ȳ
calculated gradient based on perturbation to input j: g_j
calibrated model's post-retrofit annual energy savings prediction: ψ_c
chi-squared value corresponding to input i for ASHRAE 1051-RP-based sensitivity analysis: χ²_i
coded values of central composite design matrix X: ω
consolidated index for measuring overall goodness-of-fit: δ_TOTAL
current parameter value in iterative search: h_i,j
design matrix used in central composite design: X
difference in absolute error of savings predictions between calibrated and uncalibrated models: ϕ
expected number of occurrences on levels x_i^low, x_i^high, x_i^mid for ASHRAE 1051-RP-based sensitivity analysis: ρ_exp
explicit input value randomly selected from triangular probability distribution for approximate input i: ν_i
factor used to determine coverage probability: α
high level median value in LHMC discretization corresponding to input i: x_i^high
initial temperature for simulated annealing algorithm: T_0
matrix whose columns contain ζ_k polynomial term regression coefficients: β
maximum value of strong parameter j: f_j^max
mean of the η annual output values corresponding to input i in the sensitivity analysis: µ_i
middle level median value in LHMC discretization corresponding to input i: x_i^mid
minimum value of strong parameter j: f_j^min
normalized eigenvector j of normalized covariance matrix C: v_j
number of approximate inputs considered in the study: λ
number of inputs considered for sensitivity analysis methods described in Section 2.2: θ
number of iterations per parameter for simulated annealing algorithm: N
number of model parameters: p
number of random samples from each triangular probability distribution in the sensitivity analysis: η
number of sample iterations for gradient calculations: G
number of strong (adjustable) parameters identified from sensitivity analyses: τ
number of utility data points used in calibration (12, 365, or 8760): n
objective function value difference between successive perturbations: ∆E
objective function value of new (successive) perturbations: E_new
objective function value of old (previous) perturbations: E_old
observed number of occurrences on levels x_i^low, x_i^high, x_i^mid for ASHRAE 1051-RP-based sensitivity analysis: ρ_obs,s,i
perturbed f_j in gradient calculations: f′_j
perturbed h_i,j in iterative search: h′_i,j
pre-retrofit annual energy use (synthetic billing data from reference model): Γ_r
prescribed percent of parameters' range: κ
quadratic polynomials that best fit each of the sets of simulation data in the least squares sense: ζ_k
random number from uniformly distributed interval (−1, 1): γ_(−1,1)
random number from uniformly distributed interval (0, 1): γ_(0,1),j
randomly selected value for strong parameter j in gradient calculations: f_j
reference model's post-retrofit annual energy savings prediction: ψ_r
response matrix used in central composite design: Y
sensitivity analysis coefficient corresponding to input i: ξ_i
standard deviation of the η annual output values corresponding to input i in the sensitivity analysis: σ_i
sum of the ϕ across all retrofit measures: TBoC
symmetric covariance matrix assembled in gradient calculations: C
triangular probability distribution's maximum value corresponding to input i: x_i^max
triangular probability distribution's minimum value corresponding to input i: x_i^min
triangular probability distribution's nominal value corresponding to input i: x_i^nom
uncalibrated model's post-retrofit annual energy savings prediction: ψ_u
value used to perturb f_j in gradient calculations: ∆f_j
value used to perturb h_i,j in iterative search: ∆h_j
vector of reference utility data: y
vector of simulation-predicted utility data: ŷ

LIST OF ABBREVIATIONS

American Society of Heating, Refrigerating & Air-Conditioning Engineers: ASHRAE
Building Energy Optimization: BEopt
Building Energy Simulation Test for Existing Homes: BESTEST-EX
Building Loads Analysis and System Thermodynamics: BLAST
Department of Energy: DOE
Latin Hypercube Monte Carlo: LHMC
Lawrence Berkeley National Laboratory: LBNL
National Renewable Energy Laboratory: NREL
air changes per hour: ACH
annual fuel utilization efficiency: AFUE
building description language: BDL
building energy simulation: BES
central composite design: CCD
coefficient of variation of the root mean square error: CV(RMSE)
heating, ventilating, and air conditioning: HVAC
multiple linear regression: MLR
normalized mean bias error: NMBE
seasonal energy efficiency ratio: SEER
short-term energy monitoring: STEM
solar heat gain coefficient: SHGC
typical meteorological year: TMY


ACKNOWLEDGMENTS

This work was funded by the Alliance Partner University Program between the National Renewable Energy Laboratory (NREL) and the Colorado School of Mines, authorization no. UGA-0-41025-12. Specifically, the author would like to thank Ben Polly of NREL for his help in facilitating this.


CHAPTER 1 INTRODUCTION

This introductory chapter provides background information on residential building energy simulation (BES) and a software tool (BEoptTM) developed to interface with BES engines (DOE-2 and EnergyPlus). Since simulation engine inputs may have high levels of uncertainty, analysts often calibrate building model inputs for better agreement between measured and simulation-predicted data as well as more accurate energy savings predictions for changes made to the inputs. There are no systematic, cost-effective, and widely accepted guidelines or methods for accurate residential building model calibration. The chapter concludes with a formal statement of the problem that this research addresses.

1.1 Background

Residential building energy simulation programs are often used to model the thermal performance of homes. Capabilities of such programs vary, but typically they are used by researchers, architects, engineers, and home energy raters to analyze and predict home energy use. In this manner, analysts are able to predict the energy efficiency impacts of changes to building parameters (e.g., benefits of energy retrofit measures). Two of the more commonly used simulation engines are DOE-2.2 (Hirsch & Associates, 2004) and EnergyPlus (Laboratory, 2012). Both have roots in the original Fortran 77-based DOE-2 and BLAST programs initially developed in the late 1970s. The modern program versions improve on the originals with a more modularized code structure, updated subroutines and bug fixes, and sub-hourly time steps, and they allow the building description language (BDL) to serve as a more interactive and flexible I/O interface by requiring simpler comma-separated text files (Laboratory, 2012). Both DOE-2.2 and EnergyPlus require a processed file of weather data, such as weather data describing a typical meteorological year (TMY). Model-predicted home energy consumption is not only a function of nonlinear equations, but also of a myriad of building model inputs. Due to the number and complexity of building model inputs involved in energy simulation, these engines require detailed, format-specific, text-based building description input files.

Building description input files detail building parameters which can be categorized as either asset, operational, or site related (Polly et al., 2011). The National Renewable Energy Laboratory (NREL) has developed a residential building energy optimization program called BEoptTM (Building Energy optimization) (Christensen et al., 2006). This program, packaged with libraries of categorized options from which the user selects home characteristics, is a front end to the simulation engines DOE-2.2 and EnergyPlus. BEoptTM users save time by having this front end translate its predefined library options into the format-specific building description input files required by the simulation engine. For this study, the building energy simulation program BEoptTM/DOE-2.2 is considered for energy analysis. On an average PC, each BEoptTM/DOE-2.2 simulation takes approximately 10 – 15 seconds to complete, depending on the nature of the home being modeled.

Many of the hundreds of model inputs may have high levels of uncertainty. Accurate model predictions are therefore not guaranteed, since uncertainty in model inputs leads to uncertainty in model output. That is, we would not expect perfect agreement between model predictions and measured output data if significant errors exist in the input values. Since it is difficult to accurately predict energy savings for retrofit measures applied to uncertain building models, modelers often apply a calibration procedure to pre-retrofit building models to reduce disagreement between software predictions and measured energy uses. In this fashion, the problem of accurately predicting building energy use can be posed as an optimization problem, with the objective function defined as the disagreement between model-predicted and measured data. The modeler deems the model "calibrated" once a desired level of agreement (i.e., an objective function minimum) has been achieved. The resultant set of adjusted model inputs is generally assumed to more closely match the set of actual building inputs. The goal in obtaining a calibrated model is to more accurately predict energy savings for retrofit measures.

The optimization problem is under-determined in the sense that there can exist many feasible calibration solutions which satisfy objective function value criteria. Model calibration has typically been performed using monthly measured utility data describing one year of energy use (i.e., 12 data points), the most common form of utility data. "Smart meters," which typically collect sub-hourly electricity usage data, are becoming more common in residential buildings. They may supply the common monthly calibration problem with additional information, thereby eliminating extraneous calibration solutions and making the problem less under-determined. Research is needed to understand and evaluate residential building calibration techniques in the context of higher frequency, smart-meter utility data. Historically, building energy model calibration has been regarded as an art form, typically depending heavily on human judgment rather than on systematic, repeatable, mathematical procedures (Reddy & Maor, 2006). Solutions resulting from one calibration may not exactly match the solution from another due to subjectivity in manual adjustments. Maximizing the automation and repeatability of residential building model calibration techniques will help to minimize inconsistencies resulting from inexperience and judgment across analysts. Most calibration methods reported in the literature pertain to commercial buildings, which typically have higher-stake retrofit measure considerations than residential buildings (Bertagnolio et al., 2010; Carroll & Hitchcock, 1993; Carroll et al., 1989; Norford et al., 1994; Pedrini et al., 2002; Raftery et al., 2009, 2011; Reddy & Maor, 2006; Westphal & Lamberts, 2005; Yoon et al., 2003). For cost-effectiveness, residential building model calibration must be computationally less expensive than commercial building model calibration techniques. While accuracy is an important factor in the calibration of building models, calibrating residential building models must be streamlined such that the process and results are practical for decision makers in the residential sector (Development, 2010; Pappas & Reilly, 2011). Research is needed to understand and evaluate practical residential building calibration techniques requiring minimal heuristic knowledge and guesswork and maximal cost-effectiveness, repeatability, and automation.

1.2 Problem statement

Model calibration techniques range in complexity and repeatability from manual calibration based on user judgment to automated calibration based on analytical, numerical, and statistical methods. Established building calibration techniques have been developed mostly for commercial building applications. Although emerging standards (Building Performance Institute, 2011) are attempting to define calibration requirements for residential buildings, there are no widely accepted guidelines for systematic and cost-effective residential building applications. Research is needed to determine whether commercial building approaches, considering repeatability and automation, can be cost-effectively applied to model calibration in the residential sector. Further research is needed to understand and evaluate these approaches in the context of varying frequencies of utility billing data.


CHAPTER 2 LITERATURE REVIEW

This chapter reviews calibration techniques developed and described in the literature, sensitivity analysis methods applicable to building energy simulation, and a simulated annealing parameter inversion algorithm.

2.1 Calibration techniques

Most calibration methods reported in the literature were developed for commercial buildings with generally high-stake retrofit measure considerations. Sections 2.1.1 and 2.1.2 review calibration techniques proposed and primarily developed for use in commercial and residential applications, respectively.

2.1.1 Commercial

While some commercial calibration methods adopt manual iterative approaches for reducing disagreement between measured and model-predicted data, others employ automated optimization routines for minimizing output residuals.

Manual iterative methods. Norford et al. (1994) suggested calibrating a DOE-2 model in two stages: (1) first tuning tenant-related inputs and HVAC schedules, then (2) tuning HVAC equipment and building shell performance. They ordered their tuning assumptions based on decreasing energy impact, and their research formulated calibration guidelines and developed insights for manual calibration of commercial building models. Norford et al. (1994) cautioned that occupant behavior and operation have an enormous impact on energy consumption and, as such, focused their calibration guidelines on major commercial building loads such as occupants' use of lighting and office equipment, HVAC operation, and set points.


Pedrini et al. (2002) proposed a three-step calibration methodology: (1) simulation from only the building design plans with no site visit, (2) carrying out measurements from a walk-thru and audit (performed by the analyst along with a technician familiar with operation of the building), and (3) monitoring measured energy consumption and tuning model parameters. This method was performed on more than 15 office buildings as case studies for their calibration methodology. Pedrini et al. (2002) also noted that utility data recorded every 15 minutes is a very good source of information, that occupation and operation patterns are very influential on annual energy consumption, and that sensitivity tools embedded in simulation software could point to the variables with the highest impact on building energy consumption.

Yoon et al. (2003) considered the potential benefit of utilizing high frequency data with their "base load analysis approach" for calibrating an energy model with monthly utility billing data as well as sub-metered data. Demonstrating the proposed procedure on a 26-story commercial building in Korea, their calibration method consisted of the following steps: (1) base case modeling, (2) base load consumption analysis, (3) swing-season calibration, (4) site interview and measurements, (5) heating and cooling season calibration, and (6) validation of the calibrated base model. In short, this methodology consists of base case modeling, then tuning key parameters such as lighting, HVAC equipment, plug loads, and elevator consumption. Next, the method reiterates with additional site visits and interviews, then more parameter tuning, and finally model validation.

Considering the enormity and complexity of building model input parameters, many newer calibration methods include energy audit analyses and numerical sensitivity analysis methods. The methodology for calibration of building simulation models developed by Westphal & Lamberts (2005) is divided into six stages: (1) calibration of power and schedules, (2) simulation of design days for thermal loads analysis, (3) sensitivity analysis over input parameters related to significant heat gains/losses, (4) adjustment of input values with high levels of influence and uncertainty, (5) whole-year simulation, and (6) final adjustments.


More recent calibration techniques found in the literature describe an evidence-based calibration methodology (calibration based solely on referenced and reliable sources of information about the building): (1) data analysis and benchmarking, (2) calibration-based audit process, and (3) evidence-based calibration. Bertagnolio et al. (2010) described their evidence-based calibration methodology as a systematic manual iterative method including sensitivity issues in order to help the auditor identify critical points needing further investigation.

Like Bertagnolio et al. (2010), the calibration methodology proposed by Raftery et al. (2011) recognizes the need for systematic evidence-based decision-making to improve reproducibility and reliability in model calibration. Raftery et al. (2011) suggested that manual adjustments to the multitude of input parameters based on unverifiable information decrease the credibility of the model and depend on the personal judgment of the analyst. They proposed a new methodology based on three core principles: (1) detailed BES models that accurately represent the real building, (2) reproducibility of results, and (3) detailed energy monitoring systems to supply the measurements needed to perform the calibration.

Automated iterative methods. Carroll & Hitchcock (1993) also recognized the decrease in credibility when relying on personal engineering judgment for manual input parameter adjustments, and so proposed an automated iterative calibration method for tuning building model input parameters to match measured utility data.

Lee & Claridge (2002) also exercised automated iterative calibration, using optimization software to automatically calibrate to measured data. Their method was implemented using a commercial building case study. Lee & Claridge (2002) claimed that their optimization program’s ability to filter out local minima (i.e., incorrect sets of input parameters) was an advantage over manual calibration results since calibrations can differ across engineers.

A thorough study from Reddy & Maor (2006), ASHRAE 1051-RP, was initiated with the intention of identifying and collecting the best techniques and approaches in existing research to develop a systematic calibration methodology for parameter estimation and determination of calibrated simulation uncertainty. The calibration technique proposed by ASHRAE 1051-RP is a four-step methodology: (1) identify the set of most influential input parameters by walk-thru audits and heuristics (i.e., reduce search space dimensionality), (2) use Monte Carlo simulation to perform a coarse grid calibration resulting in a small set of the most feasible solutions, (3) adopt a refined grid calibration using either manual iterative methods documented in the literature or mathematical optimization methods, and (4) compute uncertainty in calibrated model predictions. For the study presented in this thesis, a calibration procedure based on the methodology described in ASHRAE 1051-RP is implemented and performed.

2.1.2 Residential

The majority of building model calibration methods described in the literature apply to commercial buildings. One method applicable to residential building model calibration is the Primary and Secondary Terms Analysis and Renormalization (PSTAR) method (Subbarao et al., 1988). This method uses short-term energy monitoring (STEM) to extrapolate to long-term performance of an as-built building. Specifically, primary terms calculated given the weather and building description are renormalized using a linear least squares fit. The results “provide a realistically complex thermal model of a building.”

New et al. (2012) are developing automated calibration capabilities for EnergyPlus. This method aims to accurately reproduce measured data by selecting EnergyPlus input parameters in a systematic, automated, and repeatable fashion. The Autotune methodology uses supercomputing and machine learning algorithms to perform millions of parametric Monte Carlo simulations which perturb uncertain parameters and approximate sensor data. New et al. (2012) are currently applying their methodology to a heavily instrumented residential research building.


2.2 Sensitivity analysis methods

Due to the complex nature of residential building energy simulation programs, model output sensitivities to individual model inputs cannot be evaluated analytically. As a result, numerical methods are used to analyze the influence of model inputs on model output (Lam & Hui, 1996). By eliminating weakly influential inputs from the list of all contributing model inputs, the problem search space can be reduced (i.e., computational expense is reduced), since adjustments are then allocated only to the influential inputs.

Numerical sensitivity analysis methods commonly use Monte Carlo procedures to identify a model’s most influential inputs (Rubinstein & Kroese, 2008). Sections 2.2.1 – 2.2.2 describe various Monte Carlo procedures commonly adopted for sensitivity analysis.

2.2.1 The χ² statistic

This analysis incorporates a Monte Carlo-type procedure, specifically a midpoint Latin Hypercube Monte Carlo (LHMC) stratified sampling technique. Detailed steps for performing an LHMC stratified sampling on three levels are described by Reddy & Maor (2006). First, probability distributions are assigned to each of the θ building model input parameters. Probability distributions are functions which describe the likelihood of random variables taking certain values. This particular sensitivity analysis method samples on three levels of triangular probability distributions (Section 4.2 provides a more detailed explanation of triangular probability distributions and their use in this study). After a triangular probability distribution is assigned to each of the θ model input parameters, the distributions are discretized into three ranges of equal probability (i.e., areas of 1/3).

Next, the median values (levels), x_i^low, x_i^mid, x_i^high, i = 1, . . . , θ, of each of these three ranges are determined. Using basic trigonometry, the levels x_i^low, x_i^mid, x_i^high divide the three ranges into equal probabilities and are calculated as

x_i^low = x_i^min + \sqrt{(x_i^nom − x_i^min)(x_i^max − x_i^min)/6},    (2.1)

x_i^high = x_i^max − \sqrt{(x_i^max − x_i^nom)(x_i^max − x_i^min)/6},    (2.2)

x_i^mid = x_i^min + \sqrt{(x_i^nom − x_i^min)(x_i^max − x_i^min)/2}, if x_i^nom ≥ (x_i^max + x_i^min)/2,
x_i^mid = x_i^max − \sqrt{(x_i^max − x_i^nom)(x_i^max − x_i^min)/2}, if x_i^nom < (x_i^max + x_i^min)/2,    (2.3)

for i = 1, . . . , θ, where x_i^min, x_i^nom, x_i^max are the triangular probability distribution's minimum, nominal, and maximum values, respectively (Reddy & Maor, 2006).

Next, random LHMC combinations of all parameters' discrete levels x_i^low, x_i^mid, x_i^high are generated (i.e., uniform sampling without replacement is used to randomly select a subset of the 3^θ total LHMC parameter combinations). Each LHMC parameter combination (containing θ values) becomes a realization by substituting the input parameter values of the base-case building description file with the values of the LHMC parameter combination. All randomly generated LHMC combinations are substituted one-at-a-time into the base-case building description file in this fashion, and all realizations are simulated in automated batch mode.
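The following sketch illustrates Equations 2.1–2.3 and the selection of level combinations. It is illustrative only: the function names and the rejection-style selection of combinations are assumptions of this sketch rather than the thesis implementation, and each input is assumed to be described by its (minimum, nominal, maximum) triple.

    import random

    def three_levels(x_min, x_nom, x_max):
        # Median values of the three equal-probability ranges of a triangular
        # distribution (Equations 2.1-2.3).
        x_low = x_min + ((x_nom - x_min) * (x_max - x_min) / 6.0) ** 0.5
        x_high = x_max - ((x_max - x_nom) * (x_max - x_min) / 6.0) ** 0.5
        if x_nom >= (x_max + x_min) / 2.0:
            x_mid = x_min + ((x_nom - x_min) * (x_max - x_min) / 2.0) ** 0.5
        else:
            x_mid = x_max - ((x_max - x_nom) * (x_max - x_min) / 2.0) ** 0.5
        return x_low, x_mid, x_high

    def lhmc_combinations(distributions, n_samples, seed=0):
        # Randomly select n_samples of the 3**theta level combinations,
        # sampling without replacement over the level indices.
        rng = random.Random(seed)
        levels = [three_levels(*d) for d in distributions]
        chosen, combos = set(), []
        while len(combos) < n_samples:
            idx = tuple(rng.randrange(3) for _ in levels)   # one level per input
            if idx not in chosen:                           # without replacement
                chosen.add(idx)
                combos.append([levels[i][s] for i, s in enumerate(idx)])
        return combos

    # Example: two uncertain inputs described by (min, nominal, max)
    print(lhmc_combinations([(0.1, 0.3, 0.9), (5.0, 8.0, 15.0)], n_samples=4))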

After the batch of simulations has completed, the set of utility billing data is compared to each set of model-predicted data produced by the batch-run simulations. Model-predicted sets resulting in close agreement with the utility billing data are included in the calculation of a χ² statistic, which is a process for determining parameter sensitivity. The χ²_i formula for three-level sampling of each of the θ input parameters' probability distributions is given by

χ²_i = Σ_{s=1}^{3} (ρ_obs,s,i − ρ_exp)² / ρ_exp,    (2.4)

for i = 1, . . . , θ, where ρ_obs,s,i is the observed number of occurrences on level s (i.e., one of x_i^low, x_i^mid, x_i^high) and ρ_exp is the expected number of occurrences on each level (i.e., 1/3 of the number of sets resulting in close agreement with the utility billing data). The χ² application quantifies distribution non-uniformity across the three levels (Reddy & Maor, 2006).


For example, if a total of 30 samples were included in the χ² statistic and 10 of these samples occurred on each of the three levels for an input, then the samples for this input would be uniformly distributed across the three levels. Using Equation 2.4, this results in a χ² value of 0 (i.e., the degree of agreement between utility billing and model-predicted data is not influenced by values of this input any more on one level than another). On the other hand, samples for inputs with non-uniform distributions across the three levels result in nonzero χ² values. The degree to which the sample distribution is non-uniform determines the χ² value (i.e., the more non-uniform the distribution across the three levels, the more sensitive the agreement between utility billing and model-predicted data is to particular level values of the input). In terms of optimization, the search space can be narrowed by considering only adjustments to these more influential inputs.
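A minimal sketch of Equation 2.4 follows; it assumes the close-agreement subset of model-predicted sets has already been identified, and the function name is illustrative rather than taken from the thesis code.

    def chi_square_sensitivity(level_indices):
        # Equation 2.4: quantify non-uniformity of the accepted samples across
        # the three levels (0 = low, 1 = mid, 2 = high) for one input.
        # level_indices holds, for each accepted (close-agreement) sample, the
        # level on which that sample placed this input.
        n_accepted = len(level_indices)
        rho_exp = n_accepted / 3.0                    # expected occurrences per level
        chi2 = 0.0
        for level in (0, 1, 2):
            rho_obs = level_indices.count(level)      # observed occurrences on this level
            chi2 += (rho_obs - rho_exp) ** 2 / rho_exp
        return chi2

    # 30 accepted samples spread 10/10/10 across levels -> chi-square of 0
    print(chi_square_sensitivity([0] * 10 + [1] * 10 + [2] * 10))
    # 30 accepted samples spread 25/3/2 -> large chi-square (influential input)
    print(chi_square_sensitivity([0] * 25 + [1] * 3 + [2] * 2))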

2.2.2 The coefficient of variation σ/µ

This sensitivity analysis method also incorporates a Monte Carlo procedure. However, unlike the sensitivity analysis method described in Section 2.2.1, this method uses random sampling from the input probability distributions. The technique draws η random samples from each of the θ inputs' triangular probability distributions. The selected values are substituted into the base-case building description file one-at-a-time (holding all other inputs at their nominal values), and then all files are simulated in automated batch mode. Upon completion of all simulations, a dimensionless sensitivity coefficient ξi is calculated for each of the θ inputs perturbed in the analysis. The ξi are calculated according to the equation

ξ_i = σ_i / µ_i,    (2.5)

for i = 1, . . . , θ, where σ_i, µ_i are the standard deviation and mean of the η annual output values corresponding to input i (Hamby, 1994).

Each ξi, therefore, is the coefficient of variation of the output. Model-predicted output is more sensitive to inputs with larger ξi values, so the coefficients can be used to reduce the optimization search space, since inputs with lower ξi values may be eliminated.
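A sketch of Equation 2.5 is shown below; the simulate callable is a hypothetical stand-in for a batch BEoptTM/DOE-2.2 run, and the toy model in the example is invented purely for illustration.

    import random

    def sensitivity_coefficient(simulate, x_min, x_nom, x_max, eta=50, seed=0):
        # Equation 2.5: coefficient of variation of annual output when one input
        # is perturbed by eta random draws from its triangular distribution.
        rng = random.Random(seed)
        outputs = [simulate(rng.triangular(x_min, x_max, x_nom)) for _ in range(eta)]
        mu = sum(outputs) / float(eta)
        sigma = (sum((o - mu) ** 2 for o in outputs) / float(eta)) ** 0.5
        return sigma / mu     # xi_i = sigma_i / mu_i

    # Toy stand-in model: annual energy use responds nonlinearly to the input
    print(sensitivity_coefficient(lambda v: 20000.0 + 3000.0 * v ** 2, 0.1, 0.3, 0.9))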

2.3 Simulated annealing

Simulated annealing is an efficient global optimization algorithm for nonlinear inversion (Kirkpatrick et al., 1983; Kuperman et al., 1990). Its inspiration comes from the annealing process of heating and gradual cooling, as atoms would return to their natural state in a physical process. At higher temperatures or energies, there is more movement of atoms than at lower energies. As the system temperature gradually cools, the atoms begin to settle into configurations of lower energies.

In the context of optimization, the steps of the simulated annealing algorithm include randomly perturbing the parameters, evaluating the change ∆E in an objective function, deciding whether to accept perturbations resulting in positive ∆E, and lowering the temperature T (Collins et al., 1992). A minimum objective function value (i.e., energy) is ultimately found, along with the corresponding optimal parameter solution, as the algorithm goes through the process of accepting all energy decreases and stochastically deciding to accept or reject energy increases. The algorithm, as implemented for this analysis, is executed in two steps: (1) gradient calculations (Section 2.3.1) and (2) iterative search (Section 2.3.2). Also for this analysis, the objective function is defined to be CV(RMSE), the coefficient of variation of the root mean square error (see Equation D.1 and Appendix D for more information).
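Equation D.1 is not reproduced in this excerpt; the helper below uses the standard definition of CV(RMSE) (the root mean square error of the residuals divided by the mean of the reference data), which is assumed, not verified, to match Appendix D, and it is referenced as the objective function in the sketches that follow.

    def cv_rmse(predicted, measured):
        # Coefficient of variation of the RMSE between simulation-predicted and
        # reference (utility) data -- the "energy" minimized by the inversion.
        n = len(measured)
        y_bar = sum(measured) / float(n)
        rmse = (sum((p - m) ** 2 for p, m in zip(predicted, measured)) / float(n)) ** 0.5
        return rmse / y_bar

    print(cv_rmse([510.0, 480.0, 620.0], [500.0, 470.0, 640.0]))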

2.3.1 Gradient calculations

This portion of the simulated annealing algorithm randomly samples the parameter space in order to gain information about how successive perturbations are to be generated and applied to the adjustable parameters in the iterative search portion of the algorithm. The gradient calculations portion of the algorithm is supplied the number of sample iterations G, vectors of the adjustable parameters' minimum and maximum values f^min and f^max, each of length τ, and the utility billing data.

(31)

A Monte Carlo numerical integration technique is implemented to identify any nonzero covariance between parameters. For each sample iteration k, k = 1, . . . , G, gradients are computed to construct the symmetric covariance matrix C. The following procedure describes the steps required at each sample iteration k. First, a random point f (a vector of τ parameter values) is selected in the parameter space by

f_j = f_j^min + (f_j^max − f_j^min) γ_(0,1),j,    (2.6)

for j = 1, . . . , τ, where the random number γ_(0,1),j is generated from the uniformly distributed interval (0, 1). Point f is then substituted into the building description file, and the file is simulated to obtain energy e_1 (i.e., the objective function value obtained when substituting simulation-predicted and utility billing data into Equation D.1). Each element of f is then perturbed one-at-a-time according to

f′_j = f_j + ∆f_j,    (2.7)

where each ∆f_j is computed as

∆f_j = κ (f_j^max − f_j^min),    (2.8)

and where κ is the prescribed percent of the parameters' range. (For this analysis κ = 0.05, i.e., gradients are calculated based on perturbations by +5% of each parameter's range.) For each j, point f′ is substituted into the building description file, and the file is simulated to obtain energy e_2,j. Gradient g_j is then calculated for each perturbation of f by

g_j = (f_j^max − f_j^min)(e_2,j − e_1) / (2∆f_j)    (2.9)
    = (e_2,j − e_1) / (2κ),    (2.10)

and used to update the symmetric covariance matrix C by

C_ij ← C_ij + g_i g_j,    (2.11)

for i, j = 1, . . . , τ. After all G sample iterations are complete, covariance matrix C is normalized by its trace, and the principal components (i.e., normalized eigenvectors v_1, . . . , v_τ) of the normalized covariance matrix C are found using an orthogonal linear transformation. The resulting set of τ eigenvectors is orthogonal (i.e., linearly uncorrelated). Each of the τ eigenvectors is used as a multiplier in the iterative search portion of the algorithm to determine parameter value perturbations. In this fashion, the iterative search permits more than one parameter value to be perturbed per function evaluation.
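The sketch below assembles the covariance matrix and its eigenvectors for a toy objective. The objective callable stands in for writing a building description file and running a simulation, the Equation 2.11 update follows the reconstruction given above, and NumPy is used only for the eigendecomposition.

    import numpy as np

    def gradient_directions(objective, f_min, f_max, G=20, kappa=0.05, seed=0):
        # Gradient-calculation stage (Equations 2.6-2.11): sample the parameter
        # space, accumulate gradient outer products into C, and return the
        # eigenvectors of the trace-normalized covariance matrix.
        rng = np.random.RandomState(seed)
        f_min, f_max = np.asarray(f_min, float), np.asarray(f_max, float)
        tau = f_min.size
        C = np.zeros((tau, tau))
        for _ in range(G):
            f = f_min + (f_max - f_min) * rng.random_sample(tau)     # Eq. 2.6
            e1 = objective(f)
            g = np.zeros(tau)
            for j in range(tau):
                f_pert = f.copy()
                f_pert[j] += kappa * (f_max[j] - f_min[j])           # Eqs. 2.7-2.8
                g[j] = (objective(f_pert) - e1) / (2.0 * kappa)      # Eqs. 2.9-2.10
            C += np.outer(g, g)                                      # Eq. 2.11
        C /= np.trace(C)                                             # normalize by trace
        _, eigvecs = np.linalg.eigh(C)                               # orthogonal eigenvectors
        return eigvecs.T                                             # rows v_1, ..., v_tau

    # Toy quadratic objective standing in for the simulation engine
    directions = gradient_directions(lambda f: float(np.sum((f - 0.3) ** 2)),
                                     [0.0, 0.0], [1.0, 1.0])
    print(directions)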

2.3.2 Iterative search

The iterative search portion systematically perturbs the adjustable parameter values, evaluates the algorithm's objective function, and finds the residual minimum corresponding to the optimal solution (i.e., optimal values for the adjustable parameters) from the inversion. The search begins at some "best-guess" point in the parameter space, choosing a starting temperature T_0 and a number of iterations N. For each iteration k, k = 1, . . . , N, a total of τ objective function evaluations are made at different perturbations of the parameter configuration h (a vector of τ parameter values). The following procedure describes the steps required at each iteration k. A random number γ_(−1,1) from the uniformly distributed interval (−1, 1) is generated, and objective function values are computed at perturbations h′_i of h for fixed i = 1, . . . , τ, according to

h′_i,j = h_i,j + ∆h_j v_i,j γ_(−1,1)³,    (2.12)

for j = 1, . . . , τ, where h_i,j is the current parameter value, v_i,j is an element of eigenvector v_i computed in the gradient calculations portion, and ∆h_j = (1/2)(f_j^max − f_j^min). Each h′_i,j is required to remain within the interval (f_j^min, f_j^max), where f_j^min and f_j^max are defined as before. If the perturbed parameter value moves out of this interval, it is reflected back into the interval following

h′_i,j → 2f_j^max − h′_i,j, if h′_i,j > f_j^max,
h′_i,j → 2f_j^min − h′_i,j, if h′_i,j < f_j^min.    (2.13)

After each function evaluation, this portion of the algorithm calculates the change in energy ∆E in moving from the previous state to the new one (i.e., from h to h′_i). Based on prescribed acceptance criteria, the algorithm decides whether to accept the new state as the initial guess for the next function evaluation. The formula for evaluating the change in energy ∆E is given by

∆E = E_new − E_old,    (2.14)

where E_new is the energy of the perturbed guess h′_i and E_old is the energy of the previous guess h. If ∆E ≤ 0, the perturbation h′_i is always accepted and is used as the starting point h for the next function evaluation. If ∆E > 0, the perturbation h′_i may or may not be accepted (i.e., escapes from local minima are permitted). For this reason, the algorithm strives to find the global optimum. The probability P(∆E) that a perturbation h′_i results in an acceptable energy increase is given by the commonly used Boltzmann probability factor

P(∆E) = exp(−∆E / T),    (2.15)

where T is the decreasing temperature parameter. For each function evaluation, a random number γ_(0,1),j from the uniformly distributed interval (0, 1) is generated. If γ_(0,1),j < P(∆E) then the perturbation h′_i is accepted, but if γ_(0,1),j ≥ P(∆E) then the perturbation h′_i is rejected and the original configuration h is used to start the next step. Thus, for larger values of T, the algorithm is more likely to accept those perturbations which have resulted in an energy increase. The equation

lim_{T→∞} exp(−∆E / T) = 1    (2.16)

verifies this claim. That is to say, there is a decrease in the probability of accepting energy increases as the search progresses. Therefore, this algorithm has the flexibility of allowing a thorough parameter space evaluation while simultaneously maintaining convergence.

T = T0

k , (2.17)

where T0 is defined as before (Collins et al., 1992; Szu & Hartley, 1987). This completes

the steps required at each iteration k. After all N iterations are complete, the parameter configuration corresponding to lowest computed energy is the inversion’s optimal solution (Kirkpatrick et al., 1983; Kuperman et al., 1990).


CHAPTER 3 OBJECTIVES

This thesis describes the process for implementing and evaluating several automated calibration techniques; results of the evaluation are also reported. Each calibration technique incorporates a sensitivity analysis method and an optimization procedure as its primary components. The key objectives of this research were to:

• code, in the Python 2.7 programming language, the sensitivity analysis methods and simulated annealing optimization algorithm described in Sections 2.2 and 2.3,

• implement an automated technique based on ASHRAE 1051-RP (Reddy & Maor, 2006) and develop two alternative automated techniques, all of which employ the simulated annealing algorithm and one sensitivity analysis method,

• adapt and apply the established framework set forth in BESTEST-EX (Judkoff et al., 2010) for self-testing calibration techniques using a single software tool1,

• test the three calibration techniques using the framework in the context of monthly, daily, and hourly data for two synthetic utility billing scenarios, and

• evaluate the calibration techniques based on the accuracy of predicted savings for retrofit measures, computational cost, repeatability, automation, and ease of implementation.

1See “Performing Calibration Tests Without Using Reference Programs” of http://www.nrel.gov/docs/fy11osti/52414.pdf.


CHAPTER 4 COMPARING CALIBRATION TECHNIQUES

Sections 4.1 – 4.7 describe the approach used for the evaluation of model calibration techniques. The approach is based on the self-testing procedure described in BESTEST-EX (Judkoff et al., 2010).

4.1 Define the house

One 1960s-era all-electric ranch-style house is considered in the analysis. BESTEST-EX (Judkoff et al., 2010) Case L200EX-P is the basis for the modeled house. The key pre-retrofit characteristics of the modeled house are given in Table 4.1.

4.2 Define approximate inputs

Probability distributions were assigned to each of the λ approximate inputs to model the uncertainty in these inputs. Probability distributions are functions which describe the likelihood of random variables taking certain values. Triangular probability distributions were used for this analysis. This type of distribution is characterized by having the greatest probability of selection at the "best-guess," or nominal, value, with linearly decreasing probability to zero at the range extrema (Evans et al., 2000; Judkoff et al., 2010; Kotz & van Dorp, 2004; Reddy & Maor, 2006). An asymmetric triangular probability distribution is shown in Figure 4.1, where "Nominal" refers to the nominal ("best-guess") value, "Min" the minimum value, and "Max" the maximum value.

Table A.1 of Appendix A lists the approximate inputs primarily related to the operation of the modeled house ("operational inputs"), including minimum, maximum, and nominal values, as well as units of measure. Table A.2 of Appendix A lists the approximate inputs primarily related to the physical features of the modeled house ("asset inputs"), including minimum, maximum, and nominal values, as well as units of measure. Nominal input values were substituted into the model's building description file to create the uncalibrated model.

Table 4.1: Characteristics of the modeled house.

Category: Pre-retrofit characteristics
Location: Las Vegas, NV
Orientation: Front of house faces South
Dimensions: N/S = 57 ft, E/W = 27 ft, 1 story (8 ft)
Garage: None
Neighbors: At 15 ft, E/W
Eave Depth: 2 ft
Vented Crawlspace: 2.0 ACH, R-19 between joists
Exterior Walls: 2 x 4, 16" o.c., wood siding, no cavity insulation (R-1.01 air gap)
Unfinished Attic: 2 x 6 joists, R-11 insulation, 3.5" thickness
Window Type: Single pane, aluminum frame, U = 0.774, SHGC = 0.679
Window Area/Distribution: 20% of exterior wall area; 33.3% of window area each on N/S, 16.7% of window area each on E/W
Furnace: Electric
Air Conditioner: SEER 10
Ducts: Uninsulated, in crawl space, leakage fraction = 0.30
Living Space SLA: 0.000886
Mechanical Ventilation: Spot vent only
Water Heater: Electric, energy factor = 0.92, in living space
Major Appliances: Refrigerator, electric range, dishwasher, clothes washer, electric clothes dryer
Thermostat Setpoints: Heating 68.0 F

Figure 4.1: Triangular probability distribution.


4.3 Generate synthetic utility data

The first step in generating synthetic electric utility data2 (kWh) was to randomly select an explicit input value νi from each approximate input’s triangular probability distribution.

Each probability distribution function is continuous, so randomly selected values could fall anywhere between the minimum and maximum values shown in Table A.1 and Table A.2. The selection process begins by generating random numbers γ_1, . . . , γ_λ from a uniform distribution on the interval (0, 1). The set of explicit input values ν_1, . . . , ν_λ were then calculated as

ν_i = x_i^min + \sqrt{γ_i (x_i^nom − x_i^min)(x_i^max − x_i^min)}, if γ_i ≤ (x_i^nom − x_i^min)/(x_i^max − x_i^min),
ν_i = x_i^max − \sqrt{(1 − γ_i)(x_i^max − x_i^nom)(x_i^max − x_i^min)}, if γ_i > (x_i^nom − x_i^min)/(x_i^max − x_i^min),    (4.1)

for i = 1, . . . , λ (Kotz & van Dorp, 2004). Here, x_i^min, x_i^nom, x_i^max are the probability distribution's minimum, nominal, and maximum values, respectively. The explicit input values were then substituted into the model's building description file to create the reference model, and the reference model was simulated in BEoptTM/DOE-2.2 for a time period of one year using Typical Meteorological Year 3 (TMY3) weather data for Las Vegas, NV3. The set of n = 8760 hourly total site electricity use data points was extracted from the simulation output.

² This study considers only electric utility data since the modeled house consumes no other fuel type.
³ http://rredc.nrel.gov/solar/old_data/nsrdb/1991-2005/tmy3
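For illustration, a minimal Python sketch of this inverse-CDF selection process is given below; the input names and (min, nominal, max) bounds are placeholders only, not the values from Table A.1 and Table A.2.

```python
import random

def sample_triangular(x_min, x_nom, x_max):
    """Draw one value from an asymmetric triangular distribution using
    the inverse-CDF relations of Equation 4.1."""
    gamma = random.random()  # uniform random number on (0, 1)
    threshold = (x_nom - x_min) / (x_max - x_min)
    if gamma <= threshold:
        return x_min + (gamma * (x_nom - x_min) * (x_max - x_min)) ** 0.5
    return x_max - ((1.0 - gamma) * (x_max - x_nom) * (x_max - x_min)) ** 0.5

# Example: draw one explicit value nu_i per approximate input.
approximate_inputs = {
    "ExampleInputA": (0.5, 1.0, 2.0),
    "ExampleInputB": (10.0, 15.0, 30.0),
}
explicit_values = {name: sample_triangular(lo, nom, hi)
                   for name, (lo, nom, hi) in approximate_inputs.items()}
```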


This set of hourly data became the reference utility data for hourly calibrations; reference utility data for monthly and daily calibrations were obtained by summing the appropriate reference hourly data into n = 12 monthly and n = 365 daily data points, respectively. To obtain two distinct synthetic utility bill scenarios, this process was performed for 100 sets of randomly selected ν_i. Simulation output of the 100 sets was sorted by increasing annual cooling electricity values, and the 5th and 95th sets of ν_i were chosen as scenarios in which the uncalibrated model over-predicts and under-predicts the reference utility data by +25.6% and −4.7%, respectively. See Table 4.2 for a summary of this synthetic utility bill selection. The selected sets of ν_i are given in Table B.1 and Table B.2.

Table 4.2: Reference utility data.

Scenario         Cooling electricity (kWh)   Percent difference   Electricity consumption (kWh)   Percent difference
Uncalibrated     6145.3                                            25709.7
Over-predicts    4618.7                      +33.1%                20473.2                         +25.6%
Under-predicts   9225.4                      −33.4%                26965.3                         −4.7%

4.4 Identify the most influential inputs

A preliminary sensitivity analysis was performed to identify the 24 most influential inputs from the set of approximate inputs listed in Table A.1 and Table A.2. These influential inputs were considered as starting points for each applicable calibration procedure.⁴ The sensitivity analysis was a Monte Carlo procedure and was implemented using Equation 2.5 as described in Section 2.2.2 with η = 50 random selections from each approximate input's triangular probability distribution. The selected input values were substituted into the building description file one at a time (holding the other inputs at their nominal values), and then all files were simulated in automated batch mode.

⁴ ASHRAE 1051-RP requires that 20 – 24 influential inputs be identified using walk-through audits and heuristics (Reddy & Maor, 2006). To allow for a fair comparison across calibration procedures, the same set of 24 influential inputs was considered for each calibration procedure.


Sensitivity coefficients ξ_1, ..., ξ_λ were calculated for each of the λ approximate inputs. A descending sort of the ξ_i values revealed the 24 most sensitive inputs, which we call influential inputs. The sensitivity coefficients of the influential inputs are depicted in Figure 4.2. These sensitivity coefficients are given in tabular form in Table C.1 of Appendix C.

Figure 4.2: The 24 most influential inputs. Influential parameters are sorted in descending order of ξi.
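For illustration, a minimal Python sketch of such a one-at-a-time Monte Carlo screening is given below. Equation 2.5 is defined in Chapter 2 and is not reproduced here; the sketch assumes a coefficient-of-variation form (standard deviation of the η outputs divided by their mean), and simulate_annual_kwh is a hypothetical wrapper around a batch BEoptTM/DOE-2.2 run.

```python
import random
import statistics

def sensitivity_coefficients(inputs, simulate_annual_kwh, eta=50):
    """One-at-a-time Monte Carlo sensitivity screening (sketch).

    inputs: {name: (min, nominal, max)} for each approximate input.
    simulate_annual_kwh: hypothetical function that runs one simulation
    for a full input dictionary and returns annual site electricity (kWh).
    """
    nominal = {name: nom for name, (_, nom, _) in inputs.items()}
    xi = {}
    for name, (lo, nom, hi) in inputs.items():
        outputs = []
        for _ in range(eta):
            trial = dict(nominal)  # hold all other inputs at their nominal values
            trial[name] = random.triangular(lo, hi, nom)
            outputs.append(simulate_annual_kwh(trial))
        # Assumed coefficient-of-variation form of the sensitivity coefficient
        xi[name] = statistics.pstdev(outputs) / statistics.mean(outputs)
    # A descending sort exposes the most influential inputs
    return sorted(xi.items(), key=lambda kv: kv[1], reverse=True)
```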

4.5 Perform calibrations

Three calibration techniques were researched, implemented, and evaluated for this analysis:

1. a direct simulated annealing approach,
2. a regression metamodeling approach, and
3. an ASHRAE 1051-RP-based approach.

As the first component of each of these procedures, a sensitivity analysis for reducing search space dimensionality was required. For this analysis, these sensitivity analyses were performed on the subset of influential inputs depicted in Figure 4.2 of Section 4.4. Detailed descriptions of the implementations of these three calibration techniques are presented in Chapter 5.

4.6 Apply retrofit measures

After the three calibration techniques were performed using monthly, daily, and hourly sets of utility data, and their calibrated models (i.e., sets of adjusted input values) recovered, they were evaluated based on their ability to recover reference model input values (explicit input values) and improve the accuracy of energy savings predictions for retrofit measures. Table 4.3 describes various retrofit measures based on BESTEST-EX (Judkoff et al., 2010) that were applied to:

• the reference model,

• the uncalibrated model, and
• each of the calibrated models.

Following the application of each of these measures, the annual results were:

• ψ_r for the reference model energy savings prediction,
• ψ_u for the uncalibrated model energy savings prediction, and
• ψ_c for each calibrated model energy savings prediction.


Table 4.3: BESTEST-EX-based retrofit measures.

Retrofit measure         BEoptTM input              Change⁵            Units
Air-Seal                 LivingSpaceSLA             −0.000451          in²/ft²
Attic Insulation         UACeilingInsRvalue         +28.9              hr·ft²·°F/Btu
                         UACeilingInsThickness      +8.0               in
Wall Insulation          CavityInsRvalue1           13.0               hr·ft²·°F/Btu
Programmable Therm.      heating set point          −6.0 (10pm-6am)    °F
                         cooling set point          +6.0 (8am-5pm)     °F
Low-e Windows            WindowUvalue               0.279              Btu/hr·ft²·°F
                         WindowSHGC                 0.346              coefficient
Duct Sealing and Ins.    UnconditionedDuctRvalue    8.0                hr·ft²·°F/Btu
                         ReturnLeak                 0.01               frac AH fan
                         SupplyLeak                 0.09               frac AH fan
                         AHLeakRA                   0.04               frac AH fan
                         AHLeakSA                   0.01               frac AH fan
AC Replacement           CoolingSEER                16.0               kBtu/kWh
                         HeatingSizeMethod          autosize           kBtu/hr
                         CoolingSizeMethod          autosize           tons
Combined⁶                ...                        ...                ...
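For illustration, the measures in Table 4.3 can be represented programmatically as sets of input overrides, distinguishing relative changes (leading +/−) from absolute replacements per footnote 5. The sketch below is purely illustrative, shows only a few rows of the table, and does not reflect the actual BEoptTM input interface.

```python
# Each override is (mode, value): "abs" replaces the pre-retrofit value,
# "rel" adds to it (the leading +/- entries in Table 4.3).
RETROFIT_MEASURES = {
    "Air-Seal": {"LivingSpaceSLA": ("rel", -0.000451)},
    "Attic Insulation": {"UACeilingInsRvalue": ("rel", 28.9),
                         "UACeilingInsThickness": ("rel", 8.0)},
    "Low-e Windows": {"WindowUvalue": ("abs", 0.279),
                      "WindowSHGC": ("abs", 0.346)},
}

def apply_measure(inputs, measure):
    """Return a copy of the model inputs with one retrofit measure applied."""
    post_retrofit = dict(inputs)
    for name, (mode, value) in RETROFIT_MEASURES[measure].items():
        post_retrofit[name] = value if mode == "abs" else post_retrofit[name] + value
    return post_retrofit
```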

4.7 Compare calibration techniques

The absolute errors ε(ψ_c) and ε(ψ_u) were then calculated for the calibrated and uncalibrated model energy savings predictions using the equations

\[
\varepsilon(\psi_c) = |\psi_c - \psi_r|, \tag{4.2}
\]
\[
\varepsilon(\psi_u) = |\psi_u - \psi_r|, \tag{4.3}
\]

and the so-called benefit of calibration ϕ (Judkoff et al., 2010) was calculated for each calibrated model using

\[
\varphi = \varepsilon(\psi_u) - \varepsilon(\psi_c). \tag{4.4}
\]

⁵ A leading +/− denotes a relative change to the pre-retrofit building; all other values are absolute (they replace the value for the pre-retrofit building).

⁶ Although not listed, the Combined measure includes an additional absolute change of 0.2 (frac) to the BEoptTM input RoofingMaterialAbsorptivity; results are not individually reported since annual savings were less than 2% of pre-retrofit energy use for all models.


If ϕ > 0, then energy savings predictions are improved by calibration. The benefit of calibration ϕ has the same units as the energy savings predictions (kWh) and therefore can be compared across retrofit measures. The benefit of calibration ϕ can also be converted to monetary values by assuming average utility prices, which helps to understand the benefit of calibration with respect to overall cost of the audit and home improvements.

We can express the calibrated model energy savings prediction, ψ_c, as a percent of the pre-retrofit reference utility data, Γ_r, using the equation

\[
\phi_c = \frac{\psi_c}{\Gamma_r} \times 100\%, \tag{4.5}
\]

and the reference model energy savings prediction, ψ_r, as a percent of the pre-retrofit reference utility data using the equation

\[
\phi_r = \frac{\psi_r}{\Gamma_r} \times 100\%. \tag{4.6}
\]
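A minimal Python sketch of these comparison metrics (the error, benefit-of-calibration, and percent-savings expressions above) is given below; the variable names are illustrative.

```python
def calibration_metrics(psi_c, psi_u, psi_r, gamma_r):
    """Benefit-of-calibration metrics for one retrofit measure.

    psi_c, psi_u, psi_r: calibrated, uncalibrated, and reference model
    annual energy savings predictions (kWh); gamma_r: pre-retrofit
    reference utility consumption (kWh).
    """
    eps_c = abs(psi_c - psi_r)        # absolute error of the calibrated model (Eq. 4.2)
    eps_u = abs(psi_u - psi_r)        # absolute error of the uncalibrated model (Eq. 4.3)
    benefit = eps_u - eps_c           # benefit of calibration; > 0 means calibration helped (Eq. 4.4)
    phi_c = psi_c / gamma_r * 100.0   # calibrated savings as a percent of pre-retrofit use (Eq. 4.5)
    phi_r = psi_r / gamma_r * 100.0   # reference savings as a percent of pre-retrofit use
    return {"eps_c": eps_c, "eps_u": eps_u, "benefit": benefit,
            "phi_c": phi_c, "phi_r": phi_r}
```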

CHAPTER 5

MODEL CALIBRATION TECHNIQUES AND RESULTS

After the preliminary procedures described in Sections 4.1 – 4.4 were performed, the three calibration techniques listed in Section 4.5 were implemented. Each technique begins with a sensitivity analysis for reducing parameter search space dimensionality and includes some application of the simulated annealing algorithm as described in Section 2.3. A diagram illustrating the general process of implementing the calibration techniques, beginning with the definition of approximate inputs described in Section 4.2, is given in Figure 5.1. Detailed descriptions of the implementations of these three calibration techniques are presented in Sections 5.1 – 5.3.

5.1 Direct simulated annealing

The first calibration technique evaluated includes a direct application of the simulated annealing algorithm to BEoptTM/DOE-2.2 simulations. The calibration technique's sensitivity analysis and simulated annealing implementations are described in Sections 5.1.1 – 5.1.2.

5.1.1 Sensitivity analysis

The sensitivity analysis for this procedure follows the coefficient of variation method described in Section 2.2.2 and implemented in Section 4.4, with η = 50 random selections and assuming the analyst pre-selects 24 influential inputs to be considered in the analysis. To provide a fair comparison across calibration techniques, procedures began with the 24 influential inputs from Figure 4.2. The most sensitive τ = 6 of these were selected as strong parameters to be adjusted in the simulated annealing algorithm.⁷

⁷ This analysis used a constant number of adjustable parameters across calibration procedures; the number of adjustable parameters was chosen to be equal to that identified in Section 5.3.2, which describes a procedure with parameter selection criteria.


Figure 5.1: Flowchart for calibration technique implementations. 'Adj. Inputs' refers to adjustable inputs, 'Cal. Inputs' to calibrated inputs, and 'M', 'D', 'H' to monthly, daily, and hourly utility data frequencies.


All other input parameters were frozen at their nominal values during the inversion. The results for the top τ = 6 parameters of this sensitivity analysis, with η = 50 random samples, are shown in Figure 5.2. Note that the results were expected to be similar to those shown in Figure 4.2, but not necessarily identical, since the analyses are based on different sets of randomly selected values. See Table C.2 for these coefficients given in tabular form.

Figure 5.2: Sensitivity analysis results for the direct simulated annealing calibration technique. Strong parameters are sorted in descending order of ξ_i.

5.1.2 Inversion

The strong parameters presented in Figure 5.2 were made adjustable parameters for the simulated annealing algorithm, and the iterative search began at nominal values, choosing the number of gradient sample iterations G = 50, the initial temperature T_0 = 10, and the number of iterations per parameter N = 100.⁸ Each adjusted parameter was bounded by the minimum and maximum values of its probability distribution. Figure 5.3 gives the results of the simulated annealing inversion in terms of the calibrated input values. For reference, the percent errors in uncalibrated and calibrated model input values relative to reference input values are given in Table F.1 and Table F.4 of Appendix F.
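For illustration, a generic bounded simulated annealing loop of the kind used in this inversion is sketched below in Python. It is not the exact Section 2.3 algorithm: the gradient-sampling step is omitted, a simple geometric cooling schedule is assumed, and the objective function stands in for the residual between simulated and reference utility data.

```python
import math
import random

def simulated_annealing(bounds, start, objective, t0=10.0, n_iter=100, cooling=0.95):
    """Generic bounded simulated annealing inversion (sketch only).

    bounds:    {name: (min, max)} from the input probability distributions
    start:     {name: nominal value}, i.e. the uncalibrated model inputs
    objective: callable returning the residual between simulated and
               reference utility data for a full set of input values
    """
    current = dict(start)
    current_cost = objective(current)
    best, best_cost = dict(current), current_cost
    temperature = t0
    names = list(bounds)
    for _ in range(n_iter * len(names)):
        name = random.choice(names)
        lo, hi = bounds[name]
        candidate = dict(current)
        # Perturb one parameter and clamp it to its distribution bounds
        candidate[name] = min(hi, max(lo, candidate[name]
                                      + 0.1 * (hi - lo) * random.uniform(-1.0, 1.0)))
        cost = objective(candidate)
        # Accept all improvements; accept increases with Boltzmann probability
        if cost < current_cost or random.random() < math.exp(-(cost - current_cost) / temperature):
            current, current_cost = candidate, cost
            if cost < best_cost:
                best, best_cost = dict(candidate), cost
        temperature *= cooling  # assumed geometric cooling schedule
    return best, best_cost
```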

Figure 5.4 – Figure 5.9 show the iterative search convergence processes for parameters relative to reference input values and for the objective function value. Figure 5.4(b) – Figure 5.9(b) depict that at higher temperatures T the algorithm allows more energy increases and eventually, as T cools according to the schedule, converges on a least objective function value.

⁸ Simulated annealing algorithm parameter values were chosen based on the author's experience and the convergence observed during preliminary simulations; algorithm performance depends on these chosen values.


(a) Over-prediction scenario.

(b) Under-prediction scenario.

Figure 5.3: Inversion results for the direct simulated annealing calibration technique. Left and right bounds are the parameters’ minimum and maximum values.



A graphical representation of output agreement is shown in Figure 5.10 and Figure 5.11. The daily and hourly output was summed into monthly output so that a meaningful comparison between errors for the monthly, daily, and hourly calibrated models can be made. Percentages along the tops of the figures are percent errors of the calibrated model predictions relative to the reference utility data.
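For illustration, assuming the hourly simulation output is available as a pandas series, this aggregation might look as follows (the constant hourly values are placeholders):

```python
import pandas as pd

# Placeholder hourly site electricity series for one year (8760 points).
hourly_kwh = pd.Series(1.5, index=pd.date_range("2021-01-01", periods=8760, freq="h"))

daily_kwh = hourly_kwh.resample("D").sum()     # 365 daily totals
monthly_kwh = hourly_kwh.resample("MS").sum()  # 12 monthly totals
```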

Each retrofit measure described in Table 4.3 was applied to the calibrated model, and calibrated model energy savings predictions were calculated. Resulting ϕ values were then calculated using Equation 4.4 to assess the benefit of calibration. These results are summarized in Figure 5.12. Tabular results are given in Table G.1 and Table G.4 of Appendix G.

5.2 Regression metamodeling

The second calibration technique evaluated includes an indirect application of the simulated annealing algorithm to BEoptTM/DOE-2.2 simulations. This calibration technique's sensitivity analysis and specific simulated annealing implementations are described in Sections 5.2.1 – 5.2.3.

5.2.1 Sensitivity analysis

For this approach, the coefficient of variation sensitivity analysis method described in Section 2.2.2 and implemented in the direct simulated annealing approach was used. Therefore, the same parameters for adjustment as in the direct simulated annealing calibration technique were considered (those presented in Figure 5.2).

5.2.2 Central composite design and regression

One approach to reducing costly simulation run times is to substitute the actual building simulations with evaluations of quadratic polynomials (Chlela et al., 2009). These quadratic polynomials, or metamodels, can be created and assembled using a statistical design of experiments.
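For illustration, a minimal Python sketch of such a metamodeling step is given below, assuming scikit-learn, a face-centered central composite design (alpha = 1) in coded units, and hypothetical run_simulation and decode helpers for the expensive BEoptTM/DOE-2.2 evaluations (replaced here by placeholder outputs). Once fitted, the quadratic metamodel is cheap to evaluate inside the simulated annealing search.

```python
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def central_composite_design(n_factors, alpha=1.0):
    """Coded CCD points: 2^k factorial corners, 2k axial points, one center
    point (a face-centered design with alpha = 1 is assumed here)."""
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=n_factors)))
    axial = np.vstack([sign * alpha * np.eye(n_factors)[i]
                       for i in range(n_factors) for sign in (1.0, -1.0)])
    center = np.zeros((1, n_factors))
    return np.vstack([corners, axial, center])

design = central_composite_design(n_factors=6)  # six strong parameters in coded units
# Expensive step (hypothetical helpers): run one simulation per design point.
# y = np.array([run_simulation(decode(point)) for point in design])
y = np.random.rand(len(design))                  # placeholder simulation outputs

# Fit a quadratic polynomial metamodel to the simulation outputs.
quad = PolynomialFeatures(degree=2, include_bias=True)
metamodel = LinearRegression().fit(quad.fit_transform(design), y)

def predict(point):
    """Cheap metamodel evaluation in place of a full simulation."""
    return metamodel.predict(quad.transform(np.atleast_2d(point)))[0]
```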


(a) Parameter convergence process. The red lines are reference values.

(b) Residual convergence process.

Figure 5.4: Convergence processes for the direct simulated annealing calibration technique using monthly data, over-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.


(a) Parameter convergence process. The red lines are reference values.

(b) Residual convergence process.

Figure 5.5: Convergence processes for the direct simulated annealing calibration technique using daily data, over-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.


(a) Parameter convergence process. The red lines are reference values.

(b) Residual convergence process.

Figure 5.6: Convergence processes for the direct simulated annealing calibration technique using hourly data, over-prediction scenario. The dashed lines mark where the calibration solution corresponding to lowest computed energy was recovered.
