
Nuclear data uncertainty propagation for a lead-cooled fast reactor: Combining TMC with criticality benchmarks for improved accuracy

Erwin Alhassan

Licentiate Thesis

Department of Physics and Astronomy

Division of Applied Nuclear Physics


Abstract

For the successful deployment of advanced nuclear systems and for the optimization of current reactor designs, high quality and accurate nuclear data are required. Before nuclear data can be used in applications, they are first evaluated, benchmarked against integral experiments and then converted into formats usable for applications. In the past, the evaluation process was usually based on differential experimental data, complemented with nuclear model calculations. This trend is changing fast because of the increase in computational power and the tremendous improvements in nuclear reaction theories over the last decade. Since these model codes are not perfect, they are usually validated against a large set of experimental data. However, since these experiments are themselves not exact, the quantities calculated by model codes, such as cross sections and angular distributions, contain uncertainties, a major source being the input parameters to these model codes. Since nuclear data are used as input to reactor transport codes, the output of these codes ultimately contains uncertainties due to the data. Quantifying these uncertainties is therefore important for reactor safety assessment and for deciding where additional efforts should be taken to reduce them further.

Until recently, these uncertainties were mostly propagated using generalized perturbation theory. With the increase in computational power, however, more exact methods based on Monte Carlo are now possible. At the Nuclear Research and Consultancy Group (NRG), Petten, the Netherlands, a new method called 'Total Monte Carlo (TMC)' has been developed for nuclear data evaluation and uncertainty propagation. An advantage of this approach is that it eliminates the use of covariances and the assumption of linearity that is made in the perturbation approach.

In this work, we have applied the TMC methodology to assess the impact of nuclear data uncertainties on macroscopic reactor parameters of the European Lead-Cooled Training Reactor (ELECTRA), which has been proposed within the Swedish GEN-IV initiative. As part of the work, the uncertainties due to the plutonium isotopes and americium in the fuel, the lead isotopes in the coolant and some important structural materials have been investigated at the beginning of life. For the actinides, large uncertainties in k_eff were observed due to 238,239,240Pu nuclear data, while for the lead coolant the uncertainty in k_eff was large for all lead isotopes except 204Pb, with a significant contribution coming from 208Pb. The dominant contribution to the k_eff uncertainty came from uncertainties in the resonance parameters of 208Pb.

Also, before the final product of an evaluation is released, the evaluated data are tested against a large set of integral benchmark experiments. Since these benchmarks differ in geometry, type, material composition and neutron spectrum, their selection for specific applications is normally tedious and not straightforward. As a further objective of this thesis, methodologies for benchmark selection based on the TMC method have been developed. The method has also been applied to nuclear data uncertainty reduction using integral benchmarks. From the results obtained, it was observed that by including criticality benchmark information using a binary accept/reject method, reductions of 40% and 20% in the nuclear data uncertainty of k_eff were achieved for 239Pu and 240Pu, respectively, for ELECTRA.


List of papers

This thesis is based on the following papers, which are referred to in the text by their Roman numerals.

I Combining Total Monte Carlo and Benchmarks for nuclear data uncertainty propagation on a Lead Fast Reactor’s Safety

E. Alhassan, H. Sjöstrand, J. Duan, C. Gustavsson, A.J. Koning, S. Pomp, D. Rochman, M. Österlund

Accepted for publication in Nuclear Data Sheets (2013).

My contribution: I wrote the scripts, did the simulations, the analyses and interpretation of results. I also wrote the paper.

II Uncertainty and correlation analysis of lead nuclear data on reactor parameters for the European Lead Cooled Training Reactor

E. Alhassan, H. Sjöstrand, P. Helgesson, A.J. Koning, M. Österlund, S. Pomp, D. Rochman

Submitted to Annals of Nuclear Energy (2014).

My contribution: I wrote most of the scripts, did the simulations, the analyses and interpretation of results. I also wrote the paper.

III Selecting benchmarks for reactor calculations

E. Alhassan, H. Sjöstrand, J. Duan, P. Helgesson, S. Pomp, M. Österlund, D. Rochman, A. J. Koning

Accepted for presentation at the PHYSOR 2014 International Conference.

My contribution: I wrote the scripts, did the simulations, the analyses and interpretation of results. I also wrote the paper.


Other papers not included in this thesis

List of papers related to this thesis but not included in the comprehensive summary. I am first author or co-author of all listed papers.

1. Propagation of nuclear data uncertainties for ELECTRA burn-up calculations
H. Sjöstrand, E. Alhassan, J. Duan, C. Gustavsson, A. Koning, S. Pomp, D. Rochman, M. Österlund
Accepted for publication in Nuclear Data Sheets (2013).

2. Total Monte Carlo evaluation for dose calculations

H. Sjöstrand, E. Alhassan, S. Conroy, J. Duan, C. Hellesen, S. Pomp, M. Österlund, A. Koning, D. Rochman

Radiation Protection Dosimetry (2013).

3. Uncertainty Study of Nuclear Model Parameters for the n+56Fe Reactions in the Fast Neutron Region Below 20 MeV

J. Duan, S. Pomp, H. Sjöstrand, E. Alhassan, C. Gustavsson, M. Österlund, D. Rochman, A. J. Koning

Accepted for publication in Nuclear Data Sheets (2013).

4. UO2 vs MOX: propagated nuclear data uncertainty for k_eff, with burnup

P. Helgesson, D. Rochman, H. Sjöstrand, E. Alhassan, A. J. Koning
Accepted for publication in Nuclear Science and Engineering (2014).

5. Uncertainty analysis of Lead cross sections on reactor safety for ELECTRA

E. Alhassan, H. Sjöstrand, J. Duan, C. Gustavsson, A.J. Koning, S. Pomp, D. Rochman, M. Österlund

Submitted to Joint International Conference on Supercomputing in Nuclear Applications + Monte Carlo, Paris, October 27-31, 2013.

6. Incorporating experimental information in the TMC methodology using file weights
P. Helgesson, H. Sjöstrand, A. J. Koning, D. Rochman, E. Alhassan, S. Pomp


Contents

Other papers not included in this thesis

1 Introduction
  1.1 Background
  1.2 Outline of thesis
2 Nuclear data
  2.1 Experimental data
    2.1.1 Differential data (EXFOR)
    2.1.2 Integral data (benchmarks)
  2.2 Model calculations
  2.3 Nuclear data evaluation
  2.4 Nuclear data libraries
    2.4.1 Nuclear data covariances
  2.5 Nuclear data processing
3 Uncertainty Quantification
  3.1 Statistical estimation
  3.2 Uncertainty quantification approaches
    3.2.1 Total Monte Carlo (TMC)
    3.2.2 TMC compared to other Monte Carlo methods
4 Methodology
  4.1 Simulation tools
    4.1.1 T6 code package
    4.1.2 NJOY processing code
    4.1.3 PREPRO code
    4.1.4 SERPENT Monte Carlo code
  4.2 Model calculations with TALYS
  4.3 Nuclear data uncertainty calculation
    4.3.1 Partial variation
  4.4 Reactor Physics
    4.4.1 Reactor description
    4.4.2 Reactor neutronic parameters
  4.5 Benchmark selection method
  4.6 Nuclear data uncertainty reduction
  4.7 Monte Carlo sensitivity method
5 Results and Discussions
  5.1 Model calculation results
  5.2 Reactor parameters
  5.3 Correlations and partial variations
  5.4 Benchmark selection and nuclear data uncertainty reduction
6 Conclusion and outlook
Acknowledgment


1. Introduction

1.1 Background

The thesis involves the propagation of nuclear data uncertainties from basic nuclear physics to integral reactor parameters such as k_eff, β_eff, the coolant temperature coefficient and the void worth, using brute-force computing power. The reference core chosen for this study is the European Lead-Cooled Training Reactor (ELECTRA), a lead-cooled reactor proposed within the Swedish National Generation IV reactor framework [1]. As part of the work, the impact of nuclear data uncertainties of some of the actinides in the fuel (PAPER I) [2] and of the lead isotopes in the coolant (PAPER II) [3] has been assessed. Also, nuclear data uncertainties of some structural materials within the reactor at steady state have been presented. A further objective has been to develop methodologies for benchmark selection and for reducing nuclear data uncertainty using integral benchmarks, presented in more detail in PAPERS I and III [2, 4].

For several decades, reactor design has been supported by computer simulations of reactor behavior under both steady-state and transient conditions. Since computer codes operate on mathematical models of physical reality, the computed results can only be approximations of the true values. The physical models in these simulation codes depend on the underlying nuclear data, implying that uncertainties in nuclear data ultimately lead to uncertainties in the simulations. Understanding these uncertainties is important for setting safety margins and for deciding where additional efforts could be undertaken to reduce them [5].

In the past, the procedure for evaluating nuclear data involved the evaluation of differential experimental data, complemented with nuclear-model calculations for important channels and energy ranges for which no experimental data were available [6]. However, with the increase in computational power and the sophistication of nuclear reaction theories, modern nuclear data evaluations rely more on nuclear reaction model codes for the computation of the cross sections, angular distributions and other quantities required for a large variety of applications. With the help of these model codes, more complete data libraries, including covariance data, can be produced [7]. The use of model codes offers several advantages, among them the preservation of the energy balance and of the coherence of partial cross sections with the total and reaction cross sections, the prediction of data for unstable nuclei, and the provision of data where experimental data are unavailable or scarce [8]. Where experimental data are available, they are used for fine-tuning the input parameters of these model codes. Consequently, the uncertainty in the underlying basic microscopic nuclear physics, i.e., in the input parameters needed to perform theoretical calculations with these model codes, describes the nuclear data uncertainty.

At the Nuclear Research and Consultancy Group, Petten, a new method called 'Total Monte Carlo', built around the TALYS nuclear reaction code [9], has been developed for nuclear data uncertainty quantification and analysis [7]. By varying model parameters within experimental uncertainties using a TALYS-based system [7], a large set of acceptable random nuclear data files is produced, each containing a unique set of data. These files are processed into usable formats and then used in neutron transport codes to obtain probability distributions of reactor parameters, from which nuclear data uncertainty information can be inferred. The Total Monte Carlo (TMC) method was applied in this work for nuclear data uncertainty and correlation analysis of the ELECTRA reactor. The isotopes considered include the plutonium isotopes in the fuel (238,239,240,241,242Pu) and the minor actinide 241Am, the lead coolant isotopes (204,206,207,208Pb), and some structural materials.


Also, before the final product of an evaluation is released, it is benchmarked against a large set of integral experiments [10]. These benchmarks are used for validating the reactor codes used for both existing and future reactor systems. Since the benchmarks differ in geometry, type, material composition and neutron spectrum, their selection for specific applications is normally tedious and not straightforward [11]. Previously, benchmarks were usually selected by visual inspection, thereby introducing user bias into the selection process [4]. Therefore, a method has been developed for the selection of these benchmarks, based on the TMC method (presented in subsection 3.2.1), for validating reactor codes, testing nuclear data libraries and reducing nuclear data uncertainty [4]. The method was applied to the ELECTRA reactor with a set of criticality benchmarks [10] and is presented in more detail in PAPERS I and III.

1.2 Outline of thesis

The thesis is structured as follows. The background and objectives of the study are presented in Chapter 1. In Chapter 2, the nuclear data evaluation process, which involves the combination of differential experimental data, model calculations, nuclear covariance data and nuclear data processing, is described. Sources of uncertainty, uncertainty quantification approaches and uncertainty analysis in Reactor Physics modeling are treated in Chapter 3. In Chapter 4, the simulation tools used and the application of the TMC method to reactor calculations are presented, together with the methods developed for integral benchmark selection and for nuclear data uncertainty reduction; model calculations and the generation of random nuclear data with the TALYS code system are also presented. In Chapter 5, the results obtained are discussed, and finally Chapter 6 contains the conclusion and the outlook for future work.


2. Nuclear data

Nuclear data are physical parameters that describe the properties of atomic nuclei and the fundamental physical relationships governing their interactions [12]. These include atomic data, nuclear reaction data, thermal scattering data, radioactive decay data and fission yield data, all of which are important for the development of theoretical nuclear models and for applications involving radiation and nuclear technology [13]. Because of the wide variety of applications, nuclear data can be divided into three types based on the physics they represent [14]. The first is transport data, which describes the interactions of various projectiles, such as neutrons and protons, with target nuclei. Transport data are usually associated with cross sections, angular distributions and single and double differential data [14], which are necessary for both transport and depletion calculations. The second is fission yield data, which are used for the calculation of waste disposal inventories and decay heat, for depletion calculations and in the calculation of the beta and gamma ray spectra of fission-product inventories [15]. The third is decay data, which describes nuclear levels, half-lives, Q-values and decay schemes [12, 14]. These data are used, e.g., for dosimetry calculations and for estimating decay heat in nuclear repositories.

For the successful deployment of advanced nuclear systems and for the optimization of current reactor designs, high quality nuclear data and accurate information about the nuclear reactions taking place are required. Using sensitivity and uncertainty analyses, a list of priorities for the improvement of nuclear data can be determined [16, 17]. According to Refs. [16, 17], high-priority measurements (with their corresponding target accuracies in brackets) are needed within the 50 keV to 6 MeV energy range for the fission cross sections of 238,240-242Pu (2-3%), 241,243Am (3%) and 244,245Cm (5-7%), and for the inelastic cross sections of 23Na (4%), 28Si (3%), 56Fe (3%) and 206,207Pb (3%), among others. Other nuclear data needs include high-precision measurements of fission product yields, fission product cross sections and decay data [17]. On the application side, a preliminary attempt to assign target accuracies for some GEN-IV systems has been carried out by an OECD/NEA expert group [16]. From Ref. [16], the following 1-sigma target accuracies have been identified for fast reactors (target accuracy in brackets): multiplication factor at Beginning of Life (BOL) (0.3%), power peaking factor (2%) and reactivity coefficients at BOL (7%), while nuclide densities at the end of life should be within 10% accuracy. To fulfill these target accuracies, theoretical nuclear physics models and uncertainty quantification methods must be developed in parallel.

2.1 Experimental data

Experimental data can be divided into differential and integral data. Differential data are microscopic quantities that describe the properties of nuclei and their interactions with particles, while integral data reflect the global behavior of a system. In the following subsections, the differential data and integral benchmarks used for nuclear data evaluation are presented.

2.1.1 Differential data (EXFOR)

Microscopic quantities such as cross sections, fission yields and angular distributions are measured in a large number of facilities across the globe. These data are collected, compiled and stored in the EXFOR database (Exchange among nuclear reaction data centers) [18]. The database contains numerical data and experimental/bibliographic information on experiments for neutron-, charged-particle- (A ≤ 12) and photon-induced reactions on a wide range of isotopes, natural elements and compounds, for incident energies up to about 1 GeV [19]. Fig. 2.1 shows selected differential experimental data for the 208Pb(n,tot) cross section as a function of incident neutron energy. Even though a large number of experiments are available for the (n,tot) cross section, very few data exist for other cross sections, such as the 208Pb(n,el) cross section.

Figure 2.1. Selected 208Pb(n,tot) experimental data from the EXFOR database as a function of incident neutron energy (data sets include J.A. Harvey (1999), R.W. Finlay et al. (1993), J.A. Farrell et al. (1965), J.H. Gibbons et al. (1967), J.L. Fowler et al. (1962), R.B. Day (1965) and D.G. Foster Jr. et al. (1971)).

Accurate and readily available experimental data are necessary ingredients for validating nuclear reaction model codes and for nuclear data evaluation and uncertainty reduction. For this reason, a careful assessment of possible systematic and statistical experimental uncertainties is needed [20]. The sources of these experimental uncertainties are presented in subsection 3.1. Experimental data are combined with model calculations for nuclear data evaluation and for uncertainty propagation to integral quantities. Users of nuclear data, e.g. the nuclear reactor community, usually give feedback, which helps in prioritizing measurements of particular isotopes and reactions.

2.1.2 Integral data (benchmarks)

Integral data are used in nuclear data evaluation for testing nuclear data. One example is the extensive testing of nuclear data libraries against a large set of criticality safety and shielding benchmarks by Steven C. van der Marck [21]. There are a number of international efforts geared towards providing the nuclear community with qualified benchmark data. One such project is the International Criticality Safety Benchmark Evaluation Project (ICSBEP), whose International Handbook of Evaluated Criticality Safety Benchmark Experiments contains criticality safety benchmarks derived from experiments performed at various nuclear critical facilities around the world [22]. Other benchmarks used for nuclear data and reactor applications include the handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhE), which contains a set of reactor-physics-related integral data, and the radiation shielding experiments database (SINBAD), which contains a compilation of reactor shielding, fusion neutronics and accelerator shielding experiments.


2.2 Model calculations

Nuclear data evaluation in the fast region is performed using nuclear reaction models. These models are used to predict data for unstable nuclei and to provide data where experimental data are unavailable or scarce [8]. This also has the added advantage that the various partial cross sections automatically sum to the total cross section, which leads to internal consistency within evaluated files [8, 23]. Where experimental data are available, they are used for constraining the model parameters of these model codes.

Figure 2.2. Schematic showing the energy regions where direct (D), pre-equilibrium (P) and compound nucleus (C) mechanisms contribute, as a function of outgoing particle energy and reaction time. The compound nucleus contribution is distinguished from the rest in the transitional energy region by the dashed curve. Figure from the TALYS user manual [9].

The modeling of nuclear reactions in the fast region can be classified in terms of time scales: short time scales, between about 10^-22 and 10^-20 s, are associated with direct reactions, while long reaction times, on the order of 10^-18 to 10^-16 s, are associated with compound nucleus processes [9]. Pre-equilibrium processes occur at intermediate time scales, between the two, as can be seen in Fig. 2.2, which shows the energy regions where the direct (D), pre-equilibrium (P) and compound (C) mechanisms occur as a function of outgoing particle energy and reaction time.

In the fast region, the spherical optical model is used to calculate transmission coefficients for compound nucleus and multi-step compound decay and the distorted wave functions that are used for direct inelastic reactions [7]. For deformed nuclei, the incident channels are treated in terms of coupled-channels rather than the spherical optical model [24]. The Hauser-Feshbach equations are used to model the decay of the compound nucleus. Transmission coefficients calculated with the optical model and level densities are used to represent the relative probability of decay in the various open channels [24]. Due to correlations between the entrance and exit channels at low incident energies, the Hauser-Feshbach equations are modified to include the so-called width fluctuation correction (WFC) factors. WFC factors account for the coupling between the incident and outgoing waves in the elastic channel [9, 24].

For pre-equilibrium reactions, which embody both direct and compound-like features, the exciton model in combination with Kalbach systematics is used [9]. These reaction theories have been implemented in well-known nuclear reaction codes such as TALYS and EMPIRE for theoretical model calculations.


2.3 Nuclear data evaluation

Several approaches exist for nuclear data evaluation. These include experimental data interpolation, Bayesian methods, re-normalization of existing evaluations, copying from other nuclear data evaluations, and nuclear reaction modeling [25]. As mentioned in the previous section, because of the considerable improvements in and sophistication of modern nuclear reaction models, nuclear data evaluators increasingly use nuclear reaction models for interpolating between experimental data and for extrapolating to unmeasured regimes.

Figure 2.3. Flowchart showing the nuclear data evaluation process using nuclear reaction modeling.

The basic steps involved in nuclear data evaluation are: (1) the selection and careful analysis of differential experimental data, mostly obtained from the EXFOR database; (2) analysis of the low energy region, which includes the thermal energy and the resolved and unresolved resonance regions. Analyses of neutron resonances are usually performed using R-matrix codes for light-nucleus reactions and for heavy nuclides in the low incident energy region [26]. These codes are complemented with data from the Atlas of Neutron Resonances [27], which contains a compilation of resonance parameters, thermal neutron cross sections and other quantities. The resonance parameters are analyzed in terms of the multilevel Breit-Wigner (MLBW) formalism [27]; in new evaluations, however, the Reich-Moore approximation is often used for the major actinides. (3) Theoretical model calculations using nuclear reaction codes such as TALYS [9] or EMPIRE [28]. For the fast neutron region, statistical Hauser-Feshbach, pre-equilibrium, direct and fission models are used in combination with coupled-channels optical models for modeling medium- and heavy-nucleus reactions [24, 26], as explained in the previous section. Here, model parameters in these codes are adjusted and fine-tuned to fit selected experimental data. These model adjustments are important because current models are deficient and hence not able to reproduce all available experimental data. Also, even when models do reproduce differential experimental data, final adjustments are sometimes needed to reproduce integral data.

By adding the resonance information described in step (2) above, together with other quantities such as the average number of fission neutrons, the fission neutron spectra and the (n,f) cross section for some isotopes, a complete ENDF file covering thermal to high energies can be produced. (4) The data produced are then checked and tested using utility codes such as CHECKR, FIZCON and PSYCHE [29] to verify that the data conform to current formats and procedures. Once checked, the data are validated against a large set of integral benchmark experiments. The end product is a nuclear data library, which contains information on a large number of incident particles for a large set of isotopes and materials. The general processes involved in modern nuclear data evaluation using nuclear reaction modeling are presented in Fig. 2.3. Feedback from model adjustments and from applications such as transport calculations is needed for the improvement of theoretical models and for identifying energy regions where additional experimental efforts are needed.


2.4 Nuclear data libraries

After a successful evaluation, nuclear data are compiled and stored in nuclear data libraries. The ENDF format is the generally accepted format for data storage. The content of an evaluated nuclear data file includes: general information such as the number of fission neutrons (nubar) and delayed neutron data; resonance parameters; reaction cross sections; decay schemes; fission neutron multiplicities; and nuclear data covariance information. These are represented by so-called MF numbers, and the MF-numbered files are subdivided into different reaction channels such as (n,tot), (n,el) and (n,inl), represented by MT numbers [30]. A number of national and international evaluated nuclear data libraries are currently available from various nuclear data centers: the Japanese Evaluated Nuclear Data Library (JENDL) [31], the Evaluated Nuclear Data File (ENDF/B) [25] from the USA, the TALYS Evaluated Nuclear Data Library (TENDL) [32] from the Nuclear Research and Consultancy Group (NRG), Petten, the Netherlands, the Joint Evaluated Fission and Fusion File (JEFF) [33] from the OECD/NEA Data Bank, the Chinese Evaluated Nuclear Data Library (CENDL) [34] and the Russian evaluated nuclear data files (BROND) [35]. Other libraries for use in specific applications are the International Reactor Dosimetry File (IRDF) for reactor dosimetry applications [36], the European Activation File (EAF) [37] and the Fusion Evaluated Nuclear Data Library (FENDL) from the International Atomic Energy Agency (IAEA) [38].

2.4.1 Nuclear data covariances

Covariance matrices specify nuclear data uncertainties and their correlations. These data are needed for the assessment of uncertainties of design and safety parameters in nuclear technology applications [24]. Covariance data are usually stored in files MF31-35 and MF40 of an ENDF-formatted nuclear data file: MF31 stores the covariance data of nubar, MF32 the resonance parameter covariances, MF33 the covariances of the reaction cross sections, MF34 those of the angular distributions of emitted particles, MF35 those of the energy spectra of emitted particles, and MF40 the covariance data for activation cross sections [24]. Once these covariance data are available in an evaluated nuclear data file, uncertainty propagation with different methods and for different applications can be performed, as discussed in more detail in section 3.2.

2.5 Nuclear data processing

Between the ENDF-formatted evaluated nuclear data and the users of nuclear data (mostly users of particle transport codes) sit a set of data-processing codes. These codes translate and manipulate nuclear data from the ENDF format into a variety of formats usable by application codes [39]. Even though these codes are often overlooked, the accuracy of transport calculations depends to a large extent on the assumptions and approximations introduced by these data-processing codes [39]. One widely used code is the NJOY nuclear data processing code [40], which converts ENDF format files into useful forms for practical applications. To reflect the temperatures in real systems, for instance, energy-dependent cross sections can be reconstructed from resonance parameters and then Doppler broadened to defined temperatures using the NJOY code.


3. Uncertainty Quantification

As mentioned earlier, nuclear data evaluation benefits from both experimental data and nuclear reaction modeling. Since mathematical models are not perfect, they are usually validated using experimental data. The 'true value' of a measurable quantity, if it were known, would ideally reflect the qualitative and quantitative properties of a specific object [41]. However, since experiments are themselves not exact, the measured quantities contain uncertainties. The errors in measurements come from a variety of sources, such as the method of measurement, the measuring instrument used and the person performing the experiment [41]. If an error is defined as the difference between the measured value and the (unknown) true value, the uncertainty is an estimate of the magnitude of this error. Measurement errors are usually classified as random or systematic. Random errors are caused by unknown and unpredictable changes in the experiment, e.g., detector noise. These errors can be estimated by repeating a particular measurement under the same conditions; the mean value of the measurements then becomes the best estimate of the measured quantity. Systematic errors, on the other hand, are reproducible inaccuracies that tend to shift all measurements within an experiment in a systematic way; examples of systematic uncertainties are detector efficiency and beam intensity. Systematic errors are usually identified by using a more accurate measuring instrument or by comparing a given result with a measurement of the same quantity performed with a different method [42]. In many cases the errors cannot be inferred directly from measurements; their magnitudes are instead estimated through uncertainty propagation.

Similar to experimental uncertainties, the sources of uncertainty in modeling can generally be classified into two categories: (1) aleatory or random uncertainty, which arises from the random nature of the system under investigation or from random processes in the modeling, e.g., Monte Carlo modeling; and (2) epistemic uncertainty [41], which results in a systematic shift between the modeled response parameter and the response parameter's true value. In Reactor Physics, a response parameter can be any quantity of interest, such as k_eff or a reaction rate, that can be measured or predicted in a simulation [43]. Epistemic uncertainties can normally arise from the following sources [41, 44]: (a) the data and parameters used in the model (e.g., cross sections, boundary conditions); (b) uncertainties in the physics of the modeled process as a result of incomplete knowledge; (c) assumptions and simplifications in the methods used for solving the model equations; and (d) our inability to model complex geometries accurately.

3.1 Statistical estimation

Since evaluated nuclear data are obtained from models in combination with experiments, both of which come with their corresponding uncertainties, statistical deductions are generally used to extract valuable information. Statistical information is usually provided by the moments of the respective probability distributions [42]. The mean is usually used to describe the best estimate of a parameter of a normal distribution, while the spread is best described using the standard deviation and variance. The uncertainties in physical observations are usually interpreted as standard deviations.


If a quantity of interest g is a function of a set of random variables X = (x_1, x_2, ..., x_n), i.e., g = f(x_1, x_2, x_3, ..., x_n), then the expectation of g(X), denoted by E(g(X)), can be defined as [42]:

E(g(X)) \equiv \int_{S_X} g(X)\, p(X)\, dX    (3.1)

where p(X) is the probability density function and S_X is the n-dimensional space formed by all the possible values of X.

The variance of g(X ) which is the dispersion of a random variable around its mean can be given as:

V(g(X )) ≡ E(g(X) − E(g(X)))2

(3.2) From Eq. 3.2, the standard deviation which is the positive square root of the variance denoted by σ can be expressed as:

σ ≡ [V (g(X ))]1/2≡ E(g(X) − E(g(X)))21/2 (3.3) Other higher moments are the skewness and the kurtosis. The skewness, S(g(X )) which is a measure of the degree of asymmetry of a particular distribution, can be expressed as:

S(g(X )) = E



[g(X ) − E(g(X ))]3

σ (g(X ))3 (3.4)

A perfectly symmetric distribution has a skewness value of zero while a negative skewness value implies a spread of data more to the left of the mean. A positive skewness value indicates a spread of data towards the right of the mean.

For a multivariate probability distribution X = (x_1, x_2, ..., x_n), the second-order central moments comprise not only the variances but also the covariances. Covariance is a measure of the strength of the correlation between two variables. Covariance matrices are symmetric, with the covariances off-diagonal and the variances along the diagonal. The sample covariance of N observations of two random variables x_j and x_k can be expressed as:

\mathrm{cov}(x_j, x_k) = \frac{1}{N-1} \sum_{i=1}^{N} (x_{ij} - \bar{x}_j)(x_{ik} - \bar{x}_k)    (3.5)

where \bar{x}_j and \bar{x}_k are the mean values of the variables x_j and x_k, respectively. If all the variables in X are uncorrelated, the covariance matrix contains only the diagonal elements.

Using Eq. 3.5, the correlation coefficient \rho_{x_j x_k}, which measures the strength of the linear relationship between two variables, is given by:

\rho_{x_j x_k} = \frac{\mathrm{cov}(x_j, x_k)}{\sigma_{x_j} \sigma_{x_k}}    (3.6)

where \sigma_{x_j} and \sigma_{x_k} are the standard deviations of x_j and x_k, respectively. A perfect negative correlation is represented by the value -1, while 0 indicates no correlation and +1 a perfect positive correlation.
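As an illustration of how these moments are estimated in practice from a finite set of TMC samples, consider the following minimal sketch; the sample data and variable names are placeholders for illustration, not results from this work:

```python
import numpy as np
from scipy import stats

# Placeholder samples standing in for k_eff values from N TMC runs,
# one value per random nuclear data file.
rng = np.random.default_rng(42)
keff = rng.normal(loc=1.0000, scale=0.007, size=500)

mean = keff.mean()            # best estimate of the parameter
sigma = keff.std(ddof=1)      # sample standard deviation, Eq. 3.3
skew = stats.skew(keff)       # asymmetry of the distribution, Eq. 3.4

# Covariance and correlation (Eqs. 3.5 and 3.6) between k_eff and a second,
# artificially correlated observable standing in for e.g. beta_eff.
beta = 0.0035 + 0.01 * (keff - mean) + rng.normal(0.0, 5e-5, size=keff.size)
cov = np.cov(keff, beta, ddof=1)[0, 1]
rho = np.corrcoef(keff, beta)[0, 1]

print(f"mean = {mean:.5f}, sigma = {sigma:.5f}, skewness = {skew:.3f}")
print(f"cov = {cov:.3e}, rho = {rho:.3f}")
```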

3.2 Uncertainty quantification approaches

Methods for nuclear data uncertainty quantification in best-estimate model predictions are usually either stochastic, such as the TMC method used in this work, or deterministic, such as the perturbation approach [45]. With the perturbation approach, the local sensitivity of a particular response parameter, such as k_eff, to variations in input parameters (nuclear data in our case) can be determined using generalized perturbation theory [45]. The sensitivity coefficients are then combined with the covariance matrix to obtain the corresponding uncertainty on the response parameter of interest. This method relies on the assumptions that the output parameter distributions are Gaussian and that the output depends linearly on the variation in each input parameter. It has been used extensively to evaluate the impact of neutron cross-section uncertainties on significant reactor response parameters related to the core and the fuel cycle [45].

Our focus is, however, on Monte Carlo (stochastic) methods for nuclear data uncertainty propagation. These methods are based on parametric random sampling of input parameters; for each set of random input parameters, a set of output responses of interest is produced [44].

3.2.1 Total Monte Carlo (TMC)

The TMC methodology (here called ’original TMC’) was first proposed by Koning and Rochman in 2008 [46] for nuclear data uncertainty propagation. The inputs to a nuclear reaction code such as TALYS, are created after varying theoretical nuclear model parameters within experimental data uncertainties [7]. To create a complete ENDF file covering from thermal to fast neutron energies, inputs to other auxiliary codes such as described further in section 4.1.1 are also varied. A summary of the TMC method is depicted in a flow chart in Fig. 3.1. From Fig. 3.1, parameters

Figure 3.1. A flowchart depicting the Total Monte Carlo approach for nuclear data evaluation and uncer-tainty analysis. Random files generated using the TALYS based, T6 code package [7] are processed and used to propagate nuclear data uncertainties in reactor calculations.

in both phenomenological and microscopic models implemented in nuclear reaction codes such as the optical model, pre-equilibrium, compound nucleus models etc. as presented earlier in subsection 2.2 are adjusted to reproduce experimental data. These parameters could be the real central radius (rv) or the real central diffuseness (av) of the optical model for example. The

output of the codes: cross sections, fission yields and angular distributions etc. are compared with differential experimental data by defining an uncertainty band which covers most of the experimental data available. Data that falls within this uncertainty band are accepted while those that do not fulfill this criteria are rejected. The accepted files are then processed into the ENDF format using the TEFAL code [47]. These ENDF formatted files are translated into usable formats and fed into neutron transport codes to obtain distributions in reactor parameters of interest. From these distributions, statistical information such as the moments presented in subsection 3.1 can be inferred.
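The accept/reject step against differential data can be illustrated with the following sketch; the band construction (a fixed number of experimental error bars) and the acceptance fraction are simplifying assumptions for illustration, not the actual TENDL procedure:

```python
import numpy as np

def accept_random_file(e_calc, xs_calc, e_exp, xs_exp, xs_unc,
                       n_sigma=2.0, frac=0.95):
    """Accept a random cross-section curve if it stays within a band of
    +/- n_sigma experimental error bars for at least `frac` of the points.
    Assumes e_calc is sorted in ascending order."""
    xs_at_exp = np.interp(e_exp, e_calc, xs_calc)  # curve on the experimental grid
    inside = np.abs(xs_at_exp - xs_exp) <= n_sigma * xs_unc
    return inside.mean() >= frac
```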


In Fig. 3.2, the (n,el) and (n,γ) cross sections of 50 random ACE 208Pb files are plotted as a function of incident neutron energy. The random ENDF files were processed into ACE format using the ACER module of the NJOY code [40], described in section 4.1.2. A spread in the data can be observed over the entire energy region, as presented in Fig. 3.2; this is expected, as each file contains a unique set of nuclear data.

Figure 3.2. 50 random ACE 208Pb cross sections plotted as a function of incident neutron energy. Left: 208Pb(n,el); right: 208Pb(n,γ). Note that each random ACE file contains a unique set of nuclear data.

Depending on the variation of the nuclear data, different distributions, with their corresponding mean values and standard deviations, can be obtained for different quantities such as k_eff, fuel inventory, temperature feedback coefficients and kinetic parameters [48]. By varying nuclear data using the TMC methodology for a particular response parameter such as k_eff, the total variance of a physical observable, σ²_obs, in the case of Monte Carlo transport codes can be expressed as:

\sigma^2_{obs} = \sigma^2_{ND} + \sigma^2_{stat}    (3.7)

where σ²_ND is the variance of the response parameter under study due to nuclear data uncertainties and σ²_stat is the variance due to the statistics of the Monte Carlo code. In the case of deterministic codes, where there are no statistical uncertainties, Eq. 3.7 becomes:

\sigma^2_{obs} = \sigma^2_{ND}    (3.8)

With the "original TMC" described above, the time taken for a single calculation is increased by a factor of n, where n (the number of samples or random files) ≥ 500 making it not suitable for some applications. As a solution, a faster method called the "Fast TMC" was developed [49]. By changing the seed of the random number generator within the Monte Carlo code and changing nuclear data at the same time, a spread in the data that is due to both statistics and nuclear data is obtained (same as in Eq. 3.7). However, by using different seeds for each simulation, a more accurate estimate of the spread due to statistics is obtained and therefore the statistical require-ment on each run could be lowered, thereby reducing the computational time involved for each calculation. The usual rule of the thumb used for original TMC is: σstat ' 0.05σobs. However,

for fast TMC, σstat' 0.5σobs[49]. A detailed presentation of fast TMC methodology is found in

Ref. [49, 50, 51]. In this work, the fast TMC was used.
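A minimal sketch of the variance separation in Eq. 3.7 as used in fast TMC, assuming one k_eff value and one reported statistical 1-sigma per run (the function and variable names are illustrative):

```python
import numpy as np

def sigma_nd(keff_values, stat_sigmas):
    """sigma_ND from Eq. 3.7: subtract the mean statistical variance
    from the total observed variance of the k_eff distribution."""
    var_obs = np.var(keff_values, ddof=1)
    var_stat = np.mean(np.square(stat_sigmas))
    var_nd = var_obs - var_stat
    if var_nd <= 0.0:
        raise ValueError("Statistics dominate the spread; run more histories.")
    return np.sqrt(var_nd)
```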

3.2.2 TMC compared to other Monte Carlo methods

The Monte Carlo methods used for nuclear data uncertainty propagation can be classified into two types: (1) propagation of nuclear data uncertainties from basic nuclear physics, and (2) uncertainty propagation from existing covariance information in the ENDF file. The TMC method used in this work belongs to type (1). The method has several advantages, as it eliminates the covariance step and the linearity assumption usually made in the perturbation approach [52]. The disadvantage of this methodology, however, is that experimental data are not included in a rigorous way. There is on-going work with the objective of incorporating differential experimental information, including its correlations [50], and also integral benchmark information (PAPER III), in a more rigorous way into the TMC methodology. Another method is the Unified Monte Carlo (UMC) approach proposed by D.L. Smith [53]. The UMC method is based on the application of Bayes' theorem and the principle of maximum entropy, as well as on fundamental definitions from probability theory [53]. The method seeks to incorporate experimental data into model calculations in a more rigorous and consistent manner [52].

A different approach is the Backward-Forward Monte Carlo (BFMC) method proposed in Ref. [54]. This method involves a Backward and a Forward Monte Carlo step. In the Backward step, a covariance matrix of model parameters is obtained by using a generalized χ² with differential data as constraints, leading to observables consistent with experimental data. In the Forward step, starting from the results of the Backward step, the Backward Monte Carlo parameter distribution is sampled. The resulting distribution of nuclear data observables hence includes experimental uncertainty information [52].

Several other approaches are based on uncertainty propagation using existing covariance matrices. The disadvantage of these approaches is that they not only depend on existing nuclear data libraries but also rely on the assumption of normal distributions; moreover, the available covariance data are usually neither comprehensive nor complete [7]. Among these methods is a full Monte Carlo sampling of nuclear data inputs based on the covariance information that comes with new nuclear data evaluations. The method includes uncertainties of multiplicities, resonance parameters, fast neutron cross sections and angular distributions, and has been successfully implemented in the AREVA GmbH code NUDUNA (NUclear Data UNcertainty Analysis) [55]. Another method is the GRS method implemented in the SUSA (Software for Uncertainty and Sensitivity Analysis) code; with the GRS method, random grouped cross sections are generated from existing covariance files [56].


4. Methodology

In this chapter, the simulation tools used in this work are briefly presented. Model calculations performed with the TALYS code and methods for computing the uncertainties on neutronic parameters due to both global and partial variations of nuclear data are discussed. In addition, the benchmark selection and nuclear data uncertainty reduction methods developed as part of this work are described. Finally, the computation of cross section-parameter correlations using a Monte Carlo sensitivity-based method is presented.

4.1 Simulation tools

The work in this thesis is based on a number of codes: the T6 code package and the NJOY, PREPRO and SERPENT codes. These are presented in the following subsections.

4.1.1 T6 code package

The random files used in this work were produced with the T6 code package by the TENDL team and can be obtained from the TENDL project [6]. Some random files (208Pb and 206Pb) were also produced as part of this work, as further described in section 4.2. The T6 code package was developed at the Nuclear Research and Consultancy Group, Petten, for the evaluation, validation and production of nuclear data libraries; an example is the TALYS Evaluated Nuclear Data Library (TENDL), which contains complete ENDF-formatted nuclear data libraries, including covariance matrices, for many isotopes, particles, energies, reaction channels and secondary quantities. Based on the T6 code package, complete ENDF-formatted random nuclear data can be produced for nuclear data uncertainty propagation from nuclear physics to applied reactor calculations [57]. The package is made up of a group of codes coupled together with a script called AutoTALYS; these include the TALYS, TASMAN, TAFIS, TANES, TARES and TEFAL codes. The TALYS code, which forms the main basis of the TMC methodology, is a state-of-the-art nuclear physics code used for the prediction and analysis of nuclear reactions [9]. In the TMC methodology, the TALYS code is used to generate nuclear data for all open channels in the fast neutron energy region, i.e., beyond the resonance region. This is achieved by fine-tuning the model parameters of the various nuclear reaction models within the code so that model calculations reproduce differential experimental data. Where experimental data are unavailable, TALYS is used for the prediction and extrapolation of data [7]. Some of the models used in TALYS are described in section 2.2.

The outputs of TALYS include, among others: total, elastic and inelastic cross sections per discrete state; elastic and non-elastic angular distributions; and other reaction channels such as (n,2n) and (n,np). To create a complete ENDF file covering thermal to fast neutron energies, non-TALYS data such as the neutron resonance data, the total (n,tot), elastic (n,el), capture (n,γ) or fission (n,f) cross sections at low neutron energies, the average number of fission neutrons, and the fission neutron spectra must be added to the TALYS results using other auxiliary codes [7]: the TARES code [58] for resonance parameters, and the TAFIS and TANES codes for the average number of fission neutrons and the fission neutron spectrum, respectively [59, 60]. The TASMAN code [61] is used to create input files for TALYS and the other codes by randomly sampling each input parameter from a distribution with a specific width for each parameter. The uncertainty distribution of nuclear model parameters is often assumed to be either Gaussian or uniform [7]. The input files created are then run multiple times, each time with a different set of model parameters, to obtain distributions of the calculated quantities. From the distributions obtained, statistical information such as means, standard deviations and variances, as well as a full covariance matrix, can be obtained. Finally, the TEFAL code is used to translate the nuclear reaction results from the different modules within the T6 code package into ENDF-formatted nuclear data libraries [47]. Even though these codes are coupled together, each code can work as a stand-alone simulation tool.
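A minimal sketch of this kind of parameter sampling, assuming Gaussian distributions with fractional widths in the spirit of Table 4.1 (the central values below are invented for illustration; they are not the TENDL parameter set):

```python
import numpy as np

rng = np.random.default_rng()

# Invented central values for three optical model parameters, with fractional
# 1-sigma uncertainties like those quoted in Table 4.1 for 208Pb.
central = {"rv": 1.27, "av": 0.67, "v1": 59.3}
frac_unc = {"rv": 0.015, "av": 0.020, "v1": 0.019}   # 1.5%, 2.0%, 1.9%

def sample_parameter_set():
    """Draw one random parameter set: p ~ N(p0, (f * p0)^2)."""
    return {k: rng.normal(p0, frac_unc[k] * p0) for k, p0 in central.items()}

# Each sampled set would be written into a TALYS input and run, yielding
# one random nuclear data file per sample.
for i in range(3):
    print(i, sample_parameter_set())
```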

4.1.2 NJOY processing code

The NJOY processing code [40] is used for translating ENDF-formatted nuclear data into usable formats for deterministic and Monte Carlo transport codes, which are used for reactor calculations and analyses. In this work, the NJOY code was used to process random ENDF files into ACE format at defined temperatures using the following module sequence: MODER-RECONR-BROADR-UNRESR-HEATR-PURR-ACER. The MODER module converts ENDF input data into NJOY blocked binary mode. These data are then reconstructed into pointwise cross sections, which are Doppler broadened using the BROADR module. The UNRESR module calculates effective self-shielded pointwise cross sections in the unresolved resonance region, while the HEATR module generates pointwise heat production and radiation damage production cross sections. PURR prepares the unresolved-region probability tables mostly used by MCNP, and finally the ACER module converts the libraries into ACE format. More information on the NJOY code can be found in Ref. [40].
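In this work the processing was script-driven; the sketch below shows one way such a loop over random files could look in Python. The directory layout, the deck template and the tape conventions (input on standard input, the ENDF file on tape20, the assumed ACER output tape) are assumptions for illustration and should be checked against the NJOY manual:

```python
import pathlib
import subprocess

endf_dir = pathlib.Path("random_endf")   # one random ENDF file per TMC sample
ace_dir = pathlib.Path("random_ace")
ace_dir.mkdir(exist_ok=True)

# Assumed pre-written NJOY deck implementing the MODER-RECONR-BROADR-UNRESR-
# HEATR-PURR-ACER sequence, with a {temp} placeholder for the temperature.
template = pathlib.Path("njoy_deck_template.inp").read_text()

for i, endf in enumerate(sorted(endf_dir.glob("*.endf"))):
    run = pathlib.Path(f"njoy_run_{i:04d}")
    run.mkdir(exist_ok=True)
    (run / "tape20").write_bytes(endf.read_bytes())          # NJOY input tape
    (run / "input").write_text(template.format(temp=600.0))  # Doppler temperature
    subprocess.run("njoy < input", shell=True, cwd=run, check=True)
    (run / "tape30").replace(ace_dir / f"{endf.stem}.ace")   # assumed ACER output tape
```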

4.1.3 PREPRO code

PREPRO is a collection of modular codes designed to convert data in the ENDF format into usable formats for applications [62], similar to NJOY presented in the previous section. In this work, the LINEAR-RECENT-SIGMA1-GROUPIE module sequence was used. The LINEAR module linearizes ENDF-formatted cross sections, while the RECENT module adds the resonance contributions to the background cross sections in order to define the cross sections as linearly interpolable tables at 0 Kelvin [62]. The SIGMA1 module Doppler broadens the cross sections to defined temperatures for use in applications, while the GROUPIE module collapses pointwise cross sections into multigroup cross sections. In this work, the GROUPIE module was used to calculate multigroup cross sections from the random ENDF-formatted nuclear data for use in the cross section-parameter correlations discussed in section 4.7.

4.1.4 SERPENT Monte Carlo code

The 3-D continuous-energy Reactor Physics code Serpent (version 1.1.17) [63], developed at the VTT Technical Research Centre of Finland, was used for the simulations in this work. Serpent is specialized in 2-D lattice physics calculations but has the capability of modeling complicated 3-D geometries. It also has a built-in burnup capability for reactor analyses. Serpent uses universe-based geometrical modeling for describing two- or three-dimensional fuel and reactor core configurations [63] and utilizes the latest nuclear data libraries in ACE format for simulations.

4.2 Model calculations with TALYS

A successful TMC uncertainty propagation relies on realistic central values. The central value is a complete evaluation with its corresponding 'best' parameter set, representing the evaluator's best effort at any particular time. The selection of these central values is guided by high-quality experimental data, complemented by nuclear model codes such as TALYS and the experience of the evaluator [6, 23]. Once the central value is obtained, an uncertainty band is defined for the model parameters using experimental data as a guide. In Table 4.1, the uncertainties of some of the model parameters (given as a fraction (%) of the absolute values) for 208Pb used in this work are presented. The central value used in this work was provided by the TENDL team. The uncertainties in Table 4.1 were derived using experimental data as a visual guide. This approach has been criticized because experimental information is not included in a rigorous way; there is, however, on-going work with the objective of incorporating experimental information, including its correlations, in a more rigorous way [4, 64].

Based on the central values and the parameter uncertainties, random variations of all the nuclear model parameters were carried out simultaneously for 208Pb and 206Pb using the TALYS-based T6 code package to obtain a set of random nuclear data. The resonance parameters used in this work were adopted from the JEFF-3.1 library, together with their corresponding background cross sections found in MF3, using the TARES code. The uncertainties in the resonance region used in this work are the default uncertainties within the TARES code, which represent the best effort of the code developer at any particular time. As a final step, the TEFAL code was used to translate the output cross section data into the well-known ENDF format. These files, after data checking, have been preliminarily accepted by the TENDL team and can be obtained from the TENDL-2014 beta [32]. The random ENDF nuclear data produced were processed into the ACE format using the NJOY processing code and used for reactor core calculations.

Table 4.1. Uncertainties of some nuclear model parameters of TALYS for 208Pb, given as a fraction (%) of the absolute value. A complete list of all the model parameters can be found in Ref. [9].

Parameter      Uncertainty (%)    Parameter      Uncertainty (%)
r_V^n          1.5                a_V^n          2.0
v_1^n          1.9                v_2^n          3.0
v_3^n          3.1                v_4^n          5.0
w_1^n          9.7                w_2^n          10.0
d_1^n          9.4                d_2^n          10.0
d_3^n          9.4                r_D^n          3.5
a_D^n          4.0                r_SO^n         9.7
a_SO^n         10.0               v_so,1^n       5.0
v_so,2^n       10.0               w_so,1^n       20.0
w_so,2^n       20.0               Γ_γ            5.0
a(207Pb)       4.5                a(206Pb)       6.5
a(208Pb)       5.0                a(205Pb)       6.5
σ²             19.0               M²             21.0
g_π(207Pb)     6.5                g_ν(207Pb)     6.5

4.3 Nuclear data uncertainty calculation

To assess the global effect of nuclear data uncertainties on the reactor parameters described in section 4.4, the TMC approach is used. Reaction cross sections (MF3 in ENDF terminology), resonance parameters (MF2), angular and energy distributions (MF4 and MF6, respectively), double-differential distributions and gamma-ray production cross sections (MF6) are all randomly varied using the T6 code package, as presented previously, and a large set of random nuclear data libraries is generated. These files are processed into ENDF format using the TEFAL code [7] and into ACE format using the NJOY processing code [40]. A bash script is subsequently used to run the SERPENT code multiple times, each time with a different ACE file, and a distribution in a parameter of interest is obtained. In Fig. 4.1, a diagram showing the practical implementation of the Total Monte Carlo method is presented. As shown in the diagram, the process can be divided into three major parts: (1) model calculations with the TALYS-based system, which involves comparing model calculations with experimental reaction data to obtain a specific a priori uncertainty for each nuclear model parameter, and then running the TALYS code system a large number of times, each time with a different, unique set of model parameters; (2) nuclear data processing of the files generated in (1) into usable formats for neutron transport codes using the NJOY code; (3) reactor calculations using the processed random files, with statistical information inferred from the probability distributions of the reactor parameters. The statistical information inferred from the reactor calculations is also used to prioritize which cross section data should be focused on for the improvement of both model calculations and experiments, as shown in the feedback loop in Fig. 4.1. The outputs from the different codes are handled automatically by a suite of bash scripts.

Figure 4.1. Practical implementation of the Total Monte Carlo methodology.

As a result of the variation of nuclear model parameters within ranges predetermined by comparison with differential experimental uncertainties, as shown in (1), the uncertainties due to nuclear data can be propagated all the way to reactor parameters. Nuclear data uncertainties on some neutronic parameters are presented in more detail in PAPERS I and II.
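The bash scripts themselves are not reproduced in the thesis; the following Python sketch shows an equivalent driver loop. The input template, executable name and output parsing are assumptions for illustration (Serpent 1 appends its results, including the analog k_eff estimate, to a `_res.m` output file):

```python
import pathlib
import re
import subprocess

ace_files = sorted(pathlib.Path("random_ace").glob("*.ace"))
keff, keff_err = [], []

for i, ace in enumerate(ace_files):
    # Assumed SERPENT input template with placeholders for the ACE library
    # path and the random seed (fast TMC: a new seed for every run).
    text = pathlib.Path("electra_template.inp").read_text()
    inp = pathlib.Path(f"run_{i:04d}.inp")
    inp.write_text(text.format(ace_path=ace, seed=1000 + i))
    subprocess.run(["sss", str(inp)], check=True)   # assumed Serpent 1 executable
    res = pathlib.Path(f"{inp}_res.m").read_text()
    m = re.search(r"ANA_KEFF\s*=\s*\[\s*([\d.Ee+-]+)\s+([\d.Ee+-]+)", res)
    assert m, "k_eff entry not found in results file"
    keff.append(float(m.group(1)))       # k_eff estimate for this run
    keff_err.append(float(m.group(2)))   # its statistical uncertainty
```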

4.3.1 Partial variation

In the previous section, the methods for computing isotope specific global uncertainties due to nuclear data for some reactor safety parameters were presented. However, to be able to give in-formed feedback to model calculations and differential measurements, it is of interest to quantify the contributions of the different reaction channels or parts of the ENDF file that contribute to the global uncertainties. To achieve this goal, perturbed random files were produced by varying specific parts of the ENDF file while keeping other parts constant for a large number of random nuclear data.

To investigate the impact of only the resonance parameters on reactor parameters, for instance, only MF2 (in ENDF nomenclature) was perturbed. This means that each complete ENDF file then contains a unique set of resonance parameters, such as the scattering radius, the average level spacing and the average reduced neutron width. Similarly, for the (n,el) cross section, only MF3-MT2 is varied while the other parts of the ENDF file are kept constant. To accomplish this, the first file (i.e. run zero of the random files obtained from TENDL-2012 [6]) is kept as the unperturbed base file, into which the varied sections of the random ENDF files are inserted, producing a unique set of perturbed random files.
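As a rough illustration of how such files can be assembled, the sketch below splices the MF2 section of a random ENDF-6 file into an otherwise unperturbed base file. The file names are hypothetical, the code assumes fixed 80-column ENDF-6 records (with MF in columns 71-72 of each line), and a production implementation would also have to update the directory in MF1; the actual random files used in this work were produced within the TENDL/T6 system.

```python
# Splice the MF2 (resonance parameter) block of a random ENDF-6 file into an
# otherwise unperturbed base file. Assumes fixed 80-column ENDF-6 records,
# where the MF number occupies columns 71-72 of every line.

def mf_of(line):
    return int(line[70:72])

def splice_mf2(base_path, random_path, out_path):
    base = open(base_path).readlines()
    rand = open(random_path).readlines()
    mf2_random = [l for l in rand if mf_of(l) == 2]  # includes the SEND records

    out, done = [], False
    for line in base:
        if mf_of(line) == 2:
            if not done:          # replace the whole base MF2 block exactly once
                out.extend(mf2_random)
                done = True
        else:
            out.append(line)
    open(out_path, "w").writelines(out)

# Hypothetical file names:
splice_mf2("n-Pb208.run0.endf", "n-Pb208.run0042.endf", "n-Pb208.mf2only.endf")
```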

In Fig. 4.2, the perturbed random ACE 208Pb cross sections are plotted as a function of incident neutron energy. In the top left and top right, the (n,el) and (n,γ) cross sections are presented respectively, after perturbing only the resonance parameter data. As can be observed, the partial variation of only the resonance parameters affects both the 208Pb(n,el) (top left) and 208Pb(n,γ) (top right) cross sections from thermal energies up to about 1 MeV; the boundary between the resolved resonance region and the high energy region for the 208Pb random files lies at about 1 MeV.

Figure 4.2. Random ACE 208Pb cross sections plotted as a function of incident neutron energy, varying only the resonance parameter data (top) and only the elastic scattering cross section (bottom). For the top left, 208Pb(n,el), and top right, 208Pb(n,γ), panels, only MF2 (resonance parameters) was varied, while for the bottom left, 208Pb(n,el), and bottom right, 208Pb(n,γ), panels, only the elastic scattering cross section in the fast energy range was varied.

In the bottom left and bottom right of Fig. 4.2, the 208Pb(n,el) and 208Pb(n,γ) cross sections are presented for the partial variation of the (n,el) cross section in the fast energy range (above 1 MeV). A spread is observed above 1 MeV for the partial variation of the 208Pb(n,el) cross section (bottom left). Since the results in the fast energy region are obtained from TALYS, this spread can be attributed to the variation of model parameters within the TALYS code. The lack of spread observed for the (n,γ) cross section (bottom right) is not surprising, as the variation of the (n,el) cross section has no significant impact on the (n,γ) cross section.

All the perturbed random files were then processed into ACE files with the NJOY processing code at 600 K and used in the SERPENT code for reactor core calculations. The variance of the response parameter (the reactor quantity of interest) due to the partial variation can then be expressed as:

σ²_(n,xn),obs = σ²_stat + σ²_(n,xn),ND    (4.1)

where σ²_(n,xn),obs is the variance of the observable due to the partial variation, σ²_stat is the mean value of the variance due to statistics, and σ²_(n,xn),ND is the variance due to nuclear data as a result of the partial variation, with (n,xn) = (n,γ), (n,el), (n,inl), (n,2n), resonance parameters or angular distributions. In this way, nuclear data uncertainties due to specific reaction channels or specific parts of the ENDF files were studied and quantified. A detailed presentation of this method and its application to the lead coolant isotopes is given in Paper II.
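Numerically, the extraction amounts to subtracting the mean statistical variance from the observed variance of the sampled distribution. A minimal sketch, assuming hypothetical text files with one keff value and one SERPENT statistical error per perturbed random file:

```python
import numpy as np

# Hypothetical inputs: keff[i] and its statistical error sigma_stat[i] from the
# SERPENT run with the i-th random file in which only MF2 was perturbed.
keff = np.loadtxt("keff_pb208_mf2.txt")
sigma_stat = np.loadtxt("keff_pb208_mf2_err.txt")

var_obs = np.var(keff, ddof=1)     # spread of the observable over the random files
var_stat = np.mean(sigma_stat**2)  # mean variance due to counting statistics
sigma_nd = np.sqrt(var_obs - var_stat)  # nuclear data component, Eq. (4.1)

print(f"keff = {keff.mean():.5f} +/- {sigma_nd:.5f} (ND, MF2 variation)")
```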


4.4 Reactor physics

In the following subsections, the description of the ELECTRA reactor is presented, together with the propagation of nuclear data uncertainties to some reactor macroscopic parameters using TMC.

4.4.1 Reactor description

ELECTRA, the European Lead-Cooled Training Reactor, is a conceptual 0.5 MW lead-cooled reactor fueled with (Pu,Zr)N, with an estimated average neutron flux at beginning of life of 6.3 × 10^13 n/cm²s [1]. The fuel composition was chosen such that the Pu vector resembles that of typical spent pressurized water reactor UOX fuel with a burnup of 43 GWd/tonne, allowed to cool for four years before reprocessing, with an additional two years of storage before loading into the ELECTRA core. The extra storage time after reprocessing gives the initial fuel vector realistic levels of Am, which is a product of the beta decay of 241Pu [1]. The fuel composition is made up of 60 mol% ZrN and 40 mol% PuN. ELECTRA is cooled by pure lead; the objective is to achieve 100% heat removal via natural convection while ensuring enough power density to keep the coolant in a liquid state. The core is hexagonally shaped with an active core height of 30 cm and consists of 397 fuel rods. Reactivity compensation is achieved by the rotation of absorbing drums made up of B4C enriched to 90% in 10B, with a pellet density of 2.2 g/cm³ [1]. Because of the hard spectrum, ELECTRA has a relatively small negative Doppler constant; however, the presence of a large negative coolant temperature coefficient makes temperature control a possible way of managing reactivity transients. Fig. 4.3 shows the radial configuration of the ELECTRA core. It is envisaged that ELECTRA will provide practical experience and data for research related to the development of GEN-IV reactors. A detailed description of the reactor is presented in Ref. [1].

Figure 4.3. Radial view of the ELECTRA core, showing the hexagonal fuel assembly in the center made up of 397 fuel rods, the lead coolant (pink), and the control assembly with its six rotating control drums around the fuel assembly, here with the control rods fully inserted.

4.4.2 Reactor neutronic parameters

The propagation of nuclear data uncertainties to reactor neutronic parameters for ELECTRA is presented in this subsection. These parameters are the keff, the coolant temperature coefficient and the coolant void worth, all obtained from criticality calculations of nominal or perturbed configurations, with the criticality change evaluated while varying nuclear data. For all isotopes other than the one being varied, the JEFF-3.1 nuclear data library was used. The reference temperatures of the fuel and coolant were 1200 K and 600 K respectively.

Neutron multiplication factor (keff)

The keff is an important parameter in criticality safety analysis. The impact of nuclear data uncertainty on reactor safety margins comes principally from the uncertainty in the criticality (keff) [65]. To quantify the uncertainty in keff induced by the uncertainties in nuclear data (ND), the ND was varied while computing the keff each time, and probability distributions were obtained. From these distributions, the corresponding mean values and standard deviations were determined using Eqs. 3.1 and 3.3 respectively. Using Eq. 3.7, the uncertainty due to nuclear data can be extracted. In Paper II, the results for the variation of 204,206,207,208Pb nuclear data in the keff and other safety parameters are presented in more detail. More results on the nuclear data uncertainties of some actinides and structural materials are presented in Paper I, section 5.

Coolant temperature coefficient (CTC)

The reactivity variations of a system due to density perturbations provide useful information for reactor safety and transient analysis [66]. ELECTRA has a large negative coolant temperature coefficient, which makes the coolant temperature a possible way of managing reactivity during reactor transients [1].

The CTC sensitivity to nuclear data variations was quantified by performing criticality calculations with the SERPENT Monte Carlo code (version 1.1.17) [67] at two different coolant densities, corresponding to the temperatures T1 = 600 K and T2 = 1800 K, while varying the lead nuclear data each time. Since the density effect is dominant in the CTC, all lead cross sections used in the calculation of the CTC were processed with the NJOY99.336 code at 600 K. Since keff ≈ 1 for both configurations, the CTC for a temperature change from T1 to T2 can be expressed as:

CTC = [k_eff(T1) − k_eff(T2)] / (T1 − T2)    (4.2)

The nuclear data uncertainty in the CTC is propagated in a manner similar to Eq. 3.7. If the statistical uncertainties on the keff at T1 and T2 are σ_stat,T1 and σ_stat,T2 respectively, then the combined statistical uncertainty σ_stat,comb for the computation of the CTC can be expressed as:

σ²_stat,comb = σ²_stat,T1 + σ²_stat,T2    (4.3)

assuming that the statistical errors at T1 and T2 are uncorrelated. From the square of the total spread σ_obs of the CTC distribution, which is equal to the quadratic sum of the nuclear data uncertainty σ_ND and the combined statistical uncertainty σ_stat,comb, the uncertainty due to nuclear data can be extracted:

σ_ND = [σ²_obs − σ²_stat,comb]^(1/2)    (4.4)

It should be noted that, since the difference between k_eff(T1) and k_eff(T2) is usually small, the CTC distribution can easily be dominated by statistics; hence, longer computing times are needed in the Monte Carlo simulations to obtain a small statistical uncertainty. The usual rule of thumb used for fast TMC is σ_stat ≲ 0.5 × σ_obs [49].
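A short numerical sketch of Eqs. 4.2 to 4.4 follows (hypothetical two-column files with keff and its statistical error per random file at each temperature; the combined statistical variance is divided by (T1 − T2)² so that it is on the same scale as the CTC distribution):

```python
import numpy as np

T1, T2 = 600.0, 1800.0  # coolant temperatures (K)

# Hypothetical inputs: keff and statistical error at each coolant density,
# one row per random nuclear data file.
k1, s1 = np.loadtxt("keff_600K.txt", unpack=True)
k2, s2 = np.loadtxt("keff_1800K.txt", unpack=True)

ctc = (k1 - k2) / (T1 - T2)  # Eq. (4.2), one CTC value per random file

# Eq. (4.3), propagated through the 1/(T1 - T2) factor of Eq. (4.2):
var_stat_comb = np.mean(s1**2 + s2**2) / (T1 - T2) ** 2
var_obs = np.var(ctc, ddof=1)
sigma_nd = np.sqrt(var_obs - var_stat_comb)  # Eq. (4.4)

print(f"CTC = {ctc.mean():.3e} +/- {sigma_nd:.3e} (nuclear data) per K")
```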

Coolant void worth (CVW)

The coolant void worth (CVW), which is the difference in reactivity between the flooded and voided cores, is given by the expression:

CVW = (k_eff^void − k_eff^flood) / (k_eff^void · k_eff^flood)    (4.5)

where k_eff^flood and k_eff^void are the keff values for the flooded and voided cores respectively. In order to investigate the impact of lead cross section uncertainties on the CVW, criticality calculations were performed for two different core configurations: (1) the voided core, assuming that all coolant in the primary vessel is removed, and (2) the core flooded with lead coolant. 206,207,208Pb nuclear data were varied separately for the flooded core. Applying Eq. 4.5 for each isotope, distributions in the CVW were obtained.

The voided core involves only one SERPENT calculation; consequently, only the statistical uncertainty of the flooded core, σ_stat^flood, is used in Eq. 3.7 when σ_CVW,ND is calculated. However, σ_stat^void will introduce a bias in the mean value of the CVW, and therefore the 100% voided core is calculated with high statistical precision. Since σ_CVW,ND depends only on σ_stat^flood, from conventional error propagation the nuclear data uncertainty of the CVW, σ_CVW,ND, can also be approximated as:

σ_CVW,ND ≈ σ_(k_eff^flood,ND) / (k_eff^flood)²    (4.6)

However, Eq. 4.6 was not used for the calculation of σ_CVW,ND in this work; the actual spread of the CVW was used.
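A corresponding sketch for the CVW (hypothetical inputs; the voided core enters as a single high-statistics keff value, and the flooded-core statistical error is propagated through the derivative of Eq. 4.5, consistent with Eq. 4.6, before being subtracted from the observed spread):

```python
import numpy as np

# Hypothetical inputs: flooded-core keff and statistical error per random file,
# plus one voided-core keff computed once with high statistical precision.
k_flood, s_flood = np.loadtxt("keff_flood_pb208.txt", unpack=True)
k_void = 1.02345  # hypothetical single value

cvw = (k_void - k_flood) / (k_void * k_flood)  # Eq. (4.5), per random file

# Statistical variance of the CVW from the flooded core only
# (|dCVW/dk_flood| = 1/k_flood^2, cf. Eq. (4.6)).
var_stat = np.mean((s_flood / k_flood**2) ** 2)
sigma_nd = np.sqrt(np.var(cvw, ddof=1) - var_stat)
print(f"CVW = {cvw.mean():.3e} +/- {sigma_nd:.3e} (nuclear data)")
```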

4.5 Benchmark selection method

The selection of benchmarks for reactor calculations and for validation purposes is not straightforward. Until recently, this selection was usually done by visual inspection, which introduces user bias into the selection process, as mentioned earlier. In Paper I, a method for selecting these benchmarks using the Total Monte Carlo method is proposed. This method is described in more detail in Paper III and has been used for reducing nuclear data uncertainty, which is discussed further in section 4.6 and in Papers I and III. A flow diagram that illustrates the benchmark selection method is presented in Fig. 4.4.

The method involves, first, the production of random nuclear data libraries. Even though the random nuclear data used in this work were obtained using the TMC methodology [7], other approaches exist for generating random nuclear data. One such method is based on full Monte Carlo sampling of nuclear data inputs using the covariance information that comes with new nuclear data evaluations, as previously discussed in section 3.2.2. For use in neutron transport codes such as SERPENT and MCNP, the random files produced are processed into ACE format using the NJOY processing code. Using the same processed random files, reactor simulations are performed for (1) the benchmark under consideration and (2) an application case. The application case refers to the specific reactor system under consideration with its full geometry. To quantify the relationship between the two systems, the correlation coefficient, which measures the strength of the linear dependence between two variables, is computed. If a strong correlation exists between the benchmark case and the application case, the benchmark can be considered a good representation of the reactor system under consideration. In Fig. 4.5, a correlation plot between an application case (ELECTRA in our case) and a benchmark case (pmf1) is presented. Other correlations, such as benchmark case against benchmark case, can also be studied.
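Because the application and the benchmark are run with the same sequence of random files, the correlation can be computed directly between the two keff samples. A minimal sketch with hypothetical file names:

```python
import numpy as np

# Hypothetical inputs: keff per random file, same random file order for both.
k_app = np.loadtxt("keff_electra.txt")  # application case
k_bench = np.loadtxt("keff_pmf1.txt")   # benchmark case

# Pearson correlation coefficient between the two systems.
r = np.corrcoef(k_app, k_bench)[0, 1]
print(f"R = {r:.2f}")  # e.g. R = 0.84 would indicate a strong linear relationship
```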

4.6 Nuclear data uncertainty reduction

Even though information on differential measurements, together with their uncertainties, is included (implicitly) in the production of the random files in the TMC methodology, wide spreads have been observed in the parameter distributions (referred to here as our 'prior distribution'), leading to large uncertainties in reactor parameters for some nuclides for the European Lead-Cooled Training Reactor [2, 68].

[Figure 4.4 flowchart: random nuclear data (ND) libraries (TENDL project) → random file processing (e.g. NJOY) → calculations with each random file for the application case and for the i-th benchmark → if a strong correlation exists, the benchmark is used for reducing ND uncertainties and for reactor code validation.]

Figure 4.4. Flowchart diagram of the benchmark selection method. Random files obtained from the TENDL project are processed into ACE format and used for reactor code validation and for reducing nuclear data (ND) uncertainties. Similarities between benchmarks and application cases are quantified using the correlation coefficient.

Figure 4.5. keff correlation plot between the application case (ELECTRA) and the benchmark case (pmf1) [10]. A correlation coefficient of R = 0.84 was obtained, signifying a strong relationship between the two systems.

To meet integral uncertainty requirements and to get the full benefit from advanced modeling and simulation initiatives, present nuclear data uncertainties on reactor systems must be reduced significantly [16, 69].

In Paper I, it was demonstrated that, by setting more stringent criteria for accepting random files based on integral benchmark information, the nuclear data uncertainty could be reduced further. In that paper, however, arbitrary acceptance limits were used as an illustration. This was done by defining a strict accept/reject criterion for the random nuclear data files, as can be seen in Fig. 4.6. The method makes use of the prior information included in the random nuclear data libraries produced with the TALYS-based system, which implicitly includes nuclear data covariance information. As an improvement to this method, benchmark uncertainty information has been included; this is presented in more detail in Paper III.
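As a minimal sketch of the binary accept/reject step (hypothetical keff samples and an illustrative benchmark value and acceptance band; the weighting with benchmark uncertainties of Paper III is not reproduced here):

```python
import numpy as np

# Hypothetical inputs: benchmark and application keff, same random file order.
k_bench = np.loadtxt("keff_pmf1.txt")
k_app = np.loadtxt("keff_electra.txt")

k_exp, sigma_acc = 1.0000, 0.0016  # illustrative benchmark keff and acceptance band

# Keep only random files whose benchmark keff falls inside the band.
accepted = np.abs(k_bench - k_exp) <= sigma_acc

prior = np.std(k_app, ddof=1)                # spread over all random files
posterior = np.std(k_app[accepted], ddof=1)  # spread over accepted files only
print(f"{accepted.sum()}/{k_app.size} files accepted; "
      f"ND uncertainty {prior:.5f} -> {posterior:.5f}")
```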
