
ACTA UNIVERSITATIS UPSALIENSIS
UPPSALA

Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1315

Nuclear data uncertainty quantification and data assimilation for a lead-cooled fast reactor
Using integral experiments for improved accuracy

ERWIN ALHASSAN

ISSN 1651-6214
ISBN 978-91-554-9407-0

Dissertation presented at Uppsala University to be publicly examined in Polhemsalen, Ångströmlaboratoriet, Lägerhyddsvägen 1, Uppsala, Thursday, 17 December 2015 at 09:15 for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Dr. Oscar Cabellos (OECD Nuclear Energy Agency (NEA)).

Abstract

Alhassan, E. 2015. Nuclear data uncertainty quantification and data assimilation for a lead-cooled fast reactor. Using integral experiments for improved accuracy. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1315. 85 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9407-0.

For the successful deployment of advanced nuclear systems and the optimization of current reactor designs, high quality nuclear data are required. Before nuclear data can be used in applications, they must first be evaluated, tested and validated against a set of integral experiments, and then converted into formats usable for applications. In the past, the evaluation process was usually based on differential experimental data complemented with nuclear model calculations. This trend is fast changing due to the increase in computational power and the tremendous improvements in nuclear reaction models over the last decade. Since these models have uncertain inputs, they are normally calibrated using experimental data. However, these experiments are themselves not exact. Therefore, the calculated quantities of model codes, such as cross sections and angular distributions, contain uncertainties. Since nuclear data are used as input to reactor transport codes, the output of these codes contains uncertainties due to nuclear data as well. Quantifying these uncertainties is important for setting safety margins, for providing confidence in the interpretation of results, and for deciding where additional efforts are needed to reduce these uncertainties. Also, regulatory bodies are now moving away from conservative evaluations to best estimate calculations that are accompanied by uncertainty evaluations.

In this work, the Total Monte Carlo (TMC) method was applied to study the impact of nuclear data uncertainties from basic physics to macroscopic reactor parameters for the European Lead Cooled Training Reactor (ELECTRA). As part of the work, nuclear data uncertainties of actinides in the fuel, lead isotopes within the coolant, and some structural materials have been investigated. In the case of the lead coolant it was observed that the uncertainties in keff and the coolant void worth (except in the case of 204Pb) were large, with the most significant contribution coming from 208Pb. New 208Pb and 206Pb random nuclear data libraries with realistic central values have been produced as part of this work. Also, a correlation based sensitivity method was used to determine correlations between reactor parameters and cross sections for different isotopes and energy groups.

Furthermore, an accept/reject method and a method of assigning file weights based on the likelihood function are proposed for uncertainty reduction using criticality benchmark experiments within the TMC method. It was observed from the study that a significant reduction in nuclear data uncertainty was obtained for some isotopes for ELECTRA after incorporating integral benchmark information. As a further objective of this thesis, a method for selecting benchmarks for code validation for specific reactor applications was developed and applied to the ELECTRA reactor. Finally, a method for combining differential experiments and integral benchmark data for nuclear data adjustment is proposed and applied to the adjustment of neutron induced 208Pb nuclear data in the fast energy region.

Keywords: Total Monte Carlo, ELECTRA, nuclear data, uncertainty propagation, integral experiments, nuclear data adjustment, uncertainty reduction

Erwin Alhassan, Department of Physics and Astronomy, Applied Nuclear Physics, Box 516, Uppsala University, SE-751 20 Uppsala, Sweden.

© Erwin Alhassan 2015
ISSN 1651-6214
ISBN 978-91-554-9407-0

List of papers

This thesis is based on the following papers, which are referred to in the text by their Roman numerals.

I E. Alhassan, H. Sjöstrand, J. Duan, C. Gustavsson, A.J. Koning, S. Pomp, D. Rochman, M. Österlund. Combining Total Monte Carlo and benchmarks for nuclear data uncertainty propagation on a lead fast reactor's safety. Nuclear Data Sheets (2014) 118, 542-544.
My contribution: I wrote the scripts and performed the simulations, the analyses and the interpretation of results. I also wrote the paper.

II E. Alhassan, H. Sjöstrand, P. Helgesson, A.J. Koning, M. Österlund, S. Pomp, D. Rochman. Uncertainty and correlation analysis of lead nuclear data on reactor parameters for the European Lead Cooled Training Reactor. Annals of Nuclear Energy (2015) 75, 26-37.
My contribution: I wrote most of the scripts and performed the simulations, the analyses and the interpretation of results. I also wrote the paper.

III H. Sjöstrand, E. Alhassan, J. Duan, C. Gustavsson, A.J. Koning, S. Pomp, D. Rochman, M. Österlund. Propagation of nuclear data uncertainties for ELECTRA burn-up calculations. Nuclear Data Sheets (2014) 118, 527-530.
My contribution: I wrote part of the scripts, performed part of the simulations and took part in the writing of the paper.

IV E. Alhassan, H. Sjöstrand, P. Helgesson, M. Österlund, S. Pomp, A.J. Koning, D. Rochman. On the use of integral experiments for uncertainty reduction of reactor macroscopic parameters within the TMC methodology. In review after revision, Progress in Nuclear Energy, 2015.
My contribution: I developed the method; wrote most of the scripts; and performed the simulations, the analyses and the interpretation of the results. I also wrote the paper.

V E. Alhassan, H. Sjöstrand, P. Helgesson, M. Österlund, S. Pomp, A.J. Koning, D. Rochman. Selecting benchmarks for reactor simulations: an application to a lead fast reactor. Submitted to Annals of Nuclear Energy, 2015.
My contribution: I developed the method; wrote the scripts; and performed the simulations, the analyses and the interpretation of the results. I also wrote the paper.


Other papers not included in this thesis

List of papers related to this thesis but not included in the comprehensive summary. I am first author or co-author of all listed papers.

1. E. Alhassan, H. Sjöstrand, J. Duan, P. Helgesson, S. Pomp, M. Österlund, D. Rochman, A.J. Koning. Selecting benchmarks for reactor calculations. In proc. PHYSOR 2014 International Conference, Kyoto, Japan, 28 September - 3 October 2014.

2. H. Sjöstrand, E. Alhassan, S. Conroy, J. Duan, C. Hellesen, S. Pomp, M. Österlund, A.J. Koning, D. Rochman. Total Monte Carlo evaluation for dose calculations. Radiation Protection Dosimetry (2013) 161 (1-4), 312-315.

3. R. Della, E. Alhassan, N.A. Adoo, C.Y. Bansah, E.H.K. Akaho, B.J.B. Nyarko. Stability analysis of the Ghana Research Reactor-1 (GHARR-1). Energy Conversion and Management (2013) 74, 587-593.

4. C.Y. Bansah, E.H.K. Akaho, A. Ayensu, N.A. Adoo, V.Y. Agbodemegbe, E. Alhassan, R. Della. Theoretical model for predicting the relative timings of potential failures in steam generator tubes of a PWR during a severe accident. Annals of Nuclear Energy (2013) 59, 10-15.

5. J. Duan, S. Pomp, H. Sjöstrand, E. Alhassan, C. Gustavsson, M. Österlund, D. Rochman, A.J. Koning. Uncertainty Study of Nuclear Model Parameters for the n+56Fe Reactions in the Fast Neutron Region Below 20 MeV. Nuclear Data Sheets (2014) 118, 346-348.

6. P. Helgesson, D. Rochman, H. Sjöstrand, E. Alhassan, A.J. Koning. UO2 vs MOX: propagated nuclear data uncertainty for keff, with burnup. Nuclear Science and Engineering (2014) 3, 321-336.

7. E. Alhassan, H. Sjöstrand, J. Duan, C. Gustavsson, A.J. Koning, S. Pomp, D. Rochman, M. Österlund. Uncertainty analysis of lead cross sections on reactor safety for ELECTRA. In proc. SNA + MC 2013, 02401 (2014), EDP Sciences.

8. E.K. Boafo, E. Alhassan, E.H.K. Akaho. Utilizing the burnup capability in MCNPX to perform depletion analysis of an MNSR fuel. Annals of Nuclear Energy (2014) 73, 478-483.

9. P. Helgesson, H. Sjöstrand, A.J. Koning, D. Rochman, E. Alhassan, S. Pomp. Incorporating experimental information in the TMC methodology using file weights. Nuclear Data Sheets (2015) 123, 214-219.

10. S. Pomp, A. Al-Adili, E. Alhassan, C. Gustavsson, P. Helgesson, C. Hellesen, A.J. Koning, M. Lantz, M. Österlund, D. Rochman, V. Simutkin, H. Sjöstrand, A. Solders. Experiments and theoretical data for studying the impact of fission yield uncertainties on the nuclear fuel cycle with TALYS/GEF and the TMC method. Nuclear Data Sheets (2015) 123, 220-224.

11. A. Al-Adili, E. Alhassan, C. Gustavsson, P. Helgesson, K. Jansson, A.J. Koning, M. Lantz, A. Mattera, A.V. Prokofiev, V. Rakopoulos, H. Sjöstrand, A. Solders, D. Tarrio, M. Österlund, S. Pomp. Fission activities of the nuclear reactions group in Uppsala. In proc. Scientific Workshop on Nuclear Fission Dynamics and the Emission of Prompt Neutrons and Gamma Rays, THEORY-3. Physics Procedia (2015) 64, 145-149.

12. P. Helgesson, H. Sjöstrand, A.J. Koning, J. Ryden, D. Rochman, E. Alhassan, S. Pomp. Sampling of systematic errors to compute likelihood weights in nuclear data uncertainty propagation. In press.

Contents

1 Introduction
1.1 Background
1.2 Nuclear data needs
1.3 Outline of thesis
2 Nuclear data
2.1 Experimental data
2.1.1 Differential data
2.1.2 Integral data (benchmarks)
2.2 Model calculations
2.3 Nuclear data evaluation
2.4 Nuclear data libraries
2.5 Nuclear data definitions
2.6 Resonance parameters
2.7 Covariance data
3 Uncertainty Quantification
3.1 Sources of uncertainties
3.2 Statistical estimation
3.3 Uncertainty quantification approaches
3.3.1 Deterministic methods
3.3.2 Stochastic methods
4 Methodology
4.1 Simulation tools
4.1.1 TALYS based code system
4.1.2 Processing codes
4.1.3 Neutron transport codes
4.2 Model calculations with TALYS
4.3 Nuclear data uncertainty calculation
4.3.1 Global uncertainty analyses
4.3.2 Local uncertainty analyses - Partial TMC
4.4 Reactor Physics
4.4.1 Reactor description
4.4.2 Reactor neutronic parameters
4.5 Nuclear data uncertainty reduction
4.5.1 Binary accept/reject method
4.5.2 Reducing uncertainty using file weights
4.5.3 Combined benchmark uncertainty
4.6 Benchmark selection method
4.6.1 Benchmark cases
4.7 Correlation based sensitivity analysis
4.8 Burnup calculations
4.9 Nuclear data adjustments
4.9.1 Production of random nuclear data libraries
4.9.2 Nuclear data adjustment of 208Pb in the fast region
5 Results and Discussions
5.1 Model calculations
5.2 Global uncertainties
5.3 Partial variations
5.4 Nuclear data uncertainty reduction
5.5 Similarity index
5.6 Correlation based sensitivity measure
5.7 Nuclear data adjustments
6 Conclusion and outlook
7 Sammanfattning
Acknowledgment

1. Introduction

'If you would be a real seeker after truth, it is necessary that at least once in your life you doubt, as far as possible, all things.' - René Descartes

1.1 Background

Today, about 1.4 billion people globally still have no access to electricity, and an additional one billion only have access to an unreliable supply of electricity [1]. As a result of population growth and economic development, especially in developing countries, global energy demands are projected to rise [2]. With increasing energy consumption worldwide, nuclear power is expected to play an increasing role in providing the energy needs of the future [3]. Due to public acceptance issues, the next generations of nuclear power reactors must not only be economically competitive with other energy sources but must also address waste management, proliferation and safety concerns. The GEN-IV International Forum (GIF) was therefore initiated with very challenging technology goals, which include sustainability, economics, safety, reliability, proliferation resistance and physical protection, as reported in the GEN-IV Technology Roadmap [4]. The six reactor concepts identified by the GEN-IV International Forum as the most promising advanced reactor systems are [4]: the gas-cooled fast reactor (GFR), the lead-cooled fast reactor (LFR), the molten salt reactor (MSR), the sodium fast reactor (SFR), the very-high-temperature reactor (VHTR) and the supercritical water-cooled reactor (SCWR).

The Lead Fast Reactor concept was ranked top in sustainability by the GIF because it uses a closed fuel cycle for the conversion of fertile isotopes, and top in proliferation resistance and physical protection because of its long-life core [5]. Its safety features are enhanced by the choice of a relatively inert coolant which has the capability of retaining hazardous radionuclides such as iodine and cesium even in the event of a severe accident. As part of GEN-IV development in Sweden, the GENIUS project, a collaboration between Chalmers University of Technology, the Royal Institute of Technology and Uppsala University, was initiated for the development of the GEN-IV concept in Sweden [6]. The development of a lead-cooled fast reactor called ELECTRA (European Lead-Cooled Training Reactor), which will permit full recycling of plutonium and americium in the core, was proposed within the project.

For the design and successful implementation of GEN-IV reactor concepts, high quality and accurate nuclear data are required. For several decades, reactor design has been supported by computer simulations for the investigation of reactor behavior under both steady state and transient conditions. The physical models implemented in these simulation codes depend on the underlying nuclear data, implying that uncertainties in nuclear data are propagated to the outputs of these codes. Before nuclear data can be used in applications, they are first evaluated, benchmarked against integral experiments and then converted into formats usable for applications. The evaluation of neutron induced reactions usually involves the combination of nuclear reaction models with experimental data (nuclear reaction models are adjusted to reproduce experimental data). However, these experiments are themselves not exact and therefore the calculated quantities of model codes, such as cross sections and angular distributions, contain uncertainties.

In the past, nuclear data uncertainties within the Reactor Physics community were mostly propagated using deterministic methods. With this approach, the local sensitivities of a particular response parameter, such as keff, to variations in input parameters (nuclear data in our case) can be determined using generalized perturbation theory [7]. Once the sensitivity coefficient matrix has been determined, it is combined with the covariance matrix to obtain the corresponding uncertainty on any response parameter of interest. For example, sensitivity profiles obtained using the so-called perturbation card in MCNP [8] are combined with covariance data using the SUSD code [9] to obtain the nuclear data uncertainty on the reactor response parameter of interest. This approach relies on the assumption that the sensitivity of the output parameter depends linearly on the variation in each input parameter [10]. With the increase in computational power, however, Monte Carlo methods are now possible. At the Nuclear Research and Consultancy Group (NRG), Petten, The Netherlands, a method called 'Total Monte Carlo (TMC)' was developed for nuclear data evaluation and uncertainty propagation. An advantage of this approach is that it eliminates the use of covariances and the assumption of linearity used in the perturbation approach. Quantifying nuclear data uncertainties is important for reactor safety assessment, reliability and risk assessment, and for deciding where additional efforts need to be taken to reduce these uncertainties.

In this work, the TMC method was applied to study the impact of nuclear data uncertainties from basic nuclear physics to macroscopic reactor parameters for the ELECTRA reactor, with respect to major and minor actinides, structural materials and the coolant. This work is important because, if all uncertainties (including nuclear data uncertainties) are not taken into account in reactor design, safety margins could be under-assigned to key reactor safety parameters, which may lead to severe accidents. As part of the work, the impact of nuclear data uncertainties of some of the actinides in the fuel (PAPERS I and IV), lead isotopes in the coolant (PAPER II) and some structural materials at beginning of life (BOL) has been estimated. In addition, the propagation of 239Pu transport data uncertainties in reactor burnup calculations was carried out in PAPER III. A further objective has been to develop methodologies for selecting benchmarks that can help in reactor code validation (PAPER V). Also, methods for reducing nuclear data uncertainties using integral benchmarks have been developed; these are presented in more detail in PAPERS I and IV. Finally, a method for combining differential experiments and integral benchmark data for data assimilation and nuclear data adjustments, using file weights based on the likelihood function, is proposed in section 4.9. The proposed method is applied to the adjustment of neutron induced reactions of 208Pb in the fast energy region and the preliminary results are presented in section 5.7.

1.2 Nuclear data needs

For the successful deployment of advanced nuclear systems and for the optimization of current reactor designs, accurate information about the nuclear reactions taking place within the reactor core is required. Using sensitivity and uncertainty analyses, a list of priorities for the improvement of nuclear data has been determined [11, 12]; it includes both lead and plutonium isotopes.

From the application side, a preliminary attempt to assign target uncertainties for some GEN-IV systems has been carried out by the Organisation for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) expert group [11]. In Ref. [11], 1-sigma target uncertainties have been identified for the following fast reactor parameters (targets in brackets): multiplication factor at beginning of life (BOL) (0.3%), power peaking factor (2%) and reactivity coefficients at BOL (7%), while nuclide densities at end of life should be within 10% uncertainty. To fulfil these targets, the scientific community must develop both theoretical nuclear physics models and uncertainty quantification and reduction methods. In parallel, the initiation of high-quality differential experimental measurements is necessary.

To achieve the goal of further reducing nuclear data uncertainties in macroscopic reactor parameters, the reduction of nuclear data uncertainties using integral benchmark data, and nuclear data adjustment using both differential and integral data, are presented as part of this work.

1.3 Outline of thesis

The thesis is structured as follows. The background, nuclear data needs for advanced nuclear systems and the outline of the thesis are presented in Chapter 1. In Chapter 2, nuclear data definitions, differential and integral benchmark experimental data, the nuclear data evaluation process (which involves the combination of differential experimental data and model calculations), as well as nuclear covariance data, are described. Sources of uncertainties, statistical estimation, uncertainty quantification approaches and uncertainty analysis in Reactor Physics modeling are given in Chapter 3. In Chapter 4, the simulation tools used in this work and the application of the TMC method for uncertainty quantification of macroscopic reactor parameters are presented for global and local (partial) variations of nuclear data. Also described are the methods developed for benchmark selection, nuclear data uncertainty reduction and nuclear data adjustment in the fast energy region, together with the model calculations and the generation of random nuclear data using the TALYS based code system (T6). In Chapter 5, the results obtained are presented and discussed. Chapter 6 contains the conclusion and the outlook for future work and, finally, a summary of the thesis in Swedish is presented in Chapter 7.

2. Nuclear data

’It is nice to know that the computer understands the problem. But I would like to understand it too.’ - Eugene P. Wigner

Nuclear data are physical parameters that describe the properties of atomic nuclei and the fundamental physical relationships that govern their interactions [13]. These include atomic data, nuclear reaction data, thermal scattering data, radioactive decay data and fission yield data. The data are important for the development of theoretical nuclear models and for applications involving radiation and nuclear technology [14]. Because of the wide variety of applications, nuclear data can be divided into three types [15]. The first is transport data, which describes the interactions of various projectiles, such as neutrons and protons, with a target nucleus. Transport data are usually associated with, e.g., cross sections and angular distributions [15] and are utilized for both transport and depletion calculations. The second is fission yield data, which are used, e.g., for the calculation of waste disposal inventories and decay heat, for depletion calculations and in the calculation of the beta and gamma ray spectra of fission product inventories [16]. The third is decay data, which describes, among others, nuclear levels, half-lives, Q-values and decay schemes [13, 15]. These data are used for, e.g., dosimetry calculations and for estimating decay heat in nuclear repositories.

This chapter presents a description of experimental data, which include both differential and integral data; model calculations; nuclear data definitions; as well as resonance parameters. Also, the nuclear data evaluation process, nuclear data libraries containing large sets of nuclear data, and covariance data in the ENDF formatted libraries are presented.

2.1 Experimental data

Experimental data can be divided into differential and integral data. Differential data are microscopic quantities that describe the properties of nuclei and their interactions with particles, while integral data mostly reflect the global behavior of a macroscopic system. Experimental data are needed for fine tuning nuclear reaction models and for nuclear data assimilation. In the following subsections, the differential experimental data and integral benchmarks used for nuclear data evaluation are presented.


2.1.1 Differential data

Microscopic quantities such as cross sections, fission yields and angular distributions are measured at a large number of experimental facilities, such as accelerators, world-wide. Typically, these data are measured as a function of the energy of an incoming particle, e.g., a neutron. These data are collected, compiled and stored in the EXFOR database (Exchange among nuclear reaction data centers) [17]. The EXFOR database is maintained by the International Network of Nuclear Reaction Data Centres (NRDC) and coordinated by the Nuclear Data Section of the International Atomic Energy Agency (IAEA). The database contains experimental and bibliographic information on experiments for neutron, charged particle and photon-induced reactions on a wide range of isotopes and incident energies [18]. In Fig. 2.1, an example of differential experimental data for the 208Pb(n,2n) cross section as a function of incident neutron energy is presented. Usually, when discrepancies exist between similar experiments for the same reaction, a comparison of the original publications containing the measurements is carried out. The measurements are sometimes also compared with theoretical model calculations [18].

Figure 2.1. An example of experimental data for the 208Pb(n,2n) cross section as a function of incident neutron energy. The data were obtained from the EXFOR database [17].

Accurate experimental data are important for nuclear data evaluation and the calibration of nuclear reaction model codes. For these reasons, a careful assessment of possible systematic and statistical experimental uncertainties is needed [19]. The sources of experimental uncertainties are presented in section 3.2. Users of nuclear data, e.g., the nuclear reactor community, usually give feedback which helps in prioritizing measurements of particular isotopes and reactions.


2.1.2 Integral data (benchmarks)

Integral experiments are used to measure macroscopic quantities such as the flux or keff; they are not used for measuring microscopic quantities such as cross sections. However, the integral data obtained from these experiments are used for testing and validating nuclear data libraries. An example of extensive testing of nuclear data libraries with a large set of criticality safety and shielding benchmarks is presented in Ref. [20]. Integral benchmarks (referred to simply as benchmarks in this thesis) are integral experiments which have gone through a strict validation process. For example, the ICSBEP Handbook contains models (normally in MCNP) of the integral experiment together with an evaluated benchmark value. The evaluated benchmark value is the experimental observable obtained from the benchmark experiment (e.g. keff) corrected for the simplifications made in the modeling of the experiment. The evaluated benchmark value is also associated with an evaluated uncertainty.

In Fig. 2.2, a diagram showing the cylindrical benchmark model for the hmf57 (case 3) benchmark is presented. In the diagram, the central cylinder is made of HEU (yellow), surrounded by a lead reflector (blue), with a source placed in the middle of the HEU core. The evaluated benchmark value of hmf57 case 3 is keff = 1.0000 ± 0.0032.

Figure 2.2. The cylindrical benchmark model for hmf57 (case 3) showing the lead reflector (blue), the HEU fuel (yellow) and the source (red). The figure was taken from the ICSBEP Handbook [21]. Note that the diagram is not to scale.

In the past, benchmark testing of nuclear data was done after the evaluation process; in most modern evaluations, however, the benchmark testing step is carried out as an integral part of the evaluation process [22]. There are a number of international efforts geared towards providing the nuclear community with qualified benchmark data. One such project is the International Criticality Safety Benchmark Evaluation Project (ICSBEP) mentioned above, which contains criticality safety benchmarks derived from experiments that were performed at various nuclear critical facilities around the world [21]. These benchmarks are categorized according to their fissile medium (plutonium, HEU, LEU, etc.), their physical form (metal, compound, solution, etc.), their neutron energy spectrum (thermal, intermediate, fast and mixed) and a three digit reference number. Other benchmarks used for nuclear data and reactor applications are the Evaluated Reactor Physics Benchmark Experiments (IRPhE) [23], which contains a set of reactor physics-related integral data, and the radiation shielding experiments database (SINBAD) [24], which contains a compilation of reactor shielding, fusion neutronics and accelerator shielding experiments. For existing reactor technology, these benchmarks can be used for validating computer codes, for testing and validating nuclear data libraries, and for nuclear data adjustments and uncertainty reduction [21, 25]. In this work, benchmark data are used for nuclear data testing, data adjustment and nuclear data uncertainty reduction.

2.2 Model calculations

Modern nuclear data evaluation in the fast region is performed using nuclear reaction models [26]. These models are used for providing data where experimental data are scarce or unavailable [27]. The use of nuclear reaction models also has the added advantage that the various partial cross sections automatically sum to the total cross section, which leads to internal consistency within evaluated files [27, 28]. Where experimental data are available, they are used for constraining and fine-tuning the model parameters used as inputs to nuclear model codes.

Nuclear reaction theories have been implemented in well-known nuclear reaction codes such as TALYS [26, 29] and EMPIRE [30] for theoretical model calculations. In this work, nuclear model calculations have been performed using the TALYS code. More details on the TALYS code can be found in section 4.1.1.

2.3 Nuclear data evaluation

Several approaches exist for nuclear data evaluation. These methods include: experimental data interpolation, Bayesian methods, re-normalization of existing evaluations, copy and paste from other nuclear data evaluations, and nuclear reaction modeling [31].

The basic steps involved in a nuclear data evaluation based on nuclear reaction modeling are:

1. The selection and careful analysis of differential experimental data, mostly obtained from the EXFOR database.


2. Analysis of the low energy region, which includes thermal energy and the resolved and unresolved resonance regions. Analyses of neutron resonances are usually performed using R-matrix codes for light nucleus reactions and for heavy nuclides in the low incident energy region [32]. These codes are complemented with data from the Atlas of Neutron Resonances [33], which contains a compilation of resonance parameters, thermal neutron cross sections and other quantities. In Ref. [33], the resonance parameters are analyzed in terms of the multilevel Breit-Wigner (MLBW) formalism. In new evaluations, however, the use of the Reich-Moore approximation is encouraged [34].

3. Theoretical model calculations using nuclear reaction codes such as EMPIRE [30] or TALYS [26, 29] for the fast energy region. Where available, parameters and their uncertainties as recommended in the Reference Input Parameter Library (RIPL) [27] are used as inputs for these nuclear reaction codes. The RIPL database contains reference input parameters for nuclear model calculations. In the fast region, the Hauser-Feshbach, pre-equilibrium and fission models are used for modeling medium and heavy nucleus reactions [22, 32]. Here, model parameters are adjusted and fine tuned to fit selected experimental data. These model adjustments are important because current models are deficient and hence not able to reproduce all available experimental data. Also, even if models do reproduce differential experimental data, final adjustments are sometimes needed to reproduce integral data. By adding resonance information as explained in step (2) above, and other quantities such as the average number of fission neutrons, fission neutron spectra and the (n,f) cross section for actinides, a complete ENDF file covering thermal to high energies can be produced.

4. The data produced are then checked and tested using utility codes such as CHECKR, FIZCON and PSYCHE [35] to verify that they conform to current formats and procedures. Once checked, the data are validated against a large set of integral benchmark experiments. The end product is a nuclear data library which contains information on a number of incident particles for a large set of isotopes and materials. Feedback from model adjustments to fit both differential and integral data, and from applications, is needed for the improvement of theoretical models and for identifying energy regions where additional experimental efforts are needed.

2.4 Nuclear data libraries

After successful evaluation, nuclear data are compiled and stored in nuclear data libraries. The ENDF format is the accepted format for data storage. The content of an evaluated nuclear data file includes general information such as the number of fission neutrons (nubar) and delayed neutron data; resonance parameters; reaction cross sections; decay schemes; fission neutron multiplicities; and nuclear data covariance information. These are represented by so-called MF-numbers, and the MF-numbered files are subdivided into different reaction channels, such as (n,tot), (n,el) and (n,inl), which are represented by MT-numbers [34]. Several national and international evaluated nuclear data libraries are currently available from various nuclear data centers. These are the Japanese Evaluated Nuclear Data Library (JENDL) [36], the Evaluated Nuclear Data Library (ENDF/B) [31] from the USA, the TALYS Evaluated Nuclear Data Library (TENDL) [37], the Joint Evaluated Fission and Fusion File (JEFF) [38] from the OECD/NEA Data Bank, the Chinese Nuclear Data Library (CENDL) [39] and the Russian Nuclear Evaluated Data library (BROND) [40]. Other libraries for use in specific applications are the International Reactor Dosimetry File (IRDF) for reactor dosimetry applications [41], the European Activation File (EAF) [42] and the Fusion Evaluated Nuclear Data Library (FENDL) from the International Atomic Energy Agency (IAEA) [43].

2.5 Nuclear data definitions

The relationships between the different reaction cross sections in the fast energy region in the evaluated nuclear data files (ENDF) are presented in this section. The total cross section σ_tot is given as the sum of the elastic scattering cross section σ_el and the non-elastic cross section σ_non-el:

σ_tot(MT=1) = σ_el(MT=2) + σ_non-el(MT=3)    (2.1)

where MT = 1, 2 and 3 are the total, elastic and non-elastic cross sections in ENDF nomenclature. The non-elastic cross section σ_non-el, which represents all cross sections except the elastic scattering cross section, can be expressed as:

σ_non-el(MT=3) = σ_inl(MT=4) + σ_2n(MT=16) + σ_3n(MT=17) + σ_fission(MT=18) + σ_nα(MT=22) + σ_np(MT=28) + σ_γ(MT=102) + σ_charged-particle(MT=103-107)    (2.2)

where σ_inl is the (n,inl), σ_2n the (n,2n), σ_3n the (n,3n), σ_fission the (n,f), σ_nα the (n,nα), σ_np the (n,np) and σ_γ the (n,γ) cross section, while σ_charged-particle represents the cross sections for producing charged particles. The MT numbers in Eq. 2.2 are the corresponding MT numbers of the cross sections in ENDF nomenclature. The inelastic cross section (MT=4) is given as the sum of the cross sections for the 1st-40th excited states. In the situation where the elastic channel is fed by compound nucleus decay in addition to shape elastic scattering, the elastic cross section is expressed as [26, 29]:

σ_el(MT=2) = σ_shape-el + σ_comp-el    (2.3)

where σ_shape-el and σ_comp-el are the shape elastic and compound elastic cross sections. The shape elastic part comes from the optical model while the compound elastic part comes from compound nucleus theory. The total fission cross section σ_fission is found under MT=18 in the ENDF formatted file and is given as the sum of the first, second, third and fourth chance fission cross sections:

σ_fission(MT=18) = σ_n,f(MT=19) + σ_n,nf(MT=20) + σ_n,2nf(MT=21) + σ_n,3nf(MT=38)    (2.4)

where σ_n,f is the first chance fission cross section and σ_n,nf, σ_n,2nf and σ_n,3nf are the second, third and fourth chance fission cross sections, respectively. The cross sections discussed above can be found in MF=3.
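As an aside, the sum rules of Eqs. 2.1 and 2.2 are easy to check numerically. The following Python sketch uses hypothetical cross section values, not evaluated data, and omits the charged-particle channels (MT=103-107) for brevity:

```python
# Minimal check of the ENDF sum rules in Eqs. 2.1-2.2.
# All cross sections are hypothetical values in mb at one incident energy;
# charged-particle channels (MT=103-107) are omitted for brevity.
partials = {
    4:   1500.0,  # (n,inl)
    16:  1200.0,  # (n,2n)
    17:     0.0,  # (n,3n)
    18:     0.0,  # (n,f)
    22:     1.0,  # (n,n-alpha)
    28:     0.5,  # (n,np)
    102:    0.6,  # (n,gamma)
}
sigma_el = 3000.0                      # MT=2, elastic scattering
sigma_nonel = sum(partials.values())   # MT=3, Eq. 2.2
sigma_tot = sigma_el + sigma_nonel     # MT=1, Eq. 2.1

print(f"sigma_non-el = {sigma_nonel:.1f} mb, sigma_tot = {sigma_tot:.1f} mb")
```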

The angular distributions, comprising elastic and inelastic angular distributions, can be found in MF=4. The elastic angular distribution dσ_el/dΩ has two components, the direct and the compound part. The direct (shape elastic) part comes directly from the optical model while the compound part comes from compound nucleus theory [26, 29]:

dσ_el/dΩ = dσ_shape-el/dΩ + dσ_comp-el/dΩ    (2.5)

Similarly, the inelastic angular distribution to a single discrete state i, dσ_n,n'^i/dΩ, can be given as the sum of the direct and compound parts [26, 29]:

dσ_n,n'^i/dΩ = dσ_n,n'^{i,direct}/dΩ + dσ_n,n'^{i,compound}/dΩ    (2.6)

2.6 Resonance parameters

The resolved and unresolved resonance parameters are found in File 2 (MF2, MT151) of the evaluated nuclear data file. These parameters include E_λ, Γ, Γ_n, Γ_γ and Γ_f, where E_λ is the resonance energy and Γ, Γ_n, Γ_γ and Γ_f are the total, neutron, radiative and fission widths, respectively. These parameters cannot be predicted from theory but can be determined from experiments. In Fig. 2.3, an example of a single resonance of the 239Pu capture cross section at the resonance energy E_λ = 35.5 eV is presented. The plot was generated from the ENDF/B-VII.1 nuclear data library using the JANIS nuclear data tool [44].

Figure 2.3. An example of the 239Pu (n,γ) cross section at E_λ = 35.5 eV. The plot was generated from the ENDF/B-VII.1 nuclear data library using the JANIS nuclear data tool [44].

Here σ_0 is the cross section for the formation of the compound nucleus, σ_0 Γ_γ/Γ is the height of the resonance peak for the (n,γ) cross section, Γ_γ/Γ is the probability of decay via γ-emission and Γ, the total width, given as the sum of the partial widths, is expressed as:

Γ = Γ_n + Γ_γ + Γ_f    (2.7)

The resonance parameters are needed for the calculation of cross sections at reactor operating temperatures due to the Doppler broadening of neutron resonances.

2.7 Covariance data

Covariance data specify nuclear data uncertainties and their correlations. These data are needed for the assessment of uncertainties in design and safety parameters in nuclear technology applications [22]. Several methods are available for the computation of nuclear data covariances; they can be classified into three categories: deterministic, Monte Carlo and hybrid approaches [45]. In this work, the Monte Carlo method was used to implicitly determine the covariances of nuclear data.

Covariance data are usually stored in MF31-35 and MF40 of the evaluated nuclear data file. Once covariance data are available in an evaluated nuclear data file, uncertainty propagation for current and advanced reactor applications can be performed, as discussed in more detail in section 3.3.


3. Uncertainty Quantification

’It is in the admission of ignorance and the admission of uncertainty that there is a hope for the continuous motion of human beings in some direction that doesn’t get confined, permanently blocked, as it has so many times before in various periods in the history of man.’ - Richard P. Feynman

This chapter presents the uncertainty quantification approaches, deterministic and stochastic, currently utilized for nuclear data sensitivity and uncertainty analysis. Sources of uncertainties and the statistical estimators used to make statistical deductions are also discussed. Uncertainty propagation involves quantifying the effects of uncertain input parameters on model outputs, i.e., studying the impact of the variability of the input parameters on the model outputs. A closely related field, sensitivity analysis, involves evaluating the contribution of the variation of each input parameter to the output uncertainty. With sensitivity analysis, the model inputs that cause significant uncertainty in the output, and the relationships between the input and output parameters, are identified. Fig. 3.1 illustrates the relation between the input, the model and the model output.

Figure 3.1. A flowchart showing the propagation of uncertainty in input parameters through a model: the input X = (x_1, x_2, ..., x_n), the model g = f(x_1, x_2, ..., x_n) and the model output g(X).

If a model is given by g = f(x_1, x_2, ..., x_n), then the uncertainties in the input parameters, i.e., in the collection of random variables X = (x_1, x_2, ..., x_n), can be propagated through the model to obtain a distribution of g(X). From this distribution, the first four moments can be determined, as presented later in section 3.2. g(X) could be the neutron cross sections, in which case the input variables X are the distributions of the nuclear reaction model parameters and the model represents the nuclear reaction models, such as the optical and Hauser-Feshbach models. g(X) could also represent any reactor quantity of interest, such as keff; in this case, the input parameters X represent the distributions of the neutron cross sections and the model represents the geometrical and physical model of a neutron transport code, such as those presented in section 4.1.3.

In the TMC method, discussed in more detail in section 3.3.2, the model includes both the nuclear physics and the neutron transport codes, consequently linking the reaction model parameter distributions, X, all the way to the macroscopic reactor parameter distribution, g(X), of interest.
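The propagation scheme of Fig. 3.1 can be summarized in a few lines of Python. The sketch below uses a toy model g and assumed normal input distributions; it stands in for the physics and transport models, not for any code used in this work:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the model g = f(x1, x2) of Fig. 3.1.
def g(x1, x2):
    return x1 * np.exp(-x2)

# Assumed input distributions X = (x1, x2); means and spreads are illustrative.
n = 10_000
x1 = rng.normal(1.0, 0.05, size=n)   # 5 % relative uncertainty (assumed)
x2 = rng.normal(0.5, 0.02, size=n)

out = g(x1, x2)                      # sampled distribution of g(X)
print(f"E[g(X)] = {out.mean():.4f}, sigma[g(X)] = {out.std(ddof=1):.4f}")
```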

3.1 Sources of uncertainties

As mentioned earlier, nuclear data evaluation benefits from both experimental data and nuclear reaction modeling. Since the models have uncertain inputs, they are normally calibrated using experimental data. However, since experiments are not exact, measured quantities contain uncertainties. The errors in measurements come from a variety of sources, such as the method of measurement, the measuring instrument used and the person performing the experiment [46]. If an error is defined as the difference between the measured and the true value, the uncertainty is defined as an estimate of the magnitude of the error. Measurement errors are usually classified as random or systematic. Random errors are caused by unknown and unpredictable changes in the experiment, e.g., detector noise. These errors can be estimated by repeating a particular measurement under the same conditions; the mean value of the measurements then becomes the best estimate of the measured quantity. Systematic errors, on the other hand, are reproducible inaccuracies that shift all measurements within an experiment in a systematic way; an example of a systematic uncertainty is the detector efficiency. These errors can be identified by using a more accurate measuring instrument or by comparing a given result with a measurement of the same quantity performed with a different method [47]. In many cases, the errors cannot be directly inferred from measurements; their magnitude is rather inferred from uncertainty propagation.

Similar to experimental uncertainties, the sources of uncertainty in modeling can generally be classified into two categories:

1. Aleatory or random uncertainty, which arises from the random nature of the system under investigation or from random processes in the modeling, e.g., Monte Carlo modeling.

2. Epistemic uncertainty [46], which results in a systematic shift between the modeled response parameter and the response parameter's true value, e.g., a model defect.

In Reactor Physics, a response parameter can be defined as any quantity of interest, such as keff or a reaction rate, that can be measured or predicted in a simulation [48]. Normally, epistemic uncertainties arise from one of the following sources [46, 10]:

1. The data and parameters used in the model, e.g., cross sections and boundary conditions.

2. Uncertainties in the physics of the modeled processes as a result of 'incomplete knowledge'.

3. Assumptions and simplifications in the methods used for solving the model equations.

4. Our inability to model complex geometries accurately.

3.2 Statistical estimation

Since evaluated nuclear data are obtained from models in combination with experiments, each with their corresponding uncertainties, statistical deductions are used to extract valuable information. Statistical information is normally provided by the moments of the respective probability distributions [47]. The mean is commonly used to describe the best estimate of a parameter in a normal distribution, while the spread is best described using the standard deviation and the variance. The uncertainties in physical observations are usually interpreted as standard deviations.

If we consider a collection of random variables X = (x_1, ..., x_n) and a function g of X given as g = f(x_1, x_2, x_3, ..., x_n), then the expectation of g(X), its first moment, denoted by E(g(X)), can be defined as [47]:

E(g(X)) ≡ ∫_{S_X} g(X) p(X) dX    (3.1)

where p(X) is the probability density function and S_X is the n-dimensional space formed by all the possible values of X.

The second central moment, the variance of g(X), is the dispersion of a random variable around its mean and can be expressed as:

V(g(X)) ≡ E[(g(X) − E(g(X)))²]    (3.2)

From Eq. 3.2, the standard deviation, the positive square root of the variance, denoted by σ, can be expressed as:

σ ≡ [V(g(X))]^{1/2}    (3.3)

Other higher moments are the skewness and the kurtosis. The skewness, the third standardized moment S(g(X)), is a measure of the degree of asymmetry of a particular distribution and is given as:

S(g(X)) ≡ E[(g(X) − E(g(X)))³] / σ(g(X))³    (3.4)

A perfectly symmetric distribution, such as a Gaussian, has a skewness of zero; a negative skewness implies a long tail of data to the left of the mean, while a positive skewness indicates a long tail of data to the right of the mean. The kurtosis, the fourth standardized moment, is given as:

K(g(X)) ≡ E[(g(X) − E(g(X)))⁴] / σ(g(X))⁴    (3.5)

The kurtosis measures the 'peakedness' of a probability distribution of random variables: it determines whether the data are peaked or flat relative to a normal distribution.

For a multivariate probability distribution X = (x_1, x_2, ..., x_n), the second order central moments comprise not only the variances but also the covariances. The covariance is a measure of how much two random variables change together; it measures the strength of the correlation between two variables. The covariances between the different input variables can be described by the so-called covariance matrix. Covariance matrices are symmetric, comprising the off-diagonal covariances and the variances along the diagonal. The sample covariance of N observations of two random variables x_j and x_k can be expressed as:

cov(x_j, x_k) = (1/(N−1)) Σ_{i=1}^{N} (x_{ij} − x̄_j)(x_{ik} − x̄_k)    (3.6)

where x̄_j and x̄_k are the mean values of the x_j and x_k variables, respectively. If all the variables in X are uncorrelated, the covariance matrix contains only the diagonal elements.

Using Eq. 3.6, the correlation coefficient ρ_{x_j x_k}, a measure of the strength of the linear relationship between two variables, can be given as:

ρ_{x_j x_k} = cov(x_j, x_k) / (σ_{x_j} σ_{x_k})    (3.7)

where σ_{x_j} and σ_{x_k} are the standard deviations of x_j and x_k, respectively. A perfect negative correlation is represented by the value -1, while 0 indicates no correlation and +1 a perfect positive correlation.
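Eqs. 3.6 and 3.7 translate directly into code. The following sketch builds two correlated toy variables and recovers their covariance and correlation coefficient:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two correlated toy variables, e.g. a cross section and a reactor parameter.
xj = rng.normal(0.0, 1.0, size=5000)
xk = 0.8 * xj + rng.normal(0.0, 0.6, size=5000)

N = len(xj)
cov = ((xj - xj.mean()) * (xk - xk.mean())).sum() / (N - 1)   # Eq. 3.6
rho = cov / (xj.std(ddof=1) * xk.std(ddof=1))                 # Eq. 3.7

print(f"cov = {cov:.3f}, rho = {rho:.3f}")
assert np.isclose(rho, np.corrcoef(xj, xk)[0, 1])             # cross-check
```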


3.3 Uncertainty quantification approaches

Methods for nuclear data uncertainty quantification in best-estimate model predictions are usually either stochastic, such as the TMC method used in this work, or deterministic, such as the perturbation approach utilized in, e.g., Ref. [49]. Recently, there are also hybrid methods that combine the strengths of the Monte Carlo and deterministic approaches for uncertainty analysis [45].

3.3.1 Deterministic methods

With the deterministic approach, the local sensitivities of a particular response parameter, such as keff, to variations in input parameters (nuclear data in our case) can be determined by using generalized perturbation theory, e.g., [49]. The local sensitivities of the model's response to input parameter variations are obtained by computing the response with input parameter values perturbed, usually within 1% of the nominal values; i.e., by varying x_1, x_2, ..., x_n individually by 1%, the response g(X) is computed. The sensitivity vector S_x = (S_{x_1}, ..., S_{x_n}) can be computed using:

S_{x_i} = ∂g(X)/∂x_i    (3.8)

The variable X is typically comprised of cross sections in multi-group format. This method relies on the assumption that the sensitivity of the output parameter depends linearly on the variation in each input parameter [10]. Given that V_x denotes the covariance matrix of the parameters (x_1, ..., x_n), the variance of the response can be expressed using the 'sandwich rule' as [46]:

var(g(X)) = S_x V_x S_x^T    (3.9)

where the superscript T indicates the transposed matrix and the covariance matrix V_x contains both diagonal and off-diagonal elements for correlated parameters. In the case where the parameters are uncorrelated, Eq. 3.9 becomes:

var(g(X)) = Σ_{i=1}^{n} S_{x_i}² var(x_i)    (3.10)

This approach has been extensively used to evaluate the impact of neutron cross section uncertainties on significant reactor response parameters related to the reactor fuel cycle [49].
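As a numerical illustration of Eqs. 3.9 and 3.10, the following sketch applies the sandwich rule to a toy three-group problem; the sensitivity vector and the covariance matrix are assumed numbers, not data from this work:

```python
import numpy as np

# Sandwich rule, Eq. 3.9, for a toy 3-group problem (all numbers assumed).
S = np.array([[0.12, -0.05, 0.30]])      # sensitivity row vector S_x
V = np.array([[4.0e-4, 1.0e-4, 0.0],     # covariance matrix V_x
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    2.5e-4]])

var_g = (S @ V @ S.T).item()             # var(g(X)) = S_x V_x S_x^T
print(f"var = {var_g:.3e}, std = {np.sqrt(var_g):.3e}")

# Uncorrelated limit, Eq. 3.10: only the diagonal of V_x contributes.
var_uncorr = (S[0]**2 * np.diag(V)).sum()
print(f"uncorrelated var = {var_uncorr:.3e}")
```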

3.3.2 Stochastic methods

The focus in this thesis is on the Monte Carlo, or stochastic, methods used for nuclear data uncertainty propagation. These methods are based on random sampling of the input parameters, X. For each random set m of input parameters, X_m = (x_{1,m}, ..., x_{n,m}), a set of output responses of interest, g(X_m), is produced [10]. Stochastic methods used for nuclear data uncertainty propagation can generally be classified into two types:

1. Propagation of nuclear data uncertainties from basic nuclear physics.

2. Uncertainty propagation from the covariance information that comes with modern nuclear data evaluations.

Among the methods of type (1), the Total Monte Carlo (TMC) method is used in this work and described in detail in the next subsection.

Total Monte Carlo (TMC)

The TMC methodology (here called 'original TMC') was first proposed by Koning and Rochman in 2008 [50] for nuclear data uncertainty propagation. With this method, inputs to a nuclear reaction code such as TALYS [26, 29] are created by sampling from nuclear model parameter distributions to create random nuclear data files [26]. Experimental data and their uncertainties are accounted for by using an accept/reject approach, where calculations that fall within an acceptance band determined from comparison with experimental data are accepted and those that do not fulfil this criterion are rejected. The acceptance band was obtained by visual evaluation methods [51]. This approach has been criticised for not including experimental data in a more rigorous way. There is, however, on-going work with the goal of incorporating both differential [51, 52] and integral experiments (PAPER IV) into the TMC methodology in a more rigorous way. The rigorous inclusion of integral experimental data is also one of the main objectives of this thesis and is described in detail in section 4.5 and in PAPER IV; a minimal sketch of the underlying idea is given below.
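The sketch uses hypothetical numbers and a deliberately simplified setup (the actual procedure and its validation are in section 4.5 and PAPER IV): each random file i receives a weight based on how well its benchmark prediction agrees with the evaluated benchmark value E ± ΔE, and the weighted keff distribution of the application then has a reduced spread.

```python
import numpy as np

rng = np.random.default_rng(3)

# keff of the application (e.g. ELECTRA) for n random files -- hypothetical.
k_app = rng.normal(1.010, 0.004, size=500)
# keff of a criticality benchmark computed with the same files (toy model:
# perfectly correlated with the application here, which exaggerates the gain).
k_bench = k_app - 0.009
E, dE = 1.0000, 0.0032        # evaluated benchmark value and its uncertainty

# File weights from the likelihood function, w_i ~ exp(-chi2_i / 2).
chi2 = ((k_bench - E) / dE) ** 2
w = np.exp(-0.5 * chi2)
w /= w.sum()

mu = np.sum(w * k_app)                          # weighted mean
sd = np.sqrt(np.sum(w * (k_app - mu) ** 2))     # weighted standard deviation
print(f"prior:     {k_app.mean():.4f} +/- {k_app.std(ddof=1):.4f}")
print(f"posterior: {mu:.4f} +/- {sd:.4f}")
```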

To create a complete ENDF file covering thermal to fast neutron energies, non-TALYS data such as the neutron resonance data; the total (n,tot), elastic (n,el), capture (n,γ) and fission (n,f) cross sections at low neutron energies; the average number of fission neutrons; and the fission neutron spectra (for fissionable nuclei) are added to the results obtained from the TALYS code using other auxiliary codes [26].

A summary of the TMC method is depicted in the flow chart in Fig. 3.2. Parameters of the phenomenological and microscopic models implemented in nuclear reaction codes (the optical model, pre-equilibrium and compound nucleus models, for example) are adjusted to reproduce experimental data. These parameters could be, for instance, the real central radius (r_v) or the real central diffuseness (a_v) of the optical model. The outputs of the codes, such as cross sections, fission yields and angular distributions, are compared with differential experimental data by defining an uncertainty band which covers most of the available experimental data.

Figure 3.2. A flowchart depicting the Total Monte Carlo approach for uncertainty analysis. Random files generated using the TALYS based code system [26] are processed and used to propagate nuclear data uncertainties in reactor calculations.

Data that fall within this uncertainty band are accepted while those that do not fulfil this criterion are rejected. The accepted files are then processed into the ENDF format using the TEFAL code [53]. These ENDF formatted files are translated into usable formats and fed into neutron transport codes to obtain distributions of the reactor parameters of interest. From these distributions, statistical information, such as the moments presented in subsection 3.2, can be inferred.

In Fig. 3.3, the (n,el) and (n,γ) cross sections of 50 random 208Pb files are plotted as a function of incident neutron energy.

Figure 3.3. 50 random 208Pb cross sections plotted as a function of incident neutron energy. Left: 208Pb(n,el); right: 208Pb(n,γ). Note that each random file contains a unique set of nuclear data.

A spread in the data can be observed over the entire energy region. This is expected, as each file contains a unique set of nuclear data obtained from the distributions of the nuclear model parameters. Due to the variation of nuclear data, different distributions with their corresponding mean values and standard deviations are obtained for different response parameters, such as keff, temperature feedback coefficients and kinetic parameters [54]. By varying nuclear data using the TMC method for a particular response parameter, the observed total variance of the response parameter, σ²_obs, in the case of Monte Carlo neutron transport codes, e.g., MCNP or SERPENT, can be expressed as:

σ²_obs = σ²_ND + σ²_stat    (3.11)

where σ²_ND is the variance of the response parameter under study due to nuclear data uncertainties and σ²_stat is the variance due to the statistics of the Monte Carlo code. In the case of deterministic codes, there are no statistical uncertainties and Eq. 3.11 becomes:

σ²_obs = σ²_ND    (3.12)

With the 'original TMC' described above, the time taken for a single calculation is increased by a factor of n, where n (the number of samples or random files) ≥ 500, making it unsuitable for some applications. As a solution, a faster method called 'fast TMC' was developed [55]. By changing the seed of the random number generator within the Monte Carlo code and changing nuclear data at the same time, a spread in the data that is due to both statistics and nuclear data is obtained (same as in Eq. 3.11). However, by using different seeds for each simulation, a more accurate estimate of the spread due to statistics is obtained and, therefore, the statistical requirement on each run can be lowered, thereby reducing the computational time involved for each calculation. The usual rule of thumb used for original TMC is σ_stat ≤ 0.05·σ_obs, whereas for fast TMC σ_stat ≤ 0.5·σ_obs [55]. A detailed presentation of the fast TMC methodology is found in Refs. [55, 56, 57]. In this work fast TMC was used, and from here on it is referred to simply as the TMC method.
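As an illustration of how Eq. 3.11 is used in practice with fast TMC, the following minimal sketch estimates the nuclear data component of the variance by subtracting the average statistical variance of the individual runs from the observed variance. The input arrays are placeholders for the keff values and per-run statistical errors of an actual campaign.

    import numpy as np

    def tmc_variance_decomposition(keff, keff_stat_err):
        """Decompose the observed spread of a response parameter into
        nuclear data and statistical components, following Eq. 3.11.

        keff          : one response value per random ENDF file
        keff_stat_err : per-run Monte Carlo (1-sigma) statistical errors
        """
        keff = np.asarray(keff)
        var_obs = keff.var(ddof=1)                     # total observed variance
        var_stat = np.mean(np.square(keff_stat_err))   # average statistical variance
        var_nd = max(var_obs - var_stat, 0.0)          # nuclear data component
        return var_obs, var_stat, var_nd

With fast TMC, var_stat is allowed to be a sizeable fraction of var_obs, which is exactly why the per-run statistical requirement, and hence the total computing time, can be relaxed.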

Other Monte Carlo methods from basic nuclear physics

Other Monte Carlo uncertainty propagation methods starting from basic nuclear physics include the Unified Monte Carlo (UMC) approach proposed by D.L. Smith [58] and the Backward-Forward Monte Carlo (BFMC) method proposed in Ref. [59]. The UMC method is based on the application of Bayes' theorem and the principle of maximum entropy, as well as on fundamental definitions from probability theory [58]. The method seeks to incorporate experimental data into model calculations in a more rigorous and consistent manner [45].


The BFMC method consists of two steps: the Backward Monte Carlo and the Forward Monte Carlo steps. In the Backward step, a covariance matrix of model parameters is obtained by using a generalized χ² with differential data as constraints, leading to observables consistent with experimental data. The generalized χ² is used to quantify the likelihood of a model calculation with respect to a set of experimental constraints. In the Forward step, starting from the covariance matrix of model parameters obtained from the Backward step, a sampling of the Backward Monte Carlo parameter distribution is performed. The resulting distribution of ND observables hence includes experimental uncertainty information [45].
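The sketch below illustrates the flavour of this approach: a generalized χ² of a model calculation against experimental data with covariances, followed by a simple likelihood weighting of sampled parameter sets. It is only schematic; the exact BFMC weighting prescription is given in Ref. [59], and all arrays here are hypothetical.

    import numpy as np

    def generalized_chi2(calc, exp, exp_cov):
        """Generalized chi-squared of calculated observables against
        experimental data with covariance matrix exp_cov."""
        r = calc - exp
        return float(r @ np.linalg.solve(exp_cov, r))

    def likelihood_weights(chi2_values):
        """Schematic likelihood weighting of sampled parameter sets:
        parameter set k, with chi-squared chi2_k, gets a weight
        proportional to exp(-chi2_k/2)."""
        chi2 = np.asarray(chi2_values)
        w = np.exp(-0.5 * (chi2 - chi2.min()))  # shift by min for stability
        return w / w.sum()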

Sampling from the evaluated covariance matrix

Several other approaches are based on Monte Carlo sampling of nuclear data inputs using the covariance information that comes with new nuclear data evaluations. The disadvantage of these approaches is that they rely on the assumption of normal distributions of the input parameters. In addition, the available covariance data are usually neither comprehensive nor complete [26]. One such method has been implemented in the AREVA GmbH code NUDUNA (NUclear Data UNcertainty Analysis) [60]. With this method, nuclear input parameters are first randomly sampled according to a multivariate distribution model based on covariance data. A large set of random data is generated and used for the computation of different response parameters. This method can include uncertainties of multiplicities, resonance parameters, fast neutron cross sections and angular distributions. Another method is the GRS (Gesellschaft für Anlagen- und Reaktorsicherheit, Germany) method implemented in the SUSA (Software for Uncertainty and Sensitivity Analysis) code, which depends on random grouped cross sections generated from existing covariance files [61], which are propagated to macroscopic reactor parameters.
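As a rough illustration of the sampling underlying methods such as NUDUNA, the following sketch draws correlated multigroup cross-section sets from a multivariate normal distribution. The three-group mean values and the covariance matrix are invented for the example.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical 3-group cross-section means (barns) and relative covariance
    mean = np.array([1.20, 0.85, 0.40])
    rel_cov = np.array([[0.010, 0.004, 0.001],
                        [0.004, 0.020, 0.006],
                        [0.001, 0.006, 0.030]])
    abs_cov = rel_cov * np.outer(mean, mean)   # absolute covariance matrix

    # Draw 500 correlated random cross-section sets (normality is assumed,
    # which is precisely the limitation noted above)
    samples = rng.multivariate_normal(mean, abs_cov, size=500)
    samples = np.clip(samples, 0.0, None)      # guard against unphysical negatives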

Similarly, in Ref. [62], a stochastic sampling method for quantifying nuclear data uncertainties is accomplished by utilizing perturbed ACE formatted nuclear data generated using multigroup nuclear data covariance information. This approach has been implemented successfully in the NUSS tool [62]. In another study, the SharkX tool [63], under development at PSI, is used in combination with the CASMO-5 code [64] for uncertainty quantification and sensitivity analysis. Cross sections, the fission spectrum, neutron multiplicities, decay constants as well as fission yields are perturbed based on statistical sampling methods. Also, uncertainty and sensitivity analyses applied to lattice calculations using perturbed multigroup cross section libraries with the DRAGON lattice code [65] have been presented in Refs. [66, 67]. In this work, however, the random nuclear data were produced using the TMC method.


4. Methodology

’With the current and future computer technology, and the accumulated amount of knowledge in the nuclear data community, we believe that this methodology is technologically condemned to succeed.’ - D. Rochman and A.J. Koning

In this chapter, the methodology used is presented. First, in section 4.1, the simulation tools used in this work are briefly presented. In section 4.2, the model calculations performed with the TALYS code are presented. Methods for computing uncertainties due to both global and partial (local) variations of nuclear data on neutronic parameters are discussed in section 4.3. In section 4.4, the description of the ELECTRA reactor, together with the propagation of nuclear data uncertainties for some macroscopic reactor parameters using the TMC method, is presented. Section 4.5 describes methods developed as part of this work for nuclear data uncertainty reduction using integral experiments; these include an accept/reject method and a method of assigning file weights based on the likelihood function. In addition, a methodology developed for selecting benchmark experiments for reactor simulations is presented in section 4.6. Also, in section 4.7, a correlation based sensitivity method is used to determine the sensitivity of benchmarks and application cases to different cross sections for particular isotopes and energy groups. Finally, a method for the propagation of nuclear data uncertainties in burnup calculations, as well as a method for combining differential and integral experimental data for nuclear data adjustments, are explained in sections 4.8 and 4.9.

4.1 Simulation tools

The work in this thesis was done using a number of computer codes. These include the TALYS based code system, used for nuclear reaction calculations and the production of random nuclear data files; the NJOY [68] and PREPRO [69] codes for nuclear data processing; and the MCNP5/X [8] and SERPENT [70] codes for reactor core calculations. Brief descriptions of these codes are presented in the subsequent subsections.


4.1.1 TALYS based code system

The TALYS based code system is made up of a group of codes coupled together with a script called AutoTALYS [26]. This code system is used for nuclear data evaluation and the production of random nuclear data libraries. An example is the TALYS Evaluated Nuclear Data Library (TENDL) [37], which contains complete ENDF formatted nuclear data libraries, including covariance matrices, for many isotopes, particles, energies, reaction channels and secondary quantities. The codes included in the TALYS based code system are the TALYS code [26, 29] and the TASMAN [71], TAFIS [72], TANES [73], TARES [74] and TEFAL [53] codes. These codes are sometimes referred to as the T6 code package.

The TALYS code, which forms the main basis for the TMC methodology, is a state-of-the-art nuclear physics code used for the prediction and analysis of nuclear reactions [26, 29]. In the TMC methodology, the TALYS code is used to generate nuclear data for all open channels in the fast neutron energy region, i.e., beyond the resonance region. This is achieved by fine tuning the model parameters of various nuclear reaction models within the code so that model calculations reproduce differential experimental data. In situations where experimental data are unavailable, TALYS is used for the prediction and extrapolation of data [26]. The output of TALYS includes total, elastic and inelastic cross sections, elastic and inelastic angular distributions, and other reaction channels such as (n,2n) and (n,np). To create a complete ENDF file covering from thermal to fast neutron energies, non-TALYS data such as the neutron resonance data, cross sections at low neutron energies, the average number of fission neutrons, and fission neutron spectra are added to the results obtained from the TALYS code. This is achieved by using other auxiliary codes [26]: the TARES code [74] for resonance parameters, and the TAFIS and TANES codes for the average number of fission neutrons and the fission neutron spectrum, respectively [72, 73]. The TASMAN code [71] is used to create input files for TALYS and the other codes by generating random distributions of input parameters, randomly sampling each input parameter from a distribution with a specific width for each parameter.

The uncertainty distribution of the nuclear model parameters is often assumed to be either Gaussian or uniform [26]. The different input files created by the TASMAN code are then run multiple times with the TALYS code, each time with a different set of model parameters, to obtain distributions in the calculated quantities. From the distributions obtained, statistical information such as the means, standard deviations and variances, and a full covariance matrix which includes both diagonal and off-diagonal elements, can be obtained. Finally, the TEFAL code is used to translate the nuclear reaction results obtained from all the different modules within the T6 code package into ENDF formatted nuclear data libraries [53]. It must be noted that even though these codes


are coupled together, each code can work as a standalone simulation tool. Because of the huge amount of time involved in the production of these random nuclear data files, most of the random files used in this work were obtained from the TENDL project [75]. However, random 208Pb and 206Pb files were produced as part of this work, as further described in section 4.2.
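As an illustration of the sampling step performed by TASMAN, the following minimal sketch draws random model parameter sets from Gaussian (or width-matched uniform) distributions. The parameter names and widths are purely illustrative and do not correspond to the actual TALYS input keywords or the widths used in this work.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical model parameters: (central value, 1-sigma width),
    # mimicking the Gaussian or uniform sampling performed by TASMAN
    parameters = {
        "radius_adjust":     (1.00, 0.02),  # optical model radius multiplier
        "diffuseness_adjust": (1.00, 0.03), # optical model diffuseness multiplier
        "gamma_width_adjust": (1.00, 0.05), # average radiative width multiplier
    }

    def sample_parameter_set(dist="gaussian"):
        """Draw one random set of model parameters around their central values."""
        sampled = {}
        for name, (centre, width) in parameters.items():
            if dist == "gaussian":
                sampled[name] = rng.normal(centre, width)
            else:  # uniform interval chosen to have the same standard deviation
                half = width * np.sqrt(3.0)
                sampled[name] = rng.uniform(centre - half, centre + half)
        return sampled

    # e.g. generate 500 input parameter sets, one per random library
    random_sets = [sample_parameter_set() for _ in range(500)]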

4.1.2 Processing codes

Between the ENDF formatted evaluated nuclear data and the users of nuclear data sit a set of data-processing codes. These codes prepare nuclear data from the ENDF format into a variety of usable formats used in application codes [51]. Even though these codes are often overlooked, the accuracy of transport and reactor calculations depends to a large extent on the assumptions and approximations introduced by these processing codes [51]. One widely used code is the NJOY nuclear data processing code [52], which is used to convert ENDF format files into useful forms for practical applications. To reflect the temperatures in real systems, for instance, energy dependent cross sections have to be reconstructed from resonance parameters and then Doppler broadened to defined temperatures. In this subsection, the processing of the nuclear data using the NJOY and PREPRO codes is presented.

NJOY processing code

The NJOY processing code [68] is used for preparing ENDF formatted nuclear data into usable formats for use in deterministic and Monte Carlo transport codes, which in turn are used for reactor calculations and analyses. In this work, the NJOY code was used to process random ENDF files into the ACE format at defined temperatures using the following module sequence:

MODER → RECONR → BROADR → UNRESR → HEATR → PURR → ACER        (4.1)

The MODER module was used to convert ENDF input data into NJOY blocked binary mode. These data were then reconstructed into pointwise cross sections, which were Doppler broadened using the BROADR module. The UNRESR module was used to calculate effective self-shielded pointwise cross sections in the unresolved resonance region, while the HEATR module was used to generate pointwise heat production and radiation damage production cross sections. PURR was used to prepare unresolved region probability tables, mostly used by MCNP, and finally the ACER module converted the libraries into the ACE format.
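Since every random ENDF file has to pass through the same module sequence, the processing is naturally scripted. Below is a minimal sketch of such a batch driver, assuming an "njoy" executable on the PATH and a fixed input deck implementing the sequence of Eq. 4.1; the directory layout, tape numbering and file names are illustrative, not the actual setup used in this work.

    import subprocess
    from pathlib import Path

    # Hypothetical layout: one random ENDF file per TMC sample and a
    # prepared NJOY input deck (moder, reconr, broadr, unresr, heatr,
    # purr, acer) that reads its ENDF data from tape20.
    random_files = sorted(Path("random_endf").glob("Pb208-*.endf"))
    deck = Path("njoy_input.deck").read_text()

    for i, endf in enumerate(random_files):
        workdir = Path(f"run_{i:04d}")
        workdir.mkdir(exist_ok=True)
        (workdir / "tape20").write_bytes(endf.read_bytes())
        (workdir / "input").write_text(deck)
        # Run NJOY once per random file; each run yields one ACE library
        with open(workdir / "input") as fin, open(workdir / "njoy.log", "w") as fout:
            subprocess.run(["njoy"], stdin=fin, stdout=fout, cwd=workdir, check=True)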

PREPRO code

Similar to the NJOY code, PREPRO is a collection of modular codes designed to prepare data in the ENDF format into usable formats for applications [69].


In this work, the following module sequence was used:

LINEAR → RECENT → SIGMA1 → GROUPIE → FIXUP        (4.2)

The LINEAR module was used to convert ENDF random nuclear data files into a linear-linear interpolable form. The RECENT module was used to reconstruct the resonance contributions into a linearly interpolable form. The SIGMA1 module was used to Doppler broaden cross sections to defined temperatures for use in applications, while the GROUPIE module can be used to calculate self-shielded cross sections and to collapse pointwise ENDF files into multigroup cross sections. In this work, the GROUPIE module was used to calculate multigroup cross sections from the random ENDF formatted nuclear data for use in the cross section-parameter correlation calculations discussed in section 4.7. The FIXUP module was used for testing the data formats from the different modules for consistency.
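For intuition, the collapse performed by GROUPIE amounts to a flux-weighted group average, σ_g = ∫ σ(E)φ(E) dE / ∫ φ(E) dE over each group. A minimal sketch of such a collapse, assuming a dense pointwise grid so that every group contains several points and, for simplicity, a flat weighting spectrum:

    import numpy as np

    def collapse_to_groups(energy, sigma, group_edges, flux=None):
        """Flux-weighted collapse of a pointwise cross section to group
        values: sigma_g = int(sigma*phi dE) / int(phi dE) per group."""
        if flux is None:
            flux = np.ones_like(sigma)  # flat weighting spectrum assumed
        sigma_g = []
        for lo, hi in zip(group_edges[:-1], group_edges[1:]):
            m = (energy >= lo) & (energy < hi)
            sigma_g.append(np.trapz(sigma[m] * flux[m], energy[m]) /
                           np.trapz(flux[m], energy[m]))
        return np.array(sigma_g)

Repeating such a collapse for every random file yields, per group, a sample of cross-section values that can be correlated against sampled response parameters.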

4.1.3 Neutron transport codes

Neutron transport codes are used to simulate the transport of neutrons in materials. Some of these codes, such as SERPENT and MCNP, can also calculate the criticality. Two transport codes were used in this work: the SERPENT code version 1.1.17 [70] was used for all simulations involving the ELECTRA reactor, while for the benchmark cases, criticality calculations were performed using the MCNPX code version 2.5 [8]. These codes are briefly presented in this subsection.

SERPENT Monte Carlo code

The 3-D continuous-energy reactor physics code SERPENT (version 1.1.17) [70], developed at the VTT Technical Research Centre of Finland, was used for simulations in this work. SERPENT is specialized for 2-D lattice physics calculations but has the capability of modeling complicated 3-D geometries as well. It also has a built-in burnup capability for reactor analyses. SERPENT uses universe-based geometrical modeling for describing two- or three-dimensional fuel and reactor core configurations [70].

SERPENT utilizes the following types of data for simulations (a small driver sketch follows the list):

1. Continuous energy neutron data, which are made up of reaction cross sections, energy and angular distributions, fission yields and delayed neutron data used for transport calculations. For this thesis, the uncertainties in these quantities are considered.

2. Thermal scattering data for moderator materials such as hydrogen bound in light water (H2O), deuterium bound in heavy water (D2O) and carbon in graphite.
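Tying the transport codes back to the TMC workflow, a fast TMC campaign can be scripted as one run per random ACE library, each with its own random number seed, after which the collected keff values and statistical errors feed the variance decomposition of Eq. 3.11. In the sketch below the input template placeholders, file layout and output parsing are illustrative assumptions; "sss" is the usual name of the Serpent 1 executable, but the actual invocation depends on the installation.

    import subprocess
    from pathlib import Path

    # Illustrative fast TMC driver: one run per random ACE library,
    # each with a different seed (template placeholders are hypothetical)
    template = Path("electra_core.template").read_text()  # contains {ACELIB}, {SEED}

    results = []
    for i, acelib in enumerate(sorted(Path("ace_random").glob("*.xsdata"))):
        deck = template.format(ACELIB=acelib, SEED=1000 + i)
        case = Path(f"case_{i:04d}.inp")
        case.write_text(deck)
        subprocess.run(["sss", str(case)], check=True)
        # keff and its statistical error would then be read from the
        # code's output (e.g. the *_res.m file) and stored for Eq. 3.11:
        # results.append((keff, keff_stat_err))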
