Using MCNP for in-core instrument calibration in CANDU
Energy Technology Data Exchange (ETDEWEB)
Taylor, D.C. [Point Lepreau Generating Station, NB Power, Lepreau, New Brunswick (Canada); Anghel, V.N.P.; Sur, B. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada)
2002-07-01
The calibration of in-core instruments is important for safe and economical CANDU operation. However, in-core detectors are not normally suited to bench calibration procedures. This paper describes the use and validation of detailed neutron transport calculations for calibrating the response of in-core neutron flux detectors. The Monte Carlo transport code MCNP was used to model the thermal neutron flux distribution in the region around self-powered in-core flux detectors (ICFDs) and in the vicinity of the calandria edge. The ICFD model was used to evaluate the reduction in signal of a given detector (the 'detector shading factor') due to neutron absorption in surrounding materials, detectors and lead cables. The calandria edge model was used to infer the accuracy of the calandria edge position from flux scans performed by AECL's travelling flux detector (TFD) system. The MCNP results were checked against experimental results on ICFDs, and also against shading factors computed by other means. The use of improved in-core detector calibration factors obtained by this new methodology will improve the accuracy of spatial flux control in CANDU-6 reactors. Accurate determination of the TFD-based calandria edge position is useful in the quantitative measurement of changes in in-core component dimensions and positions due to aging, such as pressure tube sag. (author)
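A minimal sketch (not from the paper) of how such a detector shading factor can be formed from two MCNP thermal-flux tallies, one with the surrounding absorbers modelled and one with them removed; the tally values and relative errors below are hypothetical:

    import math

    def shading_factor(sig_shielded, rel_err_shielded, sig_bare, rel_err_bare):
        # Ratio of the tallied detector response with surrounding
        # absorbers present to the response with them removed; relative
        # errors combined in quadrature (independent tallies assumed).
        f = sig_shielded / sig_bare
        return f, f * math.sqrt(rel_err_shielded**2 + rel_err_bare**2)

    f, df = shading_factor(3.05e-4, 0.01, 3.42e-4, 0.01)  # hypothetical
    print(f"shading factor = {f:.3f} +/- {df:.3f}")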
TET_2MCNP: A conversion program to implement tetrahedral-mesh models in MCNP
International Nuclear Information System (INIS)
Han, Min Cheol; Yeom, Yeon Soo; Nguyen, Thang Tat; Choi, Chan Soo; Lee, Hyun Su; Kim, Chan Hyeong
2016-01-01
Tetrahedral-mesh geometries can be used in the MCNP code, but MCNP accepts such geometry only in the Abaqus input file format; hence, existing tetrahedral-mesh models must first be converted to the Abaqus input file format before use in MCNP. In the present study, we developed a simple but useful computer program, TET_2MCNP, for converting TetGen-generated tetrahedral-mesh models to the Abaqus input file format. TET_2MCNP is written in C++ and contains two components: one for converting a TetGen output file to an Abaqus input file and the other for the reverse conversion. The TET_2MCNP program also produces an MCNP input file. Further, the program provides some MCNP-specific functions: the maximum number of elements (i.e., tetrahedrons) per part can be limited, and the material density of each element can be transferred to the MCNP input file. To test the developed program, two tetrahedral-mesh models were generated using TetGen and converted to the Abaqus input file format using TET_2MCNP. Subsequently, the converted files were used in the MCNP code to calculate the object- and organ-averaged absorbed dose in a sphere and a phantom, respectively. The results show that the converted models provide, within statistical uncertainties, dose values identical to those obtained using the PHITS code, which uses the original tetrahedral-mesh models produced by TetGen, demonstrating that TET_2MCNP successfully converts TetGen tetrahedral-mesh models to Abaqus input files. We believe this program will be used by many MCNP users to implement complex tetrahedral-mesh models, including computational human phantoms, in the MCNP code.
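To make the conversion concrete, here is a stripped-down Python sketch of the TetGen-to-Abaqus step the abstract describes; it is not the authors' C++ code, and the real TET_2MCNP additionally splits parts, carries material densities and writes an MCNP input file:

    def read_tetgen(stem):
        # Parse TetGen's .node and .ele files; '#' lines are comments
        # and the first non-comment line of each file is a header.
        def rows(path):
            with open(path) as f:
                return [ln.split() for ln in f
                        if ln.strip() and not ln.lstrip().startswith('#')]
        return rows(stem + '.node')[1:], rows(stem + '.ele')[1:]

    def write_abaqus(stem, nodes, elems, elset='Part1'):
        with open(stem + '.inp', 'w') as f:
            f.write('*NODE\n')
            for n in nodes:                        # id, x, y, z
                f.write('{}, {}, {}, {}\n'.format(*n[:4]))
            f.write('*ELEMENT, TYPE=C3D4, ELSET={}\n'.format(elset))
            for e in elems:                        # id, n1..n4 (4-node tets)
                f.write('{}, {}, {}, {}, {}\n'.format(*e[:5]))

    nodes, elems = read_tetgen('phantom')          # phantom.node / phantom.ele
    write_abaqus('phantom', nodes, elems)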
CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has been proposed previously. To facilitate implementation of the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
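The core relation of the representative point method can be sketched as follows: the volume-sample efficiency is approximated by the point-source efficiency at the representative point times a self-absorption correction factor (the numbers below are placeholders, not CREPT-MCNP output):

    def volume_efficiency(eff_point_at_rep, self_absorption_factor):
        # Efficiency measured with a point source at the representative
        # point, corrected for self-absorption in the volume sample.
        return eff_point_at_rep * self_absorption_factor

    print(volume_efficiency(0.052, 0.87))  # hypothetical 662 keV values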
Elaborate SMART MCNP Modelling Using ANSYS and Its Applications
Song, Jaehoon; Surh, Han-bum; Kim, Seung-jin; Koo, Bonsueng
2017-09-01
An MCNP 3-dimensional model can be widely used to evaluate various design parameters such as core or shielding designs. Conventionally, a simplified 3-dimensional MCNP model is applied to calculate these parameters because modelling by hand is cumbersome. ANSYS has a function for converting CAD 'stp'-format geometry into the geometry part of an MCNP input. Using ANSYS and a 3-dimensional CAD file, a very detailed and sophisticated 3-dimensional MCNP model can be generated. The MCNP model is applied to evaluate the assembly weighting factor at the ex-core detector of SMART, and the result is compared with a simplified MCNP SMART model and with the assembly weighting factor calculated by DORT, a deterministic Sn code.
MCNP calculation for calibration curve of X-ray fluorescence analysis
International Nuclear Information System (INIS)
Tan Chunming; Wu Zhifang; Guo Xiaojing; Xing Guilai; Wang Zhentao
2011-01-01
Due to compositional variation of the sample, a linear relationship between element concentration and fluorescent intensity is not well maintained in most X-ray fluorescence analysis. To overcome this, we used the MCNP program to simulate the fluorescent intensity of Fe (0-100% concentration range) within binary mixtures with Cr and O, which represent typical strong-absorption and weak-absorption conditions, respectively. The theoretical calculation shows that the relationship can be described as a curve determined by a parameter p, whose value can be obtained from the absorption coefficients of the matrix elements and the element under detection. MCNP simulation results are consistent with the theoretical calculation. Our research reveals that the MCNP program can calculate the calibration curve of X-ray fluorescence very well. (authors)
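One common single-parameter absorption-correction form consistent with this description (our illustration, not necessarily the paper's exact expression) relates the relative fluorescent intensity R of the analyte to its mass fraction C through p, the ratio of effective matrix-to-analyte absorption coefficients:

    import numpy as np

    def relative_intensity(C, p):
        # R = C / (C + p*(1 - C)); p = 1 gives the linear case, p > 1 a
        # strongly absorbing matrix, p < 1 a weakly absorbing one.
        return C / (C + p * (1.0 - C))

    C = np.linspace(0.0, 1.0, 11)
    for p in (0.3, 1.0, 3.0):
        print(p, np.round(relative_intensity(C, p), 3))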
Calibration of a foot borne spectrometry system using the MCNP 4C code
International Nuclear Information System (INIS)
Nylen, T.; Agren, G.
2004-01-01
The increased interest in the cycling of radioactive caesium in natural ecosystems has created a need for rapid and reliable methods to investigate the deposition density in natural soils. One commonly used method, soil sampling, correctly applied, gives information on both the horizontal and vertical distribution of the desired nuclide. Its main disadvantage is that it is time consuming in sampling, preparation and measurement. An alternative is the use of semiconductor or scintillation detectors in the field, i.e. in cars, airplanes or helicopters. These methods are rapid and integrate over large areas, which gives a more reliable mean value provided that the operator has some basic knowledge of the depth distribution of the radionuclides and the bulk density of the soil. To be effective, the systems are often connected to a GPS to give the exact coordinates of each measurement. In a situation where the area of interest is too large to cover by soil samples and measurements by airplane will not give good enough spatial resolution, one feasible method is a foot-borne gamma-spectrometry system. Its advantage is that the operator can cover quite a large area within a few hours and that the method can detect small anomalies in the deposition field which may be difficult to discover with soil samples. This abstract describes the calibration of a foot-borne gamma-spectrometry system carried in a back-pack and consisting of a NaI detector, a GPS and a system for logging activity and position. The detector system and surroundings have been modelled in the Monte Carlo code MCNP 4C (Figure 1). The Monte Carlo method makes it possible to study the influence of complex geometries that are difficult to create in a practical calibration using real activity. The results of the MCNP calibration model have been compared to foot-borne gamma-spectrometry field measurements in a Cs-137 deposition area.
MCNP modelling of a combined neutron/gamma counter
Bourva, L C A; Ottmar, H; Weaver, D R
1999-01-01
A series of Monte Carlo neutron calculations for a combined gamma/passive neutron coincidence counter has been performed. This type of device, part of a suite of non-destructive assay instruments utilised for the enforcement of Euratom nuclear safeguards within the European Union, is to be used for high-accuracy measurements of the plutonium content of small samples of nuclear materials. The multi-purpose Monte Carlo N-Particle (MCNP) code version 4B has been used to model the neutron coincidence detector in detail and to investigate the leakage self-multiplication of PuO2 and mixed U-Pu oxide (MOX) reference samples used to calibrate the instrument. The MCNP calculations have been used together with a neutron coincidence counting interpretative model to determine characteristic parameters of the detector. A comparative study against both experimental and previous numerical results has been performed. Sensitivity curves of the variation of the detector's efficiency, epsilon, to alpha, the ratio of (alpha...
International Nuclear Information System (INIS)
Pillon, M.; Martone, M.; Verschuur, K.A.; Jarvis, O.N.; Kaellne, J.
1989-01-01
Neutron transport calculations have been performed using fluence ray tracing (FURNACE code) and Monte Carlo particle trajectory sampling (MCNP code) in order to determine the neutron fluence and energy distributions at different locations in the JET tokamak. These calculations were used to calibrate the activation measurements employed in determining the absolute fusion neutron yields from the JET plasma. We present here the neutron activation response coefficients calculated for three different materials. Comparison of the MCNP and FURNACE results helps identify the sources of error in these neutron transport calculations. The accuracy of the calculations was tested by comparing the total 2.5 MeV neutron yields derived from the activation measurements with those obtained with calibrated fission chambers; agreement at the ±15% level was demonstrated. (orig.)
MCNP HPGe detector benchmark with previously validated Cyltran model.
Hau, I D; Russ, W R; Bronson, F
2009-05-01
An exact copy of the detector model generated for Cyltran was reproduced as an MCNP input file, and the detection efficiency was calculated following the methodology used in previous experimental measurements and simulations of a 280 cm³ HPGe detector. Below 1000 keV the MCNP data agreed with the Cyltran results within 0.5%, while above this energy the difference between MCNP and Cyltran increased to about 6% at 4800 keV, depending on the electron cut-off energy.
Autonomous MCNP modelling software based on the E language
International Nuclear Information System (INIS)
Li Fei; Ge Liangquan; Zhang Qingxian
2010-01-01
MCNP (Monte Carlo N-Particle Code) is a computer program based on the Monte Carlo method for simulating the transport of neutrons, photons and other particles. Because of its powerful simulation capability and its flexible, universal features, it has been widely used in many fields, but the professional expertise required to operate it has greatly restricted its use and hindered its later development. The E language was therefore used to develop autonomous MCNP modelling software, intended for users who are not familiar with MCNP and cannot create geometry models; it replaces the dull, red-tape 'notebook'-style input preparation and builds a new MCNP modelling system. (authors)
Calibration curves of a PGNAA system for cement raw material analysis using the MCNP code
International Nuclear Information System (INIS)
Oliveira, Carlos; Salgado, Jose
1998-01-01
In large samples, the γ-ray count rate of a prompt gamma neutron activation analysis system is a multi-variable function of the elemental dry composition, density, water content and thickness of the material. Experimental calibration curves require tremendous laboratory work, using a great number of standards with well-known compositions. Although a Monte Carlo simulation study does not avoid the experimental calibration work, it reduces the number of experimental calibration standards needed. This paper is part of a feasibility study for a PGNAA system for on-line continuous characterisation of cement raw material conveyed on a belt (Oliveira, C., Salgado, J. and Carvalho, F. G. (1997) Optimisation of PGNAA instrument design for cement raw materials using the MCNP code. J. Radioanal. Nucl. Chem. 216(2), 191-198; Oliveira, C., Salgado, J., Goncalves, I. F., Carvalho, F. G. and Leitao, F. (1997a) A Monte Carlo study of the influence of geometry arrangements and structural materials on a PGNAA system performance for cement raw materials analysis. Appl. Radiat. Isot. (accepted); Oliveira, C., Salgado, J. and Leitao, F. (1997b) Density and water content corrections in the gamma count rate of a PGNAA system for cement raw material analysis using the MCNP code. Appl. Radiat. Isot. (accepted)). It reports on the influence of the density, mass water content and thickness on the calibration curves of the PGNAA system. The MCNP-4A code, running on a Pentium PC and on a DEC workstation, was used to simulate the PGNAA system configuration.
An MCNP-based calibration method and a voxel phantom for in vivo monitoring of 241Am in skull
International Nuclear Information System (INIS)
Moraleda, M.; Gomez-Ros, J.M.; Lopez, M.A.; Navarro, T.; Navarro, J.F.
2004-01-01
Whole body counter (WBC) facilities are currently used for assessment of internal radionuclide body burdens by directly measuring the radiation emitted from the body. Prior calibration of the detection devices requires the use of specific anthropomorphic phantoms. This paper describes the MCNP-based Monte Carlo technique developed for calibration of the germanium detectors (Canberra LE Ge) used in the CIEMAT WBC for in vivo measurements of 241Am in skull. The proposed method can also be applied to in vivo counting of different radionuclides distributed in other anatomical regions, as well as to other detectors. Computer software was developed to automatically generate the input files for the MCNP code starting from any segmented human anatomy data. A specific model of a human head for the assessment of 241Am was built based on the tomographic phantom VOXELMAN of Yale University. The germanium detectors were carefully modelled from data provided by the manufacturer. This numerical technique has been applied to investigate the best counting geometry and the uncertainty due to improper positioning of the detectors.
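As a hedged illustration of such automatic input generation, the sketch below writes an MCNP repeated-structures FILL card from a segmented 3-D array of material IDs; the cell, surface and universe numbers are hypothetical, and a full input would also need the lattice container, material and source cards:

    import numpy as np

    def fill_card(vox):
        # vox[ix, iy, iz] holds the universe number of each voxel; MCNP
        # lists FILL entries with the x index varying fastest.
        nx, ny, nz = vox.shape
        lines = [f'999 0 -901 lat=1 u=999 fill=0:{nx-1} 0:{ny-1} 0:{nz-1}']
        flat = vox.transpose(2, 1, 0).ravel()
        for i in range(0, flat.size, 12):          # wrap long fill lists
            lines.append('     ' + ' '.join(str(v) for v in flat[i:i + 12]))
        return '\n'.join(lines)

    head = np.ones((4, 4, 4), dtype=int)           # toy phantom: tissue = 1
    head[1:3, 1:3, 1:3] = 2                        # embedded region: bone = 2
    print(fill_card(head))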
Simplification of an MCNP model designed for dose rate estimation
Laptev, Alexander; Perry, Robert
2017-09-01
A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.
Shielding calculations for neutron calibration bunker using Monte Carlo code MCNP-4C
International Nuclear Information System (INIS)
Suman, H.; Kharita, M. H.; Yousef, S.
2008-02-01
In this work, the dose arising from an Am-Be source of 10⁸ neutron/s strength located inside the newly constructed neutron calibration bunker at the National Radiation Metrology Laboratories was calculated using the MCNP-4C code. It was found that the shielding of the neutron calibration bunker is sufficient, as the calculated dose in inhabited areas is not expected to exceed 0.183 μSv/h, which is 10 times smaller than the regulatory dose constraint. Hence, it can be concluded that the calibration bunker can house, from the external exposure point of view, an Am-Be neutron source of 10⁹ neutron/s strength. It turned out that the neutron dose from the source is a few times greater than the photon dose. Sky shine was found to contribute significantly to the total dose; this contribution was estimated to be 60% of the neutron dose and 10% of the photon dose. The systematic uncertainties due to various factors were assessed and found to be between 4 and 10% due to concrete density variations, 15% due to the dose estimation method, and 4-10% due to weather variations (temperature and moisture). The calculated dose was highly sensitive to changes in the source spectrum; the uncertainty due to the use of two different neutron spectra is about 70%. (author)
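A small sketch of one way the quoted systematic components could be combined, assuming independence and adding in quadrature (this is our simplification, not necessarily the authors' method; the dominant 70% spectrum term is kept separate because it is a model choice, not a random component):

    import math

    components = {'concrete density': 0.10,        # worst-case fractions
                  'dose estimation method': 0.15,
                  'weather (T, moisture)': 0.10}
    total = math.sqrt(sum(u * u for u in components.values()))
    print(f'combined systematic uncertainty ~ {total:.0%}')  # about 21%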
Development and application of MCNP auto-modeling tool: Mcam 3.0
International Nuclear Information System (INIS)
Liu Xiaoping; Luo Yuetong; Tong Lili
2005-01-01
Mcam is an abbreviation of 'MCNP Automatic Modeling'; it is a CAD interface program for MCNP geometry models based on CAD technology. Making use of existing CAD technology is Mcam's major characteristic. Roughly, CAD technology is utilized in two ways: (1) Mcam makes it possible to create an MCNP geometry model in some CAD software; (2) it accelerates the creation of MCNP geometry models by inheriting existing 3D CAD models. The paper gives an introduction to Mcam's major abilities: (1) converting a CAD model into an MCNP geometry model; (2) converting an MCNP geometry model into a CAD model; (3) constructing CAD models. At the end of the paper, several models are given to demonstrate each of these abilities.
LEU-fueled SLOWPOKE-2 modelling with MCNP4A
International Nuclear Information System (INIS)
Pierre, J.R.M.; Bonin, H.W.J.
1996-01-01
Following the commissioning of the Low Enrichment Uranium (LEU) fuelled SLOWPOKE-2 research reactor at the Royal Military College, excess reactivity measurements were conducted over a range of temperature and power. Given the advances in computer technology, the use of the Monte Carlo N-Particle Transport Code System MCNP 4A appeared possible for the simulation of the LEU-fuelled SLOWPOKE-2 reactor core, and this work demonstrates that this is indeed the case. MCNP 4A is a fully three-dimensional program allowing the user to enter a large amount of complexity; the limit on geometric complexity is the computing time required to achieve a reasonable standard deviation. To this point, several models of the SLOWPOKE-2 have been developed, giving some insight into the sensitivity of the code. MCNP 4A can use various cross-section libraries. The aim of this work is to calculate the reactivity of the core accurately and to reproduce the temperature trend of the reactivity. The model preserved as much detail of the core and facility as possible in order to allow further study of the flux mapping.
International Nuclear Information System (INIS)
Raghunath, T.; Narasimhanath, V.; Sunil, C.N.; Kumaravel, S.; Ramakrishna, V.; Prashanth Kumar, M.; Nair, B.S.K.; Purohit, R.G.; Sarkar, P.K.
2012-01-01
To carry out measurements of 41Ar gaseous activity, an attempt is made to calibrate an HPGe detector for Standard Measuring Flask (SMF) and Marinelli vessel geometries and to compare their efficiencies. As a standard gaseous source of 41Ar is not available, the calibration is done using a liquid standard source of 22Na (whose 1274.5 keV gamma energy is close to the 1293.6 keV gamma energy of 41Ar). The HPGe detector and both geometries are simulated, and the full-energy-peak (FEP) efficiencies are obtained using MCNP. Correction factors for energy and sample matrix are obtained from the simulated efficiencies, and the calibration is done by applying these correction factors. (author)
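The correction-factor logic can be sketched as an efficiency transfer: the measured 22Na full-energy-peak efficiency is scaled by the MCNP-simulated 41Ar/22Na efficiency ratio (the values below are hypothetical, not the paper's results):

    def transferred_efficiency(eff_meas_na22, eff_sim_ar41, eff_sim_na22):
        # Measured 22Na FEP efficiency scaled by the simulated 41Ar/22Na
        # ratio, correcting for both energy and sample matrix.
        return eff_meas_na22 * (eff_sim_ar41 / eff_sim_na22)

    print(transferred_efficiency(8.1e-3, 7.6e-3, 7.9e-3))  # hypothetical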
Cinelli, Giorgia; Tositti, Laura; Mostacci, Domiziano; Baré, Jonathan
2016-05-01
In view of assessing natural radioactivity with on-site quantitative gamma spectrometry, efficiency calibration of NaI(Tl) detectors is investigated. A calibration based on Monte Carlo simulation of the detector response is proposed, to render reliable quantitative analysis practicable in field campaigns. The method is developed with reference to contact geometry, in which measurements are taken by placing the NaI(Tl) probe directly against the solid source to be analyzed. The Monte Carlo code used for the simulations was MCNP. Experimental verification of the calibration's goodness is obtained by comparison with appropriate standards, as reported. On-site measurements yield a quick quantitative assessment of the natural radioactivity levels present (40K, 238U and 232Th). On-site gamma spectrometry can prove particularly useful insofar as it provides information on materials from which samples cannot be taken.
Modeling the PUSPATI TRIGA Reactor using MCNP code
International Nuclear Information System (INIS)
Mohamad Hairie Rabir; Mark Dennis Usang; Naim Syauqi Hamzah; Julia Abdul Karim; Mohd Amin Sharifuldin Salleh
2012-01-01
The 1 MW TRIGA MARK II research reactor at the Malaysian Nuclear Agency achieved initial criticality on June 28, 1982. The reactor is designed to effectively support various fields of basic nuclear research, manpower training and production of radioisotopes. This paper describes reactor parameter calculations for the PUSPATI TRIGA Reactor (RTP), focusing on the application of the developed 3D reactor model to criticality calculation, analysis of power and neutron flux distributions, and a depletion study of TRIGA fuel. The 3D continuous-energy Monte Carlo code MCNP was used to develop a versatile and accurate full model of the TRIGA reactor. The model represents in detail all important components of the core and shielding with literally no physical approximation. (author)
BWR Fuel Assemblies Physics Analysis Utilizing 3D MCNP Modeling
International Nuclear Information System (INIS)
Chiang, Ren-Tai; Williams, John B.; Folk, Ken S.
2008-01-01
MCNP is used to model a partially controlled BWR fresh-fuel four-assembly (2x2) system for better understanding of BWR fuel behavior and for benchmarking production codes. The impact of the GE14 plenum regions on the axial power distribution is observed by comparison against the GE13 axial power distribution: the GE14 relative power is lower than the GE13 relative power at the 15th and 16th nodes due to the presence of the plenum regions of GE14 fuel in these two nodes. The segmented rod power distribution study indicates that the azimuthally dependent power distribution is very significant for the fuel rods next to the water gap in the uncontrolled portion. (authors)
Fuel element transfer cask modelling using MCNP technique
International Nuclear Information System (INIS)
Rosli Darmawan
2009-01-01
After operating for more than 25 years, some of the Reaktor TRIGA PUSPATI (RTP) fuel elements will have become depleted. A few fuel addition and reconfiguration exercises have to be conducted in order to maintain RTP capacity. Presently, RTP spent fuel is stored in the storage area inside the RTP tank. The need to transfer fuel elements out of the RTP tank may arise in the near future, and preparations should start now. A fuel element transfer cask has been designed according to the recommendations of the fuel manufacturer and the experience of other countries. An MCNP model has been developed to analyse the design. The results show that the transfer cask design is safe for handling fuel elements outside the RTP tank according to current regulatory requirements. (author)
International Nuclear Information System (INIS)
Ahlers, C.F.; Liu, H.H.
2001-01-01
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, and serve Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
Comparison of CdZnTe neutron detector models using MCNP6 and Geant4
Wilson, Emma; Anderson, Mike; Prendergasty, David; Cheneler, David
2018-01-01
The production of accurate detector models is of high importance in the development and use of detectors. Initially, MCNP and Geant were developed to specialise in neutral particle models and accelerator models, respectively; there is now a greater overlap of the capabilities of both, and it is therefore useful to produce comparative models to evaluate detector characteristics. In a collaboration between Lancaster University, UK, and Innovative Physics Ltd., UK, models have been developed in both MCNP6 and Geant4 of Cadmium Zinc Telluride (CdZnTe) detectors developed by Innovative Physics Ltd. Herein, a comparison is made of the relative strengths of MCNP6 and Geant4 for modelling neutron flux and secondary γ-ray emission. Given the increasing overlap of the modelling capabilities of MCNP6 and Geant4, it is worthwhile to comment on differences in results for simulations which have similarities in terms of geometries and source configurations.
Modelling of a proton spot scanning system using MCNP6
International Nuclear Information System (INIS)
Ardenfors, O; Gudowska, I; Dasu, A; Kopeć, M
2017-01-01
The aim of this work was to model the characteristics of a clinical proton spot scanning beam using Monte Carlo simulations with the code MCNP6. The proton beam was defined using parameters obtained from beam commissioning at the Skandion Clinic, Uppsala, Sweden. Simulations were evaluated against measurements for proton energies between 60 and 226 MeV with regard to range in water, lateral spot sizes in air and absorbed dose depth profiles in water. The model was also used to evaluate the experimental impact of lateral signal losses in an ionization chamber through simulations using different detector radii. Simulated and measured distal ranges agreed within 0.1 mm for R90 and R80, and within 0.2 mm for R50. The average absolute difference of all spot sizes was 0.1 mm. The average agreement of absorbed dose integrals and Bragg-peak heights was 0.9%. Lateral signal losses increased with incident proton energy, with a maximum signal loss of 7% for 226 MeV protons. The good agreement between simulations and measurements supports the assumptions and parameters employed in the presented Monte Carlo model. The characteristics of the proton spot scanning beam were accurately reproduced, and the model will prove useful in future studies on secondary neutrons. (paper)
Monte Carlo modelling of large scale NORM sources using MCNP.
Wallace, J D
2013-12-01
The representative Monte Carlo modelling of large-scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial-diameter, thin-profile planar cylindrical sources. The relative impacts of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP-based model, indicate that a soil cylinder greater than 20 m in diameter and no less than 50 cm in depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances.
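As an illustration only, the recommended geometry could be expressed as MCNP surface cards like the ones generated below (surface numbers are hypothetical; the cell, material and source cards are omitted):

    def norm_geometry(diameter_m=20.0, soil_cm=50.0, sky_m=20.0):
        r = diameter_m * 100.0 / 2.0               # MCNP lengths are in cm
        return '\n'.join([
            f'10 cz {r:.1f}      $ soil/sky cylinder radius',
            f'11 pz 0.0          $ ground surface',
            f'12 pz {-soil_cm:.1f}        $ bottom of soil layer',
            f'13 pz {sky_m * 100.0:.1f}       $ top of sky region',
        ])

    print(norm_geometry())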
MCNP-based computational model for the Leksell gamma knife.
Trnka, Jiri; Novotny, Josef; Kluson, Jaroslav
2007-01-01
We have focused on the use of the MCNP code for calculation of Gamma Knife radiation field parameters with a homogeneous polystyrene phantom. We have investigated several parameters of the Leksell Gamma Knife radiation field and compared the results with other studies based on the EGS4 and PENELOPE codes as well as with the Leksell Gamma Knife treatment planning system, Leksell GammaPlan (LGP). The current model describes all 201 radiation beams together and simulates all the sources at the same time. Within each beam, it considers the technical construction of the source, the source holder, the collimator system, the spherical phantom and the surrounding material. We have calculated output factors for various sizes of scoring volumes, relative dose distributions along the basic planes including linear dose profiles, integral doses in various volumes, and differential dose volume histograms. All the parameters have been calculated for each collimator size and for the isocentric configuration of the phantom. We have found the calculated output factors to be in agreement with other authors' work except for the 4 mm collimator size, where averaging over the scoring volume and statistical uncertainties strongly influence the calculated results. In general, all the results depend on the choice of the scoring volume. The calculated linear dose profiles and relative dose distributions also match independent studies and the Leksell GammaPlan, but care must be taken with the fluctuations within the plateau, which can influence the normalization, and with the accuracy of determining the isocenter position, which is important for comparing different dose profiles. The calculated differential dose volume histograms and integral doses have been compared with data provided by the Leksell GammaPlan. The dose volume histograms are in good agreement, as are integral doses calculated in small calculation matrix volumes. However, deviations in integral doses up to 50% can be observed for large
Monte Carlo modeling of ion chamber performance using MCNP.
Wallace, J D
2012-12-01
Ion chambers have a generally flat energy response, with some deviations at very low and at high (>2 MeV) energies. Some improvement in the low-energy response can be achieved through the use of high-atomic-number gases, such as argon and xenon, and higher chamber pressures. This work looks at the energy response of high-pressure xenon-filled ion chambers, using the MCNP Monte Carlo package to develop geometric models of a commercially available high-pressure ion chamber (HPIC). The use of the F6 tally as an estimator of the energy deposited per unit mass in a region of interest, and the underlying assumptions associated with its use, are described. The effects of gas composition, chamber gas pressure, chamber wall thickness and chamber holder wall thickness on the energy response are investigated and reported. The predicted energy response curve for the HPIC was found to be similar to that reported by other investigators. These investigations indicate that the overall energy response of the HPIC could be flattened down to 70 keV through the use of 3 mm thick stainless steel walls for the ion chamber.
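For readers unfamiliar with the F6 tally: it scores energy deposition per unit mass in MeV/g per source particle, so converting to an absorbed dose rate needs only a unit constant and the source strength. A sketch with hypothetical numbers:

    MEV_PER_G_TO_GY = 1.602176634e-10   # 1 MeV/g = 1.602e-10 J/kg = Gy

    def dose_rate_gy_per_h(f6_mev_per_g, source_particles_per_s):
        # F6 scores MeV/g per source particle; scale by source strength
        # and seconds per hour to get an absorbed dose rate in Gy/h.
        return f6_mev_per_g * MEV_PER_G_TO_GY * source_particles_per_s * 3600.0

    print(dose_rate_gy_per_h(2.4e-6, 3.7e7))  # hypothetical tally and source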
An improved algorithm to convert CAD model to MCNP geometry model based on STEP file
International Nuclear Information System (INIS)
Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching
2015-01-01
Highlights: • Fully exploits common features of cells, making the processing efficient. • Accurately provides the cell position. • Flexible to add new parameters in the structure. • Applies a novel structure in INP file processing to conveniently evaluate cell location. - Abstract: MCNP (Monte Carlo N-Particle Transport Code) is a general-purpose Monte Carlo code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input file, the INP file, has a complicated format and is error-prone when describing geometric models. Because of this, a conversion algorithm that converts a general geometric model into an MCNP model during MCNP-aided modelling is highly needed. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was proposed after the STEP file and the INP file were analyzed. Results of experiments show that the revised algorithm is more applicable and efficient than the previous work, with optimized extraction of the geometry and topology information of the STEP file, as well as improved production efficiency of the output INP file. This research is promising and serves as a valuable reference for researchers involved in MCNP-related work.
International Nuclear Information System (INIS)
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-01-01
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization and evaluation of data. Extensions have also been added for compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application. (paper)
International Nuclear Information System (INIS)
Ghezzehej, T.
2004-01-01
The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, 'Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: 'Analysis of Hydrologic Properties Data' (BSC 2004 [DIRS 170038]); 'Development of Numerical Grids for UZ Flow and Transport Modeling' (BSC 2004 [DIRS 169855]); 'Simulation of Net Infiltration for Present-Day and Potential Future Climates' (BSC 2004 [DIRS 170007]); 'Geologic Framework Model' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.
MCNP Modeling Results for Location of Buried TRU Waste Drums
International Nuclear Information System (INIS)
Steinman, D K; Schweitzer, J S
2006-01-01
In the 1960s, fifty-five-gallon drums of TRU waste were buried in shallow pits at remote U.S. Government facilities such as the Idaho National Engineering Laboratory (now split into the Idaho National Laboratory and the Idaho Completion Project [ICP]). Subsequently, it was decided to remove the drums and their material from the burial pits and send the material to the Waste Isolation Pilot Plant in New Mexico. Several technologies have been tried to locate the drums non-intrusively with enough precision to minimize the chance of material being spread into the environment. One of these technologies is the placement of steel probe holes in the pits, into which wireline logging probes can be lowered to measure properties and concentrations of material surrounding the probe holes for evidence of TRU material. There is also a concern that large quantities of volatile organic compounds (VOC) are present that would contaminate the environment during removal. In 2001, the Idaho National Engineering and Environmental Laboratory (INEEL) built two pulsed-neutron wireline logging tools to measure TRU and VOC around the probe holes: the Prompt Fission Neutron (PFN) and the Pulsed Neutron Gamma (PNG) tools, respectively. They were tested experimentally in surrogate test holes in 2003. The work reported here estimates the performance of the tools using Monte Carlo modelling prior to field deployment. An MCNP model was constructed by INEEL personnel and modified by the authors to assess the ability of the tools to predict quantitatively the position and concentration of TRU and VOC materials disposed around the probe holes. The model was used to simulate the tools scanning the probe holes vertically in five-centimetre increments. A drum was included in the model that could be placed near the probe hole and at other locations out to forty-five centimetres from the probe hole in five-centimetre increments. Scans were performed with no chlorine in the
Shahmohammadi Beni, Mehrdad; Krstic, Dragana; Nikezic, Dragoslav; Yu, Kwan Ngok
2016-09-01
Many studies on the biological effects of neutrons involve neutron dose responses, which rely on accurately determined absorbed doses in the irradiated cells or living organisms. Absorbed doses are difficult to measure and are commonly surrogated with doses measured using separate detectors. The present work describes the determination of the doses absorbed in the cell layer underneath a medium column (DA) and the doses absorbed in an ionization chamber (DE) from neutrons through computer simulations using the MCNP-5 code, and the subsequent determination of the conversion coefficients R (= DA/DE). It was found that R in general decreased with increasing medium thickness, owing to elastic and inelastic scattering. For 2-MeV neutrons, conspicuous bulges in R values were observed at medium thicknesses of about 500, 1500, 2500 and 4000 μm; these were attributed to carbon, oxygen and nitrogen nuclei, and were reflections of spikes in the neutron interaction cross sections of these nuclei. For 0.1-MeV neutrons, no conspicuous bulges in R were observed (except one at ~2000 μm that was due to photon interactions), which was explained by the absence of prominent spikes in the interaction cross sections of these nuclei for neutron energies <0.1 MeV. The ratio R could be increased by ~50% for small medium thicknesses if the incident neutron energy was reduced from 2 MeV to 0.1 MeV. As such, the absorbed doses in cells (DA) would vary with the incident neutron energy, even when the absorbed doses shown on the detector were the same.
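A minimal sketch of forming such a conversion coefficient from two simulated doses, propagating independent relative errors in quadrature (all values hypothetical):

    import math

    def conversion_coefficient(d_cell, rel_cell, d_chamber, rel_chamber):
        # R = DA/DE with independent relative errors added in quadrature.
        r = d_cell / d_chamber
        return r, r * math.sqrt(rel_cell**2 + rel_chamber**2)

    r, dr = conversion_coefficient(1.9e-11, 0.02, 2.6e-11, 0.02)
    print(f'R = {r:.3f} +/- {dr:.3f}')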
Natto, S A; Lewis, D G; Ryde, S J
1998-01-01
The Monte Carlo computer code MCNP (version 4A) has been used to develop a personal computer-based model of the Swansea in vivo neutron activation analysis (IVNAA) system. The model included specification of the neutron source (252Cf), collimators, reflectors and shielding. The MCNP model was 'benchmarked' against fast neutron and thermal neutron fluence data obtained experimentally from the IVNAA system. The Swansea system allows two irradiation geometries using 'short' and 'long' collimators, which provide alternative dose rates for IVNAA. The data presented here relate to the short collimator, although results of similar accuracy were obtained using the long collimator. The fast neutron fluence was measured in air at a series of depths inside the collimator. The measurements agreed with the MCNP simulation within the statistical uncertainty (5-10%) of the calculations. The thermal neutron fluence was measured and calculated inside the cuboidal water phantom. The depth of maximum thermal fluence was 3.2 cm (measured) and 3.0 cm (calculated). The width of the 50% thermal fluence level across the phantom at its mid-depth was found to be the same by both MCNP and experiment. This benchmarking exercise has given us a high degree of confidence in MCNP as a tool for the design of IVNAA systems.
Effect of the MCNP model definition on the computation time
International Nuclear Information System (INIS)
Šunka, Michal
2017-01-01
The presented work studies the influence of the method of defining the geometry in the MCNP transport code on the computational time, including the difficulty of preparing an input file describing the given geometry. Cases using different geometric definitions, including the use of basic 2-dimensional and 3-dimensional objects and their combinations, were studied. The results indicate that an inappropriate definition can increase the computational time by up to 59% (a more realistic case indicates 37%) for the same results and the same statistical uncertainty. (orig.)
MCNP Techniques for Modeling Sodium Iodide Spectra of Kiwi Surveys
International Nuclear Information System (INIS)
Robert B Hayes
2007-01-01
This work demonstrates how MCNP can be used to predict the response of mobile search and survey equipment from base principles. The instrumentation evaluated comes from the U.S. Department of Energy's Aerial Measurement Systems. By reconstructing detector responses to various point-source measurements, detector responses to distributed sources can be estimated through superposition. Use of this methodology for currently deployed systems allows predictive determination of activity levels and distributions for common configurations of interest. This work helps determine the quality and efficacy of such surveys in fully characterizing an affected site following a radiological event of national interest
International Nuclear Information System (INIS)
Goorley, T.; James, M.; Booth, T.; Brown, F.; Bull, J.; Cox, L.J.; Durkee, J.; Elson, J.; Fensin, M.; Forster, R.A.; Hendricks, J.; Hughes, H.G.; Johns, R.; Kiedrowski, B.; Martz, R.; Mashnik, S.; McKinney, G.; Pelowitz, D.; Prael, R.; Sweezy, J.
2016-01-01
Highlights: • MCNP6 is simply and accurately described as the merger of MCNP5 and MCNPX capabilities, but it is much more than the sum of these two computer codes. • MCNP6 is the result of six years of effort by the MCNP5 and MCNPX code development teams. • These groups, residing in Los Alamos National Laboratory's X Computational Physics Division, Monte Carlo Codes Group (XCP-3) and Nuclear Engineering and Nonproliferation Division, Radiation Transport Modeling Team (NEN-5), respectively, have combined their code development efforts to produce the next evolution of MCNP. • While maintenance and major bug fixes will continue for MCNP5 1.60 and MCNPX 2.7.0 for upcoming years, new capabilities will be developed and released only in MCNP6. • In fact, the initial release of MCNP6 contains numerous new features not previously found in either code; these new features are summarized in this document. • Packaged with MCNP6 is the new production release of the ENDF/B-VII.1 nuclear data files usable by MCNP. • The high quality of the merged code, the usefulness of the new features, and the desire in the user community to start using the merged code have led us to make the first MCNP6 production release: MCNP6 version 1. • High confidence in the MCNP6 code is based on its performance with the verification and validation test suites, comparisons to its predecessor codes, automated nightly software debugger tests, the underlying high-quality nuclear and atomic databases, and significant testing by many beta testers. - Abstract: MCNP6 can be described as the merger of MCNP5 and MCNPX capabilities, but it is much more than the sum of these two computer codes. MCNP6 is the result of six years of effort by the MCNP5 and MCNPX code development teams. These groups of people, residing in Los Alamos National Laboratory's X Computational Physics Division, Monte Carlo Codes Group (XCP-3) and Nuclear Engineering and
A study of MCNP6 benchmark models in HTR-10 control rod reactivity calculations (Studi Model Benchmark MCNP6 dalam Perhitungan Reaktivitas Batang Kendali HTR-10)
Jupiter S. Pane; Zuhair; Suwoto; Putranto Ilham Yazid
2016-01-01
A STUDY OF MCNP6 BENCHMARK MODELS IN HTR-10 CONTROL ROD REACTIVITY CALCULATIONS. In nuclear reactor operation, the control rod system plays a very important role because it is designed to control core reactivity and to shut down the reactor. Control rod reactivity values must be predicted accurately through experiments and calculations. This paper discusses benchmark models for calculating the control rod reactivity of the HTR-10 reactor. The calculations were performed using the transpo...
An evaluation of a manganese bath system having a new geometry through MCNP modelling.
Khabaz, Rahim
2012-12-01
In this study, an approximately symmetric cylindrical manganese bath system with equal diameter and height was appraised using Monte Carlo simulation. For nine sizes of tank filled with MnSO4·H2O solution at three different concentrations, the necessary correction factors involved in the absolute measurement of neutron emission rate were determined by detailed modelling with the MCNP4C code and the ENDF/B-VII.0 neutron cross-section data library. The results obtained were also used to determine the optimum dimensions of the bath for each solution concentration in the calibration of 241Am-Be and 252Cf sources. In addition, the amount of gamma radiation produced by the (n,γ) reaction with the nuclei of the manganese sulphate solution and escaping from the boundary of each tank was evaluated. This gamma component can be important for the background of NaI(Tl) detectors and for radiation protection concerns.
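To illustrate where such correction factors enter, here is a rough sketch of the manganese bath relation: the source emission rate is the saturated 56Mn activity divided by the Mn capture fraction and the product of the corrections. The factor names and values are our illustrative assumptions, not the paper's results:

    def emission_rate(a_sat, f_mn_capture, corrections):
        # Saturated 56Mn activity divided by the fraction of neutrons
        # captured in Mn and by the product of correction factors.
        prod = 1.0
        for c in corrections.values():
            prod *= c
        return a_sat / (f_mn_capture * prod)

    corr = {'neutron leakage': 0.995,              # illustrative values
            'fast capture in O and S': 0.985,
            'source self-absorption': 0.992}
    print(f'{emission_rate(4.8e5, 0.50, corr):.3e} n/s')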
Observation models in radiocarbon calibration
International Nuclear Information System (INIS)
Jones, M.D.; Nicholls, G.K.
2001-01-01
The observation model underlying any calibration process dictates the precise mathematical details of the calibration calculations. Accordingly, it is important that an appropriate observation model is used. Here this is illustrated with reference to the use of reservoir offsets, where the standard calibration approach is based on a different model from that which practitioners clearly believe is being applied. This sort of error can give rise to significantly erroneous calibration results. (author). 12 refs., 1 fig
Directory of Open Access Journals (Sweden)
Sezar Gülbaz
2015-01-01
Land development and increased urbanization in a watershed affect water quantity and water quality. On the one hand, urbanization provokes adjustment of the geomorphic structure of streams and ultimately raises the peak flow rate, which causes flooding; on the other hand, it diminishes water quality, which results in an increase in Total Suspended Solids (TSS). Consequently, sediment accumulation downstream of urban areas is observed, which is not preferred for a longer life of dams. In order to overcome the sediment accumulation problem in dams, the amount of TSS in streams and watersheds should be brought under control. Low Impact Development (LID) is a Best Management Practice (BMP) which may be used for this purpose. It is a land planning and engineering design method applied in managing storm water runoff in order to reduce flooding and simultaneously improve water quality. LID includes techniques to predict suspended solid loads in surface runoff generated over impervious urban surfaces. In this study, the impact of LID-BMPs on surface runoff and TSS is investigated by employing a calibrated hydrodynamic model for Sazlidere Watershed, which is located in Istanbul, Turkey. For this purpose, a calibrated hydrodynamic model was developed using the Environmental Protection Agency Storm Water Management Model (EPA SWMM). For model calibration and validation, we set up a rain gauge and a flow meter in the field and obtained rainfall and flow rate data. We then selected several LID types, such as retention basins, vegetative swales and permeable pavement, and obtained their influence on peak flow rate and on pollutant buildup and washoff for TSS. Consequently, we observed the possible effects of LID on surface runoff and TSS in Sazlidere Watershed.
SURF Model Calibration Strategy
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-10
SURF and SURFplus are high explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition & growth concept of hot spots and, for SURFplus, a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.
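For context, a Pop plot (run distance to detonation versus input pressure) is close to linear in log-log space, so a fit of the kind used in such calibrations can be sketched as follows (the data points are made up):

    import numpy as np

    P = np.array([3.0, 5.0, 8.0, 12.0])            # input pressure, GPa
    x = np.array([12.0, 5.5, 2.4, 1.2])            # run distance, mm
    slope, intercept = np.polyfit(np.log10(P), np.log10(x), 1)
    print(f'log10(x*) = {intercept:.2f} + {slope:.2f} * log10(P)')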
Implementation of 3D models in the Monte Carlo code MCNP
International Nuclear Information System (INIS)
Lopes, Vivaldo; Millian, Felix M.; Guevara, Maria Victoria M.; Garcia, Fermin; Sena, Isaac; Menezes, Hugo
2009-01-01
In the area of numerical dosimetry applied to medical physics, the scientific community has focused on the elaboration of new hybrid phantoms based on 3D models. However, several steps of the process of simulating with 3D models needed improvement and optimization in order to speed up the calculations and increase the accuracy of this methodology. This project was developed with the aim of optimizing the process of introducing 3D models into the Monte Carlo radiation transport simulation code MCNP. Fast implementation of these models in the simulation code allows the dose deposited in the patient's organs to be estimated in a more personalized way, increasing the accuracy of the estimates and reducing the health risks caused by ionizing radiation. The models were introduced into MCNP through an input file constructed from a sequence of two-dimensional images of the 3D model, generated using the program '3DSMAX' and imported by the program 'TOMO M C', and thus supplied as the INPUT FILE of the MCNP code. (author)
An MCNP model of glove boxes in a plutonium processing facility
International Nuclear Information System (INIS)
Dooley, D.E.; Kornreich, D.E.
1998-01-01
Nuclear material processing usually occurs simultaneously in several glove boxes whose primary purpose is to contain radioactive materials and prevent inhalation or ingestion of radioactive materials by workers. A room in the plutonium facility at Los Alamos National Laboratory has been slated for installation of a glove box for storing plutonium metal in various shapes during processing. This storage glove box will be located in a room containing other glove boxes used daily by workers processing plutonium parts. An MCNP model of the room and glove boxes has been constructed to estimate the neutron flux at various locations in the room for two different locations of the storage glove box and to determine the effect of placing polyethylene shielding around the storage glove box. A neutron dose survey of the room with sources dispersed as during normal production operations was used as a benchmark to compare the neutron dose equivalent rates calculated by the MCNP model
Modeling of LVRF critical experiments in ZED-2 using WIMS9A/PANTHER and MCNP5
International Nuclear Information System (INIS)
Sissaoui, M.T.; Carlson, P.A.; Lebenhaft, J.R.
2009-01-01
The accuracy of WIMS9A/PANTHER and MCNP5 in modeling D2O-moderated, and H2O-, D2O- or air-cooled, doubly heterogeneous lattices of fuel clusters was demonstrated using Low Void Reactivity Fuel (LVRF) substitution experiments in the ZED-2 critical facility. MCNP5 with ENDF/B-VI (Release 5) underpredicted keff but gave excellent coolant void reactivity (CVR) bias values. WIMS9A/PANTHER with JEF-2.2 overpredicted keff and underpredicted the CVR bias relative to MCNP5 by 100-200 pcm. Both codes reproduced the measured axial and radial flux shapes accurately.
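For reference, coolant void reactivity and the code-to-code biases quoted in pcm (1 pcm = 10^-5) are differences of reactivities derived from keff. A minimal illustration with invented eigenvalues:

    # Illustrative keff values (not the ZED-2 results).
    k_cooled, k_voided = 0.99650, 0.99825

    rho = lambda k: (1.0 - 1.0 / k) * 1e5    # reactivity in pcm
    cvr = rho(k_voided) - rho(k_cooled)
    print(f"CVR = {cvr:+.0f} pcm")           # ~ +176 pcm for these inputs

A bias between two codes is then simply the difference of their CVR values computed this way.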
Characteristics of Multihole Collimator Gamma Camera Simulation Modeled Using MCNP5
International Nuclear Information System (INIS)
Saripan, M. I.; Mashohor, S.; Adnan, W. A. Wan; Marhaban, M. H.; Hashim, S.
2008-01-01
This paper describes the characteristics of a multihole collimator gamma camera simulated using a combination of the Monte Carlo N-Particle code (MCNP) version 5 and in-house software. The model is constructed based on the GCA-7100A Toshiba gamma camera at the Royal Surrey County Hospital, Guildford, Surrey, UK. The characteristics are analyzed based on the spatial resolution of the images detected by the sodium iodide (NaI) detector. The result is recorded in a list-mode file referred to as a PTRAC file within MCNP5. All pertinent nuclear reaction mechanisms, such as Compton and Rayleigh scattering and photoelectric absorption, are undertaken by MCNP5 for all materials encountered by each photon. The experiments were conducted on Tl-201, Co-57, Tc-99m and Cr-51 radionuclides. The comparison of the full width at half maximum values of each dataset obtained from experimental work, simulation and the literature is also reported in this paper. The simulated data are in agreement with the experimental results and with data obtained in the literature. A careful inspection of each of the data points of the spatial resolution of Tc-99m shows a slight discrepancy between these sets. However, the difference is insignificant, i.e. less than 3 mm, which corresponds to a size of less than 1 pixel (of the segmented detector)
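As background on the FWHM metric used for spatial resolution: fitting a Gaussian to the line-spread profile of detected events gives FWHM = 2*sqrt(2*ln 2)*sigma. A hedged sketch with synthetic events (in practice the positions would be parsed from the MCNP5 PTRAC list-mode file):

    import numpy as np

    rng = np.random.default_rng(1)
    xs = rng.normal(loc=0.0, scale=3.5, size=20000)    # event x-positions (mm)

    sigma = xs.std()
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
    print(f"spatial resolution FWHM = {fwhm:.2f} mm")  # ~8.2 mm for sigma = 3.5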
Energy Technology Data Exchange (ETDEWEB)
Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)
2012-08-15
This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 {sup 125}I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published {sup 125}I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGyh{sup -1} U{sup -1} ({+-}1.73%) and 0.965 cGyh{sup -1} U{sup -1} ({+-}1.68%), respectively. Overall, the MCNP5-derived radial dose function and 2D anisotropy function results were generally closer to the measured data (within {+-}4%) than the MCNP4c2 results and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
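For orientation, the TG-43 quantities referred to here combine as (standard AAPM TG-43 formalism, not text from this abstract):

\[
\dot D(r,\theta) = S_K\,\Lambda\,\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\,g_L(r)\,F(r,\theta),
\qquad
\Lambda = \frac{\dot D(r_0,\theta_0)}{S_K},
\]

with reference point r0 = 1 cm, θ0 = 90 degrees, air-kerma strength S_K (units U = cGy cm2 h-1), geometry function G_L, radial dose function g_L and 2D anisotropy function F; hence the cGy h-1 U-1 units of the dose rate constants quoted above.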
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
Energy Technology Data Exchange (ETDEWEB)
Blakeman, Edward D [ORNL; Peplow, Douglas E. [ORNL; Wagner, John C [ORNL; Murphy, Brian D [ORNL; Mueller, Don [ORNL
2007-09-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
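In outline, CADIS sets the weight-window targets inversely proportional to the adjoint (importance) flux from the discrete ordinates run and biases the source by its importance; ADVANTG automates this for MCNP. A schematic sketch (array names and values are illustrative):

    import numpy as np

    # phi_adj: adjoint flux per (cell, energy-group); q: true source on the same mesh.
    phi_adj = np.array([[8.0, 3.0], [2.0, 1.0], [0.5, 0.2]])
    q       = np.array([[0.0, 0.0], [0.0, 0.0], [0.7, 0.3]])

    R = (q * phi_adj).sum()        # estimated detector response
    q_biased = q * phi_adj / R     # importance-biased source (sums to 1)
    ww = R / phi_adj               # weight-window target weights per cell/group
    print(q_biased.sum(), ww.round(2))

The reciprocal-dose weighting studied at the end of the abstract replaces the single adjoint source with a global one whose strength is weighted by 1/dose from a forward deterministic run.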
MCNP model for the many KE-Basin radiation sources
International Nuclear Information System (INIS)
Rittmann, P.D.
1997-01-01
This document presents a model for the location and strength of radiation sources in the accessible areas of KE-Basin which agrees well with data taken on a regular grid in September of 1996. This modelling work was requested to support dose rate reduction efforts in KE-Basin. Anticipated fuel removal activities require lower dose rates to minimize annual dose to workers. With this model, the effects of component cleanup or removal can be estimated in advance to evaluate their effectiveness. In addition, the sources contributing most to the radiation fields in a given location can be identified and dealt with
Energy Technology Data Exchange (ETDEWEB)
Poškus, Andrius, E-mail: andrius.poskus@ff.vu.lt
2016-02-01
This work evaluates the accuracy of the single-event (SE) and condensed-history (CH) models of electron transport in Monte Carlo simulations of electron backscattering from thick layers of Be, C, Al, Cu, Ag, Au and U at incident electron energies from 200 eV to 15 MeV. The CH method is used in simulations performed with MCNP6.1, and the SE method is used in simulations performed with an open-source single-event code, MCNelectron, written by the author of this paper. Both MCNP6.1 and MCNelectron use mainly ENDF/B-VI.8 library data, but MCNelectron allows replacing the cross sections of certain types of interactions with alternative datasets from other sources. The SE method is evaluated both using only ENDF/B-VI.8 cross sections (the “SE-ENDF/B method”, which is equivalent to using MCNP6.1 in SE mode) and with an alternative set of elastic scattering cross sections obtained from relativistic (Dirac) partial-wave (DPW) calculations (the “SE-DPW method”). It is shown that at energies from 200 eV to 300 keV the estimates of the backscattering coefficients obtained using the SE-DPW method are typically within 10% of the experimental data, which is approximately the same accuracy as is achieved using MCNP6.1 in CH mode. At energies below 1 keV and above 300 keV, the SE-DPW method is much more accurate than the SE-ENDF/B method due to the lack of angular distribution data in the ENDF/B library in those energy ranges. At energies from 500 keV to 15 MeV, the CH approximation is roughly twice as accurate as the SE-DPW method, with average relative errors of 7% and 14%, respectively. The energy probability density functions (PDFs) of backscattered electrons for Al and Cu, calculated using the SE method with DPW cross sections for an incident electron energy of 20 keV, have an average absolute error as low as 4% of the average PDF. This error is approximately half that of the corresponding PDF calculated using the CH approximation. It is concluded
Model Calibration in Option Pricing
Directory of Open Access Journals (Sweden)
Andre Loerx
2012-04-01
Full Text Available We consider calibration problems for models of pricing derivatives which occur in mathematical finance. We discuss various approaches such as using stochastic differential equations or partial differential equations for the modeling process. We discuss the development in the past literature and give an outlook into modern approaches of modelling. Furthermore, we address important numerical issues in the valuation of options and likewise the calibration of these models. This leads to interesting problems in optimization, where, e.g., the use of adjoint equations or the choice of the parametrization for the model parameters plays an important role.
MCNP: Photon benchmark problems
International Nuclear Information System (INIS)
Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.
1991-09-01
The recent widespread, markedly increased use of radiation transport codes has produced greater user and institutional demand for assurance that such codes give correct results. Responding to these pressing requirements for code validation, the general purpose Monte Carlo transport code MCNP has been tested on six different photon problem families. MCNP was used to simulate these six sets numerically. Results for each were compared to the set's analytical or experimental data. MCNP successfully predicted the analytical or experimental results of all six families within the statistical uncertainty inherent in the Monte Carlo method. From this we conclude that MCNP can accurately model a broad spectrum of photon transport problems. 8 refs., 30 figs., 5 tabs
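The acceptance criterion in such benchmarking is typically that calculation and reference agree within combined statistical uncertainties. A minimal sketch of that check:

    # Does a calculated value agree with a reference within k combined sigmas?
    def consistent(calc, sig_calc, ref, sig_ref, k=1.0):
        return abs(calc - ref) <= k * (sig_calc**2 + sig_ref**2) ** 0.5

    print(consistent(1.002, 0.003, 0.998, 0.004))   # True: within 1 sigma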
MCNP modelling of the wall effects observed in tissue-equivalent proportional counters.
Hoff, J L; Townsend, L W
2002-01-01
Tissue-equivalent proportional counters (TEPCs) utilise tissue-equivalent materials to depict homogeneous microscopic volumes of human tissue. Although both the walls and gas simulate the same medium, they respond to radiation differently. Density differences between the two materials cause distortions, or wall effects, in measurements, with the most dominant effect caused by delta rays. This study uses a Monte Carlo transport code, MCNP, to simulate the transport of secondary electrons within a TEPC. The Rudd model, a singly differential cross section with no dependence on electron direction, is used to describe the energy spectrum obtained by the impact of two iron beams on water. Based on the models used in this study, a wall-less TEPC had a higher lineal energy (keV.micron-1) as a function of impact parameter than a solid-wall TEPC for the iron beams under consideration. An important conclusion of this study is that MCNP has the ability to model the wall effects observed in TEPCs.
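For reference, the lineal energy reported in keV.micron-1 is defined from the energy imparted ε and the mean chord length of the simulated site (Cauchy relation for a convex body):

\[
y = \frac{\varepsilon}{\bar\ell}, \qquad \bar\ell = \frac{4V}{S} = \frac{2d}{3}\ \text{for a sphere of diameter } d.
\]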
MCNP and OMEGA criticality calculations
International Nuclear Information System (INIS)
Seifert, E.
1998-04-01
The reliability of OMEGA criticality calculations is shown by a comparison with calculations by the validated and widely used Monte Carlo code MCNP. The criticality of 16 assemblies with uranium as fissionable is calculated with the codes MCNP (Version 4A, ENDF/B-V cross sections), MCNP (Version 4B, ENDF/B-VI cross sections), and OMEGA. Identical calculation models are used for the three codes. The results are compared mutually and with the experimental criticality of the assemblies. (orig.)
Model Calibration in Watershed Hydrology
Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh
2009-01-01
Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
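A minimal illustration of the automatic single-objective case reviewed here: adjust model parameters to maximize a skill score such as the Nash-Sutcliffe efficiency (NSE) between simulated and observed flows. The rainfall-runoff model below is a toy linear reservoir, not one of the models discussed:

    import numpy as np
    from scipy.optimize import minimize

    rain     = np.array([0.0, 10.0, 25.0, 5.0, 0.0, 0.0])
    obs_flow = np.array([0.1, 2.0, 8.0, 4.0, 1.5, 0.6])

    def simulate(params):
        k, s = params                    # recession constant, storage-to-flow scale
        storage, flows = 0.0, []
        for r in rain:
            storage = k * storage + r
            flows.append(s * storage)    # toy linear-reservoir response
        return np.array(flows)

    def neg_nse(params):                 # minimize negative NSE
        sim = simulate(params)
        return np.sum((sim - obs_flow)**2) / np.sum((obs_flow - obs_flow.mean())**2) - 1.0

    res = minimize(neg_nse, x0=[0.5, 0.2], bounds=[(0.0, 0.99), (0.0, 1.0)])
    print("calibrated (k, s):", res.x.round(3), " NSE:", round(-res.fun, 3))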
NaI(Tl) detectors modeling in MCNP-X and Gate/Geant4 codes
Energy Technology Data Exchange (ETDEWEB)
Affonso, Renato Raoni Werneck; Silva, Ademir Xavier da, E-mail: raoniwa@yahoo.com.br, E-mail: ademir@nuclear.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Salgado, Cesar Marques, E-mail: otero@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)
2017-07-01
NaI(Tl) detectors are widely used in gamma-ray densitometry, but their modeling in Monte Carlo codes, such as MCNP-X and Gate/Geant4, requires considerable effort and does not readily yield results comparable with experimental arrangements, possibly due to non-simulated physical phenomena such as light transport within the scintillator. Therefore, a methodology is necessary that positively impacts the results of the simulations while maintaining the real dimensions of the detectors and other objects, to allow validating a model that matches the experimental arrangement. Thus, the objective of this paper is to present the studies conducted with the MCNP-X and Gate/Geant4 codes, in which the comparison of their results was satisfactory, showing that both can be used for the same purposes. (author)
Gas Core Reactor Numerical Simulation Using a Coupled MHD-MCNP Model
Kazeminezhad, F.; Anghaie, S.
2008-01-01
Analysis is provided in this report of using two head-on magnetohydrodynamic (MHD) shocks to achieve supercritical nuclear fission in an axially elongated cylinder filled with UF4 gas as an energy source for deep space missions. The motivation for each aspect of the design is explained and supported by theory and numerical simulations. A subsequent report will provide detail on relevant experimental work to validate the concept. Here the focus is on the theory of and simulations for the proposed gas core reactor conceptual design from the onset of shock generations to the supercritical state achieved when the shocks collide. The MHD model is coupled to a standard nuclear code (MCNP) to observe the neutron flux and fission power attributed to the supercritical state brought about by the shock collisions. Throughout the modeling, realistic parameters are used for the initial ambient gaseous state and currents to ensure a resulting supercritical state upon shock collisions.
Calibration of a portable HPGe detector using MCNP code for the determination of 137Cs in soils
International Nuclear Information System (INIS)
Gutierrez-Villanueva, J.L.; Martin-Martin, A.; Pena, V.; Iniguez, M.P.; Celis, B. de
2008-01-01
In situ gamma spectrometry provides a fast method to determine 137 Cs inventories in soils. To improve the accuracy of the estimates, one can use not only the information on the photopeak count rates but also on the peak to forward-scatter ratios. Before applying this procedure to field measurements, a calibration including several experimental simulations must be carried out in the laboratory. In this paper it is shown that Monte Carlo methods are a valuable tool to minimize the number of experimental measurements needed for the calibration
Modeling of a planning system in radiotherapy and Nuclear Medicine using the MCNP6 code
International Nuclear Information System (INIS)
Massicano, Felipe
2015-01-01
Cancer therapy has many branches, and one of them is the use of radiation sources as the leading treatment method. Radiotherapy and nuclear medicine are examples of these treatment types. Because ionizing radiation is the main tool of the therapy, many treatment simulations need to be performed in order to maximize the dose to tumoral tissue without surpassing the dose limit in the surrounding healthy tissue. Treatment planning systems (TPS) are systems whose purpose is to simulate these therapy types. Nuclear medicine and radiotherapy have many distinct features linked to the therapy mode, and consequently each has different TPS designed for it. Radiotherapy TPS are more developed than nuclear medicine TPS; for that reason, the development of a TPS that is similar to a radiotherapy TPS, but generic enough to include other therapy types, would contribute significant advances to nuclear medicine and to other radiation therapy types. Based on this, the goal of this work was to model a TPS that uses the Monte Carlo N-Particle transport code (MCNP6) to simulate radiotherapy and nuclear medicine treatments, with the potential to simulate other therapy types as well. The result of this work was the creation of an object-oriented framework in the Java language, named IBMC, which will assist in the development of new TPS based on the MCNP6 code. The IBMC allowed TPS for radiotherapy and nuclear medicine to be developed rapidly and easily, and the results were validated against already consolidated systems. The IBMC showed high potential for developing TPS for new therapy types. (author)
International Nuclear Information System (INIS)
Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.
1991-01-01
Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems
International Nuclear Information System (INIS)
Luneville, L.; Chiron, M.; Toubon, H.; Dogny, S.; Huver, M.; Berger, L.
2001-01-01
The research performed jointly over the last three years by the French Atomic Energy Commission (CEA), COGEMA and Eurisys Mesures had as its main subject the realization of a complete modelization tool for the largest range of realistic cases, the Pascalys modelization software. The main purpose of the modelization was to calculate the global measurement efficiency, which gives the most accurate relationship between the photons emitted by the nuclear source (in volume, point or deposited form) and the high-purity germanium detector that detects and analyzes the received photons. It has long been recognized that experimental global measurement efficiency is becoming more and more difficult to establish, especially for complex scenes such as those found in decommissioning and dismantling, or in cases of high activity, for which high-activity reference sources are difficult to use from both the health physics and regulatory points of view. The choice of a calculation code is fundamental if accurate modelization is sought. MCNP represents the reference code, but its use is computationally time-consuming and therefore not practicable in-line in the field. A direct line-of-sight point-kernel code such as the French Atomic Energy Commission's 3-D analysis code Mercure can represent a practicable compromise between the most accurate MCNP reference code and the performance needed for realistic modelization. The comparison between the results of Pascalys-Mercure and the MCNP code, taking into account the latest improvements of Mercure in the low energy range where the largest errors can occur, is presented in this paper, the Mercure code being supported in-line by the recent Pascalys 3-D scene modelization software. The influence of the intrinsic efficiency of the germanium detector on the total measurement efficiency is also addressed. (authors)
Human eye analytical and mesh-geometry models for ophthalmic dosimetry using MCNP6
International Nuclear Information System (INIS)
Angelocci, Lucas V.; Fonseca, Gabriel P.; Yoriyaz, Helio
2015-01-01
Eye tumors can be treated with brachytherapy using Co-60 plaques, I-125 seeds, among other materials. The human eye has regions particularly vulnerable to ionizing radiation (e.g. the crystalline lens), and dosimetry for this region must be handled carefully. A mathematical model was proposed in the past [1] for the eye anatomy to be used in Monte Carlo simulations to account for dose distribution in ophthalmic brachytherapy. The model includes the description of internal structures of the eye that were not treated in previous works. The aim of the present work was to develop a new eye model based on the mesh geometries of the MCNP6 code. The methodology utilized the ABAQUS/CAE (Simulia 3DS) software to build the mesh geometry. For this work, an ophthalmic applicator containing up to 24 model Amersham 6711 I-125 seeds (Oncoseed) was used, positioned in contact with a generic tumor defined analytically inside the eye. The absorbed dose in eye structures like the cornea, sclera, choroid, retina, vitreous body, lens, optic nerve and optic nerve wall was calculated using both models: analytical and mesh. (author)
Development and validation of a model TRIGA Mark III reactor with code MCNP5
International Nuclear Information System (INIS)
Galicia A, J.; Francois L, J. L.; Aguilar H, F.
2015-09-01
The main purpose of this paper is to obtain a model of the TRIGA Mark III reactor core that accurately represents the real operating conditions at 1 MWth, using the Monte Carlo code MCNP5. To provide a more detailed analysis, different models of the reactor core were realized by simulating the control rods extracted and inserted in cold conditions (293 K), including an analysis of the shutdown margin, so that the Operation Technical Specifications were satisfied. The positions the control rods must have to reach a power of 1 MWth were obtained from the practice entitled Operation in Manual Mode performed at the Instituto Nacional de Investigaciones Nucleares (ININ). Later, the behavior of keff was analyzed considering different temperatures in the fuel elements, subsequently calculating the values that best represent actual reactor operation. Finally, the calculations performed with the developed model to obtain the distribution of the average thermal, epithermal and fast neutron flux in the six new experimental facilities are presented. (Author)
MCNP6 model of the University of Washington clinical neutron therapy system (CNTS).
Moffitt, Gregory B; Stewart, Robert D; Sandison, George A; Goorley, John T; Argento, David C; Jevremovic, Tatjana
2016-01-21
An MCNP6 dosimetry model is presented for the Clinical Neutron Therapy System (CNTS) at the University of Washington. In the CNTS, fast neutrons are generated by a 50.5 MeV proton beam incident on a 10.5 mm thick Be target. The production, scattering and absorption of neutrons, photons, and other particles are explicitly tracked throughout the key components of the CNTS, including the target, primary collimator, flattening filter, monitor unit ionization chamber, and multi-leaf collimator. Simulations of the open field tissue maximum ratio (TMR), percentage depth dose profiles, and lateral dose profiles in a 40 cm × 40 cm × 40 cm water phantom are in good agreement with ionization chamber measurements. For a nominal 10 × 10 field, the measured and calculated TMR values for depths of 1.5 cm, 5 cm, 10 cm, and 20 cm (compared to the dose at 1.7 cm) are within 0.22%, 2.23%, 4.30%, and 6.27%, respectively. For the three field sizes studied, 2.8 cm × 2.8 cm, 10.4 cm × 10.3 cm, and 28.8 cm × 28.8 cm, gamma tests comparing the measured and simulated percent depth dose curves have pass rates of 96.4%, 100.0%, and 78.6% (depths from 1.5 to 15 cm), respectively, using a 3% or 3 mm agreement criterion. At a representative depth of 10 cm, simulated lateral dose profiles have in-field (⩾ 10% of central axis dose) pass rates of 89.7% (2.8 cm × 2.8 cm), 89.6% (10.4 cm × 10.3 cm), and 100.0% (28.8 cm × 28.8 cm) using a 3% and 3 mm criterion. The MCNP6 model of the CNTS meets the minimum requirements for use as a quality assurance tool for treatment planning and provides useful insights and information to aid in the advancement of fast neutron therapy.
International Nuclear Information System (INIS)
Valentine, T.E.
1997-01-01
The Monte Carlo code MCNP-DSP was developed from the Los Alamos MCNP4a code to calculate the time and frequency response statistics obtained from the 252Cf-source-driven frequency analysis measurements. This code can be used to validate calculational methods and cross section data sets from subcritical experiments. This code provides a more general model for interpretation and planning of experiments for nuclear criticality safety, nuclear safeguards, and nuclear weapons identification, and replaces the use of point kinetics models for interpreting the measurements. The use of MCNP-DSP extends the usefulness of this measurement method to systems with much lower neutron multiplication factors
Calibration and simulation of Heston model
Directory of Open Access Journals (Sweden)
Mrázek Milan
2017-05-01
Full Text Available We calibrate the Heston stochastic volatility model to real market data using several optimization techniques. We compare both global and local optimizers for different weights, showing remarkable differences even for data (DAX options) from two consecutive days. We provide a novel calibration procedure that incorporates the use of an approximation formula and significantly outperforms other existing calibration methods.
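Schematically, such a calibration is weighted least squares over quoted option prices, run first with a global search and then polished locally. In the sketch below a toy pricing function stands in for the Heston semi-closed-form formula (too long to reproduce here); the two-stage optimizer structure is the point:

    import numpy as np
    from scipy.optimize import differential_evolution, least_squares

    strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
    quotes  = np.array([21.0, 13.1, 7.0, 3.2, 1.3])       # market prices (invented)
    weights = 1.0 / np.array([0.8, 0.5, 0.3, 0.4, 0.7])   # e.g. inverse bid-ask spreads

    def model_price(params, K, S0=100.0):
        a, b = params                   # stand-in parameters, not Heston's five
        return np.maximum(S0 - K, 0.0) + a * np.exp(-(K - S0)**2 / (2.0 * b**2))

    def resid(params):
        return weights * (model_price(params, strikes) - quotes)

    glob = differential_evolution(lambda p: float(np.sum(resid(p)**2)),
                                  bounds=[(0.1, 20.0), (1.0, 60.0)], seed=0)
    loc = least_squares(resid, glob.x)  # local polish of the global optimum
    print("calibrated parameters:", loc.x.round(3))

The choice of weights is exactly where the day-to-day differences reported in the abstract enter.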
An MCNP-based model of a medical linear accelerator x-ray photon beam.
Ajaj, F A; Ghassal, N M
2003-09-01
The major components in the x-ray photon beam path of the treatment head of the VARIAN Clinac 2300 EX medical linear accelerator were modeled and simulated using the Monte Carlo N-Particle radiation transport computer code (MCNP). Simulated components include the x-ray target, primary conical collimator, x-ray beam flattening filter and secondary collimators. X-ray photon energy spectra and angular distributions were calculated using the model. The x-ray beam emerging from the secondary collimators was scored by considering the total x-ray spectrum from the target as the source of x-rays at the target position. The depth dose distribution and dose profiles at different depths and field sizes have been calculated at a nominal operating potential of 6 MV and found to be within acceptable limits. It is concluded that accurate specification of the component dimensions, composition and nominal accelerating potential gives a good assessment of the x-ray energy spectra.
International Nuclear Information System (INIS)
Cramer, S.N.
1984-01-01
The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendent of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids
Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code
International Nuclear Information System (INIS)
He, Tongming Tony
2003-01-01
Inaccurate dose calculations and limitations of optimization algorithms in inverse planning introduce systematic and convergence errors to treatment plans. This work was to implement a Monte Carlo based inverse planning model for clinical IMRT aiming to minimize the aforementioned errors. The strategy was to precalculate the dose matrices of beamlets in a Monte Carlo based method, followed by the optimization of beamlet intensities. The MCNP 4B (Monte Carlo N-Particle version 4B) code was modified to implement selective particle transport and dose tallying in voxels and efficient estimation of statistical uncertainties. The resulting performance gain was over eleven thousand times. Due to concurrent calculation of multiple beamlets of individual ports, hundreds of beamlets in an IMRT plan could be calculated within a practical length of time. A finite-sized point source model provided a simple and accurate modeling of treatment beams. The dose matrix calculations were validated through measurements in phantoms. Agreements were better than 1.5% or 0.2 cm. The beamlet intensities were optimized using a parallel-platform-based optimization algorithm capable of escaping local minima and preventing premature convergence. The Monte Carlo based inverse planning model was applied to clinical cases. The feasibility and capability of Monte Carlo based inverse planning for clinical IMRT was demonstrated. Systematic errors in treatment plans of a commercial inverse planning system were assessed in comparison with the Monte Carlo based calculations. Discrepancies in tumor doses and critical structure doses were up to 12% and 17%, respectively. The clinical importance of Monte Carlo based inverse planning for IMRT was demonstrated
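At its core, the optimization step described here has a constrained least-squares structure: choose nonnegative beamlet intensities so that the superposition of the precalculated dose matrices approaches the prescription. A minimal sketch (toy numbers, not the author's parallel algorithm):

    import numpy as np
    from scipy.optimize import nnls

    # D[i, j]: Monte Carlo dose to voxel i per unit intensity of beamlet j;
    # d[i]: prescribed dose per voxel. Toy values.
    D = np.array([[0.9, 0.1, 0.0],
                  [0.4, 0.5, 0.1],
                  [0.0, 0.2, 0.8]])
    d = np.array([60.0, 60.0, 20.0])

    w, rnorm = nnls(D, d)    # nonnegative beamlet intensities
    print("intensities:", w.round(2), " residual:", round(rnorm, 3))

Clinical objective functions add structure-specific penalties on top of this convex core, which is what makes escaping local minima relevant.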
Energy Technology Data Exchange (ETDEWEB)
Menezes, Artur F.; Reis Junior, Juraci P.; Silva, Ademir X., E-mail: ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear; Rosa, Luiz A.R. da, E-mail: lrosa@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Facure, Alessandro [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Cardoso, Simone C., E-mail: Simone@if.ufrj.b [Universidade Federal do Rio de Janeiro (IF/UFRJ), RJ (Brazil). Inst. de Fisica. Dept. de Fisica Nuclear
2011-07-01
Brachytherapy is used in cancer treatment at short distances through the use of small encapsulated sources of ionizing radiation. In such treatments, a radiation source is positioned directly into or near the target volume to be treated. In this study the Monte Carlo based MCNPX code was used to model and simulate the I-125 Amersham Health source model 6711 and the Pd-103 Prospera source model MED3633 in order to obtain the dosimetric parameter dose rate constant ({Lambda}). The source geometries were modeled and implemented in the MCNPX code. The dose rate constant is an important parameter in the planning of prostate LDR brachytherapy treatments. This study was based on the American Association of Physicists in Medicine (AAPM) recommendations produced by its Task Group 43. The dose rate constants obtained were 0.941 and 0.65 cGy h{sup -1} U{sup -1} for the I-125 and Pd-103 sources, respectively. They present good agreement with literature values based on different Monte Carlo codes. (author)
International Nuclear Information System (INIS)
Shypailo, R J; Ellis, K J
2011-01-01
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
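The end product of such a calibration is an efficiency function that turns net 40K counts into grams of potassium. Schematically (the functional form and all coefficients below are invented for illustration; they are not the CNRC equations):

    import numpy as np

    def efficiency(weight_kg, height_cm, c=(-3.0, -0.15, -0.10)):
        # invented log-linear efficiency model, fitted to MCNP phantom runs
        return np.exp(c[0] + c[1] * np.log(weight_kg) + c[2] * height_cm / 100.0)

    GAMMAS_PER_S_PER_G_K = 3.3   # approx. 1.46 MeV gammas emitted per second per gram K

    def tbk_grams(net_cps, weight_kg, height_cm):
        return net_cps / (efficiency(weight_kg, height_cm) * GAMMAS_PER_S_PER_G_K)

    print(f"TBK ~ {tbk_grams(5.0, 30.0, 130.0):.0f} g")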
Ceccolini, E.; Gerardy, I.; Ródenas, J.; van Dycke, M.; Gallardo, S.; Mostacci, D.
Brachytherapy is an advanced cancer treatment that is minimally invasive, minimising radiation exposure to the surrounding healthy tissues. Microselectron© Nucletron devices with 192Ir source can be used for gynaecological brachytherapy, in patients with vaginal or uterine cancer. Measurements of isodose curves have been performed in a PMMA phantom and compared with Monte Carlo calculations and TPS (Plato software of Nucletron BPS 14.2) evaluation. The isodose measurements have been performed with radiochromic films (Gafchromic EBT©). The dose matrix has been obtained after digitalisation and use of a dose calibration curve obtained with a 6 MV photon beam provided by a medical linear accelerator. A comparison between the calculated and the measured matrix has been performed. The calculated dose matrix is obtained with a simulation using the MCNP5 Monte Carlo code (F4MESH tally).
Calibration of PMIS pavement performance prediction models.
2012-02-01
Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...
Potential MCNP enhancements for NCT
International Nuclear Information System (INIS)
Estes, G.P.; Taylor, W.M.
1992-01-01
MCNP, a Monte Carlo radiation transport code, is currently widely used in the medical community for a variety of purposes including treatment planning, diagnostics, beam design, tomographic studies, and radiation protection. This is particularly true in the Neutron Capture Therapy (NCT) community. The current widespread medical use of MCNP after its general public distribution in about 1980 attests to the code's general versatility and usefulness, particularly since its development to date has not been influenced by medical applications. This paper discusses enhancements to MCNP that could be implemented at Los Alamos for the benefit of the NCT community. These enhancements generally fall into two categories, namely those that have already been developed to some extent but are not yet publicly available, and those that seem both needed based on our current understanding of NCT goals, and achievable based on our working knowledge of the MCNP code. MCNP is a general, coupled neutron/photon/electron Monte Carlo code developed and maintained by the Radiation Transport Group at Los Alamos. It has been used extensively for radiation shielding studies, reactor analysis, detector design, physics experiment interpretation, oil and gas well logging, radiation protection studies, accelerator design, etc. over the years. MCNP is a three-dimensional geometry, continuous energy physics code capable of modeling complex geometries, specifying material regions such as organs by the intersections of analytical surfaces
Error-in-variables models in calibration
Lira, I.; Grientschnig, D.
2017-12-01
In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
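A hedged formalization of the simplest EIV calibration setting discussed here (our notation): for a straight-line calibration function,

\[
x_i = \xi_i + \delta_i, \qquad y_i = a + b\,\xi_i + \varepsilon_i, \qquad
\delta_i \sim N(0,\sigma_\delta^2), \quad \varepsilon_i \sim N(0,\sigma_\varepsilon^2),
\]

where the ξi are the true (unknown) stimulus values. Ordinary regression of y on x, which assumes δi = 0, yields a slope estimate attenuated by the factor σξ²/(σξ² + σδ²), which is why dedicated EIV methods are needed whenever the stimuli are not perfectly known.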
MCNP modelling of scintillation-detector gamma-ray spectra from natural radionuclides.
Hendriks, P H G M; Maucec, M; de Meijer, R J
2002-09-01
Gamma-ray spectra of natural radionuclides are simulated for a BGO detector in a borehole geometry using the Monte Carlo code MCNP. All gamma-ray emissions of the decay of 40K and the series of 232Th and 238U are used to describe the source. A procedure is proposed which excludes the time-consuming electron tracking in less relevant areas of the geometry. The simulated gamma-ray spectra are benchmarked against laboratory data.
MCNP modeling of NORM dosimetry in the oil and gas industry
International Nuclear Information System (INIS)
Siqiu Wang
2016-01-01
Naturally-occurring radioactive material wastes in the oil and gas industry create a radioactive environment for workers in the field. The MCNP simulations conducted in this work provide a useful tool for the radiation safety design of the oil field, as well as a validation of, and an important addition to, in situ measurements. Furthermore, phantoms are employed to observe the dose distribution throughout the human body, demonstrating the radiation effects on each individual organ. (author)
Zhang, Xiaomin; Xie, Xiangdong; Cheng, Jie; Ning, Jing; Yuan, Yong; Pan, Jie; Yang, Guoshan
2012-01-01
A set of conversion coefficients from kerma free-in-air to organ absorbed dose for external photon beams from 10 keV to 10 MeV is presented, based on a newly developed voxel mouse model, for the purpose of radiation effect evaluation. The voxel mouse model was developed from colour images of successive cryosections of a normal nude male mouse, in which 14 organs or tissues were segmented manually and filled with different colours, with each colour tagged by a specific ID number for implementation of the mouse model in the Monte Carlo N-Particle code (MCNP). Monte Carlo simulation with MCNP was carried out to obtain organ dose conversion coefficients for 22 external monoenergetic photon beams between 10 keV and 10 MeV under five different irradiation geometries (left lateral, right lateral, dorsal-ventral, ventral-dorsal, and isotropic). Organ dose conversion coefficients are presented in tables and compared with published data based on a rat model to investigate the effect of body size and weight on organ dose. The results show that the organ dose conversion coefficients as a function of photon energy exhibit a similar trend for most organs, except for the bone and skin, and that the organ dose is sensitive to body size and weight at photon energies below approximately 0.1 MeV.
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
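Schematically (our paraphrase, not the paper's notation), the augmented objective is the curve-fitting error plus weighted hint penalties:

\[
E(\mathbf p) = E_{\mathrm{fit}}(\mathbf p) + \sum_k \lambda_k\, E_k(\mathbf p),
\qquad
E_k(\mathbf p) = \mathrm{KL}\big(\hat\pi \,\big\|\, \pi(\mathbf p)\big),
\]

where each hint error E_k uses the Kullback-Leibler distance between a distribution implied by the calibrated parameters, π(p), and its empirically valid counterpart π̂, and the weights λk are balanced so that no single term dominates the fit.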
Iowa calibration of MEPDG performance prediction models.
2013-06-01
This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...
International Nuclear Information System (INIS)
Youssef, M.Z.; Feder, R.; Davis, I.
2007-01-01
The ITER IT has adopted the newly developed FEM, 3-D, CAD-based discrete ordinates code ATTILA for its neutronics studies, contingent on its success in predicting key neutronics parameters and nuclear fields according to the stringent QA requirements set forth by the Management and Quality Program (MQP). ATTILA has the advantage of providing a full mapping of the flux and response functions everywhere in one run, so that components subjected to excessive radiation levels and strong streaming paths can be identified. The ITER neutronics community agreed to use a standard CAD model of ITER (a 40 degree sector, denoted the 'Benchmark CAD Model') to compare results for several responses selected for calculation benchmarking purposes, to test the efficiency and accuracy of the CAD-MCNP approach developed by each party. Since ATTILA lends itself as a powerful design tool with minimal turnaround time, it was decided to benchmark this model with ATTILA as well and compare the results to those obtained with the CAD-MCNP calculations. In this paper we report such a comparison for five responses, namely: (1) neutron wall load on the surface of the 18 shield blanket modules (SBM), (2) neutron flux and nuclear heating rate in the divertor cassette, (3) nuclear heating rate in the winding pack of the inner leg of the TF coil, (4) radial flux profile across a dummy port plug and shield plug placed in the equatorial port, and (5) flux at seven point locations situated behind the equatorial port plug. (orig.)
MCNP variance reduction overview
International Nuclear Information System (INIS)
Hendricks, J.S.; Booth, T.E.
1985-01-01
The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code
Logarithmic transformed statistical models in calibration
International Nuclear Information System (INIS)
Zeis, C.D.
1975-01-01
A general type of statistical model used for calibration of instruments having the property that the standard deviations of the observed values increase as a function of the mean value is described. The application to the Helix Counter at the Rocky Flats Plant is primarily from a theoretical point of view. The Helix Counter measures the amount of plutonium in certain types of chemicals. The method described can be used also for other calibrations. (U.S.)
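A hedged formalization of the model class described (constant relative rather than absolute error): if the standard deviation of the response grows in proportion to its mean, taking logarithms approximately stabilizes the variance,

\[
y \sim N\big(\mu(x),\,(c\,\mu(x))^2\big)
\;\Longrightarrow\;
\ln y \approx N\big(\ln\mu(x),\, c^2\big),
\]

so ordinary least squares can be applied to ln y against the logarithm of the calibration function, ln μ(x).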
MCNP(TM) Software Quality Assurance plan
International Nuclear Information System (INIS)
Abhold, H.M.; Hendricks, J.S.
1996-04-01
MCNP is a computer code that models the interaction of radiation with matter. MCNP is developed and maintained by the Transport Methods Group (XTM) of the Los Alamos National Laboratory (LANL). This plan describes the Software Quality Assurance (SQA) program applied to the code. The SQA program is consistent with the requirements of IEEE-730.1 and the guiding principles of ISO 9000
A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility
International Nuclear Information System (INIS)
Galford, J.E.
2017-01-01
The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by using Monte Carlo modeling to obtain accurate counting rate-to-API unit conversion factors for gross-counting and spectral gamma ray tool designs. - Highlights: • A Monte Carlo alternative is proposed to replace empirical calibration procedures. • The proposed Monte Carlo alternative preserves the original API unit definition. • MCNP source and materials descriptions are provided for the API gamma ray pit. • Simulated results are presented for several wireline logging tool designs. • The proposed method can be adapted for use with logging-while-drilling tools.
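As background on how such conversion factors work: the API gamma ray unit is defined so that the difference in tool response between the pit's high- and low-activity zones spans 200 API units, giving a linear counts-to-API mapping. A sketch with invented tally values:

    # Simulated (MCNP) count rates of a hypothetical tool in the two API pit zones.
    r_low, r_high = 15.0, 165.0                 # cps in low- and high-activity zones
    api_per_cps = 200.0 / (r_high - r_low)      # definition: 200 API across the zones

    def to_api(count_rate_cps):
        return (count_rate_cps - r_low) * api_per_cps

    print(f"{to_api(90.0):.0f} API")            # 100 API at the midpoint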
Wangerin, K; Culbertson, C N; Jevremovic, T
2005-08-01
The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for gadolinium neutron capture therapy (GdNCT) related modeling. The validity of the COG NCT model had been established previously; here the calculation was extended to analyze the effect of various gadolinium concentrations on the dose distribution and cell-kill effect of the GdNCT modality and to determine the optimum therapeutic conditions for treating brain cancers. The computational results were compared with those of the widely used MCNP code. The differences between the COG and MCNP predictions were generally small and suggest that the COG code can be applied to similar research problems in NCT. Results of this study also showed that a concentration of 100 ppm gadolinium in the tumor was most beneficial when using an epithermal neutron beam.
International Nuclear Information System (INIS)
Hendricks, J.S.
1994-01-01
The MCNP code development program is a relatively large and rapidly changing project in the small and highly-specialized field of radiation transport, specifically radiation protection and shielding. A number of major new MCNP initiatives are described in the subsequent papers in this session. The focus of this paper is the important new developments not described elsewhere and a number of recent developments that have been available since MCNP4A but have gone unnoticed. In particular, we report for the first time a new MCNP quality assurance initiative providing 97% test coverage, a new MCNP feature enabling plotting of nuclear data, and the other new features developed so far for MCNP4B. Finally, an attempt is made to articulate how all these fit together into the overall MCNP development program
Development and Application of MCNP5 and KENO-VI Monte Carlo Models for the Atucha-2 PHWR Analysis
Directory of Open Access Journals (Sweden)
M. Pecchia
2011-01-01
Full Text Available The geometrical complexity and the peculiarities of Atucha-2 PHWR require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Core models of Atucha-2 PHWR were developed using both the MCNP5 and KENO-VI codes. The developed models were applied to calculating reactor criticality states at beginning of life, reactor cell constants, and control rod volumes. The last two applications were relevant for performing subsequent three-dimensional neutron kinetic analyses, since it was necessary to correctly evaluate the effect of each oblique control rod in each cell discretizing the reactor. These corrective factors were then applied to the cell cross sections calculated by the two-dimensional deterministic lattice physics code HELIOS. These results were implemented in the RELAP-3D model to perform safety analyses for the licensing process.
Culbertson, C N; Wangerin, K; Ghandourah, E; Jevremovic, T
2005-08-01
The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for neutron capture therapy related modeling. A boron neutron capture therapy model was analyzed by comparing COG calculational results to results from the widely used MCNP4B (Monte Carlo N-Particle) transport code. The approach for computing the neutron fluence rate and each dose component relevant to boron neutron capture therapy is described, and calculated values are shown in detail. The differences between the COG and MCNP predictions are qualified and quantified. The differences are generally small and suggest that the COG code can be applied to BNCT research related problems.
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
Energy Technology Data Exchange (ETDEWEB)
Pecchia, M.; D'Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy); Mazzantini, O. [Nucleoelectrica Argentina Sociedad Anonima NA-SA, Buenos Aires (Argentina)
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rods on the neutron flux, in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)
International Nuclear Information System (INIS)
Muhrer, G.; Ferguson, P.D.; Russell, G.J.; Pitcher, E.J.
2000-01-01
During the design of the Manuel Lujan, Jr., Neutron Scattering Center target, a simplified Monte Carlo model was used to estimate target system performance and to aid engineers as decisions were made regarding the construction of the target system. Although the simplified model ideally would perfectly reflect the as-built system performance, assumptions were made in the model during the design process that may result in deviations between the model predictions and the as-built system performance. Now that the Lujan Center target system has been completed, a more detailed, as-built, model of the target system has been completed. The purpose of this work is to investigate differences between the predicted target system performance of the simplified model and the as-built model from the standpoint of time-averaged moderator brightness. Calculated discrepancies between the two models have been isolated to a few key issues. Figure 1 shows MCNP geometric plots of the simplified and as-built models. Major differences between these two models include details in the moderator designs (plena) and piping, full versus partial moderator canisters (only in the direction of the extracted neutron beam for the simplified model), and reflector details including cooling pipes and engineering tolerance gaps. In addition, Fig. 1 demonstrates that the detailed model includes shielding and additional material beyond that which was modeled by the original simplified model
Calibration of CORSIM models under saturated traffic flow conditions.
2013-09-01
This study proposes a methodology to calibrate microscopic traffic flow simulation models. The proposed methodology has the capability to simultaneously calibrate all the calibration parameters as well as demand patterns for any network topology...
The new MCNP6 depletion capability
International Nuclear Information System (INIS)
Fensin, M. L.; James, M. R.; Hendricks, J. S.; Goorley, J. T.
2012-01-01
The first MCNP based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology. (authors)
International Nuclear Information System (INIS)
Katalenich, Jeff; Flaska, Marek; Pozzi, Sara A.; Hartman, Michael R.
2011-01-01
Fast and robust methods for interrogation of special nuclear material (SNM) are of interest to many agencies and institutions in the United States. It is well known that passive interrogation methods are typically sufficient for plutonium identification because of a relatively high neutron production rate from ²⁴⁰Pu. On the other hand, identification of shielded uranium requires active methods using neutron or photon sources. Deuterium-deuterium (2.45 MeV) and deuterium-tritium (14.1 MeV) neutron-generator sources have been previously tested and proven to be relatively reliable instruments for active interrogation of nuclear materials. In addition, the newest generators of this type are small enough for applications requiring portable interrogation systems. Active interrogation techniques using high-energy neutrons are being investigated as a method to detect hidden SNM in shielded containers. Due to the thickness of some containers, penetrating radiation such as high-energy neutrons can provide a potential means of probing shielded SNM. In an effort to develop the capability to assess the signal seen from various forms of shielded nuclear materials, the University of Michigan Neutron Science Laboratory's D-T neutron generator and its shielding were accurately modeled in MCNP. The generator, while operating at nominal power, produces approximately 1×10¹⁰ neutrons/s, a source intensity which requires a large amount of shielding to minimize the dose rates around the generator. For this reason, the existing shielding completely encompasses the generator and does not include beam ports. Therefore, several MCNP simulations were performed to estimate the yield of uncollided 14.1-MeV neutrons from the generator for active interrogation experiments. Beam port diameters of 5, 10, 15, 20, and 25 cm were modeled to assess the resulting neutron fluxes. The neutron flux outside the beam ports was estimated to be approximately 2×10⁴ n/cm²·s.
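For rough orientation, the order of magnitude of such an uncollided beam-port flux can be checked with a bare point-source estimate, as sketched below; the source-to-port-exit distance used here is a hypothetical placeholder, not a dimension from the Michigan shielding model.

```python
import math

S = 1e10   # generator source strength, neutrons/s (nominal value from the text)
r = 200.0  # hypothetical source-to-port-exit distance, cm

# Uncollided flux from an isotropic point source, ignoring the collimating
# effect of the port walls and any attenuation in air:
phi = S / (4.0 * math.pi * r**2)
print(f"uncollided flux ~ {phi:.1e} n/cm^2/s")  # ~2e4 for r = 200 cm
```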
International Nuclear Information System (INIS)
Kalcheva, S.; Koonen, E.; Ponsard, B.
2005-01-01
The Belgian Material Test Reactor (MTR) BR2 is a strongly heterogeneous, high-flux engineering test reactor at SCK-CEN (Centre d'Etude de l'Energie Nucleaire) in Mol, operating at a thermal power of 60 to 100 MW. It deploys highly enriched uranium, water-cooled, concentric-plate fuel elements positioned inside a beryllium reflector with a complex hyperboloid arrangement of test holes. The objective of this paper is the validation of an MCNP and ORIGEN-S 3D model for reactivity predictions of the entire BR2 core during reactor operation. We employ the Monte Carlo code MCNP-4C for evaluating the effective multiplication factor k_eff and the 3D space-dependent specific power distribution. The 1D code ORIGEN-S is used to calculate isotopic fuel depletion versus burnup and to prepare a database (DB) of depleted fuel compositions. The approach taken is to evaluate the 3D power distribution at each time step, use it together with the DB to evaluate the 3D isotopic fuel depletion at the next step, and deduce the corresponding shim rod positions during reactor operation. The capabilities of both codes are fully exploited without constraints on the number of involved isotope depletion chains or increase of the computational time. The reactor has a complex operation history, with important shutdowns between cycles, and its reactivity is strongly influenced by poisons, mainly ³He and ⁶Li from the beryllium reflector, and by the burnable absorbers ¹⁴⁹Sm and ¹⁰B in the fresh UAlx fuel. Our computational predictions for the shim rod positions at various restarts are within 0.5 $ (β_eff = 0.0072). (author)
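The alternating transport/depletion scheme described above can be summarized by the loop sketched below. The function names and the contents of the depletion database are hypothetical stand-ins for the MCNP-4C and ORIGEN-S steps, intended only to show the flow of data between them.

```python
# Hypothetical sketch of the MCNP/ORIGEN-S coupling loop described above.
# mcnp_power_distribution() stands in for an MCNP-4C run returning the 3D
# relative power of each depletion zone; lookup_depleted_composition() stands
# in for the ORIGEN-S-generated database of burnup-dependent compositions.

def mcnp_power_distribution(compositions):       # placeholder for MCNP-4C
    return {zone: 1.0 / len(compositions) for zone in compositions}

def lookup_depleted_composition(zone, burnup):   # placeholder for the DB
    return {"U235": max(0.0, 1.0 - 0.01 * burnup)}

compositions = {z: {"U235": 1.0} for z in ("A", "B", "C")}
burnup = {z: 0.0 for z in compositions}

for step_days in (7.0, 7.0, 7.0):                # operating time steps
    power = mcnp_power_distribution(compositions)  # 3D power at this step
    for zone in compositions:
        burnup[zone] += power[zone] * step_days    # zone-wise burnup increment
        compositions[zone] = lookup_depleted_composition(zone, burnup[zone])
    # (in the real scheme, the shim rod positions would be deduced here)
print(burnup)
```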
Calibration models for high enthalpy calorimetric probes.
Kannel, A
1978-07-01
The accuracy of gas-aspirated, liquid-cooled calorimetric probes used for measuring the enthalpy of high-temperature gas streams is studied. The error in the differential temperature measurements caused by internal and external heat transfer interactions is considered and quantified by mathematical models. The analysis suggests calibration methods for the evaluation of the dimensionless heat transfer parameters in the models, which can then give a more accurate value for the enthalpy of the sample. Calibration models for four types of calorimeter are applied to results from the literature and from the author's own experiments: a circular-slit calorimeter developed by the author, a single-cooling-jacket probe, a double-cooling-jacket probe, and a split-flow cooling-jacket probe. The results show that the models are useful for describing and correcting the temperature measurements.
Calculation of power density with MCNP in TRIGA reactor
International Nuclear Information System (INIS)
Snoj, L.; Ravnik, M.
2006-01-01
Modern Monte Carlo codes (e.g. MCNP) allow calculation of the power density distribution in 3-D geometry, assuming detailed geometry without unit-cell homogenization. To normalize an MCNP calculation to the steady-state thermal power of a reactor, one must use appropriate scaling factors. These scaling factors are not adequately described in the MCNP manual, and their derivation requires detailed knowledge of the code model. As the application of MCNP to power density calculation in TRIGA reactors has not been reported in the open literature, the procedure for calculating the power density with MCNP and normalizing it to the power level of the reactor is described in the paper. (author)
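A commonly used form of this normalization (a sketch under stated assumptions, not necessarily the exact procedure of the paper) converts a KCODE tally, which is reported per source neutron, to absolute units by estimating the number of fission neutrons emitted per second at the given reactor power:

```python
# Minimal sketch: normalizing an MCNP KCODE tally to absolute reactor power.
# Values below are assumed typical ones; the exact constants depend on the fuel.
P     = 250e3   # reactor thermal power, W (example TRIGA power level)
w_f   = 200.0   # recoverable energy per fission, MeV (approximate)
nu    = 2.44    # average neutrons per fission (approximate, U-235 thermal)
k_eff = 1.000   # from the KCODE calculation
MEV_PER_J = 1.0 / 1.602e-13

# Source neutrons emitted per second at power P:
S = P * MEV_PER_J * nu / (w_f * k_eff)

# An F7-type tally is returned in MeV/g per source neutron; multiplying by S
# and converting MeV to J gives the power density in W/g.
tally_f7 = 1.0e-6  # hypothetical tally result, MeV/g per source neutron
power_density = tally_f7 * S * 1.602e-13
print(f"S = {S:.3e} n/s, power density = {power_density:.3e} W/g")
```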
SURFplus Model Calibration for PBX 9502
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-12-06
The SURFplus reactive burn model is calibrated for the TATB-based explosive PBX 9502 at three initial temperatures: hot (75 °C), ambient (23 °C) and cold (−55 °C). The CJ state depends on the initial temperature due to the variation in the initial density and initial specific energy of the PBX reactants. For the reactants, a porosity model for full-density TATB is used. This allows the initial PBX density to be set to its measured value even though the coefficients of thermal expansion for the TATB and the PBX differ. The PBX products EOS is taken as independent of the initial PBX state. The initial temperature also affects the sensitivity to shock initiation. The model rate parameters are calibrated to Pop-plot data, the failure diameter, the limiting detonation speed just above the failure diameter, and curvature-effect data for small curvature.
Grid based calibration of SWAT hydrological models
Directory of Open Access Journals (Sweden)
D. Gorgan
2012-07-01
Full Text Available The calibration and execution of large hydrological models such as SWAT (Soil and Water Assessment Tool), developed for large areas, high resolution, and huge input data sets, require not only long execution times but also high computation resources. The SWAT hydrological model supports studies and predictions of the impact of land-management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web solution for environmental specialists to calibrate extensive hydrological models and to run scenarios, hiding the complex control of processes and heterogeneous resources across the grid-based high-performance computation infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation is concerned with the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the results obtained demonstrate the benefits brought by the grid parallel and distributed environment as a solution for the processing platform. All the instances of SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.
International Nuclear Information System (INIS)
Naito, Yoshitaka
2001-01-01
To assist the succeeding reports presented in this research meeting, the following items concerning the computer code MCNP, developed in the USA, are presented: (1) the history of the development of MCNP, (2) the significance of that development, (3) the progress of studies on Monte Carlo codes in the nuclear code committee, and (4) expectations for Monte Carlo codes. (author)
MCNP Progress & Performance Improvements
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bull, Jeffrey S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-04-14
Twenty-eight slides give information about the work of the US DOE/NNSA Nuclear Criticality Safety Program on MCNP6 under the following headings: MCNP6.1.1 Release, with ENDF/B-VII.1; Verification/Validation; User Support & Training; Performance Improvements; and Work in Progress. Whisper methodology will be incorporated into the code, and run speed should be increased.
High Accuracy Transistor Compact Model Calibrations
Energy Technology Data Exchange (ETDEWEB)
Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to accurately describe a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold, and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
Gradient-based model calibration with proxy-model assistance
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with long run times and problematic numerical behaviour is described. The methodology is general and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivative calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
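The division of labour described above (cheap proxy for the Jacobian, expensive model only for testing parameter upgrades) can be sketched as below; the quadratic "complex model" and its analytic proxy are hypothetical stand-ins, and the update is a plain Gauss-Newton step rather than PEST's full algorithm.

```python
import numpy as np

def complex_model(p):  # stand-in for the expensive simulator
    return np.array([p[0]**2 + p[1], p[0] * p[1], p[1]**2])

def proxy_model(p):    # cheap analytic surrogate: deliberately imperfect
    return np.array([p[0]**2 + p[1], p[0] * p[1], p[1]**2 + 0.01 * p[0]])

obs = np.array([2.0, 1.0, 1.0])  # observations to history-match
p = np.array([0.5, 0.5])         # initial parameter estimate

for _ in range(20):
    # Jacobian from the *proxy* (cheap finite differences):
    eps = 1e-6
    J = np.column_stack([
        (proxy_model(p + eps * np.eye(2)[i]) - proxy_model(p)) / eps
        for i in range(2)
    ])
    r = obs - complex_model(p)   # residuals from the *real* model
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    # Test the upgrade with the real model; accept only if the fit improves:
    if np.linalg.norm(obs - complex_model(p + step)) < np.linalg.norm(r):
        p = p + step
    else:
        break

print("calibrated parameters:", p)
```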
Electroweak Calibration of the Higgs Characterization Model
CERN. Geneva
2015-01-01
I will present the preliminary results of histogram fits using the Higgs Combine histogram-fitting package. These fits can be used to estimate the effects of electroweak contributions to the p p -> H mu+ mu- Higgs production channel and to calibrate Beyond Standard Model (BSM) simulations which ignore these effects. I will emphasize the significance of my findings in the context of other research here at CERN and in the broader world of high-energy physics.
Neutronics modeling of TRIGA reactor at the University of Utah using agent, KENO6 and MCNP5 codes
International Nuclear Information System (INIS)
Yang, X.; Xiao, S.; Choe, D.; Jevremovic, T.
2010-01-01
The TRIGA reactor at the University of Utah is modelled in 2D using the AGENT state-of-the-art methodology, based on the Method of Characteristics (MOC) and R-function theory, supporting detailed analysis of reactor geometries of any type. The TRIGA reactor is also modelled using KENO6 and MCNP5 for comparison. The spatial flux and reaction rate distributions are visualized by the AGENT graphics support. All methodologies are in use to study the effect of different fuel configurations in developing practical educational exercises for students studying reactor physics. At the University of Utah we train graduate and undergraduate students in obtaining the Nuclear Regulatory Commission license for operating the TRIGA reactor. The computational models as developed are in support of these extensive training classes and help students visualize the reactor core characteristics with regard to neutron transport under various operational conditions. Additionally, the TRIGA reactor is under consideration for a power uprate; this fleet of computational tools, once benchmarked against real measurements, will provide validated 3D simulation models for simulating the operating conditions of TRIGA. (author)
International Nuclear Information System (INIS)
Bilanovic, Z.; McCracken, D.R.
1994-12-01
In order to assess irradiation-induced corrosion effects, coolant radiolysis and the degradation of the physical properties of reactor materials and components, it is necessary to determine the neutron, photon, and electron energy deposition profiles in the fuel channels of the reactor core. At present, several different computer codes must be used to do this. The most recent, advanced and versatile of these is the latest version of MCNP, which may be capable of replacing all the others. Different codes have different assumptions and different restrictions on the way they can model the core physics and geometry. This report presents the results of ANISN and MCNP models of neutron and photon energy deposition. The results validate the use of MCNP for simplified geometrical modelling of energy deposition by neutrons and photons in the complex geometry of the CANDU reactor fuel channel. Discrete ordinates codes such as ANISN were the benchmark codes used in previous work. The results of calculations using various models are presented, and they show very good agreement for fast-neutron energy deposition. In the case of photon energy deposition, however, some modifications to the modelling procedures had to be incorporated. Problems with the use of reflective boundaries were solved by either including the eight surrounding fuel channels in the model, or using a boundary source at the bounding surface of the problem. Once these modifications were incorporated, consistent results between the computer codes were achieved. Historically, simple annular representations of the core were used, because of the difficulty of doing detailed modelling with older codes. It is demonstrated that modelling by MCNP, using more accurate and more detailed geometry, gives significantly different and improved results. (author). 9 refs., 12 tabs., 20 figs
International Nuclear Information System (INIS)
Khattab, K.; Bush, M; Kassery, H.
2009-03-01
A 3-D model of the irradiation plant belonging to the Atomic Energy Commission, Department of Radiation Technology, in the Deir Al-Hajar area near Damascus, is presented in this work using the MCNP-4C code. This model is used to calculate the spatial gamma-ray dose in (x, y, z) coordinates. Good agreement is observed between the measured and calculated results. (author)
Ideas for fast accelerator model calibration
International Nuclear Information System (INIS)
Corbett, J.
1997-05-01
With the advent of a simple matrix inversion technique, measurement-based storage ring modeling has made rapid progress in recent years. Using fast computers with large memory, the matrix inversion procedure typically adjusts up to 10³ model variables to fit on the order of 10⁵ measurements. The results have been surprisingly accurate. Physics aside, one of the next frontiers is to simplify the process and to reduce computation time. In this paper, the authors discuss two approaches to speed up the model calibration process: recursive least-squares fitting and a piecewise fitting approach
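Recursive least squares, the first of the two speed-up approaches mentioned, updates the fitted model one measurement at a time instead of re-inverting the full matrix. Below is a generic textbook sketch (not code from the paper), with a hypothetical two-parameter model.

```python
import numpy as np

# Generic recursive least-squares update: fit y ~ h @ theta one row at a time.
theta = np.zeros(2)   # parameter estimate
P = np.eye(2) * 1e6   # large initial covariance (uninformative prior)

def rls_update(theta, P, h, y):
    """Incorporate one measurement y with regressor row h."""
    K = P @ h / (1.0 + h @ P @ h)        # gain vector
    theta = theta + K * (y - h @ theta)  # correct estimate with the innovation
    P = P - np.outer(K, h @ P)           # shrink covariance
    return theta, P

rng = np.random.default_rng(0)
true = np.array([2.0, -1.0])
for _ in range(200):
    h = rng.normal(size=2)               # hypothetical measurement row
    y = h @ true + 0.01 * rng.normal()   # noisy measurement
    theta, P = rls_update(theta, P, h, y)

print("estimate:", theta)                # approaches [2, -1]
```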
Model calibration for building energy efficiency simulation
International Nuclear Information System (INIS)
Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus
2014-01-01
Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, with the final model providing accurate results. • Using an onsite weather station for generating the weather data file in EnergyPlus. • Predicting thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities related to the heat pump of 20–27% were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building area. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, the calibration methodology, which consists of two levels, was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of Mean Bias Error (MBE) and Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) in the hourly analysis of heat pump electricity consumption varied within the following ranges: hourly MBE from −5.6% to 7.5% and hourly CV(RMSE) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings from the water-to-water heat pump supplying the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis
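For reference, the two calibration statistics quoted above are commonly defined as follows; this is a generic sketch of the usual ASHRAE-style definitions, with made-up sample data rather than the paper's own values.

```python
import numpy as np

def mbe_percent(measured, simulated):
    """Mean bias error as a percentage of the total measured energy."""
    m, s = np.asarray(measured), np.asarray(simulated)
    return 100.0 * np.sum(m - s) / np.sum(m)

def cv_rmse_percent(measured, simulated):
    """Coefficient of variation of the RMSE, as a percentage."""
    m, s = np.asarray(measured), np.asarray(simulated)
    rmse = np.sqrt(np.mean((m - s) ** 2))
    return 100.0 * rmse / np.mean(m)

measured  = [10.2, 11.5, 9.8, 12.0]   # hypothetical hourly consumption, kWh
simulated = [10.0, 11.9, 9.5, 12.4]
print(mbe_percent(measured, simulated), cv_rmse_percent(measured, simulated))
```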
Carinou, Eleutheria; Stamatelatos, Ion Evangelos; Kamenopoulou, Vassiliki; Georgolopoulou, Paraskevi; Sandilos, Panayotis
The development of a computational model for the treatment head of a medical electron accelerator (Elekta/Philips SL-18) with the Monte Carlo code MCNP-4C2 is discussed. The model includes the major components of the accelerator head and a PMMA phantom representing the patient body. Calculations were performed for a 14 MeV electron beam impinging on the accelerator target and a 10 cm × 10 cm beam area at the isocentre. The model was used to predict the neutron ambient dose equivalent at the isocentre level and, moreover, the neutron absorbed dose distribution within the phantom. Calculations were validated against experimental measurements performed with gold foil activation detectors. The results of this study indicated that the equivalent dose to tissues or organs adjacent to the treatment field due to photoneutrons could be up to 10% of the total peripheral dose, for the specific accelerator characteristics examined. Therefore, photoneutrons should be taken into account when accurate dose calculations are required for sensitive tissues adjacent to the therapeutic X-ray beam. The method described can be extended to other accelerators and collimation configurations as well, upon specification of the treatment head component dimensions, composition and nominal accelerating potential.
International Nuclear Information System (INIS)
Cornic, Philippe; Le Besnerais, Guy; Champagnat, Frédéric; Illoul, Cédric; Cheminet, Adam; Le Sant, Yves; Leclaire, Benjamin
2016-01-01
We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of the cameras. A complete and explicit pinhole model of a camera equipped with a two-tilt-angle Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible, but the absolute location cannot be accurately recovered using standard calibration data. (paper)
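As a reminder of the underlying camera model, the sketch below projects a world point through a generic pinhole model (intrinsics K, rotation R, translation t). This is the standard textbook mapping, not the paper's Scheimpflug-extended version, and all numbers are hypothetical.

```python
import numpy as np

def pinhole_project(X_world, K, R, t):
    """Project a 3D world point to pixel coordinates with a pinhole camera."""
    X_cam = R @ X_world + t   # world frame -> camera frame (extrinsics)
    x = K @ X_cam             # camera frame -> homogeneous pixel coordinates
    return x[:2] / x[2]       # perspective division

K = np.array([[1200.0, 0.0, 640.0],   # hypothetical intrinsics: focal lengths
              [0.0, 1200.0, 480.0],   # (in pixels) and principal point
              [0.0, 0.0, 1.0]])
R = np.eye(3)                         # camera aligned with world axes
t = np.array([0.0, 0.0, 500.0])       # world origin 500 mm in front of camera

print(pinhole_project(np.array([10.0, -5.0, 0.0]), K, R, t))
```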
International Nuclear Information System (INIS)
Romdhani, Ibtissem
2014-01-01
As part of developing its nuclear infrastructure base, the National Center for Nuclear Science and Technology (CNSTN) is examining the technical feasibility of setting up a new subcritical assembly installation. Our study focuses on determining the neutronic parameters of a zero-power nuclear reactor by means of Monte Carlo simulation with MCNP. The objective of the simulation is to model the installation and to determine the effective multiplication factor and the spatial distribution of the neutron flux.
Seepage Calibration Model and Seepage Testing Data
International Nuclear Information System (INIS)
Dixon, P.
2004-01-01
The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M and O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty
Installation and validation of MCNP-4A
International Nuclear Information System (INIS)
Marks, N.A.
1997-01-01
MCNP-4A is a multi-purpose Monte Carlo program suitable for modelling neutron, photon, and electron transport problems. It is particularly useful when studying systems containing irregular shapes. MCNP has been developed over the last 25 years by Los Alamos and is distributed internationally via RSIC at Oak Ridge. This document describes the installation of MCNP-4A (henceforth referred to as MCNP) on the Silicon Graphics workstation (bluey.ansto.gov.au). A limited number of benchmarks pertaining to fast and thermal systems were performed to check the installation and validate the code. The results are compared to deterministic calculations performed using the AUS neutronics code system developed at ANSTO. (author)
Thermodynamically consistent model calibration in chemical kinetics
Directory of Open Access Journals (Sweden)
Goutsias John
2011-05-01
Full Text Available Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new
The use of the MCNP code for the quantitative analysis of elements in geological formations
Energy Technology Data Exchange (ETDEWEB)
Cywicka-Jakiel, T.; Woynicka, U. [The Henryk Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Zorski, T. [University of Mining and Metallurgy, Faculty of Geology, Geophysics and Environmental Protection, Krakow (Poland)
2003-07-01
Monte Carlo modelling calculations using the MCNP code have been performed in support of spectrometric neutron-gamma (SNGL) borehole logging. SNGL enables lithology identification through the quantitative analysis of the elements in geological formations and thus can be very useful for the oil and gas industry as well as for prospecting of potential host rocks for radioactive waste disposal. In the SNGL experiment, gamma-rays induced by neutron interactions with the nuclei of the rock elements are detected using a gamma-ray probe of complex mechanical and electronic construction. The probe has to be calibrated for a wide range of elemental concentrations to assure proper quantitative analysis. The Polish Calibration Station in Zielona Gora is equipped with a limited number of calibration standards. An extension of the experimental calibration, and an evaluation of the so-called side effects (for example, borehole and formation salinity variation) on the accuracy of the SNGL method, can be done by use of the MCNP code. Preliminary MCNP results showing the effect of borehole and formation fluid salinity variations on the accuracy of silicon (Si), calcium (Ca) and iron (Fe) content determination are presented in the paper. The main effort has been focused on modelling the complex SNGL probe situated in a fluid-filled borehole, surrounded by a geological formation. A track-length estimate of the photon flux from the (n,gamma) interactions as a function of gamma-ray energy was used. Calculations were run on a PC with an AMD Athlon 1.33 GHz processor. Neutron and photon cross-section libraries were taken from the MCNP4C package and are based mainly on ENDF/B-6, ENDF/B-5 and MCPLIB02 data. The results of the simulated experiment are in conformity with the results of the real experiment performed with the use of the main lithology models (sandstones, limestones and dolomite). (authors)
Calibration of hydrological model with programme PEST
Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca
2016-04-01
PEST is a tool based on minimization of an objective function related to the root mean square error between the model output and the measurements. We use the "singular value decomposition" (SVD) section of the PEST control file and the Tikhonov regularization method to successfully estimate model parameters. PEST can fail if the inverse problem is ill-posed, but SVD ensures that PEST maintains numerical stability. The choice of the initial guess for the parameter values is an important issue in PEST and needs expert knowledge. The flexible nature of the PEST software and its ability to be applied to whole catchments at once gave calibration results that performed extremely well across a high number of sub-catchments. The parallel computing version of PEST, called BeoPEST, was successfully used to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communications to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual balance model of catchment hydrology which simulates discharge using rainfall, temperature and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line. Input and results files are in XML form. This allows it to be easily connected with other applications, such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km²), which consists of twenty-one sub-catchments. Data were temporally processed on an hourly basis.
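The role of SVD truncation in stabilizing an ill-posed calibration can be illustrated generically, as below; this is a plain numpy sketch of truncated-SVD least squares, not PEST's implementation.

```python
import numpy as np

def tsvd_solve(J, r, n_keep):
    """Solve J @ dp ~= r, keeping only the n_keep largest singular values."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < n_keep, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ r))

# Nearly rank-deficient Jacobian: the second parameter is almost redundant.
J = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10],
              [2.0, 2.0]])
r = np.array([1.0, 1.001, 2.0])   # slightly inconsistent observations

print(tsvd_solve(J, r, n_keep=1))            # stable, regularized solution
print(np.linalg.lstsq(J, r, rcond=None)[0])  # wildly scaled without truncation
```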
International Nuclear Information System (INIS)
Greacen, E.L.; Correll, R.L.; Cunningham, R.B.; Johns, G.G.; Nicolls, K.D.
1981-01-01
Procedures common to different methods of calibration of neutron moisture meters are outlined, and laboratory and field calibration methods are compared. Gross errors arising from faulty calibration techniques are described. The count rate can be affected by the dry bulk density of the soil, the volumetric content of constitutional hydrogen, and other chemical components of the soil and soil solution. Calibration is further complicated by the fact that the neutron meter responds more strongly to the soil properties close to the detector and source. The differences in slope of the calibration curves for different soils can be as much as 40%
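In practice such a calibration reduces to fitting a curve, usually close to linear, of volumetric water content against count ratio. The sketch below fits one with made-up data points, purely to illustrate the slope differences discussed above.

```python
import numpy as np

# Hypothetical calibration points: relative count rate vs. volumetric water
# content (cm^3/cm^3) measured gravimetrically on field samples.
count_ratio = np.array([0.35, 0.55, 0.80, 1.05, 1.30])
theta_v     = np.array([0.05, 0.12, 0.21, 0.30, 0.38])

slope, intercept = np.polyfit(count_ratio, theta_v, 1)
print(f"theta_v = {intercept:.3f} + {slope:.3f} * count_ratio")

# A 40% larger slope for a different soil would give systematically different
# water contents from the same count rate, which is the error discussed above.
```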
Calibration of discrete element model parameters: soybeans
Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal
2018-05-01
Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. The DEM input parameters for a model are the critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine the DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach was used with a standard box-type apparatus. Further, qualitative and quantitative findings such as the particle profile, the height of kernels retained against the acrylic wall, and the angle of repose from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes the following (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m⁻³), and (b) interaction parameters such as particle-particle: coefficient of restitution (0.17); coefficient of static friction (0.26); coefficient of rolling friction (0.08), and particle-wall: coefficient of restitution (0.35); coefficient of static friction (0.30); coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.
Seepage Calibration Model and Seepage Testing Data
International Nuclear Information System (INIS)
Finsterle, S.
2004-01-01
The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team ["Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA" (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA ["Seepage Model for PA Including Drift Collapse" (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see "Drift-Scale Coupled Processes (DST and TH Seepage) Models" (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross-Drift to obtain the permeability structure for the seepage model
Application of MCNP in the criticality calculation for reactors
International Nuclear Information System (INIS)
Zhong Zhaopeng; Shi Gong; Hu Yongming
2003-01-01
The criticality calculation is carried out with the 3-D Monte Carlo code MCNP. The author focuses on the modelling of the core and reflector. The core description is simplified by using the repeated-structures feature of MCNP. k_eff values for different control rod positions are calculated for the case of JRR3, and the results are consistent with those of the reference. This work shows that MCNP is applicable to reactor criticality calculation
Monte Carlo parameter studies and uncertainty analyses with MCNP5
International Nuclear Information System (INIS)
Brown, F. B.; Sweezy, J. E.; Hayes, R.
2004-01-01
A software tool called mcnp_pstudy has been developed to automate the setup, execution, and collection of results from a series of MCNP5 Monte Carlo calculations. This tool provides a convenient means of performing parameter studies, total uncertainty analyses, parallel job execution on clusters, stochastic geometry modeling, and other types of calculations where a series of MCNP5 jobs must be performed with varying problem input specifications. (authors)
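The kind of automation such a tool provides can be approximated with a few lines of scripting: the sketch below stamps out a series of input decks from a template by substituting parameter values, which is the core of any parameter-study driver. This is a generic illustration, not the mcnp_pstudy implementation, and the template contents and file names are hypothetical.

```python
from itertools import product
from pathlib import Path

# Generic parameter-study driver: substitute each combination of values into
# a template input deck and write one file per case (names are hypothetical).
template = """TRIGA cell study
c fuel radius = {radius} cm, enrichment = {enrich} w/o
"""

radii = [1.70, 1.75, 1.80]
enrichments = [19.75, 20.00]

for i, (r, e) in enumerate(product(radii, enrichments)):
    deck = template.format(radius=r, enrich=e)
    Path(f"case_{i:03d}.inp").write_text(deck)
    # each case_XXX.inp would then be queued as a separate MCNP5 job
```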
Energy Technology Data Exchange (ETDEWEB)
Galicia A, J.; Francois L, J. L. [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Ciudad Universitaria, 04510 Ciudad de Mexico (Mexico); Aguilar H, F., E-mail: blink19871@hotmail.com [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)
2015-09-15
The main purpose of this paper is to obtain a model of the TRIGA Mark III reactor core that accurately represents the real operating conditions at 1 MWth, using the Monte Carlo code MCNP5. To provide a more detailed analysis, different models of the reactor core were produced by simulating the control rods extracted and inserted in cold conditions (293 K), also including an analysis of the shutdown margin, so that the Operational Technical Specifications were satisfied. The positions the control rods must have to reach a power of 1 MWth were obtained from the practice entitled Operation in Manual Mode performed at the Instituto Nacional de Investigaciones Nucleares (ININ). Later, the behavior of k_eff was analyzed considering different temperatures in the fuel elements, subsequently calculating the values that best represent actual reactor operation. Finally, the calculations performed with the developed model to obtain the distribution of the average thermal, epithermal and fast neutron flux in the six new experimental facilities are presented. (Author)
International Nuclear Information System (INIS)
Klasky, Marc Louis; Myers, Steven Charles; James, Michael R.; Mayo, Douglas R.
2016-01-01
To facilitate the timely execution of System Threat Reviews (STRs) for DNDO, and also to develop a methodology for performing STRs, LANL performed comparisons of several radiation transport codes (MCNP, GADRAS, and Gamma-Designer) that have previously been utilized to compute radiation signatures. While each of these codes has strengths, it is of paramount interest to determine the limitations of each of the respective codes and also to identify the most time-efficient means by which to produce computational results, given the large number of parametric cases that are anticipated in performing STRs. These comparisons serve to identify regions of applicability for each code and provide estimates of the uncertainty that may be anticipated. Furthermore, while performing these comparisons, the sensitivity of the results to modeling assumptions was also examined. These investigations enable the creation of the LANL methodology for performing STRs. Given the wide variety of radiation test sources, scenarios, and detectors, LANL calculated comparisons of the following parameters: decay data, multiplicity, device (n,γ) leakages, and radiation transport through representative scenes and shielding. This investigation was performed to understand the potential limitations of utilizing specific codes for different aspects of the STR challenges.
International Nuclear Information System (INIS)
Hendricks, J.S.; Briesmeister, J.F.
1991-01-01
MCNP is a widely used and actively developed Monte Carlo radiation transport code. Many important features have recently been added and more are under development. Benchmark studies indicate not only that MCNP is accurate, but also that modern computer codes can give answers basically as accurate as the physics data that go into them. Even deep penetration problems can be correct to within a factor of two after 10 to 25 mean free paths of penetration. And finally, Monte Carlo calculations, once thought to be too expensive to run routinely, can now be run effectively on desktop computers which compete with the supercomputers of yesteryear. 21 refs., 3 tabs
MCNP capabilities for nuclear well logging calculations
International Nuclear Information System (INIS)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.; Hendricks, J.S.
1990-01-01
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data
RadBall Technology Testing and MCNP Modeling of the Tungsten Collimator.
Farfán, Eduardo B; Foley, Trevor Q; Coleman, J Rusty; Jannik, G Timothy; Holmes, Christopher J; Oldham, Mark; Adamovics, John; Stanley, Steven J
2010-01-01
The United Kingdom's National Nuclear Laboratory (NNL) has developed a remote, non-electrical, radiation-mapping device known as RadBall™, which can locate and quantify radioactive hazards within contaminated areas of the nuclear industry. RadBall™ consists of a colander-like outer shell that houses a radiation-sensitive polymer sphere. The outer shell works to collimate radiation sources and those areas of the polymer sphere that are exposed react, becoming increasingly more opaque, in proportion to the absorbed dose. The polymer sphere is imaged in an optical-CT scanner, which produces a high-resolution 3D map of optical attenuation coefficients. Subsequent analysis of the optical attenuation matrix provides information on the spatial distribution of sources in a given area, forming a 3D characterization of the area of interest. RadBall™ has no power requirements and can be positioned in tight or hard-to-reach locations. The RadBall™ technology has been deployed in a number of technology trials in nuclear waste reprocessing plants at Sellafield in the United Kingdom and at facilities of the Savannah River National Laboratory (SRNL). This study focuses on the RadBall™ testing and modeling accomplished at SRNL.
Banaee, Nooshin; Asgari, Sepideh; Nedaie, Hassan Ali
2018-07-01
The accuracy of penumbral measurements in radiotherapy is pivotal because dose planning computers require accurate data to model the beams adequately, which in turn are used to calculate patient dose distributions. Gamma Knife is a non-invasive intracranial technique based on the principles of the Leksell stereotactic system for open deep brain surgeries, invented and developed by Professor Lars Leksell. The aim of this study is to compare the penumbra widths of the Leksell Gamma Knife model C and the Gamma ART 6000. Initially, the structures of both systems were simulated using the Monte Carlo MCNP6 code and, after validating the accuracy of the simulation, beam profiles of different collimators were plotted. The MCNP6 beam profile calculations showed that the penumbra values of the Leksell Gamma Knife model C and the Gamma ART 6000 for the 18, 14, 8 and 4 mm collimators are 9.7, 7.9, 4.3 and 2.6 mm and 8.2, 6.9, 3.6 and 2.4 mm, respectively. The results of this study showed that since the Gamma ART 6000 has a larger solid angle in comparison with the Gamma Knife model C, it produces better beam profile penumbras in the direct plane. Copyright © 2017 Elsevier Ltd. All rights reserved.
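Penumbra widths such as those quoted above are conventionally read off a computed lateral beam profile as the distance between the 80% and 20% dose levels. The following is a minimal sketch of that extraction in Python/NumPy; the logistic-edge test profile and all numeric values are illustrative assumptions, not data from the study:

```python
import numpy as np

def penumbra_width(x, dose, lo=0.2, hi=0.8):
    """Distance between the 80% and 20% levels on the falling edge
    of a normalized lateral beam profile (one-sided penumbra)."""
    d = dose / dose.max()
    edge = x >= 0                       # take the right-hand edge only
    xe, de = x[edge], d[edge]
    # dose falls with x on this edge, so reverse arrays for np.interp
    x_hi = np.interp(hi, de[::-1], xe[::-1])
    x_lo = np.interp(lo, de[::-1], xe[::-1])
    return x_lo - x_hi

# toy profile: flat core with a logistic fall-off (illustration only)
x = np.linspace(-20.0, 20.0, 401)                    # mm
dose = 1.0 / (1.0 + np.exp((np.abs(x) - 9.0) / 1.5))
print(f"penumbra ~ {penumbra_width(x, dose):.2f} mm")
```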
International Nuclear Information System (INIS)
Goorley, John T.
2012-01-01
We, the development teams for MCNP, NJOY, and parts of ENDF, would like to invite you to a proposed 3-day workshop on October 30-31 and November 1, 2012, to be held at Los Alamos National Laboratory. At this workshop, we will review new and developing missions that MCNP6 and the underlying nuclear data are being asked to address. LANL will also present its internal plans to address these missions and recent advances in these three capabilities, and we will be interested to hear your input on these topics. Additionally, we are interested in hearing from you about further technical advances, missions, concerns, and other issues that we should be considering for both the short term (1-3 years) and the long term (4-6 years). What are the additional existing capabilities and methods that we should be investigating? The goal of the workshop is to refine priorities for MCNP6 transport methods, algorithms, physics, data, and processing as they relate to the intersection of MCNP, NJOY, and ENDF.
Energy Technology Data Exchange (ETDEWEB)
Poškus, A., E-mail: andrius.poskus@ff.vu.lt
2016-09-15
This paper evaluates the accuracy of the single-event (SE) and condensed-history (CH) models of electron transport in MCNP6.1 when simulating characteristic Kα, total K (= Kα + Kβ) and Lα X-ray emission from thick targets bombarded by electrons with energies from 5 keV to 30 keV. It is shown that the MCNP6.1 implementation of the CH model for K-shell impact ionization leads to underestimation of the K yield by 40% or more for elements with atomic numbers Z < 15 and overestimation of the Kα yield by more than 40% for elements with Z > 25. The Lα yields are underestimated by more than an order of magnitude in CH mode, because MCNP6.1 neglects X-ray emission caused by electron-impact ionization of the L, M and higher shells in CH mode (the Lα yields calculated in CH mode reflect only X-ray fluorescence, which is mainly caused by photoelectric absorption of bremsstrahlung photons). The X-ray yields calculated by MCNP6.1 in SE mode (using ENDF/B-VII.1 library data) are more accurate: the differences between the calculated and experimental K yields are within the experimental uncertainties for the elements C, Al and Si, and the calculated Kα yields are typically underestimated by (20–30)% for elements with Z > 25, whereas the Lα yields are underestimated by (60–70)% for elements with Z > 49. It is also shown that agreement of the experimental X-ray yields with those calculated in SE mode is further improved by replacing the ENDF/B inner-shell electron-impact ionization cross sections with the set of cross sections obtained from the distorted-wave Born approximation (DWBA), which are also used in the PENELOPE code system. The latter replacement decreases the average relative difference between the experimental X-ray yields and the SE-mode simulation results to approximately 10%, which is similar to the accuracy achieved with PENELOPE. This confirms that the DWBA inner
Influence of rainfall observation network on model calibration and application
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-01-01
The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany was selected for this study. First, the semi-distributed HBV model is calibrated with precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above-described precipitation fields. The simulated hydrographs obtained in these three sets of experiments are analyzed through comparison of the computed Nash-Sutcliffe coefficients and several goodness-of-fit indexes. The results show that a model using different raingauge networks might need re-calibration of its parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well on dense precipitation information, while a model calibrated on dense precipitation information fails on sparse precipitation information. Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
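The simulated-annealing calibration named above accepts worse parameter sets with a temperature-dependent probability, which lets the search escape local optima. A minimal sketch of such a loop follows; the HBV model is replaced by a hypothetical two-parameter stand-in and RMSE is used as the objective, so every name and number here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(obs, sim):
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def toy_model(params, precip):
    """Stand-in for a rainfall-runoff model such as HBV."""
    a, b = params
    return a * precip + b

def anneal(obs, precip, bounds, n_iter=5000, t0=1.0, cooling=0.999):
    """Minimal simulated-annealing calibration loop."""
    lo, hi = np.array(bounds).T
    p = rng.uniform(lo, hi)
    f = rmse(obs, toy_model(p, precip))
    best_p, best_f, t = p.copy(), f, t0
    for _ in range(n_iter):
        q = np.clip(p + rng.normal(0.0, 0.1, p.size), lo, hi)
        fq = rmse(obs, toy_model(q, precip))
        # accept improvements always, deteriorations with Boltzmann probability
        if fq < f or rng.random() < np.exp(-(fq - f) / t):
            p, f = q, fq
            if f < best_f:
                best_p, best_f = p.copy(), f
        t *= cooling                     # geometric cooling schedule
    return best_p, best_f
```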
Impact of MCNP unresolved resonance probability-table treatment on uranium and plutonium benchmarks
International Nuclear Information System (INIS)
Mosteller, R.D.; Little, R.C.
1998-01-01
Versions of MCNP up through and including 4B have not accurately modeled neutron self-shielding effects in the unresolved resonance energy region. Recently, a probability-table treatment has been incorporated into a developmental version of MCNP. This paper presents MCNP results for a variety of uranium and plutonium critical benchmarks, calculated with and without the probability-table treatment
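The essence of a probability-table treatment is that, within the unresolved resonance range, the cross section at each collision is sampled from a band structure rather than taken as a smooth infinite-dilution average, which is what restores self-shielding. A hedged Python illustration of the lookup follows; the five-band table is invented for demonstration and is not real nuclear data:

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_xs(cdf, xs_bands):
    """Sample a cross section from an unresolved-resonance probability
    table: cdf[i] is the cumulative probability of band i, xs_bands[i]
    the corresponding cross-section value (barns)."""
    return xs_bands[np.searchsorted(cdf, rng.random())]

cdf = np.array([0.2, 0.5, 0.8, 0.95, 1.0])   # illustrative table only
xs = np.array([4.1, 6.3, 9.8, 15.2, 31.0])
samples = np.array([sample_xs(cdf, xs) for _ in range(10000)])
print(samples.mean())  # mean of many samples approaches the table average
```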
Use of McCad for the conversion of ITER CAD data to MCNP geometry
International Nuclear Information System (INIS)
Tsige-Tamirat, H.; Fischer, U.; Serikov, A.; Stickel, S.
2008-01-01
The program McCad provides a CAD interface for the Monte Carlo transport code MCNP. It is able to convert CAD data into an MCNP geometry description and provides GUI components for modeling, visualization, and data exchange. It performs sequences of tests on the CAD data to check their validity and neutronics appropriateness, including completion of the final MCNP model with void geometries. McCad has been used to convert a 40 deg. ITER torus sector CAD model to a suitable MCNP geometry model. Results of MCNP calculations performed to validate the converted geometry are presented.
Depleted Reactor Analysis With MCNP-4B
International Nuclear Information System (INIS)
Caner, M.; Silverman, L.; Bettan, M.
2004-01-01
Monte Carlo neutronics calculations are mostly done for fresh reactor cores. There is an ongoing effort to develop coupled Monte Carlo-burnup code systems, made possible by the fast gains in computer processor speeds. In this work we investigate the use of MCNP-4B for the calculation of a depleted core of the Soreq reactor (IRR-1). The number densities as a function of burnup were taken from WIMS-D/4 cell code calculations. This particular code coupling has been implemented before. The Monte Carlo code MCNP-4B calculates the coupled transport of neutrons and photons for complicated geometries. We have done neutronics calculations of the IRR-1 core with the WIMS and CITATION codes in the past. Also, we have developed an MCNP model of the IRR-1 standard fuel for a criticality safety calculation of a spent fuel storage pool.
SABRINA, Geometry Plot Program for MCNP
International Nuclear Information System (INIS)
SEIDL, Marcus
2003-01-01
1 - Description of program or function: SABRINA is an interactive, three-dimensional, geometry-modeling code system, primarily for use with CCC-200/MCNP. SABRINA's capabilities include creation, visualization, and verification of three-dimensional geometries specified by either surface- or body-based combinatorial geometry; display of particle tracks calculated by MCNP; and volume-fraction generation. 2 - Method of solution: Rendering is performed by ray tracing or an edge-and-intersection algorithm. Volume-fraction calculations are made by ray tracing. 3 - Restrictions on the complexity of the problem: A graphics display with X Window capability is required.
A single model procedure for estimating tank calibration equations
International Nuclear Information System (INIS)
Liebetrau, A.M.
1997-10-01
A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes
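The single-model idea, one design matrix spanning all segments, can be illustrated with a truncated-power basis: each interior knot adds a polynomial piece that is extended over the rest of the domain, so the whole calibration function is the sum of extended polynomials and one least-squares fit estimates everything at once. This is a simplified sketch under those assumptions (run effects and the full model structure described in the paper, as well as the TANCS package, are omitted):

```python
import numpy as np

def design_matrix(h, knots, degree=2):
    """Truncated-power basis: a low-degree polynomial plus one extended
    piece per interior knot, so all segments are fit simultaneously."""
    cols = [h ** k for k in range(degree + 1)]
    for knot in knots:
        cols.append(np.where(h > knot, (h - knot) ** degree, 0.0))
    return np.column_stack(cols)

# synthetic height/volume calibration data with two interior knots
rng = np.random.default_rng(1)
h = np.linspace(0.0, 10.0, 60)
v = 5.0 * h + 0.3 * h**2 + rng.normal(0.0, 0.5, h.size)

X = design_matrix(h, knots=[3.0, 7.0])
beta, *_ = np.linalg.lstsq(X, v, rcond=None)       # one fit, all segments
print("fitted volume near h=5:", X[np.searchsorted(h, 5.0)] @ beta)
```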
SWAT Model Configuration, Calibration and Validation for Lake Champlain Basin
The Soil and Water Assessment Tool (SWAT) model was used to develop phosphorus loading estimates for sources in the Lake Champlain Basin. This document describes the model setup and parameterization, and presents calibration results.
International Nuclear Information System (INIS)
Mosteller, Russell D.
2002-01-01
Two validation suites, one for criticality and another for radiation shielding, have been defined and tested for the MCNP Monte Carlo code. All of the cases in the validation suites are based on experiments so that calculated and measured results can be compared in a meaningful way. The cases in the validation suites are described, and results from those cases are discussed. For several years, the distribution package for the MCNP Monte Carlo code has included an installation test suite to verify that MCNP has been installed correctly. However, the cases in that suite have been constructed primarily to test options within the code and to execute quickly. Consequently, they do not produce well-converged answers, and many of them are physically unrealistic. To remedy these deficiencies, sets of validation suites are being defined and tested for specific types of applications. All of the cases in the validation suites are based on benchmark experiments. Consequently, the results from the measurements are reliable and quantifiable, and calculated results can be compared with them in a meaningful way. Currently, validation suites exist for criticality and radiation-shielding applications.
Calibration of the Site-Scale Saturated Zone Flow Model
International Nuclear Information System (INIS)
Zyvoloski, G. A.
2001-01-01
The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan ''Calibration of the Site-Scale Saturated Zone Flow Model'' (CRWMS M and O 1999a)
Model Calibration of Exciter and PSS Using Extended Kalman Filter
Energy Technology Data Exchange (ETDEWEB)
Kalsi, Karanjit; Du, Pengwei; Huang, Zhenyu
2012-07-26
Power system modeling and controls continue to become more complex with the advent of smart grid technologies and large-scale deployment of renewable energy resources. As demonstrated in recent studies, inaccurate system models could lead to large-scale blackouts, thereby motivating the need for model calibration. Current methods of model calibration rely on manual tuning based on engineering experience, are time consuming and could yield inaccurate parameter estimates. In this paper, the Extended Kalman Filter (EKF) is used as a tool to calibrate exciter and Power System Stabilizer (PSS) models of a particular type of machine in the Western Electricity Coordinating Council (WECC). The EKF-based parameter estimation is a recursive prediction-correction process which uses the mismatch between simulation and measurement to adjust the model parameters at every time step. Numerical simulations using actual field test data demonstrate the effectiveness of the proposed approach in calibrating the parameters.
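The recursive prediction-correction step described above can be sketched compactly: the parameters are treated as a random-walk state, and each measurement corrects them through the mismatch between simulated and measured output. The sketch below uses a hypothetical scalar measurement model in place of the WECC exciter/PSS models, so all names and noise levels are assumptions:

```python
import numpy as np

def ekf_calibrate(y_meas, u, sim, theta0, P0, Q, R, eps=1e-6):
    """EKF parameter estimation: random-walk prediction of the
    parameters, correction from the simulation/measurement mismatch."""
    theta = np.asarray(theta0, dtype=float)
    P = np.asarray(P0, dtype=float)
    I = np.eye(theta.size)
    for yk, uk in zip(y_meas, u):
        P = P + Q                                  # prediction step
        y_hat = sim(theta, uk)                     # simulated output
        # numerical Jacobian of the scalar output wrt the parameters
        H = np.array([(sim(theta + eps * e, uk) - y_hat) / eps for e in I])
        S = H @ P @ H + R                          # innovation variance
        K = P @ H / S                              # Kalman gain
        theta = theta + K * (yk - y_hat)           # correction step
        P = (I - np.outer(K, H)) @ P
    return theta

# toy example: recover gain and offset of y = a*u + b from noisy data
rng = np.random.default_rng(2)
u = rng.uniform(0, 1, 200)
y = 2.0 * u - 1.0 + rng.normal(0, 0.05, 200)
sim = lambda th, uk: th[0] * uk + th[1]
print(ekf_calibrate(y, u, sim, [0.5, 0.0], np.eye(2),
                    1e-6 * np.eye(2), 0.05**2))    # approaches [2, -1]
```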
Hand-eye calibration using a target registration error model.
Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M
2017-10-01
Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.
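The guided-measurement loop the authors describe can be written schematically as below. Here `tre_model` and `acquire` are placeholder callables standing in for the authors' TRE predictor and the tracked stylus measurement, so this is a sketch of the control flow only, not their formulation:

```python
import numpy as np

def guided_calibration(candidates, tre_model, acquire,
                       tol_mm=1.0, min_n=15, max_n=30):
    """Guided hand-eye calibration loop: after each measurement, predict
    the TRE of the calibration for every candidate stylus placement and
    measure the placement that minimizes the predicted TRE."""
    measurements = []                    # accumulated point-line pairs
    while len(measurements) < max_n:
        # tre_model must also handle the empty-measurement case (stub)
        preds = np.array([tre_model(measurements, c) for c in candidates])
        best = candidates[int(preds.argmin())]
        measurements.append(acquire(best))
        if len(measurements) >= min_n and preds.min() < tol_mm:
            break                        # millimetre-level TRE reached
    return measurements
```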
Fermentation process tracking through enhanced spectral calibration modeling.
Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah
2007-06-15
The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), where windows of wavelengths are automatically selected which are subsequently used as the basis of the calibration model. However, due to the non-uniqueness of the windows selected when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
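A compressed sketch of window selection plus stacking follows, built on scikit-learn's PLSRegression. The window size, cross-validated scoring, and the error-weighted stacking rule are simplifications assumed here, not the paper's exact SWS procedure:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def window_pls_stack(X, y, window=20, step=10, n_components=3, top_k=5):
    """Fit PLS models on spectral windows, keep the best-scoring windows,
    and stack their predictions by error-weighted averaging."""
    scored = []
    for start in range(0, X.shape[1] - window + 1, step):
        cols = slice(start, start + window)
        mse = -cross_val_score(PLSRegression(n_components=n_components),
                               X[:, cols], y, cv=5,
                               scoring="neg_mean_squared_error").mean()
        scored.append((mse, cols))
    scored.sort(key=lambda t: t[0])                 # best windows first
    members = [(1.0 / mse, cols,
                PLSRegression(n_components=n_components).fit(X[:, cols], y))
               for mse, cols in scored[:top_k]]

    def predict(X_new):
        w = np.array([wgt for wgt, _, _ in members])
        preds = np.column_stack([m.predict(X_new[:, c]).ravel()
                                 for _, c, m in members])
        return preds @ (w / w.sum())                # stacked prediction
    return predict
```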
Calibration and Monte Carlo modelling of neutron long counters
Tagziria, H
2000-01-01
The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...
Cosmic CARNage I: on the calibration of galaxy formation models
Knebe, Alexander; Pearce, Frazer R.; Gonzalez-Perez, Violeta; Thomas, Peter A.; Benson, Andrew; Asquith, Rachel; Blaizot, Jeremy; Bower, Richard; Carretero, Jorge; Castander, Francisco J.; Cattaneo, Andrea; Cora, Sofía A.; Croton, Darren J.; Cui, Weiguang; Cunnama, Daniel; Devriendt, Julien E.; Elahi, Pascal J.; Font, Andreea; Fontanot, Fabio; Gargiulo, Ignacio D.; Helly, John; Henriques, Bruno; Lee, Jaehyun; Mamon, Gary A.; Onions, Julian; Padilla, Nelson D.; Power, Chris; Pujol, Arnau; Ruiz, Andrés N.; Srisawat, Chaichalit; Stevens, Adam R. H.; Tollet, Edouard; Vega-Martínez, Cristian A.; Yi, Sukyoung K.
2018-04-01
We present a comparison of nine galaxy formation models, eight semi-analytical, and one halo occupation distribution model, run on the same underlying cold dark matter simulation (cosmological box of comoving width 125 h⁻¹ Mpc, with a dark-matter particle mass of 1.24 × 10⁹ h⁻¹ M⊙) and the same merger trees. While their free parameters have been calibrated to the same observational data sets using two approaches, they nevertheless retain some 'memory' of any previous calibration that served as the starting point (especially for the manually tuned models). For the first calibration, models reproduce the observed z = 0 galaxy stellar mass function (SMF) within 3σ. The second calibration extended the observational data to include the z = 2 SMF alongside the z ~ 0 star formation rate function, cold gas mass, and the black hole-bulge mass relation. Encapsulating the observed evolution of the SMF from z = 2 to 0 is found to be very hard within the context of the physics currently included in the models. We finally use our calibrated models to study the evolution of the stellar-to-halo mass (SHM) ratio. For all models, we find that the peak value of the SHM relation decreases with redshift. However, the trends seen for the evolution of the peak position as well as the mean scatter in the SHM relation are rather weak and strongly model dependent. Both the calibration data sets and model results are publicly available.
Cumulative error models for the tank calibration problem
International Nuclear Information System (INIS)
Goldman, A.; Anderson, L.G.; Weber, J.
1983-01-01
The purpose of a tank calibration equation is to obtain an estimate of the liquid volume that corresponds to a liquid level measurement. Calibration experimental errors occur in both liquid level and liquid volume measurements. If one of the errors is relatively small, the calibration equation can be determined from well-known regression and calibration methods. If both variables are assumed to be in error, then for linear cases a prototype model should be considered. Many investigators are not familiar with this model or do not have computing facilities capable of obtaining numerical solutions. This paper discusses and compares three linear models that approximate the prototype model and have the advantage of much simpler computations. Comparisons among the four models and recommendations of suitability are made from simulations and from analyses of six sets of experimental data.
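Among the simple linear treatments of errors in both variables that the abstract alludes to, one standard choice is Deming regression, which requires only the assumed ratio of the two error variances. A minimal sketch under that assumption follows (not necessarily one of the paper's three models; the synthetic data are invented):

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression: straight-line fit when both variables carry
    measurement error; delta = var(err_y) / var(err_x)."""
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, ym - slope * xm      # (slope, intercept)

# synthetic level/volume data with noise on both axes (illustration only)
rng = np.random.default_rng(3)
level = np.linspace(0, 100, 50)
x = level + rng.normal(0, 1.0, 50)                 # noisy level readings
y = 2.5 * level + 10 + rng.normal(0, 2.5, 50)      # noisy volumes
print(deming_fit(x, y, delta=(2.5 / 1.0) ** 2))
```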
International Nuclear Information System (INIS)
Poundra Setiawan; Suharyana; Riyatun
2015-01-01
A simulation of the absorbed dose in prostate brachytherapy, for a prostate of radius 2 cm, was performed using MCNP5 with the seed implant model IsoAid Advantage™ IAPd-103A. 103Pd, used as the radioactive source in the seed implant, emits gamma rays of 20.8 keV, has a half-life of 16.9 days, and has an activity of 4 mCi. The prostate cancer is modeled as a sphere of radius 3 cm; after implanting a 103Pd seed for 24.4 days, the prostate cancer receives an absorbed dose of 2.172 Gy. The maximum lethal dose using 103Pd is 125 Gy, which was reached with 59 seeds. (author)
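The build-up of dose from a decaying seed follows directly from the half-life: integrating an exponentially decaying dose rate gives D(T) = (R0/lam) * (1 - exp(-lam*T)), with lam = ln 2 / T_half. A small arithmetic sketch follows; the initial dose rate used below is an invented illustrative number, not the paper's MCNP5 result:

```python
import numpy as np

def cumulative_dose(dose_rate_0, half_life_d, t_days):
    """Dose accumulated from an exponentially decaying source:
    D(T) = (R0 / lam) * (1 - exp(-lam * T)), lam = ln2 / T_half."""
    lam = np.log(2.0) / half_life_d
    return dose_rate_0 / lam * (1.0 - np.exp(-lam * t_days))

# illustrative initial dose rate in Gy/day with the Pd-103 half-life
print(cumulative_dose(0.09, 16.9, 24.4))    # dose after 24.4 days, Gy
print(cumulative_dose(0.09, 16.9, np.inf))  # total dose to complete decay
```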
Testing of a one dimensional model for Field II calibration
DEFF Research Database (Denmark)
Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten
2008-01-01
Field II is a program for simulating ultrasound transducer fields. It is capable of calculating the emitted and pulse-echoed fields for both pulsed and continuous wave transducers. To make it fully calibrated, a model of the transducer's electro-mechanical impulse response must be included. We examine an adapted one-dimensional transducer model originally proposed by Willatzen [9] to calibrate Field II. This model is modified to calculate the required impulse responses needed by Field II for a calibrated field pressure and external circuit current calculation. The testing has been performed … to the calibrated Field II program for 1, 4, and 10 cycle excitations. Two parameter sets were applied for modeling: one real-valued Pz27 parameter set, manufacturer supplied, and one complex-valued parameter set found in the literature, Algueró et al. [11]. The latter implicitly accounts for attenuation. Results show…
Balance between calibration objectives in a conceptual hydrological model
Booij, Martijn J.; Krol, Martinus S.
2010-01-01
Three different measures to determine the optimum balance between calibration objectives are compared: the combined rank method, parameter identifiability and model validation. Four objectives (water balance, hydrograph shape, high flows, low flows) are included in each measure. The contributions of
A Method to Test Model Calibration Techniques: Preprint
Energy Technology Data Exchange (ETDEWEB)
Judkoff, Ron; Polly, Ben; Neymark, Joel
2016-09-01
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
International Nuclear Information System (INIS)
Hendricks, J.S.; Frankle, S.C.; Court, J.D.
1994-01-01
We report here for the first time the availability of an official set of ENDF/B-VI neutron data for MCNP™. The LANL Radiation Transport group engaged the Nuclear Theory and Applications Group to construct a complete library based on the ENDF/B-VI release in the spring of 1994. A new and thorough set of quality assurance tests was established, and data passing those tests were subject only to a limited set of benchmarking tests. All nuclides were subjected to infinite-medium calculations. The fissionable materials were benchmarked against critical assemblies, and 28 nuclides were benchmarked against the LLNL pulsed sphere experiments.
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
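The loop described above, in which a learner ranks unevaluated parameter combinations and only the most promising ones are simulated, can be sketched with an sklearn MLP surrogate. Candidate generation, batch sizes, and the binary acceptance criterion below are assumptions for illustration, not the UWBCS setup:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def active_calibration(candidates, evaluate, n_init=50, batch=25, rounds=10):
    """Active-learning calibration sketch: a surrogate classifier scores
    all candidates and only the highest-scoring ones are simulated.
    `evaluate` returns 1 if a parameter set matches the targets, else 0;
    the initial random sample is assumed to contain both classes."""
    rng = np.random.default_rng(4)
    idx = rng.choice(len(candidates), n_init, replace=False)
    X = candidates[idx]
    y = np.array([evaluate(candidates[i]) for i in idx])
    tried = set(idx.tolist())
    for _ in range(rounds):
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
        p = clf.predict_proba(candidates)[:, 1]       # P(acceptable fit)
        order = [i for i in np.argsort(-p) if i not in tried][:batch]
        X = np.vstack([X, candidates[order]])
        y = np.concatenate([y, [evaluate(candidates[i]) for i in order]])
        tried.update(int(i) for i in order)
    return X[y == 1]   # parameter combinations that matched the targets
```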
Using genetic algorithms to calibrate a water quality model.
Liu, Shuming; Butler, David; Brazier, Richard; Heathwaite, Louise; Khu, Soon-Thiam
2007-03-15
With the increasing concern over the impact of diffuse pollution on water bodies, many diffuse pollution models have been developed in the last two decades. A common obstacle in using such models is how to determine the values of the model parameters. This is especially true when a model has a large number of parameters, which makes a full range of calibration expensive in terms of computing time. Compared with conventional optimisation approaches, soft computing techniques often have a faster convergence speed and are more efficient for global optimum searches. This paper presents an attempt to calibrate a diffuse pollution model using a genetic algorithm (GA). Designed to simulate the export of phosphorus from diffuse sources (agricultural land) and point sources (human), the Phosphorus Indicators Tool (PIT) version 1.1, on which this paper is based, consisted of 78 parameters. Previous studies have indicated the difficulty of full range model calibration due to the number of parameters involved. In this paper, a GA was employed to carry out the model calibration in which all parameters were involved. A sensitivity analysis was also performed to investigate the impact of operators in the GA on its effectiveness in optimum searching. The calibration yielded satisfactory results and required reasonable computing time. The application of the PIT model to the Windrush catchment with optimum parameter values was demonstrated. The annual P loss was predicted as 4.4 kg P/ha/yr, which showed a good fitness to the observed value.
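A minimal real-coded GA of the kind used for such calibrations is sketched below; the PIT model and its 78 parameters are not reproduced, so the fitness function is a placeholder supplied by the caller (for example, the negative of a model-error measure):

```python
import numpy as np

rng = np.random.default_rng(5)

def ga_calibrate(fitness, bounds, pop_size=60, n_gen=100,
                 p_cross=0.8, p_mut=0.1):
    """Minimal genetic algorithm: tournament selection, uniform
    crossover, Gaussian mutation; maximizes `fitness`."""
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    fit = np.apply_along_axis(fitness, 1, pop)
    for _ in range(n_gen):
        # tournament selection of parents
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fit[a] > fit[b], a, b)]
        # uniform crossover between consecutive parents
        mask = rng.random(parents.shape) < 0.5
        kids = np.where(mask, parents, np.roll(parents, 1, axis=0))
        cross = rng.random(pop_size) < p_cross
        kids[~cross] = parents[~cross]
        # Gaussian mutation, clipped to the parameter bounds
        mut = rng.random(kids.shape) < p_mut
        kids = np.clip(kids + mut * rng.normal(0, 0.1 * (hi - lo),
                                               kids.shape), lo, hi)
        kid_fit = np.apply_along_axis(fitness, 1, kids)
        better = kid_fit > fit            # elitist replacement per slot
        pop[better], fit[better] = kids[better], kid_fit[better]
    return pop[fit.argmax()], fit.max()
```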
Possible Improvements to MCNP6 and its CEM/LAQGSM Event-Generators
Energy Technology Data Exchange (ETDEWEB)
Mashnik, Stepan Georgievich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-08-04
This report is intended for the MCNP6 developers and sponsors of MCNP6. It presents a set of suggested possible future improvements to MCNP6 and to its CEM03.03 and LAQGSM03.03 event generators. A few suggested modifications of MCNP6 are quite simple, aimed at avoiding possible problems with running MCNP6 on various computers; these changes are not expected to change or improve any results, but should make the use of MCNP6 easier, and they are expected to require limited man-power resources. On the other hand, several other suggested improvements require serious further development of nuclear reaction models and are expected to improve significantly the predictive power of MCNP6 for a number of nuclear reactions; such developments require several years of work by real experts on nuclear reactions.
A Generic Software Framework for Data Assimilation and Model Calibration
Van Velzen, N.
2010-01-01
The accuracy of dynamic simulation models can be increased by using observations in conjunction with a data assimilation or model calibration algorithm. However, implementing such algorithms usually increases the complexity of the model software significantly. By using concepts from object oriented
A mathematical model for camera calibration based on straight lines
Directory of Open Access Journals (Sweden)
Antonio M. G. Tommaselli
2005-12-01
In order to facilitate the automation of the camera calibration process, a mathematical model using straight lines was developed, based on the equivalent-planes mathematical model. Parameter estimation for the developed model is achieved by the Least Squares Method with Conditions and Observations. The same method of adjustment was used to implement camera calibration with bundles, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional method with points. Details concerning the mathematical development of the model and experiments with simulated and real data will be presented, and the results of both methods of camera calibration, with straight lines and with points, will be compared.
Cai, Zhongli; Kwon, Yongkyu Luke; Reilly, Raymond M
2017-02-01
64Cu emits positrons as well as β⁻ particles and Auger and internal-conversion electrons useful for radiotherapy. Our objective was to model the cellular dosimetry of 64Cu under different geometries commonly used to study the cytotoxic effects of 64Cu. Monte Carlo N-Particle (MCNP) was used to simulate the transport of all particles emitted by 64Cu from the cell surface (CS), cytoplasm (Cy), or nucleus (N) of a single cell; a monolayer in a well (radius = 0.32-1.74 cm); or a sphere (radius = 50-6,000 μm) of cells, to calculate S values. The radii of the cell and N ranged from 5 to 12 μm and 2 to 11 μm, respectively. S values were obtained by MIRDcell for comparison. MCF7/HER2-18 cells were exposed in vitro to 64Cu-labeled trastuzumab. The subcellular distribution of 64Cu was measured by cell fractionation. The surviving fraction was determined in a clonogenic assay. The relative differences of MCNP versus MIRDcell self-dose S values for 64Cu ranged from -0.2% to 3.6% for N to N (S(N←N)), 2.3% to 8.6% for Cy to N (S(N←Cy)), and -12.0% to 7.3% for CS to N (S(N←CS)). The relative differences of MCNP versus MIRDcell cross-dose S values were 25.8%-30.6% for a monolayer and 30%-34% for a sphere, respectively. The ratios of S(N←N) to S(N←Cy) and of S(N←Cy) to S(N←CS) decreased with an increasing ratio of the radius of N to the radius of the cell and with the size of the monolayer or sphere. The surviving fraction of MCF7/HER2-18 cells treated with 64Cu-labeled trastuzumab (0.016-0.368 MBq/μg, 67 nM) for 18 h versus the absorbed dose followed a linear survival curve with α = 0.51 ± 0.05 Gy⁻¹ and R² = 0.8838. This is significantly different from the linear-quadratic survival curve of MCF7/HER2-18 cells exposed to γ-rays. MCNP- and MIRDcell-calculated S values agreed well. 64Cu in the N increases the dose to the N in isolated single cells but has less effect in a cell monolayer or small cluster of cells simulating a micrometastasis.
MCNP™ Monte Carlo: A precis of MCNP
International Nuclear Information System (INIS)
Adams, K.J.
1996-01-01
MCNP™ is a general-purpose three-dimensional time-dependent neutron, photon, and electron transport code. It is highly portable and user-oriented, and backed by stringent software quality assurance practices and extensive experimental benchmarks. The cross-section database is based upon the best evaluations available. MCNP incorporates state-of-the-art analog and adaptive Monte Carlo techniques. The code is documented in a 600-page manual, which is augmented by numerous Los Alamos technical reports that detail various aspects of the code. MCNP represents over a megahour of development and refinement over the past 50 years and an ongoing commitment to excellence.
Stochastic calibration and learning in nonstationary hydroeconomic models
Maneta, M. P.; Howitt, R.
2014-05-01
Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and optimal use of water. Once calibrated, these models are used for water management and analysis assuming they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming, integrated in a data assimilation algorithm based on the ensemble Kalman filter equations to identify the economic model parameters. A moving average kernel ensures that new and past information on agricultural activity are blended during the calibration process, avoiding loss of information and overcalibration for the conditions of a single year. A regularization constraint akin to the standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that the implementation of the PMP methodology within a data assimilation framework based on the enKF equations is an effective method to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to the known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.
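A single ensemble Kalman filter analysis step for parameter estimation can be sketched as follows. The economic production model is replaced by a generic scalar simulator supplied by the caller, and the paper's moving-average kernel and Tikhonov-style regularization constraint are omitted, so this is only the core update:

```python
import numpy as np

rng = np.random.default_rng(7)

def enkf_update(ensemble, sim, y_obs, obs_var):
    """One enKF analysis step for parameter estimation: each row of
    `ensemble` is a candidate parameter set, updated toward a scalar
    observation y_obs with perturbed-observation assimilation."""
    Y = np.array([sim(theta) for theta in ensemble])   # predicted obs
    A = ensemble - ensemble.mean(axis=0)               # parameter anomalies
    D = Y - Y.mean()                                   # prediction anomalies
    n = ensemble.shape[0]
    cov_ty = A.T @ D / (n - 1)                         # cross-covariance
    var_y = D @ D / (n - 1) + obs_var                  # innovation variance
    K = cov_ty / var_y                                 # Kalman gain
    perturbed = y_obs + rng.normal(0, np.sqrt(obs_var), n)
    return ensemble + np.outer(perturbed - Y, K)       # updated parameters
```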
Model calibration and beam control systems for storage rings
International Nuclear Information System (INIS)
Corbett, W.J.; Lee, M.J.; Ziemann, V.
1993-04-01
Electron beam storage rings and linear accelerators are rapidly gaining worldwide popularity as scientific devices for the production of high-brightness synchrotron radiation. Today, everybody agrees that there is a premium on calibrating the storage ring model and determining errors in the machine as soon as possible after the beam is injected. In addition, the accurate optics model enables machine operators to predictably adjust key performance parameters, and allows reliable identification of new errors that occur during operation of the machine. Since the need for model calibration and beam control systems is common to all storage rings, software packages should be made that are portable between different machines. In this paper, we report on work directed toward achieving in-situ calibration of the optics model, detection of alignment errors, and orbit control techniques, with an emphasis on developing a portable system incorporating these tools
The cost of uniqueness in groundwater model calibration
Moore, Catherine; Doherty, John
2006-04-01
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, which can make the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration
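The averaging-weight statement has a compact algebraic form: for Tikhonov regularization, the estimate is m = (G'G + lam*I)^(-1) G'd, and the resolution matrix R = (G'G + lam*I)^(-1) G'G maps the true parameters into the estimate, so each row of R holds exactly the averaging weights the abstract describes. A minimal sketch under those assumptions (zeroth-order Tikhonov, not the paper's specific regularization):

```python
import numpy as np

def tikhonov_inverse(G, d, lam):
    """Tikhonov-regularized least squares plus the resolution matrix.
    Row i of R gives the weights with which the true parameters are
    averaged into the estimate of parameter i; R = I only for a
    perfectly resolved (unregularized, full-rank) problem."""
    GtG, n = G.T @ G, G.shape[1]
    lhs = GtG + lam * np.eye(n)
    m = np.linalg.solve(lhs, G.T @ d)    # regularized estimate
    R = np.linalg.solve(lhs, GtG)        # resolution (averaging) matrix
    return m, R

# toy underdetermined problem: 5 observations, 12 parameters
rng = np.random.default_rng(10)
G = rng.normal(size=(5, 12))
d = G @ rng.normal(size=12)
m, R = tikhonov_inverse(G, d, lam=0.1)
print(np.diag(R))   # diagonal far below 1 flags poorly resolved parameters
```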
Bayesian calibration of power plant models for accurate performance prediction
International Nuclear Information System (INIS)
Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der
2014-01-01
Highlights: Bayesian calibration is applied to power plant performance prediction; measurements from a plant in operation are used for model calibration; a gas turbine performance model and a steam cycle model are calibrated; an integrated plant model is derived; part-load efficiency is accurately predicted as a function of ambient conditions. Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision-making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities, and plant operators make decisions with limited or no regard for uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part-load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but also to assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O'Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.
Calibration and Confirmation in Geophysical Models
Werndl, Charlotte
2016-04-01
For policy decisions the best geophysical models are needed. To evaluate geophysical models, it is essential that the best available methods for confirmation are used. A hotly debated issue on confirmation in climate science (as well as in philosophy) is the requirement of use-novelty (i.e., that data can only confirm models if they have not already been used before). This talk investigates the issue of use-novelty and double-counting for geophysical models. We will see that the conclusions depend on the framework of confirmation, and that it is not clear that use-novelty is a valid requirement or that double-counting is illegitimate.
Applying Hierarchical Model Calibration to Automatically Generated Items.
Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.
This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…
Cloud-Based Model Calibration Using OpenStudio: Preprint
Energy Technology Data Exchange (ETDEWEB)
Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.
2014-03-01
OpenStudio is a free, open-source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.
International Nuclear Information System (INIS)
Brockhoff, R.C.; Hendricks, J.S.
1994-09-01
The MCNP test set is used to test the MCNP code after installation on various computer platforms. For MCNP4 and MCNP4A this test set included 25 test problems designed to test as many features of the MCNP code as possible. A new and better test set has been devised to increase coverage of the code from 85% to 97% with 28 problems. The new test set is as fast as and shorter than the MCNP4A test set. The authors describe the methodology for devising the new test set, the features that were not covered in the MCNP4A test set, and the changes in the MCNP4A test set that have been made for MCNP4B and its developmental versions. Finally, new bugs uncovered by the new test set and a compilation of all known MCNP4A bugs are presented
A group of neutronics calculations in the MNSR using the MCNP-4C code
International Nuclear Information System (INIS)
Khattab, K.; Sulieman, I.
2009-11-01
The MCNP-4C code was used to model the 3-D core configuration of the Syrian Miniature Neutron Source Reactor (MNSR). The continuous-energy neutron cross sections were evaluated from the ENDF/B-VI library to calculate the thermal and fast neutron fluxes in the MNSR inner and outer irradiation sites. The thermal fluxes in the MNSR inner irradiation sites were measured for the first time using the multiple foil activation method. Good agreement was found between the calculated and measured results. This model is used as well to calculate the neutron flux spectrum in the reactor inner and outer irradiation sites and the reactor thermal power. Three 3-D neutronic models of the Syrian MNSR reactor using the MCNP-4C code were also developed to assess the possibility of fuel conversion from 89.87% HEU fuel (UAl4-Al) to 19.75% LEU fuel (UO2). This model is used in this paper to calculate the following reactor core physics parameters: clean cold core excess reactivity, calibration of the control rod worth and calculation of its shutdown margin, calibration of the top beryllium shim plate reflector, axial neutron flux distributions in the inner and outer irradiation sites, and the kinetic parameters (the prompt neutron lifetime ℓp and the effective delayed neutron fraction βeff). (authors)
Calibrating cellular automaton models for pedestrians walking through corners
Dias, Charitha; Lovreglio, Ruggiero
2018-05-01
Cellular Automata (CA) based pedestrian simulation models have gained remarkable popularity as they are simpler and easier to implement compared to other microscopic modeling approaches. However, incorporating traditional floor field representations in CA models to simulate pedestrian corner navigation behavior could result in unrealistic behaviors. Even though several previous studies have attempted to enhance CA models to realistically simulate pedestrian maneuvers around bends, such modifications have not been calibrated or validated against empirical data. In this study, two static floor field (SFF) representations, namely 'discrete representation' and 'continuous representation', are calibrated for CA-models to represent pedestrians' walking behavior around 90° bends. Trajectory data collected through a controlled experiment are used to calibrate these model representations. Calibration results indicate that although both floor field representations can represent pedestrians' corner navigation behavior, the 'continuous' representation fits the data better. Output of this study could be beneficial for enhancing the reliability of existing CA-based models by representing pedestrians' corner navigation behaviors more realistically.
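A 'continuous representation' static floor field in the above sense can be computed by flooding the walking distance from the exits across the grid, so the field bends smoothly around a corner instead of following axis-aligned metrics. This is a sketch under those assumptions (Dijkstra over the 8-neighbourhood; the exp(-kS*S) transition rule noted in the final comment is the usual CA convention, not necessarily the paper's calibrated form):

```python
import heapq
import numpy as np

def static_floor_field(walls, exit_cells):
    """Continuous-valued SFF: shortest walking distance to the nearest
    exit, flooded with Dijkstra over the 8-neighbourhood so diagonal
    steps cost sqrt(2); wall cells stay at infinity."""
    field = np.full(walls.shape, np.inf)
    heap = [(0.0, rc) for rc in exit_cells]
    for _, (r, c) in heap:
        field[r, c] = 0.0
    heapq.heapify(heap)
    steps = [(dr, dc, float(np.hypot(dr, dc)))
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if dr or dc]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > field[r, c]:
            continue                      # stale queue entry
        for dr, dc, w in steps:
            rr, cc = r + dr, c + dc
            if (0 <= rr < walls.shape[0] and 0 <= cc < walls.shape[1]
                    and not walls[rr, cc] and d + w < field[rr, cc]):
                field[rr, cc] = d + w
                heapq.heappush(heap, (d + w, (rr, cc)))
    return field  # CA transition probabilities typically use exp(-k_S * field)
```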
A single model procedure for tank calibration function estimation
International Nuclear Information System (INIS)
York, J.C.; Liebetrau, A.M.
1995-01-01
Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages
MT3DMS: Model use, calibration, and validation
Zheng, C.; Hill, Mary C.; Cao, G.; Ma, R.
2012-01-01
MT3DMS is a three-dimensional multi-species solute transport model for solving advection, dispersion, and chemical reactions of contaminants in saturated groundwater flow systems. MT3DMS interfaces directly with the U.S. Geological Survey finite-difference groundwater flow model MODFLOW for the flow solution and supports the hydrologic and discretization features of MODFLOW. MT3DMS contains multiple transport solution techniques in one code, which can often be important, including in model calibration. Since its first release in 1990 as MT3D for single-species mass transport modeling, MT3DMS has been widely used in research projects and practical field applications. This article provides a brief introduction to MT3DMS and presents recommendations about calibration and validation procedures for field applications of MT3DMS. The examples presented suggest the need to consider alternative processes as models are calibrated and suggest opportunities and difficulties associated with using groundwater age in transport model calibration.
Effect of Using Extreme Years in Hydrologic Model Calibration Performance
Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.
2017-12-01
Hydrological models are useful in predicting and developing management strategies for controlling the system behaviour. Specifically they can be used for evaluating streamflow at ungaged catchments, effect of climate change, best management practices on water resources, or identification of pollution sources in a watershed. This study is a part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, first water resources in Ergene Watershed is studied. Streamgages found in the basin are identified and daily streamflow measurements are obtained from State Hydraulic Works of Turkey. Streamflow data is analysed using box-whisker plots, hydrographs and flow-duration curves focusing on identification of extreme periods, dry or wet. Then a hydrological model is developed for Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods including dry and wet ones and the performance of calibration is evaluated using Nash-Sutcliffe Efficiency (NSE), correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that calibration period affects the model performance, and the main purpose of the development of the hydrological model should guide calibration period selection. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
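Two of the performance measures named above are one-liners; a sketch of NSE and PBIAS follows. Note that the sign convention for PBIAS varies between references; the version below, an assumption here, treats positive values as underestimation:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; with this convention, positive values indicate
    that the model underestimates the observations on average."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)
```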
Calibration of a stochastic health evolution model using NHIS data
Gupta, Aparna; Li, Zhisheng
2011-10-01
This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.
Optical model and calibration of a sun tracker
International Nuclear Information System (INIS)
Volkov, Sergei N.; Samokhvalov, Ignatii V.; Cheong, Hai Du; Kim, Dukhyeon
2016-01-01
Sun trackers are widely used to investigate scattering and absorption of solar radiation in the Earth's atmosphere. We present a method for optimization of the optical altazimuth sun tracker model with output radiation direction aligned with the axis of a stationary spectrometer. The method solves the problem of stability loss in tracker pointing at the Sun near the zenith. An optimal method for tracker calibration at the measurement site is proposed in the present work. A method of moving calibration is suggested for mobile applications in the presence of large temperature differences and errors in the alignment of the optical system of the tracker. - Highlights: • We present an optimal optical sun tracker model for atmospheric spectroscopy. • The problem of loss of stability of tracker pointing at the Sun has been solved. • We propose an optimal method for tracker calibration at a measurement site. • Test results demonstrate the efficiency of the proposed optimization methods.
International Nuclear Information System (INIS)
Cashwell, E.D.; Schrandt, R.G.
1980-01-01
The current state of the art of calculating flux at a point with MCNP is discussed. Various techniques are touched upon, but the main emphasis is on the fast improved version of the once-more-collided flux estimator, which has been modified to treat neutrons thermalized by the free-gas model. The method is tested on several problems of interest and the results are presented.
Bayesian calibration of the Community Land Model using surrogates
Energy Technology Data Exchange (ETDEWEB)
Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton
2014-02-01
We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
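The surrogate-plus-MCMC workflow can be condensed into a toy example. The sketch below replaces CLM with a cheap stand-in function, fits a polynomial surrogate over a one-dimensional parameter range, and runs a random-walk Metropolis sampler against synthetic observations; every name and number here is invented for illustration and is not from the CLM study:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(theta):          # stand-in for a full CLM run
    return 2.0 * theta + 0.5 * theta ** 2

# 1) Build a polynomial surrogate from a small design of "runs".
design = np.linspace(0.0, 2.0, 8)
coeffs = np.polyfit(design, [expensive_model(t) for t in design], deg=2)
surrogate = np.poly1d(coeffs)

# 2) Synthetic observations of the output (latent heat flux in the paper).
theta_true, sigma = 1.3, 0.1
y_obs = expensive_model(theta_true) + rng.normal(0.0, sigma, size=48)

def log_post(theta):                 # flat prior on [0, 2]
    if not 0.0 <= theta <= 2.0:
        return -np.inf
    return -0.5 * np.sum((y_obs - surrogate(theta)) ** 2) / sigma ** 2

# 3) Random-walk Metropolis using only the cheap surrogate.
chain, theta, lp = [], 1.0, log_post(1.0)
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
print("posterior mean:", np.mean(chain[5000:]))
```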
Calibration of hydrological models using flow-duration curves
Directory of Open Access Journals (Sweden)
I. K. Westerberg
2011-07-01
The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested – based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments, with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method, both in calibration and prediction, in both catchments. An advantage of the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of…
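A minimal sketch of the two building blocks, an FDC and volume-spaced evaluation points, is given below; the plotting position, the number of EPs, and the +/-20% acceptability band are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def fdc(q):
    """Flow-duration curve: flows sorted high to low vs. exceedance probability."""
    q = np.sort(np.asarray(q, float))[::-1]
    p = np.arange(1, q.size + 1) / (q.size + 1.0)   # Weibull plotting position
    return p, q

def volume_spaced_eps(q, n_ep=10):
    """Evaluation points chosen so each interval carries an equal flow volume."""
    p, qs = fdc(q)
    cum_vol = np.cumsum(qs) / qs.sum()
    idx = np.searchsorted(cum_vol, np.linspace(0.05, 0.95, n_ep))
    return p[idx], qs[idx]

q_obs = np.random.default_rng(0).lognormal(0.0, 1.0, 3650)  # ~10 yr daily flows
p_ep, q_ep = volume_spaced_eps(q_obs)
# A simulation is behavioural if its FDC lies within the limits of
# acceptability at every EP, e.g. +/-20 % discharge uncertainty:
lower, upper = 0.8 * q_ep, 1.2 * q_ep
```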
Calibration of Automatically Generated Items Using Bayesian Hierarchical Modeling.
Johnson, Matthew S.; Sinharay, Sandip
For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…
LED-based Photometric Stereo: Modeling, Calibration and Numerical Solutions
DEFF Research Database (Denmark)
Quéau, Yvain; Durix, Bastien; Wu, Tao
2018-01-01
We conduct a thorough study of photometric stereo under nearby point light source illumination, from modeling to numerical solution, through calibration. In the classical formulation of photometric stereo, the luminous fluxes are assumed to be directional, which is very difficult to achieve in practice…
Calibration of a Plastic Classification System with the Ccw Model
International Nuclear Information System (INIS)
Barcala Riveira, J. M.; Fernandez Marron, J. L.; Alberdi Primicia, J.; Navarrete Marin, J. J.; Oller Gonzalez, J. C.
2003-01-01
This document describes the calibration of a plastic classification system with the Ccw model (Classification by Quanta built with Wavelet Coefficients). The method is applied to spectra of plastics usually present in domestic wastes. The results obtained are shown. (Author) 16 refs
Technical Note: Calibration and validation of geophysical observation models
Salama, M.S.; van der Velde, R.; van der Woerd, H.J.; Kromkamp, J.C.; Philippart, C.J.M.; Joseph, A.T.; O'Neill, P.E.; Lang, R.H.; Gish, T.; Werdell, P.J.; Su, Z.
2012-01-01
We present a method to calibrate and validate observational models that interrelate remotely sensed energy fluxes to geophysical variables of land and water surfaces. Coincident sets of remote sensing observation of visible and microwave radiations and geophysical data are assembled and subdivided
Criticality Calculations with MCNP6 - Practical Lectures
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3); Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3); Alwin, Jennifer Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3)
2016-11-29
These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The lecture topics are: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile material vault, and criticality accident alarm systems. After completion of this course, you should be able to: develop an input model for MCNP; describe how cross section data impact Monte Carlo and deterministic codes; describe the importance of validation of computer codes and how it is accomplished; describe the methodology supporting Monte Carlo codes and deterministic codes; describe pitfalls of Monte Carlo calculations; discuss the strengths and weaknesses of Monte Carlo and discrete ordinates codes; and, given that the diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present, identify a fissile system for which a diffusion theory solution would be adequate.
International Nuclear Information System (INIS)
Pauzi, A M
2013-01-01
The neutron transport code Monte Carlo N-Particle (MCNP), well known as the gold standard for predicting nuclear reactions, was used to model the small nuclear reactor core called U-Battery™, which was developed by the University of Manchester and the Delft University of Technology. The paper introduces the concept of modeling the small reactor core, a high temperature reactor (HTR) type with small coated TRISO fuel particles in a graphite matrix, using the MCNPv4C software. The criticality of the core was calculated with the software and analysed by changing key parameters such as coolant type, fuel type and enrichment levels, cladding materials, and control rod type. The criticality results from the simulation were validated against the SCALE 5.1 results of [1] M Ding and J L Kloosterman, 2010. The data produced from these analyses would be used as part of the process of proposing an initial core layout and a provisional list of materials for the newly designed reactor core. In the future, the criticality study would be continued with different core configurations and geometries.
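For readers unfamiliar with MCNP criticality inputs, the following sketch writes a minimal KCODE deck for a bare HEU sphere (a Godiva-like toy problem, not the U-Battery geometry; the radius, density, and cycle counts are illustrative):

```python
# Write a minimal MCNP criticality (KCODE) input: title card, cell cards,
# blank line, surface cards, blank line, data cards.
deck = """\
Bare HEU sphere criticality sketch (illustrative, not the U-Battery)
1 1 -18.74 -1   imp:n=1   $ fuel sphere
2 0         1   imp:n=0   $ outside world

1 so 8.741     $ sphere at origin, radius in cm

m1 92235.70c 1.0          $ pure U-235 (illustrative material)
kcode 5000 1.0 50 250     $ 5000 n/cycle, 50 skipped, 250 total cycles
ksrc 0 0 0                $ initial fission source point
"""
with open("heu_sphere.inp", "w") as f:
    f.write(deck)
```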
Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir
2009-11-01
Radiopharmaceutical applications in nuclear medicine require a detailed dosimetric estimate of the radiation energy delivered to human tissues. Over the past years, several publications have addressed the problem of internal dose estimation in volumes of several sizes considering photon and electron sources, most of them using Monte Carlo radiation transport codes. Despite the widespread use of these codes, owing to the variety of resources and capabilities they offer for carrying out dose calculations, several aspects, such as the physical models, cross sections, and numerical approximations used in the simulations, remain objects of study. An accurate dose estimate depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most widely used codes: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Considerable discrepancies were found in some cases, not only between the two codes but also between different cross sections and algorithms within the same code. The maximum differences found between the two codes are 5.0% and 10% for photons and electrons, respectively. Even for problems as simple as spheres with uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters.
Cagnazzo, M; Borio di Tigliole, A; Böck, H; Villa, M
2018-05-01
The aim of this work was the measurement of the fission product activity distribution along the axial dimension of irradiated fuel elements (FEs) at the TRIGA Mark II research reactor of the Technische Universität (TU) Wien. The activity distribution was measured by means of a customized fuel gamma scanning device, which includes a vertical lifting system to move the fuel rod along its vertical axis. For each investigated FE, a gamma spectrum was measured along the vertical axis in steps of 1 cm, in order to determine the axial distribution of the fission products. After the fuel elements had undergone a relatively short cooling-down period, different fission products were detected. The activity concentration was determined by calibrating the gamma detector with a standard calibration source of known activity and by MCNP6 simulations for the evaluation of self-absorption and geometric effects. Given the specific TRIGA fuel composition, a correction procedure was developed and used in this work for the measurement of the fission product Zr-95. This measurement campaign is part of a more extensive project aiming at the modelling of the TU Wien TRIGA reactor by means of different calculation codes (MCNP6, Serpent): the experimental results presented in this paper will subsequently be used to benchmark the models developed with these codes.
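The activity determination sketched above follows the usual full-energy-peak relation; a hedged Python version is shown below, where the efficiency and the two MCNP6-style correction factors are placeholders rather than values from the campaign:

```python
def activity_bq(net_counts, live_time_s, efficiency, gamma_yield,
                self_absorption=1.0, geometry=1.0):
    """Activity from a net full-energy-peak area:
    A = N / (t_live * eps * I_gamma * C_self * C_geom)."""
    return net_counts / (live_time_s * efficiency * gamma_yield
                         * self_absorption * geometry)

# Illustrative numbers for the Zr-95 line at 756.7 keV (emission yield ~0.544);
# the efficiency and the MCNP6-evaluated corrections are placeholders.
print(activity_bq(net_counts=12500, live_time_s=3600, efficiency=2.1e-3,
                  gamma_yield=0.544, self_absorption=0.78, geometry=0.95))
```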
Evaluation of multivariate calibration models transferred between spectroscopic instruments
DEFF Research Database (Denmark)
Eskildsen, Carl Emil Aae; Hansen, Per W.; Skov, Thomas
2016-01-01
In a setting where multiple spectroscopic instruments are used for the same measurements it may be convenient to develop the calibration model on a single instrument and then transfer this model to the other instruments. In the ideal scenario, all instruments provide the same predictions for the same samples using the transferred model. However, sometimes the success of a model transfer is evaluated by comparing the transferred model predictions with the reference values. This is not optimal, as uncertainties in the reference method will impact the evaluation. This paper proposes a new method for calibration model transfer evaluation. The new method is based on comparing predictions from different instruments, rather than comparing predictions and reference values. A total of 75 flour samples were available for the study. All samples were measured on ten near infrared (NIR) instruments from two…
Calibration and verification of numerical runoff and erosion model
Directory of Open Access Journals (Sweden)
Gabrić Ognjen
2015-01-01
Based on field and laboratory measurements, and in step with the development of computational techniques, runoff and erosion models based on equations describing the physics of the process have been developed. Building on the KINEROS2 model, this paper presents the basic principles of modelling runoff and erosion processes with the Saint-Venant equations. Alternative equations for friction calculation, for the calculation of source and deposition elements, and for transport capacity are also shown. Numerical models based on the original and the alternative equations are calibrated and verified on a laboratory-scale model. According to the results, friction calculation based on the analytic solution of laminar flow must be included in all runoff and erosion models.
An Expectation-Maximization Method for Calibrating Synchronous Machine Models
Energy Technology Data Exchange (ETDEWEB)
Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang
2013-07-21
The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite bus system and compared with a method where both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
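The filter-then-MLE iteration has the same structure as textbook EM for a linear-Gaussian state-space model. The toy sketch below estimates the transition coefficient of a scalar AR(1) process, with a Kalman filter and RTS smoother in the E-step and a closed-form MLE in the M-step; the paper's machine model is nonlinear and uses an EKF instead, and all values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
q, r, N = 0.05, 0.2, 400                      # known noise variances
a_true = 0.9
x = np.zeros(N)
for k in range(1, N):
    x[k] = a_true * x[k - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), N)          # noisy measurements

a = 0.5                                        # poor initial guess
for _ in range(50):                            # EM iterations
    # E-step: Kalman filter ...
    mf, Pf = np.zeros(N), np.zeros(N)
    mp, Pp = np.zeros(N), np.zeros(N)
    mf[0], Pf[0] = y[0], r
    for k in range(1, N):
        mp[k], Pp[k] = a * mf[k - 1], a * a * Pf[k - 1] + q
        K = Pp[k] / (Pp[k] + r)
        mf[k] = mp[k] + K * (y[k] - mp[k])
        Pf[k] = (1 - K) * Pp[k]
    # ... and RTS smoother with lag-one covariances.
    ms, Ps = mf.copy(), Pf.copy()
    num = den = 0.0
    for k in range(N - 1, 0, -1):
        J = Pf[k - 1] * a / Pp[k]
        ms[k - 1] = mf[k - 1] + J * (ms[k] - mp[k])
        Ps[k - 1] = Pf[k - 1] + J * J * (Ps[k] - Pp[k])
        C = J * Ps[k]                          # Cov(x_k, x_{k-1} | all data)
        num += ms[k] * ms[k - 1] + C
        den += ms[k - 1] ** 2 + Ps[k - 1]
    a = num / den                              # M-step: closed-form MLE
print("estimated a:", a)                       # should approach 0.9
```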
Calibration of a Chemistry Test Using the Rasch Model
Directory of Open Access Journals (Sweden)
Nancy Coromoto Martín Guaregua
2011-11-01
The Rasch model was used to calibrate a general chemistry test for the purpose of analyzing the advantages and information the model provides. The sample was composed of 219 college freshmen. Of the 12 questions used, good fit was achieved in 10. The evaluation shows that although there are items of variable difficulty, there are gaps on the scale; in order to make the test complete, it will be necessary to design new items to fill in these gaps.
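Under the Rasch model the probability that person p answers item i correctly is 1/(1+exp(-(theta_p - b_i))). A compact joint-maximum-likelihood calibration by alternating gradient steps might look as follows; the simulated responses and the step size are invented (the 219x12 shape merely mirrors the study), and production work would typically use conditional or marginal ML:

```python
import numpy as np

rng = np.random.default_rng(3)
P, I = 219, 12                                  # persons, items (as in the study)
theta_true = rng.normal(0, 1, P)
b_true = np.linspace(-1.5, 1.5, I)
prob = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.uniform(size=(P, I)) < prob).astype(float)   # 0/1 response matrix

theta, b = np.zeros(P), np.zeros(I)
for _ in range(500):                            # alternating gradient ascent
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    theta += 0.01 * (X - p).sum(axis=1)         # d logL / d theta_p
    b     -= 0.01 * (X - p).sum(axis=0)         # d logL / d b_i
    b     -= b.mean()                           # identification: mean difficulty 0
print(np.round(b, 2))                           # recovered item difficulties
```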
Data analysis and visualization in MCNP™
International Nuclear Information System (INIS)
Waters, L.S.
1994-01-01
There are many situations where the user may wish to go beyond current MCNP capabilities. For example, data produced by the code may need formatting for input into an external graphics package. Limitations on disk space may hinder writing out large PTRAC files. Specialized data analysis routines may be needed to model complex experimental results. One may wish to produce particle histories in a format not currently available in the code. To address these and other similar concerns, a new capability in MCNP is being tested. A number of real, integer, logical and character variables describing the current and past characteristics of a particle are made available online to the user in three subroutines. The type of data passed can be controlled by cards in the INP file. The subroutines are otherwise empty, and the user may code in any desired analysis. A new MCNP executable is produced by compiling these subroutines and linking to a library which contains the object files for the rest of the code.
Muratov, V. G.; Lopatkin, A. V.
An important aspect of the verification of the engineering techniques used in the safety analysis of MOX-fuelled reactors is the preparation of test calculations to determine nuclide composition variations under irradiation, together with the analysis of burnup problem errors resulting from various factors, such as the effect of nuclear data uncertainties on nuclide concentration calculations. So far, no universally recognized tests have been devised. A calculation technique has been developed for solving the problem using up-to-date calculation tools and the latest versions of the nuclear data libraries. Initially, in 1997, a code was written under ISTC Project No. 116 to calculate the burnup in one VVER-1000 fuel rod using the MCNP code. Later on, the authors developed a computation technique which allows calculating fuel burnup in models of a fuel rod, a fuel assembly, or the whole reactor. It became possible to apply it to fuel burnup in all types of nuclear reactors and subcritical blankets.
International Nuclear Information System (INIS)
Fonseca, Telma Cristina Ferreira
2009-01-01
Intensity Modulated Radiation Therapy (IMRT) is an advanced treatment technique used worldwide in oncology. In this master's work, a software package for simulating the IMRT protocol, named SOFT-RT, was developed within the research group 'Nucleo de Radiacoes Ionizantes' (NRI) at UFMG. The SOFT-RT computational system simulates the absorbed dose of the radiotherapy treatment on a three-dimensional voxel model of the patient. The SISCODES code, from the NRI research group, assists in producing the voxel model of the region of interest from a set of digitized CT or MRI images. SOFT-RT also allows rotation and translation of the model about the coordinate system axes for better visualization of the model and the beam. SOFT-RT collects and exports the necessary parameters to the MCNP code, which carries out the nuclear radiation transport towards the tumor and the adjacent healthy tissues for each orientation and position of the planned beam. Through three-dimensional visualization of the voxel model of a patient, it is possible to focus on a tumoral region while preserving the healthy tissues around it, taking into account exactly where the radiation beam passes, which tissues are affected, and how much dose is deposited in each tissue. The Out-module of SOFT-RT imports the results and expresses the dose response by superimposing the dose on the voxel model in gray scale in a three-dimensional graphic representation. The present master's thesis presents this new computational system for radiotherapy treatment planning, the SOFT-RT code, which has been developed using the robust, multi-platform C++ programming language with the OpenGL graphics packages. The Linux operating system was adopted so that the code runs on an open-source, freely accessible platform. Preliminary simulation results for a cerebral tumor case are reported, together with some dosimetric evaluations. (author)
Stochastic isotropic hyperelastic materials: constitutive calibration and model selection
Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain
2018-03-01
Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.
Calibration of two complex ecosystem models with different likelihood functions
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they have become net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive for the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can occur if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research a further developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (the degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model…
International Nuclear Information System (INIS)
Cox, Lawrence J.; Barrett, Richard F.; Booth, Thomas Edward; Briesmeister, Judith F.; Brown, Forrest B.; Bull, Jeffrey S.; Giesler, Gregg Carl; Goorley, John T.; Mosteller, Russell D.; Forster, R. Arthur; Post, Susan E.; Prael, Richard E.; Selcow, Elizabeth Carol; Sood, Avneet
2002-01-01
The Monte Carlo transport workhorse, MCNP, is undergoing a massive renovation at Los Alamos National Laboratory (LANL) in support of the Eolus Project of the Advanced Simulation and Computing (ASCI) Program. MCNP Version 5 (V5) (expected to be released to RSICC in Spring, 2002) will consist of a major restructuring from FORTRAN-77 (with extensions) to ANSI-standard FORTRAN-90 with support for all of the features available in the present release (MCNP-4C2/4C3). To most users, the look-and-feel of MCNP will not change much except for the improvements (improved graphics, easier installation, better online documentation). For example, even with the major format change, full support for incremental patching will still be provided. In addition to the language and style updates, MCNP V5 will have various new user features. These include improved photon physics, neutral particle radiography, enhancements and additions to variance reduction methods, new source options, and improved parallelism support (PVM, MPI, OpenMP).
Calibrating corneal material model parameters using only inflation data: an ill-posed problem
CSIR Research Space (South Africa)
Kok, S
2014-08-01
A common approach is to perform numerical modelling using the finite element method, for which a calibrated material model is required. These material models are typically calibrated from experimental inflation data by solving an inverse problem. In the inverse problem…
Calibration process of highly parameterized semi-distributed hydrological model
Vidmar, Andrej; Brilly, Mitja
2017-04-01
Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that has not been researched enough. Calibration is a procedure for determining the parameters of a model that are not known well enough. Input and output variables and the mathematical model expressions are known, while only some parameters are unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated algorithms for calibration, without the possibility for the modeller to manage the process, and the results are often not the best. We developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST. PEST is a parameter estimation tool widely used in groundwater modelling that can also be applied to surface waters. A calibration process managed directly by the expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure had been left to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas. This step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set the initial parameter values at their preferred values based on expert knowledge. In this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group…
Bayesian model calibration of ramp compression experiments on Z
Brown, Justin; Hund, Lauren
2017-06-01
Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry, and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario, including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability, are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Calibrating Vadose Zone Models with Time-Lapse Gravity Data
DEFF Research Database (Denmark)
Christiansen, Lars; Hansen, A. B.; Looms, M. C.
2009-01-01
A change in soil water content is a change in mass stored in the subsurface. Given that the mass change is big enough, the change can be measured with a gravity meter. Attempts have been made with varying success over the last decades to use ground-based time-lapse gravity measurements to infer hydrogeological parameters. These studies focused on the saturated zone, with specific yield as the most prominent target parameter. Any change in storage in the vadose zone has been considered as noise. Our modeling results show a measureable change in gravity from the vadose zone during a forced infiltration experiment on 10 m by 10 m grassland. Simulation studies show a potential for vadose zone model calibration using gravity data in conjunction with other geophysical data, e.g. cross-borehole georadar. We present early field data and calibration results from a forced infiltration experiment conducted over 30…
A new sewage exfiltration model--parameters and calibration.
Karpf, Christian; Krebs, Peter
2011-01-01
Exfiltration of waste water from sewer systems represents a potential danger for the soil and the aquifer. Common models, which are used to describe the exfiltration process, are based on the law of Darcy, extended by a more or less detailed consideration of the expansion of leaks, the characteristics of the soil and the colmation layer. But, due to the complexity of the exfiltration process, the calibration of these models includes a significant uncertainty. In this paper, a new exfiltration approach is introduced, which implements the dynamics of the clogging process and the structural conditions near sewer leaks. The calibration is realised according to experimental studies and analysis of groundwater infiltration to sewers. Furthermore, exfiltration rates and the sensitivity of the approach are estimated and evaluated, respectively, by Monte-Carlo simulations.
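In the Darcy-based picture described above, the leak flow is controlled by the colmation (clogging) layer. A minimal sketch of such a leak model is given below; the parameter names, values, and the simple exponential clogging law are illustrative stand-ins for the paper's calibrated dynamics:

```python
import math

def exfiltration_rate(leak_area_m2, water_level_m, colmation_thickness_m,
                      k_colmation_m_s):
    """Darcy flow through the colmation layer under a sewer leak:
    Q = k * A * (h_w + d_c) / d_c  (head loss taken across the layer)."""
    gradient = (water_level_m + colmation_thickness_m) / colmation_thickness_m
    return k_colmation_m_s * leak_area_m2 * gradient   # m^3/s

def clogged_conductivity(k0_m_s, t_days, tau_days=30.0, k_min_frac=0.05):
    """Illustrative exponential clogging: conductivity decays to a residual."""
    return k0_m_s * (k_min_frac + (1 - k_min_frac) * math.exp(-t_days / tau_days))

for t in (0, 10, 60):                 # days since the colmation layer formed
    k = clogged_conductivity(1e-5, t)
    print(t, exfiltration_rate(5e-4, 0.05, 0.02, k))
```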
Spatial and Temporal Self-Calibration of a Hydroeconomic Model
Howitt, R. E.; Hansen, K. M.
2008-12-01
Hydroeconomic modeling of water systems where risk and reliability of water supply are of critical importance must address explicitly how to model water supply uncertainty. When large fluctuations in annual precipitation and significant variation in flows by location are present, a model which solves with perfect foresight of future water conditions may be inappropriate for some policy and research questions. We construct a simulation-optimization model with limited foresight of future water conditions using positive mathematical programming and self-calibration techniques. This limited foresight netflow (LFN) model signals the value of storing water for future use and reflects a more accurate economic value of water at key locations, given that future water conditions are unknown. Failure to explicitly model this uncertainty could lead to undervaluation of storage infrastructure and contractual mechanisms for managing water supply risk. A model based on sequentially updated information is more realistic, since water managers make annual storage decisions without knowledge of yet to be realized future water conditions. The LFN model runs historical hydrological conditions through the current configuration of the California water system to determine the economically efficient allocation of water under current economic conditions and infrastructure. The model utilizes current urban and agricultural demands, storage and conveyance infrastructure, and the state's hydrological history to indicate the scarcity value of water at key locations within the state. Further, the temporal calibration penalty functions vary by year type, reflecting agricultural water users' ability to alter cropping patterns in response to water conditions. The model employs techniques from positive mathematical programming (Howitt, 1995; Howitt, 1998; Cai and Wang, 2006) to generate penalty functions that are applied to deviations from observed data. The functions are applied to monthly flows
SUPERIMPOSED MESH PLOTTING IN MCNP
Energy Technology Data Exchange (ETDEWEB)
J. HENDRICKS
2001-02-01
The capability to plot superimposed meshes has been added to MCNP™. MCNP4C featured a superimposed mesh weight window generator which enabled users to set up geometries without having to subdivide geometric cells for variance reduction. The variance reduction was performed with weight windows on a rectangular or cylindrical mesh superimposed over the physical geometry. Experience with the new capability was favorable but also indicated that a number of enhancements would be very beneficial, particularly a means of visualizing the mesh and its values. The mathematics for plotting the mesh and its values is described here along with a description of other upgrades.
Suitability study of MCNP Monte Carlo program for use in medical physics
International Nuclear Information System (INIS)
Jeraj, R.
1998-01-01
MCNP is a Monte Carlo program widely used in reactor and nuclear physics. An option for simulating electrons was added to the code a few years ago, and with this extension MCNP became potentially applicable in medical physics. In 1997, a new version of the code, named MCNP4B, was released, which contains several improvements in electron transport modeling. To test the suitability of the code, several important issues were considered and examined. The default sampling in MCNP electron transport was found to be inappropriate, because it gives wrong depth-dose curves for electron energies of interest in radiotherapy (MeV range). The problem can be solved if ITS-style energy sampling is used instead. One of the most difficult problems in electron transport is the simulation of electron backscattering, which MCNP predicts well for both low- and high-Z materials. A potential drawback for MCNP dosimetry on real patient geometries is that the MCNP lattice calculation (e.g. when calculating dose distributions) becomes very slow for a large number of scoring voxels. However, if just one scoring voxel is used, the number of geometry voxels only slightly affects the speed. The study found that MCNP can be used reliably for many applications in medical physics, provided the established limitations are taken into account for the particular application. (author)
Directory of Open Access Journals (Sweden)
G. Hartmann
2005-01-01
In order to find a model parameterization such that the hydrological model performs well even under different conditions, appropriate model performance measures have to be determined. A common performance measure is the Nash-Sutcliffe efficiency, usually calculated by comparing observed and modelled daily values. In this paper a modified version is suggested in order to calibrate a model on different time scales simultaneously (days up to years). A spatially distributed hydrological model based on the HBV concept was used. The modelling was applied to the Upper Neckar catchment, a mesoscale river basin in south-western Germany with a size of about 4000 km2. The observation period 1961-1990 was divided into four different climatic periods, referred to as "warm", "cold", "wet" and "dry". These sub-periods were used to assess the transferability of the model calibration and of the measure of performance. In a first step, the hydrological model was calibrated on a certain period and afterwards applied to the same period. Then, a validation was performed on the climatologically opposite period to the calibration one, e.g. the model calibrated on the cold period was applied to the warm period. Optimal parameter sets were identified by an automatic calibration procedure based on simulated annealing. The results show that calibrating a hydrological model that is supposed to handle short-term as well as long-term signals is an important task. In particular, the objective function has to be chosen very carefully.
Model- and calibration-independent test of cosmic acceleration
International Nuclear Information System (INIS)
Seikel, Marina; Schwarz, Dominik J.
2009-01-01
We present a calibration-independent test of the accelerated expansion of the universe using supernova type Ia data. The test is also model-independent in the sense that no assumptions about the content of the universe or about the parameterization of the deceleration parameter are made, and that it does not assume any dynamical equations of motion. Yet, the test assumes the universe and the distribution of supernovae to be statistically homogeneous and isotropic. A significant reduction of systematic effects, as compared to our previous, calibration-dependent test, is achieved. Accelerated expansion is detected at a significant level (4.3σ in the 2007 Gold sample, 7.2σ in the 2008 Union sample) if the universe is spatially flat. This result depends, however, crucially on supernovae with a redshift smaller than 0.1, for which the assumption of statistical isotropy and homogeneity is less well established.
Technical note: Bayesian calibration of dynamic ruminant nutrition models.
Reed, K F; Arhonditsis, G B; France, J; Kebreab, E
2016-08-01
Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling.
CALIBRATION OF DISTRIBUTED SHALLOW LANDSLIDE MODELS IN FORESTED LANDSCAPES
Directory of Open Access Journals (Sweden)
Gian Battista Bischetti
2010-09-01
In mountainous, forested, soil-mantled landscapes all around the world, rainfall-induced shallow landslides are one of the most common hydro-geomorphic hazards, frequently impacting the environment and human lives and properties. In order to produce shallow landslide susceptibility maps, several models have been proposed in the last decade, combining simplified steady-state topography-based hydrological models with the infinite slope scheme in a GIS framework. In the present paper, two of the still open issues are investigated: the assessment of the validity of slope stability models and the inclusion of root cohesion values. In such a perspective, the "Stability INdex MAPping" (SINMAP) model has been applied to a small forested pre-Alpine catchment, adopting different calibration approaches and target indexes. The Single and the Multiple Calibration Regions modalities and three quantitative target indexes – the common Success Rate (SR), the Modified Success Rate (MSR), and a Weighted Modified Success Rate (WMSR) herein introduced – are considered. The results obtained show that the target index can significantly affect the values of a model's parameters and lead to different proportions of stable/unstable areas, both for the Single and the Multiple Calibration Regions approach. The use of SR as the target index leads to an over-prediction of the unstable areas, whereas the use of MSR and WMSR seems to allow a better discrimination between stable and unstable areas. The Multiple Calibration Regions approach should be preferred, using information on the spatial distribution of vegetation to define the Regions. The use of field-based estimates of root cohesion and sliding depth allows the implementation of slope stability models (SINMAP in our case) even without the data needed for calibration. To maximize the inclusion of such parameters into SINMAP, however, the assumption of a uniform distribution of…
A Linear Viscoelastic Model Calibration of Sylgard 184.
Energy Technology Data Exchange (ETDEWEB)
Long, Kevin Nicholas; Brown, Judith Alice
2017-04-01
We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer, for use both in Sierra / Solid Mechanics, via the Universal Polymer Model, and in Sierra / Structural Dynamics (Salinas) as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency-domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of those data differs from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia's constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40% and 20%, respectively, are compared with Sandia's legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules of the Sandia and LANL data.
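Master curves of this kind are conventionally represented by a Prony series, and the complex shear modulus then follows in closed form. The sketch below evaluates the storage and loss moduli for an invented three-term series; the coefficients are placeholders, not the calibrated Sylgard 184 values:

```python
import numpy as np

def prony_moduli(omega, g_inf, g_i, tau_i):
    """Storage and loss moduli of a Prony-series shear relaxation model:
    G*(w) = G_inf + sum_i g_i * (i w tau_i) / (1 + i w tau_i)."""
    wt = np.outer(omega, tau_i)
    storage = g_inf + (g_i * wt ** 2 / (1 + wt ** 2)).sum(axis=1)
    loss = (g_i * wt / (1 + wt ** 2)).sum(axis=1)
    return storage, loss

# Illustrative 3-term series (MPa, seconds); not Sylgard 184 parameters.
g_inf, g_i, tau_i = 0.4, np.array([0.2, 0.15, 0.1]), np.array([1e-3, 1e-1, 1e1])
omega = np.logspace(-3, 4, 8)                  # angular frequency, rad/s
Gp, Gpp = prony_moduli(omega, g_inf, g_i, tau_i)
for w, a, b in zip(omega, Gp, Gpp):
    print(f"{w:10.3g}  G'={a:.3f}  G''={b:.3f}")
```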
DEFF Research Database (Denmark)
Petersen, Britta; Gernaey, Krist; Henze, Mogens
2002-01-01
The purpose of the calibrated model determines how to approach a model calibration, e.g. which information is needed and to which level of detail the model should be calibrated. A systematic model calibration procedure was therefore defined and evaluated for a municipal–industrial wastewater treatment plant. In the case that was studied it was important to have a detailed description of the process dynamics, since the model was to be used as the basis for optimisation scenarios in a later phase. Therefore, a complete model calibration procedure was applied including: (1) a description…
MCNP output data analysis with ROOT (MODAR)
Carasco, C.
2010-12-01
file into Microsoft Excel. Such an approach is satisfactory when the quantity of data is small, but is not efficient when the size of the simulated data is large, for example when time-energy correlations are studied in detail, as in problems involving the associated particle technique. In addition, since the finite time resolution of the simulated detector cannot be modeled with MCNP, systems in which time-energy correlation is crucial cannot be described in a satisfactory way. Finally, realistic particle energy deposition in detectors is calculated with MCNP in a two-step process involving type-5 and then type-8 tallies. In the first step, the photon flux energy spectrum associated with a time region is selected and serves as the source energy distribution for the second step. Thus, several files must be manipulated before obtaining the result, which can be time consuming if one needs to study several time regions or different detector performances. In the same way, modeling counting statistics obtained in a limited acquisition time requires several steps and can also be time consuming. Solution method: In order to overcome the previous limitations, the MODAR C++ code has been written to make use of CERN's ROOT data analysis software. MCNP output data are read from the MCNP output file with dedicated routines. Two-dimensional histograms are filled and can be handled efficiently within the ROOT framework. To keep a user-friendly analysis tool, all processing and data display can be done by means of the ROOT Graphical User Interface. Specific routines have been written to include detectors' finite time resolution and energy response function, as well as counting statistics, in a straightforward way. Reasons for new version: For applications involving the associated particle technique, a large number of gamma rays are produced by fast neutron interactions. To study the energy spectra, it is useful to identify the gamma-ray energy peaks in a straightforward way. Therefore, the…
Dynamic calibration of agent-based models using data assimilation.
Ward, Jonathan A; Evans, Andrew J; Malleson, Nicolas S
2016-04-01
A widespread approach to investigating the dynamical behaviour of complex social systems is via agent-based models (ABMs). In this paper, we describe how such models can be dynamically calibrated using the ensemble Kalman filter (EnKF), a standard method of data assimilation. Our goal is twofold. First, we want to present the EnKF in a simple setting for the benefit of ABM practitioners who are unfamiliar with it. Second, we want to illustrate to data assimilation experts the value of using such methods in the context of ABMs of complex social systems and the new challenges these types of model present. We work towards these goals within the context of a simple question of practical value: how many people are there in Leeds (or any other major city) right now? We build a hierarchy of exemplar models that we use to demonstrate how to apply the EnKF and calibrate these using open data of footfall counts in Leeds.
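The EnKF analysis step at the heart of this approach fits in a few lines. The sketch below assimilates a single scalar observation (a footfall count, say) into an ensemble of model states; the state dimension, ensemble size, and noise levels are all illustrative:

```python
import numpy as np

def enkf_update(ensemble, y_obs, obs_var, H):
    """Stochastic EnKF analysis step.
    ensemble: (n_ens, n_state) forecast states; H: (n_state,) linear obs operator."""
    rng = np.random.default_rng(42)
    n_ens = ensemble.shape[0]
    y_ens = ensemble @ H                              # predicted observations
    X = ensemble - ensemble.mean(axis=0)              # state anomalies
    dy = y_ens - y_ens.mean()                         # observation anomalies
    cov_xy = X.T @ dy / (n_ens - 1)                   # state-obs covariance
    cov_yy = dy @ dy / (n_ens - 1) + obs_var          # innovation variance
    K = cov_xy / cov_yy                               # Kalman gain (vector)
    perturbed = y_obs + rng.normal(0, np.sqrt(obs_var), n_ens)
    return ensemble + np.outer(perturbed - y_ens, K)

# 100-member ensemble of a 3-variable "city population" state; we observe
# only the first variable (e.g. a footfall counter) with variance 25.
ens = np.random.default_rng(0).normal([50, 30, 20], 10, size=(100, 3))
ens_a = enkf_update(ens, y_obs=62.0, obs_var=25.0, H=np.array([1.0, 0.0, 0.0]))
print(ens.mean(axis=0), "->", ens_a.mean(axis=0))
```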
Calibration and validation of a general infiltration model
Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.
1999-08-01
A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.
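Since the paper ties the parameter So to the potential maximum retention of the SCS-CN method, the standard curve-number relation is worth spelling out. A short sketch follows, with depths in mm and an illustrative CN value:

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """SCS-CN direct runoff: S = 25400/CN - 254 (mm), Ia = ia_ratio * S,
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = ia_ratio * s                 # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

for p in (10.0, 25.0, 50.0, 100.0):   # storm depths in mm; CN = 75 illustrative
    print(p, round(scs_cn_runoff(p, cn=75), 2))
```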
Calibration of the simulation model of the VINCY cyclotron magnet
Directory of Open Access Journals (Sweden)
Ćirković Saša
2002-01-01
The MERMAID program will be used to isochronise the nominal magnetic field of the VINCY Cyclotron. This program simulates the response, i.e. calculates the magnetic field, of a previously defined model of a magnet. The accuracy of the 3D field calculation depends on the density of the grid points in the simulation model grid. The size of the VINCY Cyclotron and the maximum number of grid points in the XY plane allowed by MERMAID define the maximum obtainable accuracy of the field calculations. Comparison of the field simulated with the maximum obtainable accuracy against the magnetic field measured in the first phase of the VINCY Cyclotron magnetic field measurement campaign has shown that the difference between these two fields is not as small as required. A further decrease of the difference between these fields is obtained by calibrating the simulation model, i.e. by adjusting the current through the main coils in the simulation model.
Recent Improvements to the Calibration Models for RXTE/PCA
Jahoda, K.
2008-01-01
We are updating the calibration of the PCA to correct for slow variations, primarily in the energy-to-channel relationship. We have also improved the physical model in the vicinity of the Xe K-edge, which should increase the reliability of continuum fits above 20 keV. The improvements to the matrix are especially important for simultaneous observations, where the PCA is often used to constrain the continuum while other higher-resolution spectrometers are used to study the shape of lines and edges associated with iron.
MCNP-DSP, Monte Carlo Neutron-Particle Transport Code with Digital Signal Processing
International Nuclear Information System (INIS)
2002-01-01
1 - Description of program or function: MCNP-DSP is recommended only for experienced MCNP users working with subcritical measurements. It is a modification of the Los Alamos National Laboratory's Monte Carlo code MCNP4a that is used to simulate a variety of subcritical measurements. The DSP version was developed to simulate frequency analysis measurements, correlation (Rossi-α) measurements, pulsed neutron measurements, Feynman variance measurements, and multiplicity measurements. CCC-700/MCNP4C is recommended for general purpose calculations. 2 - Methods: MCNP-DSP performs calculations very similarly to MCNP and uses the same generalized geometry capabilities of MCNP. MCNP-DSP can only be used with continuous-energy cross-section data. A variety of source and detector options are available. However, unlike standard MCNP, the source and detector options are limited to those described in the manual because these options are specified in the MCNP-DSP extra data file. MCNP-DSP is used to obtain the time-dependent response of detectors that are modeled in the simulation geometry. The detectors represent actual detectors used in measurements. These time-dependent detector responses are used to compute a variety of quantities such as frequency analysis signatures, correlation signatures, multiplicity signatures, etc., between detectors or between sources and detectors. Energy ranges are 0-60 MeV for neutrons (data generally only available up to 20 MeV) and 1 keV - 1 GeV for photons and electrons. 3 - Restrictions on the complexity of the problem: None noted.
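Of the signatures listed, the Feynman variance-to-mean is the simplest to state: counts c are accumulated in gates of width T and Y(T) = Var(c)/mean(c) - 1, which vanishes for an uncorrelated (Poisson) source. A sketch on synthetic event times follows, with illustrative rates and gate widths:

```python
import numpy as np

def feynman_y(event_times_s, gate_width_s):
    """Feynman-Y: excess variance-to-mean of counts in fixed gates."""
    t = np.asarray(event_times_s, float)
    n_gates = int(t.max() / gate_width_s)
    counts, _ = np.histogram(t, bins=n_gates,
                             range=(0.0, n_gates * gate_width_s))
    return counts.var() / counts.mean() - 1.0

# A Poisson source should give Y ~ 0 at every gate width.
rng = np.random.default_rng(7)
times = np.cumsum(rng.exponential(1e-4, 200000))   # ~10 kHz over ~20 s
for T in (1e-3, 1e-2, 1e-1):
    print(T, round(feynman_y(times, T), 4))
```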
Benchmark analysis of MCNP™ ENDF/B-VI iron
International Nuclear Information System (INIS)
Court, J.D.; Hendricks, J.S.
1994-12-01
The MCNP ENDF/B-VI iron cross-section data were subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Sciences and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark performed at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark, and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets.
Model validation and calibration based on component functions of model output
International Nuclear Information System (INIS)
Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei
2015-01-01
The target in this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of model output. Therefore, the model validation of conditional expectations tells the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of model validation metric. After that, a recalculation of the model validation metric of model output is taken with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. At last, several examples are employed to demonstrate the rationality and necessity of the methodology in case of both single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDRM explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration process are applied at single site and multiple sites. • Validation and calibration process show a superiority than existing methods
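The area metric referred to here is the area between the empirical CDFs of the model output and the observations. A compact sketch follows; the sample values are invented:

```python
import numpy as np

def area_metric(model_samples, obs_samples):
    """Area between two empirical CDFs, in the units of the output."""
    xm = np.sort(np.asarray(model_samples, float))
    xo = np.sort(np.asarray(obs_samples, float))
    grid = np.union1d(xm, xo)
    Fm = np.searchsorted(xm, grid, side="right") / xm.size
    Fo = np.searchsorted(xo, grid, side="right") / xo.size
    return float(np.sum(np.abs(Fm - Fo)[:-1] * np.diff(grid)))

rng = np.random.default_rng(5)
model = rng.normal(10.0, 1.0, 2000)      # computational model output
obs = rng.normal(10.4, 1.2, 60)          # sparse physical observations
print(area_metric(model, obs))           # 0 would mean identical distributions
```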
Geomechanical Simulation of Bayou Choctaw Strategic Petroleum Reserve - Model Calibration.
Energy Technology Data Exchange (ETDEWEB)
Park, Byoung [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-02-01
A finite element numerical analysis model has been constructed that consists of a realistic mesh capturing the geometries of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and a multi-mechanism deformation (M-D) salt constitutive model, using the daily data of actual wellhead pressure and oil-brine interface. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt are limited. Therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used as the field baseline measurement. The structure factor, A2, and transient strain limit factor, K0, in the M-D constitutive model are used for the calibration. The A2 value obtained experimentally from BC salt and the K0 value of Waste Isolation Pilot Plant (WIPP) salt are used as the baseline values. To adjust the magnitudes of A2 and K0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and the salt drawdown skins surrounding each SPR cavern have been determined through a number of back-fitting analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict past and future geomechanical behaviors of the salt dome, caverns, caprock, and interbed layers. The geological concerns raised at the BC site will be addressed using this model in a follow-up report.
Selection, calibration, and validation of models of tumor growth.
Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C
2016-11-01
This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine whether the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory
Differential Evolution algorithm applied to FSW model calibration
Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.
2014-03-01
Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
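For readers unfamiliar with DE-based calibration, here is a hedged sketch using SciPy's differential_evolution, exposing the same tuning knobs the study examines (evolution strategy, mutation scaling factor, crossover rate). The quadratic thermal "surrogate" and the measured temperatures are invented stand-ins for a real CFD model, which would be far too expensive to embed in the objective function directly.

```python
# Illustrative Differential Evolution search for two heat-model
# parameters; model and data are synthetic placeholders for a CFD run.
import numpy as np
from scipy.optimize import differential_evolution

measured = np.array([310.0, 540.0, 620.0])   # peak temperatures, K (synthetic)
positions = np.array([0.0, 5.0, 10.0])       # mm from weld centerline

def cfd_surrogate(params, x):
    heat_input, friction_coeff = params
    return 300.0 + heat_input * np.exp(-friction_coeff * (x - 10.0) ** 2 / 50.0)

def objective(params):
    """Sum of squared residuals between surrogate and measurements."""
    return np.sum((cfd_surrogate(params, positions) - measured) ** 2)

result = differential_evolution(objective,
                                bounds=[(0.0, 500.0), (0.0, 5.0)],
                                strategy="best1bin",  # evolution strategy
                                mutation=0.5,         # scaling factor F
                                recombination=0.7,    # crossover rate CR
                                seed=42)
print("calibrated parameters:", result.x, "residual:", result.fun)
```

The study's contribution lies in tuning exactly these algorithm settings so that far fewer expensive CFD evaluations are wasted.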
Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A
2005-07-01
Modelling activated sludge systems has gained increasing momentum since the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models to full-scale systems essentially requires a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far, mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach to performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modelling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed to improve the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.
A calibration hierarchy for risk models was defined: from utopia to empirical data.
Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W
2016-06-01
Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equal the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, as it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
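The mean and weak levels of this hierarchy are straightforward to check in practice. The sketch below does so on simulated data using a logistic recalibration fit, where an intercept near 0 and a slope near 1 indicate weak calibration; the simulated risks and the statsmodels-based fit are illustrative, not the authors' code.

```python
# Sketch of mean and weak calibration checks for a binary risk model,
# following the hierarchy described above; all data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
true_lp = rng.normal(-1.0, 1.0, 5000)             # true linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-true_lp)))   # observed events
pred = 1 / (1 + np.exp(-(0.3 + 1.4 * true_lp)))   # a miscalibrated model

# Mean calibration: average predicted risk vs observed event rate.
print(f"mean predicted {pred.mean():.3f} vs observed {y.mean():.3f}")

# Weak calibration: logistic regression of outcomes on logit(prediction);
# intercept ~ 0 and slope ~ 1 would indicate weak calibration holds.
logit_pred = np.log(pred / (1 - pred))
fit = sm.Logit(y, sm.add_constant(logit_pred)).fit(disp=0)
print(f"calibration intercept {fit.params[0]:+.3f}, slope {fit.params[1]:.3f}")
```

Moderate and strong calibration require smoothed or covariate-pattern-level comparisons, which is where the paper shows small validation sets become problematic.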
Accelerating Pseudo-Random Number Generator for MCNP on GPU
Gong, Chunye; Liu, Jie; Chi, Lihua; Hu, Qingfeng; Deng, Li; Gong, Zhenghu
2010-09-01
Pseudo-random number generators (PRNGs) are used intensively in many stochastic algorithms in particle simulations, artificial neural networks and other scientific computation. The PRNG in the Monte Carlo N-Particle Transport Code (MCNP) requires a long period, high quality, flexible jumping and high speed. In this paper, we implement such a PRNG for MCNP on NVIDIA's GTX200 Graphics Processing Units (GPUs) using the CUDA programming model. Results show that speedups of 3.80 to 8.10 times are achieved compared with 4- to 6-core CPUs, and more than 679.18 million double-precision random numbers can be generated per second on the GPU.
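The essential trick behind assigning independent random-number streams to histories or GPU threads is a logarithmic-time jump-ahead for a linear congruential generator. The sketch below shows the idea in plain Python; the 63-bit constants follow the style of the MCNP generator but should be treated as illustrative rather than the production values.

```python
# Sketch of the O(log n) "jump" that lets each history (or GPU thread)
# start at an arbitrary point of a linear congruential generator stream.
M = 1 << 63                  # modulus 2^63
G = 9219741426499971445      # multiplier (illustrative 63-bit LCG style)
C = 1                        # increment

def lcg_next(s):
    return (G * s + C) % M

def lcg_jump(seed, n):
    """Advance the state by n steps via binary exponentiation of the
    affine map s -> G*s + C, i.e. s_n = G^n*s_0 + C*(G^n - 1)/(G - 1)."""
    g, c, s = G, C, seed
    while n > 0:
        if n & 1:
            s = (g * s + c) % M
        c = ((g + 1) * c) % M    # compose the increment for a doubled step
        g = (g * g) % M          # square the multiplier
        n >>= 1
    return s

s = 1
for _ in range(1000):
    s = lcg_next(s)
assert s == lcg_jump(1, 1000)    # jump matches stepping one at a time
```

On a GPU, each thread calls the jump once with its own stride offset and then generates its substream locally, which is what makes the parallel speedups reported above possible without correlated streams.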
International Nuclear Information System (INIS)
Becker, Frank; Blunck, Christoph; Hegenbart, Lars; Heide, Bernd; Leone, Debora; Nagels, Sven; Schimmelpfeng, Jutta; Urban, Manfred
2008-01-01
Inhomogeneous beta-photon radiation fields make reliable dose determination difficult. Routine monitoring with dosemeters does not guarantee an accurate determination of the local skin dose. In general, correction factors are used to relate the measured dose to the maximum exposure. However, strong underestimations of the maximum exposure are possible, depending on the individual handling and the reliability of dose measurements. Simulations provide the possibility to track the points of highest exposure and the origin of the highest dose. In this connection, simulations are performed with MCNPX. In order to investigate the local skin dose, two hand phantoms are used: a model based on geometrical elements and a voxel hand. A typical case of radiosynoviorthesis, the handling of a syringe filled with 90Y, is simulated. Another simulation focuses on selective internal radiotherapy, revealing the origin of the main dose component in the mixed beta-photon radiation field of a 90Y vial in an opened transport container. (author)
Calibrating emergent phenomena in stock markets with agent based models.
Fievet, Lucas; Sornette, Didier
2018-01-01
Since the 2008 financial crisis, agent-based models (ABMs), which account for out-of-equilibrium dynamics, heterogeneous preferences, time horizons and strategies, have often been envisioned as the new frontier that could revolutionise and displace the more standard models and tools in economics. However, their adoption and generalisation is drastically hindered by the absence of general reliable operational calibration methods. Here, we start with a different calibration angle that qualifies an ABM for its ability to achieve abnormal trading performance with respect to the buy-and-hold strategy when fed with real financial data. Starting from the common definition of standard minority and majority agents with binary strategies, we prove their equivalence to optimal decision trees. This efficient representation allows us to exhaustively test all meaningful single-agent models for their potential anomalous investment performance, which we apply to the NASDAQ Composite index over the last 20 years. We uncover large significant predictive power, with anomalous Sharpe ratio and directional accuracy, in particular during the dotcom bubble and crash and the 2008 financial crisis. A principal component analysis reveals transient convergence between the anomalous minority and majority models. A novel combination of the optimal single-agent models of both classes into a two-agent model leads to remarkably superior investment performance, especially during the periods of bubbles and crashes. Our design opens the field of ABMs to construct novel types of advanced warning systems of market crises, based on the emergent collective intelligence of ABMs built on carefully designed optimal decision trees that can be reverse engineered from real financial data.
Geomechanical Model Calibration Using Field Measurements for a Petroleum Reserve
Park, Byoung Yoon; Sobolik, Steven R.; Herrick, Courtney G.
2018-03-01
A finite element numerical analysis model has been constructed that consists of a mesh that effectively captures the geometries of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and a multimechanism deformation (M-D) salt constitutive model, using the daily data of actual wellhead pressure and oil-brine interface location. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt are limited. Therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used as the field baseline measurement. The structure factor, A2, and transient strain limit factor, K0, in the M-D constitutive model are used for the calibration. The value of A2, obtained experimentally from BC salt, and the value of K0, obtained from Waste Isolation Pilot Plant salt, are used as the baseline values. To adjust the magnitudes of A2 and K0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and the salt drawdown skins surrounding each SPR cavern have been determined through a number of back analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict behaviors of the salt dome, caverns, caprock, and interbed layers. The geotechnical concerns associated with the BC site from this analysis will be explained in a follow-up paper.
Energy Technology Data Exchange (ETDEWEB)
Sreckovic, G.; Hall, E.R. [British Columbia Univ., Dept. of Civil Engineering, Vancouver, BC (Canada); Thibault, J. [Laval Univ., Dept. of Chemical Engineering, Ste-Foy, PQ (Canada); Savic, D. [Exeter Univ., School of Engineering, Exeter (United Kingdom)
1999-05-01
The issue of proper model calibration techniques applied to mechanistic mathematical models of activated sludge systems was discussed. Such calibrations are complex because of the non-linearity and multi-modal objective functions of the process. This paper presents a hybrid model which was developed using two techniques to model and calibrate the secondary clarifier of an activated sludge system. Genetic algorithms were used to successfully calibrate the settler mechanistic model, and neural networks were used to reduce the error between the mechanistic model output and real-world data. Results of the modelling study show that the long-term response of a one-dimensional settler mechanistic model, calibrated by genetic algorithms and compared to full-scale plant data, can be improved by coupling the calibrated mechanistic model to a black-box model such as a neural network. 11 refs., 2 figs.
A joint calibration model for combining predictive distributions
Directory of Open Access Journals (Sweden)
Patrizia Agati
2013-05-01
In many research fields, as for example in probabilistic weather forecasting, valuable predictive information about a future random phenomenon may come from several, possibly heterogeneous, sources. Forecast combining methods have been developed over the years in order to deal with ensembles of sources: the aim is to combine several predictions in such a way as to improve forecast accuracy and reduce the risk of bad forecasts. In this context, we propose the use of a Bayesian approach to information combining, which consists in treating the predictive probability density functions (pdfs) from the individual ensemble members as data in a Bayesian updating problem. The likelihood function is shown to be proportional to the product of the pdfs, adjusted by a joint "calibration function" describing the predicting skill of the sources (Morris, 1977). In this paper, after rephrasing Morris' algorithm in a predictive context, we propose to model the calibration function in terms of bias, scale and correlation and to estimate its parameters according to the least squares criterion. The performance of our method is investigated and compared with that of Bayesian Model Averaging (Raftery, 2005) on simulated data.
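As a minimal numerical illustration of combining predictive pdfs, the snippet below treats the members' Gaussian forecasts as independent and multiplies them, the special case of the Bayesian scheme with a trivial calibration function (no bias, scale, or correlation adjustment); all numbers are made up.

```python
# Product-rule combination of Gaussian predictive pdfs: the special case
# of the Bayesian updating scheme with a trivial calibration function.
import numpy as np

means = np.array([2.1, 2.6, 1.8])   # ensemble members' predictive means
sds = np.array([0.5, 0.8, 0.6])     # ...and standard deviations

# A product of Gaussian pdfs is Gaussian with precision-weighted moments.
prec = 1.0 / sds**2
combined_mean = np.sum(prec * means) / np.sum(prec)
combined_sd = np.sqrt(1.0 / np.sum(prec))
print(f"combined forecast: {combined_mean:.3f} +/- {combined_sd:.3f}")
```

The paper's full method replaces this naive product with a calibration function whose bias, scale and correlation parameters are estimated by least squares from past forecast performance.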
A Solvatochromic Model Calibrates Nitriles’ Vibrational Frequencies to Electrostatic Fields
Bagchi, Sayan; Fried, Stephen D.; Boxer, Steven G.
2012-01-01
Electrostatic interactions provide a primary connection between a protein’s three-dimensional structure and its function. Infrared (IR) probes are useful because vibrational frequencies of certain chemical groups, such as nitriles, are linearly sensitive to local electrostatic field, and can serve as a molecular electric field meter. IR spectroscopy has been used to study electrostatic changes or fluctuations in proteins, but measured peak frequencies have not been previously mapped to total electric fields, because of the absence of a field-frequency calibration and the complication of local chemical effects such as H-bonds. We report a solvatochromic model that provides a means to assess the H-bonding status of aromatic nitrile vibrational probes, and calibrates their vibrational frequencies to electrostatic field. The analysis involves correlations between the nitrile’s IR frequency and its 13C chemical shift, whose observation is facilitated by a robust method for introducing isotopes into aromatic nitriles. The method is tested on the model protein Ribonuclease S (RNase S) containing a labeled p-CN-Phe near the active site. Comparison of the measurements in RNase S against solvatochromic data gives an estimate of the average total electrostatic field at this location. The value determined agrees quantitatively with MD simulations, suggesting broader potential for the use of IR probes in the study of protein electrostatics. PMID:22694663
Calibration Modeling Methodology to Optimize Performance for Low Range Applications
McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.
2010-01-01
Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty expressed as a percent of full scale. However, in some applications enhanced performance at the low range is sought, and expressing the accuracy as a percent of reading should then be considered as a modeling strategy. For example, it is common to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which has broad application in all types of transducer applications where low range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System, which employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
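A simple way to realize a percent-of-reading criterion is to weight calibration residuals by the inverse of the reading, so that low-range points count as much as full-scale ones. The sketch below contrasts an ordinary and a 1/y-weighted polynomial fit on synthetic transducer data; it illustrates the idea, not the paper's exact methodology.

```python
# Percent-of-full-scale vs percent-of-reading calibration fits on a
# synthetic transducer data set; numbers are illustrative only.
import numpy as np

applied = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 50.0, 100.0])  # true loads
rng = np.random.default_rng(3)
output = 1.02 * applied + 0.05 + rng.normal(0, 0.02, applied.size)

# Ordinary fit: minimizes absolute residuals, dominated by the high range.
ols = np.polyfit(output, applied, deg=1)

# Weighted fit: w = 1/applied makes polyfit minimize relative residuals,
# improving percent-of-reading accuracy at the low range.
wls = np.polyfit(output, applied, deg=1, w=1.0 / applied)

for name, coeffs in (("OLS", ols), ("1/y-weighted", wls)):
    pred = np.polyval(coeffs, output)
    pct = 100 * np.abs(pred - applied) / applied
    print(f"{name}: worst percent-of-reading error {pct.max():.2f}%")
```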
Preliminary report on NTS spectral gamma logging and calibration models
International Nuclear Information System (INIS)
Mathews, M.A.; Warren, R.G.; Garcia, S.R.; Lavelle, M.J.
1985-01-01
Facilities are now available at the Nevada Test Site (NTS) in Building 2201 to calibrate spectral gamma logging equipment in environments of low radioactivity. Such environments are routinely encountered during logging of holes at the NTS. Four calibration models were delivered to Building 2201 in January 1985. Each model, or test pit, consists of a stone block with a 12-inch diameter cored borehole. Preliminary radioelement values from the core for the test pits range from 0.58 to 3.83% potassium (K), 0.48 to 29.11 ppm thorium (Th), and 0.62 to 40.42 ppm uranium (U). Two satellite holes, U19ab no. 2 and U19ab no. 3, were logged during the winter of 1984-1985. The response of these logs correlates with the contents of the naturally radioactive elements K, Th, and U determined in samples from petrologic zones that occur within these holes. Based on these comparisons, the spectral gamma log aids in the recognition and mapping of subsurface stratigraphic units and alteration features associated with unusual concentrations of these radioactive elements, such as clay-rich zones.
Hot Cell Window Shielding Analysis Using MCNP
International Nuclear Information System (INIS)
Pope, Chad L.; Scates, Wade W.; Taylor, J. Todd
2009-01-01
The Idaho National Laboratory Materials and Fuels Complex nuclear facilities are undergoing a documented safety analysis upgrade. In conjunction with the upgrade effort, shielding analysis of the Fuel Conditioning Facility (FCF) hot cell windows has been conducted. This paper describes the shielding analysis methodology. Each 4-ft thick window uses nine glass slabs, an oil film between the slabs, numerous steel plates, and packed lead wool. Operations in the hot cell center on used nuclear fuel (UNF) processing. Prior to the shielding analysis, shield testing with a gamma ray source was conducted, and the windows were found to be very effective gamma shields. Despite these results, because the glass contained significant amounts of lead and little neutron absorbing material, some doubt lingered regarding the effectiveness of the windows in neutron shielding situations, such as during an accidental criticality. MCNP was selected as an analysis tool because it could model complicated geometry, and it could track gamma and neutron radiation. A bounding criticality source was developed based on the composition of the UNF. Additionally, a bounding gamma source was developed based on the fission product content of the UNF. Modeling the windows required field inspections and detailed examination of drawings and material specifications. Consistent with the shield testing results, MCNP results demonstrated that the shielding was very effective with respect to gamma radiation, and in addition, the analysis demonstrated that the shielding was also very effective during an accidental criticality.
Modeling Prairie Pothole Lakes: Linking Satellite Observation and Calibration (Invited)
Schwartz, F. W.; Liu, G.; Zhang, B.; Yu, Z.
2009-12-01
This paper examines the response of a complex lake wetland system to variations in climate. The focus is on the lakes and wetlands of the Missouri Coteau, which is part of the larger Prairie Pothole Region of the Central Plains of North America. Information on lake size was enumerated from satellite images, and yielded power law relationships for different hydrological conditions. More traditional lake-stage data were made available to us from the USGS Cottonwood Lake Study Site in North Dakota. A Probabilistic Hydrologic Model (PHM) was developed to simulate lake complexes comprised of tens of thousands or more individual closed-basin lakes and wetlands. What is new about this model is a calibration scheme that utilizes remotely sensed data on lake area as well as stage data for individual lakes. Some ¼ million individual data points are used within a Genetic Algorithm to calibrate the model by comparing the simulated results with observed lake area-frequency power law relationships derived from Landsat images and water depths from seven individual lakes and wetlands. The simulated lake behaviors show good agreement with the observations under average, dry, and wet climatic conditions. The calibrated model is used to examine the impact of climate variability on a large lake complex in North Dakota, in particular during the "Dust Bowl Drought" of the 1930s. This most famous drought of the 20th century devastated the agricultural economy of the Great Plains, with health and social impacts lingering for years afterwards. Interestingly, the drought of the 1930s is unremarkable in relation to others of greater intensity and frequency before AD 1200 in the Great Plains. Major droughts and deluges have the ability to create marked variability in the power law function (e.g. up to one and a half orders of magnitude variability from the extreme Dust Bowl Drought to the extreme 1993-2001 deluge). This new probabilistic modeling approach provides a novel tool to examine the response of the
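The area-frequency power law that anchors this calibration can be extracted with a log-log regression on exceedance counts. The sketch below shows the procedure on synthetic heavy-tailed "lake areas"; the Pareto draw and thresholds are placeholders for the Landsat-derived data.

```python
# Sketch of fitting a lake area-frequency power law N(A) ~ A^(-b),
# the observable the PHM calibration targets; data are synthetic.
import numpy as np

rng = np.random.default_rng(11)
areas = rng.pareto(a=1.2, size=10_000) + 0.01   # lake areas, km^2 (synthetic)

# Count lakes exceeding each area threshold, then regress in log-log space.
thresholds = np.logspace(-1, 1, 20)
counts = np.array([(areas >= t).sum() for t in thresholds])
slope, intercept = np.polyfit(np.log10(thresholds), np.log10(counts), 1)
print(f"power-law exponent b = {-slope:.2f}")
```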
Non-linear calibration models for near infrared spectroscopy
DEFF Research Database (Denmark)
Ni, Wangdong; Nørgaard, Lars; Mørup, Morten
2014-01-01
This comparative study considers a range of non-linear calibration techniques: least-squares support vector machines (LS-SVM), relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural networks (ANN), and Bayesian ANN (BANN). In this comparison, partial least squares (PLS) regression is used as a linear benchmark, while the relationship of the methods is considered in terms of traditional calibration by ridge regression (RR). The performance of the different methods is demonstrated by their practical applications using three real-life near infrared (NIR) data sets. Different aspects of the various approaches, including computational time, model interpretability, potential over-fitting using the non-linear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing, are discussed. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small.
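Here is a hedged sketch of the kind of comparison described, using scikit-learn's PLS as the linear benchmark against a GPR with an RBF-plus-noise kernel; the random features and nonlinear response are synthetic stand-ins for real NIR spectra.

```python
# Linear benchmark (PLS) vs nonlinear GPR on a synthetic NIR-like data
# set; real spectra from the study are not reproduced here.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 50))                               # 50 "wavelengths"
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 120)   # nonlinear response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                               normalize_y=True).fit(X_tr, y_tr)

for name, model in (("PLS (linear benchmark)", pls), ("GPR", gpr)):
    rmse = np.sqrt(np.mean((model.predict(X_te).ravel() - y_te) ** 2))
    print(f"{name}: RMSEP = {rmse:.3f}")
```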
A New Perspective for the Calibration of Computational Predictor Models.
Energy Technology Data Exchange (ETDEWEB)
Crespo, Luis Guillermo
2014-11-01
This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the model's ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead, it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
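The IPM formulation has a particularly compact form for a linear model with a constant half-width: minimizing the spread subject to enclosing every observation is a linear program. The sketch below solves that special case with SciPy; the constant-width linear structure and the synthetic data are assumptions made for illustration, not the paper's general formulation.

```python
# Simplest Interval Predictor Model: linear center line a*x + b with a
# constant half-width w, minimized so every observation lies inside.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 40)
y = 1.5 * x + 0.2 + rng.uniform(-0.1, 0.1, x.size)   # noisy observations

# Decision variables z = [a, b, w]; minimize w subject to
#   a*x_i + b - w <= y_i <= a*x_i + b + w   for every i.
c = np.array([0.0, 0.0, 1.0])
ones = np.ones_like(x)
A_ub = np.vstack([np.column_stack([x, ones, -ones]),     # a*x + b - w <= y
                  np.column_stack([-x, -ones, -ones])])  # -a*x - b - w <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
a, b, w = res.x
print(f"interval: {a:.2f}x + {b:.2f} +/- {w:.3f}")
```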
Modified calibration protocol evaluated in a model-based testing of SBR flexibility
DEFF Research Database (Denmark)
Corominas, Lluís; Sin, Gürkan; Puig, Sebastià
2011-01-01
The purpose of this paper is to refine the BIOMATH calibration protocol for SBR systems, in particular to develop a pragmatic calibration protocol that takes advantage of the information-rich data of SBRs, defines a simulation strategy to obtain proper initial conditions for model calibration, and provides
Semi-Analytical Benchmarks for MCNP6
Energy Technology Data Exchange (ETDEWEB)
Grechanuk, Pavel Aleksandrovi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-11-07
Code verification is an extremely important process that involves proving or disproving the validity of code algorithms by comparing them against analytical results of the underlying physics or mathematical theory on which the code is based. Monte Carlo codes such as MCNP6 must undergo verification and testing upon every release to ensure that the codes are properly simulating nature. Specifically, MCNP6 has multiple sets of problems with known analytic solutions that are used for code verification. Monte Carlo codes primarily specify either current boundary sources or a volumetric fixed source, either of which can be very complicated functions of space, energy, direction and time. Thus, most of the challenges with modeling analytic benchmark problems in Monte Carlo codes come from identifying the correct source definition to properly simulate the correct boundary conditions. The problems included in this suite all deal with mono-energetic neutron transport without energy loss, in a homogeneous material. The variables that differ between the problems are source type (isotropic/beam), medium dimensionality (infinite/semi-infinite), etc.
Root zone water quality model (RZWQM2): Model use, calibration and validation
Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.
2012-01-01
The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.
Accurate analysis of water flow pathways from rainfall to streams is critical for simulating water use, climate change impact, and contaminant transport. In this study, we developed a new scheme to simultaneously calibrate surface flow (SF) and baseflow (BF) simulations of Soil and Water Assessment ...
New developments enhancing MCNP for criticality safety
International Nuclear Information System (INIS)
Hendricks, J.S.; McKinney, G.W.; Forster, R.A.
1993-01-01
Since the early 1980s MCNP has had three estimates of k_eff: collision, absorption, and track length. MCNP has also had collision and absorption estimators of removal lifetime. These are calculated for every cycle and are averaged over the cycles as simple averages and covariance-weighted averages. Correlation coefficients between estimators are also calculated. These criticality estimators are all in addition to the extensive summary information and tally edits used in shielding and other problems. A number of significant new developments have been made to enhance the MCNP Monte Carlo radiation transport code for criticality safety applications. These are available in the newly released MCNP4A version of the code.
Calibration of a DG–model for fluorescence microscopy
DEFF Research Database (Denmark)
Hansen, Christian Valdemar
It is well known that diseases like Alzheimer's, Parkinson's, Huntington's chorea and arteriosclerosis are caused by a jam in intracellular membrane traffic [2]. Hence, to improve treatment, a quantitative analysis of intracellular transport is essential. Fluorescence loss in photobleaching (FLIP) is an important and widely used microscopy method for visualization of molecular transport processes in living cells. Thus, the motivation for making an automated reliable analysis of the image data is high. In this contribution, we present and comment on the calibration of a Discontinuous Galerkin (DG) simulator [3, 4] on segmented cell images. The cell geometry is extracted from FLIP images using the Chan–Vese active contours algorithm [1] while the DG simulator is implemented in FEniCS [5]. Simulated FLIP sequences based on optimal parameters from the PDE model are presented, with an overall goal...
Energy Technology Data Exchange (ETDEWEB)
Huiping, Guo [The Second Artillery Engineering College, Xi' an (China)
2007-06-15
To satisfy the calibration requirements of radon measurement in the laboratory, a calibration apparatus for radon activity measurement was designed and realized. The calibration apparatus can auto-control and auto-measure in three modes: sequential mode, pulse mode and constant mode. The stability and reliability of the calibration apparatus were tested under the three modes. The experimental results show that the apparatus can provide an adjustable and steady radon activity concentration environment for the research of radon and its progeny and for the calibration of radon measurements. (authors)
Energy Technology Data Exchange (ETDEWEB)
Thanh, Tran Thien; Tao, Chau Van; Loan, Truong Thi Hong; Nhon, Mai Van; Chuong, Huynh Dinh; Au, Bui Hai [Vietnam National Univ., Ho Chi Minh City (Viet Nam). Dept. of Nuclear Physics
2012-12-15
The accuracy of the coincidence-summing corrections in gamma spectrometry depends on the total efficiency calibration, which is hard to obtain over the whole energy range because the required experimental conditions are not easily attained. Monte Carlo simulations using the MCNP5 code were performed in order to estimate the effect of the shielding on the total efficiency. The effects of the HPGe detector response are also shown. (orig.)
Hydrological processes and model representation: impact of soft data on calibration
J.G. Arnold; M.A. Youssef; H. Yen; M.J. White; A.Y. Sheshukov; A.M. Sadeghi; D.N. Moriasi; J.L. Steiner; Devendra Amatya; R.W. Skaggs; E.B. Haney; J. Jeong; M. Arabi; P.H. Gowda
2015-01-01
Hydrologic and water quality models are increasingly used to determine the environmental impacts of climate variability and land management. Due to differing model objectives and differences in monitored data, there are currently no universally accepted procedures for model calibration and validation in the literature. In an effort to develop accepted model calibration...
The comparison of MCNP perturbation technique with MCNP difference method in critical calculation
International Nuclear Information System (INIS)
Liu Bin; Lv Xuefeng; Zhao Wei; Wang Kai; Tu Jing; Ouyang Xiaoping
2010-01-01
For a nuclear fission system, we calculated Δk_eff arising from system material composition changes by two different approaches: the MCNP perturbation technique and the MCNP difference method. For every material composition change, we made four different runs, each run with different cycles or each cycle generating different numbers of neutrons, and then compared the two Δk_eff values obtained by the two approaches. When a material composition change in any particular cell of the nuclear fission system is small compared to the material compositions in the whole nuclear fission system, in other words, when this composition change can be treated as a small perturbation, the Δk_eff results obtained from the MCNP perturbation technique are much quicker, much more efficient and more reliable than the results from the MCNP difference method. When a material composition change in any particular cell of the nuclear fission system is significant compared to the material compositions in the whole nuclear fission system, both the MCNP perturbation technique and the MCNP difference method can give satisfactory results. But for runs with the same cycles and each cycle generating the same number of neutrons, the results obtained from the MCNP perturbation technique are systematically smaller than the results obtained from the MCNP difference method. To further confirm our calculation results from MCNP4C, we ran the exact same MCNP4C input file in MCNP5; the calculation results from MCNP5 are the same as those from MCNP4C. Caution is needed when using the MCNP perturbation technique to calculate Δk_eff when the material composition change is large compared to the material compositions in the whole nuclear fission system, even though the material composition change of any particular cell of the fission system still meets the criteria of the MCNP perturbation technique.
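The statistical argument for preferring the perturbation technique for small changes can be shown in a few lines: the uncertainty of a difference of two independent k_eff estimates adds in quadrature, so a small Δk_eff is easily buried in run-to-run noise. The k_eff values and standard deviations below are invented for illustration.

```python
# Why the difference method needs care for small perturbations: the
# statistical uncertainties of two independent runs add in quadrature.
import math

k_base, sig_base = 0.99512, 0.00045   # unperturbed run (made-up values)
k_pert, sig_pert = 0.99538, 0.00045   # perturbed-composition run

delta_k = k_pert - k_base
sig_delta = math.sqrt(sig_base**2 + sig_pert**2)
print(f"delta-k = {delta_k:.5f} +/- {sig_delta:.5f} "
      f"({abs(delta_k) / sig_delta:.1f} sigma)")
# Here a 26 pcm change is well under 1 sigma of the difference; the MCNP
# perturbation (PERT card) estimate, whose statistical noise largely
# cancels, is the more efficient choice for such small perturbations.
```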
A global model for residential energy use: Uncertainty in calibration to regional data
International Nuclear Information System (INIS)
van Ruijven, Bas; van Vuuren, Detlef P.; de Vries, Bert; van der Sluijs, Jeroen P.
2010-01-01
Uncertainties in energy demand modelling allow for the development of different models, but also leave room for different calibrations of a single model. We apply an automated model calibration procedure to analyse the calibration uncertainty of residential sector energy use modelling in the TIMER 2.0 global energy model. This model simulates energy use on the basis of changes in useful energy intensity, technology development (AEEI) and price responses (PIEEI). We find that different implementations of these factors can yield equally plausible model results. Model calibration uncertainty is identified as an influential source of variation in future projections, amounting to 30% to 100% around the best estimate. Energy modellers should systematically account for this and communicate calibration uncertainty ranges. (author)
Radiation shielding calculation using MCNP
International Nuclear Information System (INIS)
Masukawa, Fumihiro
2001-01-01
To verify the Monte Carlo code MCNP4A as a tool for generating reference data in shielding designs and safety evaluations, various shielding benchmark experiments were analyzed using this code. These experiments covered three types of shielding subjects: bulk shielding, streaming, and skyshine. For the variance reduction techniques, which are indispensable for obtaining meaningful results with Monte Carlo shielding calculations, we mainly used the weight window, energy-dependent Russian roulette, and splitting. As a whole, our analyses achieved sufficiently small statistical errors and showed good agreement with these experiments. (author)
NSLS-II: Nonlinear Model Calibration for Synchrotrons
Energy Technology Data Exchange (ETDEWEB)
Bengtsson, J.
2010-10-08
This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods has been naive and misleading in the field of particle accelerators, i.e., it ignores the impact of noise, we will elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ~1 x 10^-5 for 1024 turns (to calibrate the linear optics) and ~1 x 10^-4 for 256 turns for tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is ~0.1, and the transverse damping time is ~20 msec, i.e., ~4,000 turns. There is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy of these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived an explicit formula, from first principles, for a quantitative statement. For e.g. N = 256 and 5% noise we obtain δν ~ 1 x 10^-5. A comparison with the state of the art in e.g. telecom and electrical engineering since the 60s is quite revealing. For example, the Kalman filter (1960), crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s. Or Claude Shannon et al
NSLS-II: Nonlinear Model Calibration for Synchrotrons
International Nuclear Information System (INIS)
Bengtsson, J.
2010-01-01
This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods has been naive and misleading in the field of particle accelerators, i.e., it ignores the impact of noise, we will elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ~1 x 10^-5 for 1024 turns (to calibrate the linear optics) and ~1 x 10^-4 for 256 turns for tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is ~0.1, and the transverse damping time is ~20 msec, i.e., ~4,000 turns. There is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy of these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived an explicit formula, from first principles, for a quantitative statement. For e.g. N = 256 and 5% noise we obtain δν ~ 1 x 10^-5. A comparison with the state of the art in e.g. telecom and electrical engineering since the 60s is quite revealing. For example, the Kalman filter (1960), crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s. Or Claude Shannon et al since the 40s, for that matter. Conclusion: what
A review of radiation dosimetry applications using the MCNP Monte Carlo code
Energy Technology Data Exchange (ETDEWEB)
Solberg, T.D.; DeMarco, J.J.; Chetty, I.J.; Mesa, A.V.; Cagnon, C.H.; Li, A.N.; Mather, K.K.; Medin, P.M.; Arellano, A.R.; Smathers, J.B. [California Univ., Los Angeles, CA (United States). Dept. of Radiation Oncology
2001-07-01
The Monte Carlo code MCNP (Monte Carlo N-Particle) has a significant history dating to the early years of the Manhattan Project. More recently, MCNP has been used successfully to solve many problems in the field of medical physics. In radiotherapy applications MCNP has been used successfully to calculate the bremsstrahlung spectra from medical linear accelerators, for modeling the dose distributions around high dose rate brachytherapy sources, and for evaluating the dosimetric properties of new radioactive sources used in intravascular irradiation for prevention of restenosis following angioplasty. MCNP has also been used for radioimmunotherapy and boron neutron capture therapy applications. It has been used to predict fast neutron activation of shielding and biological materials. One area that holds tremendous clinical promise is that of radiotherapy treatment planning. In diagnostic applications, MCNP has been used to model X-ray computed tomography and positron emission tomography scanners, to compute the dose delivered from CT procedures, and to determine detector characteristics of nuclear medicine devices. MCNP has been used to determine particle fluxes around radiotherapy treatment devices and to perform shielding calculations in radiotherapy treatment rooms. This manuscript is intended to provide to the reader a comprehensive summary of medical physics applications of the MCNP code. (orig.)
A review of radiation dosimetry applications using the MCNP Monte Carlo code
International Nuclear Information System (INIS)
Solberg, T.D.; DeMarco, J.J.; Chetty, I.J.; Mesa, A.V.; Cagnon, C.H.; Li, A.N.; Mather, K.K.; Medin, P.M.; Arellano, A.R.; Smathers, J.B.
2002-01-01
The Monte Carlo code MCNP (Monte Carlo N-Particle) has a significant history dating to the early years of the Manhattan Project. More recently, MCNP has been used successfully to solve many problems in the field of medical physics. In radiotherapy applications MCNP has been used successfully to calculate the bremsstrahlung spectra from medical linear accelerators, for modeling the dose distributions around high dose rate brachytherapy sources, and for evaluating the dosimetric properties of new radioactive sources used in intravascular irradiation for prevention of restenosis following angioplasty. MCNP has also been used for radioimmunotherapy and boron neutron capture therapy applications. It has been used to predict fast neutron activation of shielding and biological materials. One area that holds tremendous clinical promise is that of radiotherapy treatment planning. In diagnostic applications, MCNP has been used to model X-ray computed tomography and positron emission tomography scanners, to compute the dose delivered from CT procedures, and to determine detector characteristics of nuclear medicine devices. MCNP has been used to determine particle fluxes around radiotherapy treatment devices and to perform shielding calculations in radiotherapy treatment rooms. This manuscript is intended to provide to the reader a comprehensive summary of medical physics applications of the MCNP code. (author)
MCNP Version 6.2 Release Notes
Energy Technology Data Exchange (ETDEWEB)
Werner, Christopher John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bull, Jeffrey S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Solomon, C. J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); McKinney, Gregg Walter [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dixon, David A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Martz, Roger Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hughes, Henry G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cox, Lawrence James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zukaitis, Anthony J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Armstrong, J. C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Forster, Robert Arthur [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Casswell, Laura [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-02-05
Monte Carlo N-Particle, or MCNP®, is a general-purpose Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. MCNP Version 6.2 follows the MCNP6.1.1 beta version and has been released in order to provide the radiation transport community with the latest feature developments and bug fixes for MCNP. Since the last release of MCNP, major work has been conducted to improve the code base, add features, and provide tools that facilitate ease of use of MCNP version 6.2 as well as the analysis of results. These release notes serve as a general guide for the new/improved physics, source, data, tallies, unstructured mesh, code enhancements and tools. For more detailed information on each of the topics, please refer to the appropriate references or the user manual, which can be found at http://mcnp.lanl.gov. This release of MCNP version 6.2 contains 39 new features in addition to 172 bug fixes and code enhancements. There are still some 33 known issues with which users should familiarize themselves (see Appendix).
Energy Technology Data Exchange (ETDEWEB)
Bull, Jeffrey S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-11-13
This presentation describes how to build MCNP® 6.2. MCNP 6.2 can be compiled on Macs, PCs, and most Linux systems. It can also be built for parallel execution using both OpenMP and Message Passing Interface (MPI) methods. MCNP6 requires Fortran, C, and C++ compilers to build the code.
Status Report on the MCNP 2020 Initiative
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-02
The discussion below provides a status report on the MCNP 2020 initiative. It includes discussion of the history of MCNP 2020, accomplishments during 2013-17, priorities for near-term development, other related efforts, a brief summary, and a list of references for the plans and work accomplished.
Status of electron transport in MCNP™
International Nuclear Information System (INIS)
Hughes, H.G.
1997-01-01
The latest version of MCNP, the Los Alamos Monte Carlo transport code, has now been officially released. MCNP4B has been sent to the Radiation Safety Information Computational Center (RSICC) in Oak Ridge, Tennessee, which is responsible for the further distribution of the code within the US. International distribution of MCNP is done by the OECD Nuclear Energy Agency (OECD/NEA) in Paris, France. Readers with access to the World Wide Web should consult the MCNP distribution site http://www-xdiv.lanl.gov/XTM/mcnp/about.html for specific information about contacting RSICC and OECD/NEA. A variety of new features are available in MCNP4B. Among these are differential operator perturbations, cross-section plotting capabilities, enhanced diagnostics for transport in repeated structures and lattices, improved efficiency in distributed-memory multiprocessing, corrected particle lifetime and lifespan estimators, and expanded software quality assurance procedures and testing, including testing of the multigroup Boltzmann-Fokker-Planck capability. New and improved cross-section sets in the form of ENDF/B-VI evaluations have also been recently released and can be used in MCNP4B. Perhaps most significant for the interests of this special session, the electron transport algorithm has been improved, especially in the collisional energy-loss straggling and the angular-deflection treatments. In this paper, the author concentrates on a fairly complete documentation of the current status of the electron transport methods in MCNP
Development of MCNP interface code in HFETR
International Nuclear Information System (INIS)
Qiu Liqing; Fu Rong; Deng Caiyu
2007-01-01
In order to describe the HFETR core with the MCNP method, the interface code MCNPIP, coupling HFETR core data to the MCNP code, was developed. This paper introduces the core DXSY and the flowchart of the MCNPIP code, the handling of fuel element compositions, and the hardware and software requirements. Finally, the MCNPIP code is validated through practical application. (authors)
Calibration of a distributed hydrology and land surface model using energy flux measurements
DEFF Research Database (Denmark)
Larsen, Morten Andreas Dahl; Refsgaard, Jens Christian; Jensen, Karsten H.
2016-01-01
In this study we develop and test a calibration approach for a spatially distributed groundwater-surface water catchment model (MIKE SHE) coupled to a land surface model component, with particular focus on the water and energy fluxes. The model is calibrated against time series of eddy flux measurements
Modelling and calibration of a ring-shaped electrostatic meter
Energy Technology Data Exchange (ETDEWEB)
Zhang Jianyong [University of Teesside, Middlesbrough TS1 3BA (United Kingdom); Zhou Bin; Xu Chuanlong; Wang Shimin, E-mail: zhoubinde1980@gmail.co [Southeast University, Sipailou 2, Nanjing 210096 (China)
2009-02-01
Ring-shaped electrostatic flow meters can provide very useful information on pneumatically transported air-solids mixtures. This type of meter is popular for measuring and controlling the pulverized coal flow distribution among conveyors leading to burners in coal-fired power stations, and it has also been used for research purposes, e.g. for investigating the electrification mechanism of air-solids two-phase flow. In this paper, the finite element method (FEM) is employed to analyze the characteristics of ring-shaped electrostatic meters, and a mathematical model has been developed to express the relationship between the meter's voltage output and the motion of charged particles in the sensing volume. The theoretical analysis and the test results using a belt rig demonstrate that the output of the meter depends upon many parameters, including the characteristics of the conditioning circuitry, the particle velocity vector, the amount and rate of change of the charge carried by particles, and the locations of particles. This paper also introduces a method to optimize the theoretical model via calibration.
Hanford statewide groundwater flow and transport model calibration report
International Nuclear Information System (INIS)
Law, A.; Panday, S.; Denslow, C.; Fecht, K.; Knepp, A.
1996-04-01
This report presents the results of the development and calibration of a three-dimensional, finite element model (VAM3DCG) for the unconfined groundwater flow system at the Hanford Site. This flow system is the largest radioactively contaminated groundwater system in the United States. Eleven groundwater plumes have been identified, containing organics, inorganics, and radionuclides. Because groundwater from the unconfined groundwater system flows into the Columbia River, the development of a groundwater flow model is essential to the long-term management of these plumes. Cost-effective decision making requires the capability to predict the effectiveness of various remediation approaches. Some of the alternatives available to remediate groundwater include: pumping contaminated water from the ground for treatment, with reinjection or transfer to other disposal facilities; containment of plumes by means of impermeable walls, physical barriers, and hydraulic control measures; and, in some cases, management of groundwater via planned recharge and withdrawals. Implementation of these methods requires knowledge of the groundwater flow system and how it responds to remedial actions
MCNP4A: Features and philosophy
International Nuclear Information System (INIS)
Hendricks, J.S.
1993-01-01
This paper describes MCNP, states its philosophy, introduces a number of new features becoming available with version MCNP4A, and answers a number of questions asked by participants in the workshop. MCNP is a general-purpose three-dimensional neutron, photon and electron transport code. Its philosophy is ''Quality, Value and New Features.'' Quality is exemplified by new software quality assurance practices and a program of benchmarking against experiments. Value includes a strong emphasis on documentation and code portability. New features are the third priority. MCNP4A is now available at Los Alamos. New features in MCNP4A include enhanced statistical analysis, distributed processor multitasking, new photon libraries, ENDF/B-VI capabilities, X-Windows graphics, dynamic memory allocation, expanded criticality output, periodic boundaries, plotting of particle tracks via SABRINA, and many other improvements. 23 refs
Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan
2017-01-01
Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.
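The idea of folding an SFC directly into the objective function is compact enough to sketch. Below is a minimal Python illustration, assuming a Q95 low-flow index as the SFC and an equal weighting between the NSE term and the SFC error; the study's actual SFC definitions and weights may differ.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean
    the simulation is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def q95(q):
    """Hypothetical SFC: the flow exceeded 95% of the time (low-flow index)."""
    return np.percentile(q, 5)

def combined_objective(obs, sim, w=0.5):
    """Blend of (1 - NSE) and the relative Q95 error, to be minimized;
    the 50/50 weight is an illustrative assumption."""
    sfc_err = abs(q95(sim) - q95(obs)) / q95(obs)
    return w * (1.0 - nse(obs, sim)) + (1.0 - w) * sfc_err
```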
DEFF Research Database (Denmark)
Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer
2017-01-01
The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimise the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalysed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is a more accurate, but also a computationally more expensive method. As a result, an important deviation between both approaches...
Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers
Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.
2010-01-01
This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
Calibration of a complex activated sludge model for the full-scale wastewater treatment plant
Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw
2011-01-01
In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for the full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and the fractions of carbonaceous substrate were performed. In the steady-state and dynamic calibrations, a successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis revealed that u...
Energy Technology Data Exchange (ETDEWEB)
Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry; Yang, Steve
2013-05-01
Disturbance data recorded by phasor measurement units (PMUs) offer opportunities to improve the integrity of dynamic models. However, manually tuning parameters through event play-back demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of the EKF with parameter calibration is discussed. Case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
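The augmented-state EKF idea is easy to sketch: append the unknown parameter to the state vector with random-walk dynamics and let the filter estimate it from the recorded response. The toy damped oscillator below stands in for a generator dynamic model; the noise levels and tuning constants are illustrative assumptions, not the paper's setup.

```python
import numpy as np

dt, k, c_true = 0.01, 4.0, 0.6           # toy oscillator: x'' = -k x - c x'
rng = np.random.default_rng(0)

# Simulate a "recorded disturbance": noisy position measurements.
x, v, meas = 1.0, 0.0, []
for _ in range(2000):
    x, v = x + dt * v, v + dt * (-k * x - c_true * v)
    meas.append(x + rng.normal(0.0, 0.01))

# EKF on the augmented state z = [x, v, c]; c follows a random walk.
z = np.array([1.0, 0.0, 0.1])             # deliberately wrong initial c
P = np.diag([0.1, 0.1, 1.0])
Q = np.diag([1e-8, 1e-8, 1e-6])           # process noise (tuning assumption)
R = 0.01 ** 2                             # measurement noise variance
H = np.array([[1.0, 0.0, 0.0]])           # only position is measured
for y in meas:
    x_, v_, c_ = z                        # prediction step
    z = np.array([x_ + dt * v_, v_ + dt * (-k * x_ - c_ * v_), c_])
    F = np.array([[1.0, dt, 0.0],
                  [-dt * k, 1.0 - dt * c_, -dt * v_],
                  [0.0, 0.0, 1.0]])
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                   # update step
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(3) - K @ H) @ P

print(f"calibrated damping ~ {z[2]:.3f} (true value {c_true})")
```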
MCNP-REN a Monte Carlo tool for neutron detector design
Abhold, M E
2002-01-01
The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo code developed at Los Alamos National Laboratory, Monte Carlo N-Particle (MCNP), was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP-Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program, predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of mixed oxide fresh fuel w...
A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.
Tian, Siyu; Huang, Xiaoxia; Li, Hongga
2017-03-15
Since Lagrangian model coefficients vary with conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper proposes a new method to calibrate a Lagrangian model with a time series of Envisat ASAR images. Oil slicks extracted from the time series of images form a detected trajectory of the slick. The Lagrangian model is calibrated by minimizing the difference between the simulated trajectory and the detected trajectory. The mean center position distance difference (MCPD) and the rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluation metrics. These two parameters are used to evaluate the performance of the Lagrangian transport model with different coefficient combinations. The method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with related satellite observations, suggesting the new method is effective for calibrating Lagrangian models.
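A small sketch of the two evaluation metrics: the SDE center and orientation are taken here from the mean and covariance of slick pixel (or particle) coordinates, which is one common convention; the paper's exact SDE construction may differ.

```python
import numpy as np

def sde_center_angle(points):
    """Mean center and major-axis angle of a standard deviational ellipse,
    derived from the covariance of a 2-D point cloud."""
    pts = np.asarray(points, float)
    center = pts.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(pts.T))
    major = vecs[:, np.argmax(vals)]          # major-axis direction
    return center, np.arctan2(major[1], major[0])

def mcpd_rd(simulated_pts, detected_pts):
    """MCPD: distance between mean centers; RD: difference between
    major-axis orientations (axis angles are pi-periodic)."""
    c1, a1 = sde_center_angle(simulated_pts)
    c2, a2 = sde_center_angle(detected_pts)
    d = abs(a1 - a2) % np.pi
    return np.linalg.norm(c1 - c2), min(d, np.pi - d)
```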
Directory of Open Access Journals (Sweden)
Hammam Oktajianto
2014-12-01
Full Text Available The gas-cooled nuclear reactor is a Generation IV concept which has been receiving significant attention due to many desired characteristics such as inherent safety, modularity, relatively low cost, short construction period, and easy financing. The high temperature reactor (HTR) pebble-bed, one type of gas-cooled reactor concept, is getting particular attention. In HTR pebble-bed design, the radius and enrichment of the fuel kernel are the key parameters that can be chosen freely to determine the desired value of criticality. This paper models a 10 MW HTR pebble-bed and determines the enrichment and fuel kernel radius needed to reach the criticality value of the reactor. The TRISO coated fuel particles were modelled explicitly and distributed in the fuelled region of the fuel pebbles using a simple-cubic (SC) lattice. The pebble-bed balls and moderator balls were distributed in the core zone using a body-centred cubic lattice, assuming fresh fuel with the enrichment varied from 7% to 17% in 1% steps and the fuel kernel radius varied from 175 to 300 µm in 25 µm steps. The geometrical model of the full reactor was obtained using the lattice and universe facilities provided by MCNP4C. The details of the model are discussed with the necessary simplifications. Criticality calculations were conducted by the Monte Carlo transport code MCNP4C with the continuous-energy nuclear data library ENDF/B-VI. The calculation results show that a critical condition is achieved at enrichments of 15-17% for a kernel radius of 200 µm, 13-17% at 225 µm, 12-15% at 250 µm, 11-14% at 275 µm, and 10-13% at 300 µm, so these combinations of enrichment and kernel radius can be considered for the HTR 10 MW. Keywords: MCNP4C, HTR, enrichment, radius, criticality
Preliminary evaluation of pin power distribution for fuel assemblies of SMART by MCNP
International Nuclear Information System (INIS)
Kim, Kyo Youn
1998-08-01
The Monte Carlo transport code MCNP can describe an object in sophisticated detail using three-dimensional modelling and can adopt a continuous-energy cross-section library. MCNP has therefore been widely utilized in the field of radiation physics to estimate fluxes and dose rates for nuclear facilities and to review results from conventional methods such as the discrete ordinates method and the point kernel method. The Monte Carlo method has recently been introduced to estimate the neutron multiplication factor and pin power distribution in the fuel assemblies of a reactor core. The operating thermal power of the SMART core is 330 MWt and there are 57 fuel assemblies in the core. In this study it was assumed that the core has 4 types of fuel assemblies. MCNP4a was used to estimate the criticality and normalized pin power distribution in a fuel assembly of the SMART core. The results from the MCNP4a calculations can be used to review those from nuclear design/analysis codes. It is complicated to extract the data of interest from the MCNP output list and to normalize the pin power distribution in a fuel assembly, because MCNP is not a dedicated nuclear design/analysis code. In this study a program, FAPIN, was developed to generate a normalized pin power distribution from the MCNP output list. (author). 11 refs
Comparisons between MCNP, EGS4 and experiment for clinical electron beams.
Jeraj, R; Keall, P J; Ostwald, P M
1999-03-01
Understanding the limitations of Monte Carlo codes is essential in order to avoid systematic errors in simulations, and to suggest further improvement of the codes. MCNP and EGS4, Monte Carlo codes commonly used in medical physics, were compared and evaluated against electron depth dose data and experimental backscatter results obtained using clinical radiotherapy beams. Different physical models and algorithms used in the codes give significantly different depth dose curves and electron backscattering factors. The default version of MCNP calculates electron depth dose curves which are too penetrating. The MCNP results agree better with experiment if the ITS-style energy-indexing algorithm is used. EGS4 underpredicts electron backscattering for high-Z materials. The results slightly improve if optimal PRESTA-I parameters are used. MCNP simulates backscattering well even for high-Z materials. To conclude the comparison, a timing study was performed. EGS4 is generally faster than MCNP and use of a large number of scoring voxels dramatically slows down the MCNP calculation. However, use of a large number of geometry voxels in MCNP only slightly affects the speed of the calculation.
Comparisons between MCNP, EGS4 and experiment for clinical electron beams
International Nuclear Information System (INIS)
Jeraj, R.; Keall, P.J.; Ostwald, P.M.
1999-01-01
Understanding the limitations of Monte Carlo codes is essential in order to avoid systematic errors in simulations, and to suggest further improvement of the codes. MCNP and EGS4, Monte Carlo codes commonly used in medical physics, were compared and evaluated against electron depth dose data and experimental backscatter results obtained using clinical radiotherapy beams. Different physical models and algorithms used in the codes give significantly different depth dose curves and electron backscattering factors. The default version of MCNP calculates electron depth dose curves which are too penetrating. The MCNP results agree better with experiment if the ITS-style energy-indexing algorithm is used. EGS4 underpredicts electron backscattering for high-Z materials. The results slightly improve if optimal PRESTA-I parameters are used. MCNP simulates backscattering well even for high-Z materials. To conclude the comparison, a timing study was performed. EGS4 is generally faster than MCNP and use of a large number of scoring voxels dramatically slows down the MCNP calculation. However, use of a large number of geometry voxels in MCNP only slightly affects the speed of the calculation. (author)
Calibration models for density borehole logging - construction report
International Nuclear Information System (INIS)
Engelmann, R.E.; Lewis, R.E.; Stromswold, D.C.
1995-10-01
Two machined blocks of magnesium and aluminum alloys form the basis for Hanford's density models. The blocks provide known densities of 1.780 ± 0.002 g/cm³ and 2.804 ± 0.002 g/cm³ for calibrating borehole logging tools that measure density based on gamma-ray scattering from a source in the tool. Each block is approximately 33 x 58 x 91 cm (13 x 23 x 36 in.) with cylindrical grooves cut into the sides of the blocks to hold steel casings of inner diameter 15 cm (6 in.) and 20 cm (8 in.). Spacers that can be inserted between the blocks and casings can create air gaps of thickness 0.64, 1.3, 1.9, and 2.5 cm (0.25, 0.5, 0.75 and 1.0 in.), simulating air gaps that can occur in actual wells from hole enlargements behind the casing.
Energy Technology Data Exchange (ETDEWEB)
Fonseca, Telma Cristina Ferreira
2009-07-01
Intensity Modulated Radiation Therapy (IMRT) is an advanced treatment technique used worldwide in oncology. In this master's work, a software package for simulating the IMRT protocol, named SOFT-RT, was developed within the research group 'Nucleo de Radiacoes Ionizantes' (NRI) at UFMG. The computational system SOFT-RT produces the absorbed dose simulation of the radiotherapy treatment through a three-dimensional voxel model of the patient. The SISCODES code, from the NRI research group, helps in producing the voxel model of the region of interest from a set of digitalized CT or MRI images. SOFT-RT also allows rotation and translation of the model about the coordinate system axes for better visualization of the model and the beam. SOFT-RT collects and exports the necessary parameters to the MCNP code, which carries out the nuclear radiation transport towards the tumor and adjacent healthy tissues for each orientation and position of the planned beam. Through three-dimensional visualization of the voxel model of a patient, it is possible to focus on a tumoral region while preserving the whole tissues around it, taking into account exactly where the radiation beam passes through, which tissues are affected, and how much dose is deposited in each tissue. The Out-module of SOFT-RT imports the results and expresses the dose response by superimposing the dose and the voxel model in gray scale in a three-dimensional graphic representation. This master thesis presents the new computational system for radiotherapy treatment, the SOFT-RT code, which has been developed using the robust and multi-platform C++ programming language with the OpenGL graphics packages. The Linux operating system was adopted with the goal of running it on an open-source, freely accessible platform. Preliminary simulation results for a cerebral tumor case are reported, as well as some dosimetric evaluations. (author)
Energy Technology Data Exchange (ETDEWEB)
Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-02-01
Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
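The likelihood-scaling idea is simple to sketch. Assuming an AR(1) approximation for the residual autocorrelation (the paper's effective-sample-size construction may differ), one deflates an independent-Gaussian log-likelihood by n_eff/n so the densely sampled velocity trace does not dominate the posterior:

```python
import numpy as np

def effective_sample_size(residuals):
    """AR(1)-style effective sample size: n_eff = n (1 - rho) / (1 + rho),
    with rho the lag-1 autocorrelation of the residual series."""
    r = np.asarray(residuals, float)
    rho = np.clip(np.corrcoef(r[:-1], r[1:])[0, 1], 0.0, 0.99)
    return len(r) * (1.0 - rho) / (1.0 + rho)

def scaled_log_likelihood(obs, sim, sigma):
    """Independent-Gaussian log-likelihood deflated by n_eff / n."""
    res = np.asarray(obs, float) - np.asarray(sim, float)
    loglik = -0.5 * np.sum((res / sigma) ** 2) - res.size * np.log(sigma)
    return (effective_sample_size(res) / res.size) * loglik
```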
Effect of calibration data series length on performance and optimal parameters of hydrological model
Directory of Open Access Journals (Sweden)
Chuan-zhe Li
2010-12-01
Full Text Available In order to assess the effects of calibration data series length on the performance and optimal parameter values of a hydrological model in ungauged or data-limited catchments (where data are non-continuous and fragmental in some catchments), we used non-continuous calibration periods to obtain more independent streamflow data for SIMHYD (simple hydrology model) calibration. Nash-Sutcliffe efficiency and percentage water balance error were used as performance measures. The particle swarm optimization (PSO) method was used to calibrate the rainfall-runoff models. Different lengths of data series ranging from one year to ten years, randomly sampled, were used to study the impact of calibration data series length. Fifty-five relatively unimpaired catchments located all over Australia with daily precipitation, potential evapotranspiration, and streamflow data were tested to obtain more general conclusions. The results show that longer calibration data series do not necessarily result in better model performance. In general, eight years of data are sufficient to obtain steady estimates of model performance and parameters for the SIMHYD model. It is also shown that most humid catchments require fewer calibration data to obtain a good performance and stable parameter values. The model performs better in humid and semi-humid catchments than in arid catchments. Our results may have useful and interesting implications for the efficiency of using limited observation data for hydrological model calibration in different climates.
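For orientation, a bare-bones particle swarm minimizer of the kind used in such calibrations is sketched below; the inertia and acceleration constants are common textbook defaults, not the study's settings. The objective for SIMHYD calibration would be something like 1 - NSE of the simulated streamflow.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=20, n_iter=100, seed=1):
    """Minimal PSO: bounds is a list of (low, high) pairs per parameter;
    returns the best parameter vector and its objective value."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()
```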
The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...
Walsh, Colin G; Sharman, Kavya; Hripcsak, George
2017-12-01
Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration
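Of the calibration methods compared, Platt scaling is the most compact to illustrate: fit a one-variable logistic regression mapping the model's raw scores to outcomes on a held-out set, then use it to rescale new predictions. The sketch below uses scikit-learn; the near-zero regularization is an assumption to approximate the classical unregularized two-parameter fit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def platt_scale(val_scores, val_outcomes):
    """Fit Platt scaling on validation scores (e.g., predicted log-odds)
    and return a function mapping new scores to calibrated probabilities."""
    lr = LogisticRegression(C=1e6)  # effectively unregularized
    lr.fit(np.asarray(val_scores).reshape(-1, 1), val_outcomes)
    return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]
```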
Bhandari, Ammar B; Nelson, Nathan O; Sweeney, Daniel W; Baffaut, Claire; Lory, John A; Senaviratne, Anomaa; Pierzynski, Gary M; Janssen, Keith A; Barnes, Philip L
2017-11-01
Process-based computer models have been proposed as a tool to generate data for phosphorus (P) Index assessment and development. Although models are commonly used to simulate P loss from agriculture under managements that differ from the calibration data, this use of models has not been fully tested. The objective of this study is to determine if the Agricultural Policy Environmental eXtender (APEX) model can accurately simulate runoff, sediment, total P, and dissolved P loss from 0.4 to 1.5 ha agricultural fields under managements that differ from the calibration data. The APEX model was calibrated with field-scale data from eight different managements at two locations (management-specific models). The calibrated models were then validated, either with the same management used for calibration or with different managements. Location models were also developed by calibrating APEX with data from all managements. The management-specific models resulted in satisfactory performance when used to simulate runoff, total P, and dissolved P within their respective systems, with r² > 0.50, Nash-Sutcliffe efficiency > 0.30, and percent bias within ±35% for runoff and ±70% for total and dissolved P. When applied outside the calibration management, the management-specific models only met the minimum performance criteria in one-third of the tests. The location models had better model performance when applied across all managements compared with management-specific models. Our results suggest that models only be applied within the managements used for calibration and that data be included from multiple management systems for calibration when using models to assess management effects on P loss or evaluate P Indices.
MCNP Perturbation Capability for Monte Carlo Criticality Calculations
International Nuclear Information System (INIS)
Hendricks, J.S.; Carter, L.L.; McKinney, G.W.
1999-01-01
The differential operator perturbation capability in MCNP4B has been extended to automatically calculate perturbation estimates for the track length estimate of k_eff in MCNP4B. The additional corrections required in certain cases for MCNP4B are no longer needed. Calculating the effect of small design changes on the criticality of nuclear systems with MCNP is now straightforward
Directory of Open Access Journals (Sweden)
Huseyin Ozan Tekin
2016-01-01
Full Text Available Gamma-ray measurements in various research fields require efficient detectors. One of these research fields is the measurement of mass attenuation coefficients of different materials. Apart from experimental studies, the Monte Carlo (MC) method has become one of the most popular tools in detector studies. An NaI(Tl) detector has been modeled and, for a validation study of the modeled detector, the absolute efficiency of a 3 × 3 inch cylindrical NaI(Tl) detector has been calculated using the general-purpose Monte Carlo code MCNP-X (version 2.4.0) and compared with previous studies in the literature in the range of 661-2620 keV. In the present work, the applicability of the MCNP-X Monte Carlo code for the mass attenuation of a concrete sample as a building material at photon energies of 59.5 keV, 80 keV, 356 keV, 661.6 keV, 1173.2 keV, and 1332.5 keV has been tested using the validated NaI(Tl) detector, and the mass attenuation coefficients of the concrete sample have been calculated. The calculated results agree well with experimental and other theoretical results, and indicate that this procedure can be followed to determine gamma-ray attenuation data at other required energies or in new complex materials. It can be concluded that Monte Carlo data are a strong tool not only for efficiency studies but also for mass attenuation coefficient calculations.
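The underlying relation is the narrow-beam Beer-Lambert law, mu/rho = ln(I0/I) / (rho t). A small worked sketch with hypothetical counts (the numbers are illustrative, not the paper's measurements):

```python
import numpy as np

def mass_attenuation(i0, i, density_g_cm3, thickness_cm):
    """Mass attenuation coefficient in cm^2/g from a narrow-beam
    transmission measurement: mu/rho = ln(I0/I) / (rho * t)."""
    return np.log(i0 / i) / (density_g_cm3 * thickness_cm)

# Hypothetical example: a 5 cm concrete slab (2.3 g/cm^3) at 661.6 keV.
print(mass_attenuation(i0=120_000, i=45_000,
                       density_g_cm3=2.3, thickness_cm=5.0))
```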
International Nuclear Information System (INIS)
Carl Stern; Martin Lee
1999-01-01
Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models
Comparison of global optimization approaches for robust calibration of hydrologic model parameters
Jung, I. W.
2015-12-01
Robustness of the calibrated parameters of hydrologic models is necessary to provide reliable predictions of future watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, objective functions, hydrologic model structures, and optimization methods. To do this, the combination of three global optimization methods (SCE-UA, Micro-GA, and DREAM) and four hydrologic models (SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance under different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function showed better performance than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one to seven years was hard to generalize, because the four hydrologic models have different levels of complexity and different years carry different information content in the hydrological observations. Acknowledgements This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
Analysis of parallel computing performance of the code MCNP
International Nuclear Information System (INIS)
Wang Lei; Wang Kan; Yu Ganglin
2006-01-01
Parallel computing can effectively reduce the running time of the code MCNP. With MPI message-passing software, MCNP5 can perform parallel computing on a PC cluster running the Windows operating system. The parallel computing performance of MCNP is influenced by factors such as the type, the complexity level, and the parameter configuration of the computing problem. This paper analyzes the parallel computing performance of MCNP with regard to these factors and gives measures to improve the MCNP parallel computing performance. (authors)
MatMCNP: A Code for Producing Material Cards for MCNP
Energy Technology Data Exchange (ETDEWEB)
DePriest, Kendall Russell [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Saavedra, Karen C. [American Structurepoint, Inc., Indianapolis, IN (United States)
2014-09-01
A code for generating MCNP material cards (MatMCNP) has been written and verified for naturally occurring, stable isotopes. The program allows for material specification as either atomic or weight percent (fractions). MatMCNP also permits the specification of enriched lithium, boron, and/or uranium. In addition to producing the material cards for MCNP, the code calculates the atomic (or number) density in atoms/barn-cm as well as the multiplier that should be used to convert neutron and gamma fluences into dose in the material specified.
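The atoms/barn-cm conversion that MatMCNP performs is standard: N = rho N_A / A x 10^-24. A sketch for a single-isotope material is below; MatMCNP's actual card layout, ZAID handling, and library suffixes may differ.

```python
AVOGADRO = 6.02214076e23  # atoms per mole

def atom_density(rho_g_cm3, atomic_mass_g_mol):
    """Atomic number density in atoms/(barn*cm), the unit used on MCNP
    cell cards: N = rho * N_A / A * 1e-24 cm^2/barn."""
    return rho_g_cm3 * AVOGADRO / atomic_mass_g_mol * 1.0e-24

# Example: aluminum (ZAID 13027) at 2.6989 g/cm^3 -> ~0.06024 atoms/barn-cm.
n_al = atom_density(2.6989, 26.9815)
print(f"m1   13027  1.0   $ N = {n_al:.5f} atoms/barn-cm")
```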
Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity
Louis, S.J.; Raines, G.L.
2003-01-01
We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors on calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
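A bare-bones real-coded genetic algorithm of the kind that could drive such a calibration is sketched below, with the cellular-automaton fitness function abstracted into a callable; the operator choices and rates are illustrative assumptions.

```python
import numpy as np

def ga_calibrate(fitness, bounds, pop=30, gens=60, seed=2):
    """Tiny real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. Lower fitness = better fit to observed data."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (pop, lo.size))
    for _ in range(gens):
        f = np.array([fitness(p) for p in x])
        # Binary tournament: each slot keeps the fitter of two candidates.
        pairs = rng.integers(0, pop, (pop, 2))
        winners = np.where(f[pairs[:, 0]] < f[pairs[:, 1]],
                           pairs[:, 0], pairs[:, 1])
        parents = x[winners]
        # Blend crossover with a shuffled partner, then Gaussian mutation.
        mates = parents[rng.permutation(pop)]
        a = rng.random((pop, 1))
        x = a * parents + (1.0 - a) * mates
        mutate = rng.random(x.shape) < 0.2
        x = np.clip(x + mutate * rng.normal(0, 0.05 * (hi - lo), x.shape),
                    lo, hi)
    f = np.array([fitness(p) for p in x])
    return x[np.argmin(f)]
```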
Full Core Criticality Modeling of Gas-Cooled Fast Reactor Using the SCALE6.0 and MCNP5 Code Packages
International Nuclear Information System (INIS)
Matijevic, M.; Jecmenica, R.; Pevec, D.; Trontl, K.
2012-01-01
The Gas-Cooled Fast Reactor (GFR) is one of the reactor concepts selected by the Generation IV International Forum (GIF) for the next generation of innovative nuclear energy systems. It was selected from a group of more than 100 prototypes, and its commercial availability is expected by 2030. The GFR shares the common goals of the other GIF advanced reactor types: economy, safety, proliferation resistance, availability, and sustainability. Several GFR fuel design concepts, such as plates, rod pins, and pebbles, are currently being investigated in order to meet the high-temperature constraints characteristic of the GFR working environment. In a previous study we compared the fuel depletion results for a heterogeneous GFR fuel assembly (FA), obtained with the TRITON6 sequence of the SCALE6.0 code system, with the MCNPX-CINDER90 and TRIPOLI-4-D codes. The present work is a continuation of the neutronic criticality analysis of heterogeneous FA and full core configurations of a GFR concept using the 3-D Monte Carlo codes KENO-VI/SCALE6.0 and MCNP5. The FA is based on a hexagonal mesh of fuel rods (uranium and plutonium carbide fuel, silicon carbide clad, helium gas coolant) with the axial reflector thickness varied for the purpose of optimization. Three reflector materials were analysed: zirconium carbide (ZrC), silicon carbide (SiC), and natural uranium. ZrC was selected as the reflector material, having the best contribution to the neutron economy and to the reactivity of the core. The core safety parameters were also analysed: a negative temperature coefficient of reactivity was verified for the heavy metal fuel and for coolant density loss. Criticality calculations for different FA active heights were performed and the reflector thickness was also adjusted. Finally, GFR full core criticality calculations using different active fuel rod heights and a fixed ZrC reflector height were done to find the optimal height of the core. The Shannon entropy of the GFR core fission distribution was proved to be
He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno
2018-03-01
This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.
Application of heuristic and machine-learning approach to engine model calibration
Cheng, Jie; Ryu, Kwang R.; Newman, C. E.; Davis, George C.
1993-03-01
Automation of engine model calibration procedures is a very challenging task because (1) the calibration process searches for a goal state in a huge, continuous state space, (2) calibration is often a lengthy and frustrating task because of complicated mutual interference among the target parameters, and (3) the calibration problem is heuristic by nature, and often heuristic knowledge for constraining a search cannot be easily acquired from domain experts. A combined heuristic and machine learning approach has, therefore, been adopted to improve the efficiency of model calibration. We developed an intelligent calibration program called ICALIB. It has been used on a daily basis for engine model applications, and has reduced the time required for model calibrations from many hours to a few minutes on average. In this paper, we describe the heuristic control strategies employed in ICALIB such as a hill-climbing search based on a state distance estimation function, incremental problem solution refinement by using a dynamic tolerance window, and calibration target parameter ordering for guiding the search. In addition, we present the application of a machine learning program called GID3* for automatic acquisition of heuristic rules for ordering target parameters.
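A toy version of the hill-climbing ingredient is sketched below, with a shrinking step standing in for ICALIB's dynamic tolerance window; the real system adds the state distance estimation and target-parameter ordering described above.

```python
def hill_climb(objective, x0, step, n_iter=200, shrink=0.5):
    """Coordinate-wise hill climbing: try +/- step on each parameter,
    keep improvements, and shrink the step when no move helps."""
    x, best = list(x0), objective(x0)
    for _ in range(n_iter):
        improved = False
        for i in range(len(x)):
            for d in (step[i], -step[i]):
                trial = list(x)
                trial[i] += d
                f = objective(trial)
                if f < best:
                    x, best, improved = trial, f, True
        if not improved:
            step = [s * shrink for s in step]  # tighten the window
    return x, best
```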
Multi-Site Calibration of Linear Reservoir Based Geomorphologic Rainfall-Runoff Models
Directory of Open Access Journals (Sweden)
Bahram Saeidifarzad
2014-09-01
Full Text Available Multi-site optimization of two adapted event-based geomorphologic rainfall-runoff models was presented using the Non-dominated Sorting Genetic Algorithm (NSGA-II) method for the South Fork Eel River watershed, California. The first model was developed based on an Unequal Cascade of Reservoirs (UECR), and the second model was presented as a modified version of the Geomorphological Unit Hydrograph based on Nash's model (GUHN). Two calibration strategies, semi-lumped and semi-distributed, were considered for imposing (or not imposing) the geomorphology relations in the models. The results of the models were compared with Nash's model. Results obtained using the observed data of two stations in the multi-site optimization framework showed reasonable efficiency values in both the calibration and the verification steps. The outcomes also showed that semi-distributed calibration of the modified GUHN model slightly outperformed the other models in both upstream and downstream stations during calibration. Both calibration strategies for the developed UECR model showed slightly better performance in the downstream station during the verification phase, but in the upstream station the modified GUHN model in the semi-lumped strategy slightly outperformed the other models. The semi-lumped calibration strategy could lead to lag time parameters consistent with the basin geomorphology and may be more suitable for data-based statistical analyses of the rainfall-runoff process.
MCNP5 development, verification, and performance
International Nuclear Information System (INIS)
Forrest B, Brown
2003-01-01
MCNP is a well-known and widely used Monte Carlo code for neutron, photon, and electron transport simulations. During the past 18 months, MCNP was completely reworked to provide MCNP5, a modernized version with many new features, including plotting enhancements, photon Doppler broadening, radiography image tallies, enhancements to source definitions, improved variance reduction, improved random number generator, tallies on a superimposed mesh, and edits of criticality safety parameters. Significant improvements in software engineering and adherence to standards have been made. Over 100 verification problems have been used to ensure that MCNP5 produces the same results as before and that all capabilities have been preserved. Testing on large parallel systems shows excellent parallel scaling. (author)
MCNP5 development, verification, and performance
Energy Technology Data Exchange (ETDEWEB)
Forrest B, Brown [Los Alamos National Laboratory (United States)
2003-07-01
MCNP is a well-known and widely used Monte Carlo code for neutron, photon, and electron transport simulations. During the past 18 months, MCNP was completely reworked to provide MCNP5, a modernized version with many new features, including plotting enhancements, photon Doppler broadening, radiography image tallies, enhancements to source definitions, improved variance reduction, improved random number generator, tallies on a superimposed mesh, and edits of criticality safety parameters. Significant improvements in software engineering and adherence to standards have been made. Over 100 verification problems have been used to ensure that MCNP5 produces the same results as before and that all capabilities have been preserved. Testing on large parallel systems shows excellent parallel scaling. (author)
MCNP application for the 21st century
International Nuclear Information System (INIS)
McKinney, G.W.
2000-01-01
The Los Alamos National Laboratory (LANL) Monte Carlo N-Particle radiation transport code, MCNP, has become an international standard for a wide spectrum of neutron, photon, and electron radiation transport applications. The latest version of the code, MCNP 4C, was released to the Radiation Safety Information Computational Center (RSICC) in February 2000. This paper describes the code development philosophy, new features and capabilities, applicability to various problems, and future directions
Neutron-induced photon production in MCNP
International Nuclear Information System (INIS)
Little, R.C.; Seamon, R.E.
1983-01-01
An improved method of neutron-induced photon production has been incorporated into the Monte Carlo transport code MCNP. The new method makes use of all partial photon-production reaction data provided by ENDF/B evaluators including photon-production cross sections as well as energy and angular distributions of secondary photons. This faithful utilization of sophisticated ENDF/B evaluations allows more precise MCNP calculations for several classes of coupled neutron-photon problems
Criticality calculations with MCNP trademark: A primer
International Nuclear Information System (INIS)
Harmon, C.D. II; Busch, R.D.; Briesmeister, J.F.; Forster, R.A.
1994-01-01
With the closure of many experimental facilities, the nuclear criticality safety analyst increasingly is required to rely on computer calculations to identify safe limits for the handling and storage of fissile materials. However, in many cases, the analyst has little experience with the specific codes available at his/her facility. This primer will help you, the analyst, understand and use the MCNP Monte Carlo code for nuclear criticality safety analyses. It assumes that you have a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with MCNP in particular. Appendix A gives an introduction to Monte Carlo techniques. The primer is designed to teach by example, with each example illustrating two or three features of MCNP that are useful in criticality analyses. Beginning with a Quickstart chapter, the primer gives an overview of the basic requirements for MCNP input and allows you to run a simple criticality problem with MCNP. This chapter is not designed to explain either the input or the MCNP options in detail; but rather it introduces basic concepts that are further explained in following chapters. Each chapter begins with a list of basic objectives that identify the goal of the chapter, and a list of the individual MCNP features that are covered in detail in the unique chapter example problems. It is expected that on completion of the primer you will be comfortable using MCNP in criticality calculations and will be capable of handling 80 to 90 percent of the situations that normally arise in a facility. The primer provides a set of basic input files that you can selectively modify to fit the particular problem at hand
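In the spirit of the primer's Quickstart, the script below writes out a minimal bare-sphere criticality deck; the HEU sphere radius, density, and kcode settings are illustrative assumptions, not the primer's exact example.

```python
# Minimal MCNP criticality input: cell cards, blank line, surface cards,
# blank line, data cards (material, kcode, initial source points).
deck = """\
bare heu sphere, quickstart-style example
1 1 -18.74 -1  imp:n=1    $ fissile sphere
2 0         1  imp:n=0    $ outside world

1 so 8.7                  $ sphere of radius 8.7 cm about the origin

m1 92235 1.0              $ pure U-235 (illustrative composition)
kcode 1000 1.0 10 110     $ 1000 hist/cycle, skip 10, run 110 cycles
ksrc 0 0 0                $ initial fission source point
"""
with open("sphere.inp", "w") as f:
    f.write(deck)
```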
Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach
Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.
2016-09-01
The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the Local-Correlation based Transition Modelling concept represents a valid way to include transitional effects into practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated, for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and calibrate it, while a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.
MCNP evaluation of top node control rod depletion below the core in KKL
International Nuclear Information System (INIS)
Beran, Tâm; Seltborg, Per; Lindahl, Sten-Örjan; Bieli, Roger; Ledergerber, Guido
2014-01-01
In previous studies, a significant discrepancy was identified in the BWR control rod top node depletion between the two core simulator nodal codes POLCA7 and PRESTO-2, which indicates that there is a large general uncertainty in nodal codes in calculating the top node depletion of fully withdrawn control rods. In this study, the stochastic Monte Carlo code MCNP has been used to calculate the top node control rod depletion for benchmarking the nodal codes. Using the TIP signal obtained from an extended TIP campaign below the core performed in the KKL reactor, the MCNP model has been verified by comparing the axial profile of the TIP data with the gamma flux calculated by MCNP. The MCNP results have also been compared with calculations from POLCA7, which was found to yield slightly higher depletion rates than MCNP. It was also found that the 10B depletion in the top node is very sensitive to the exact axial location of the control rod top when it is fully withdrawn. Using the MCNP results, the neutron flux model below the core in the nodal codes can be improved by implementing an exponential function for the neutron flux. (author)
MCNP6 Simulation of Light and Medium Nuclei Fragmentation at Intermediate Energies
Energy Technology Data Exchange (ETDEWEB)
Mashnik, Stepan Georgievich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kerby, Leslie Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-05-22
MCNP6, the latest and most advanced LANL Monte Carlo transport code, representing a merger of MCNP5 and MCNPX, is actually much more than the sum of those two computer codes; MCNP6 is available to the public via RSICC at Oak Ridge, TN, USA. In the present work, MCNP6 was validated and verified (V&V) against different experimental data on intermediate-energy fragmentation reactions, and against results by several other codes, using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators, CEM03.03 and LAQGSM03.03. It was found that MCNP6 using CEM03.03 and LAQGSM03.03 describes well fragmentation reactions induced on light and medium target nuclei by protons and light nuclei of energies around 1 GeV/nucleon and below, and can serve as a reliable simulation tool for different applications, like cosmic-ray-induced single event upsets (SEUs), radiation protection, and cancer therapy with proton and ion beams, to name just a few. Future improvements of the predicting capabilities of MCNP6 for such reactions are possible, and are discussed in this work.
Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction
Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)
2001-01-01
In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.
Jiang, Sanyuan; Jomaa, Seifeddine; Büttner, Olaf; Rode, Michael
2014-05-01
Hydrological water quality modeling is increasingly used for investigating runoff and nutrient transport processes as well as watershed management, but it is mostly unclear how data availability determines model identification. In this study, the HYPE (HYdrological Predictions for the Environment) model, which is a process-based, semi-distributed hydrological water quality model, was applied in two different mesoscale catchments (Selke (463 km²) and Weida (99 km²)) located in central Germany to simulate discharge and inorganic nitrogen (IN) transport. PEST and DREAM(ZS) were combined with the HYPE model to conduct parameter calibration and uncertainty analysis. A split-sample test was used for model calibration (1994-1999) and validation (1999-2004). IN concentration and daily IN load were found to be highly correlated with discharge, indicating that IN leaching is mainly controlled by runoff. Both dynamics and balances of water and IN load were well captured, with NSE greater than 0.83 during the validation period. Multi-objective calibration (calibrating hydrological and water quality parameters simultaneously) was found to outperform step-wise calibration in terms of model robustness. Multi-site calibration was able to improve model performance at internal sites and decrease parameter posterior uncertainty and prediction uncertainty. Nitrogen-process parameters calibrated using continuous daily averages of nitrate-N concentration observations produced better and more robust simulations of IN concentration and load, and lower posterior parameter uncertainty and IN concentration prediction uncertainty, compared to calibration against discontinuous biweekly nitrate-N concentration measurements. Both PEST and DREAM(ZS) are efficient in parameter calibration. However, DREAM(ZS) is more sound in terms of parameter identification and uncertainty analysis than PEST because of its capability to evolve parameter posterior distributions and estimate prediction uncertainty based on global
International Nuclear Information System (INIS)
Christensen, L.H.; Pind, N.
1982-01-01
A matrix-independent fundamental parameter-based calibration model for an energy-dispersive X-ray fluorescence spectrometer has been developed. This model, which is part of a fundamental parameter approach quantification method, accounts for both the excitation and detection probability. For each secondary target a number of relative calibration constants are calculated on the basis of knowledge of the irradiation geometry, the detector specifications, and tabulated fundamental physical parameters. The absolute calibration of the spectrometer is performed by measuring one pure element standard per secondary target. For sample systems where all elements can be analyzed by means of the same secondary target the absolute calibration constant can be determined during the iterative solution of the basic equation. Calculated and experimentally determined relative calibration constants agree to within 5-10% of each other and so do the results obtained from the analysis of an NBS certified alloy using the two sets of constants. (orig.)
Polomčić, Dušan M.; Bajić, Dragoljub I.; Močević, Jelena M.
2015-01-01
The calibration process of a hydrodynamic model is usually done manually by 'testing' different values of the hydrogeological parameters and the hydraulic characteristics of the boundary conditions. By using the PEST program, automatic calibration of models has been introduced, and it has proved to significantly reduce the subjective influence of the model creator on the results. With the relatively new approach of PEST, i.e. with the introduction of so-called 'pilot points', the concept of homogeneou...
Electron/Photon Verification Calculations Using MCNP4B
Energy Technology Data Exchange (ETDEWEB)
D. P. Gierga; K. J. Adams
1999-04-01
MCNP4B was released in February 1997 with significant enhancements to electron/photon transport methods. These enhancements have been verified against a wide range of published electron/photon experiments, spanning high-energy bremsstrahlung production to electron transmission and reflection. The impact of several MCNP tally options and physics parameters was explored in detail. The agreement between experiment and simulation was usually within two standard deviations of the experimental and calculational errors. Furthermore, sub-step artifacts for bremsstrahlung production were shown to be mitigated. A detailed suite of electron depth dose calculations in water is also presented. Areas for future code development have also been explored and include the dependence of cell and detector tallies on different bremsstrahlung angular models and alternative variance reduction splitting schemes for bremsstrahlung production.
A Monte Carlo burnup code linking MCNP and REBUS
International Nuclear Information System (INIS)
Hanan, N.A.; Olson, A.P.; Pond, R.B.; Matos, J.E.
1998-01-01
The REBUS-3 burnup code, used in the ANL RERTR Program, is a very general code that uses diffusion theory (DIF3D) to obtain the fluxes required for reactor burnup analyses. Diffusion theory works well for most reactors. However, to include the effects of exact geometry and strong absorbers that are difficult to model using diffusion theory, a Monte Carlo method is required. MCNP, a general-purpose, generalized-geometry, time-dependent, Monte Carlo transport code, is the most widely used Monte Carlo code. This paper presents a linking of the MCNP code and the REBUS burnup code to perform these difficult analyses. The linked code will permit the use of the full capabilities of REBUS which include non-equilibrium and equilibrium burnup analyses. Results of burnup analyses using this new linked code are also presented. (author)
A Monte Carlo burnup code linking MCNP and REBUS
International Nuclear Information System (INIS)
Hanan, N. A.
1998-01-01
The REBUS-3 burnup code, used in the ANL RERTR Program, is a very general code that uses diffusion theory (DIF3D) to obtain the fluxes required for reactor burnup analyses. Diffusion theory works well for most reactors. However, to include the effects of exact geometry and strong absorbers that are difficult to model using diffusion theory, a Monte Carlo method is required. MCNP, a general-purpose, generalized-geometry, time-dependent, Monte Carlo transport code, is the most widely used Monte Carlo code. This paper presents a linking of the MCNP code and the REBUS burnup code to perform these difficult burnup analyses. The linked code will permit the use of the full capabilities of REBUS which include non-equilibrium and equilibrium burnup analyses. Results of burnup analyses using this new linked code are also presented
New Methods for Kinematic Modelling and Calibration of Robots
DEFF Research Database (Denmark)
Søe-Knudsen, Rune
2014-01-01
Improving a robot's accuracy increases its ability to solve certain tasks, and is therefore valuable. Practical ways of achieving this improved accuracy, even after robot repair, are also valuable. In this work, we introduce methods that improve the robot's accuracy and make it possible to maintain the accuracy in an easy and accessible way. The required equipment is accessible, since the cost is held to a minimum and it can be made with conventional processing equipment. Our first method calibrates the kinematics of a robot using known relative positions measured with the robot itself and a plate with holes matching the robot tool flange. The second method calibrates the kinematics using two robots. This method allows the robots to carry out the collection of measurements and the adjustment, by themselves, after the robots have been connected. Furthermore, we also propose a method for restoring...
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
Uncertainty modelling and code calibration for composite materials
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Branner, Kim; Mishnaevsky, Leon, Jr
2013-01-01
and measurement uncertainties which are introduced on the different scales. Typically, these uncertainties are taken into account in the design process using characteristic values and partial safety factors specified in a design standard. The value of the partial safety factors should reflect a reasonable balance...... to wind turbine blades are calibrated for two typical lay-ups using a large number of load cases and ratios between the aerodynamic forces and the inertia forces....
A Low Cost Calibration Method for Urban Drainage Models
DEFF Research Database (Denmark)
Rasmussen, Michael R.; Thorndahl, Søren; Schaarup-Jensen, Kjeld
2008-01-01
The calibration of the hydrological reduction coefficient is examined for a small catchment. The objective is to determine the hydrological reduction coefficient, which describes how much of the precipitation that falls on impervious areas actually ends up in the sewer... to what can be found with intensive in-sewer measurement of rain and runoff. The results also clearly indicate that there is a large variation in the hydrological reduction coefficient between different rain events.
Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin
Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.
2006-01-01
The ability to apply a hydrologic model to a large number of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step wise, multiple objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.
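As a rough illustration of the step wise idea (calibrate one process, freeze its parameters, then calibrate the next), the sketch below uses SciPy's differential evolution as a stand-in for the Shuffled Complex Evolution algorithm; the model functions and data are synthetic placeholders, not the actual PRMS modules.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
obs_sr = 250 + 5 * rng.standard_normal(12)   # synthetic monthly solar radiation
obs_q = 10 + rng.standard_normal(365)        # synthetic daily runoff

def sim_sr(p):                               # toy stand-in for the SR module
    return np.full(12, p[0])

def sim_runoff(p_sr, p):                     # toy stand-in; reuses calibrated SR params
    return np.full(365, p_sr[0] * p[0])

# Step 1: calibrate the solar-radiation parameter alone.
res_sr = differential_evolution(lambda p: np.mean((sim_sr(p) - obs_sr) ** 2),
                                bounds=[(100, 400)], seed=1)

# Step 2: holding the SR parameter fixed, calibrate the runoff parameter.
res_q = differential_evolution(lambda p: np.mean((sim_runoff(res_sr.x, p) - obs_q) ** 2),
                               bounds=[(0.0, 0.1)], seed=1)
print(res_sr.x, res_q.x)
```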
MCNP6 fragmentation of light nuclei at intermediate energies
Energy Technology Data Exchange (ETDEWEB)
Mashnik, Stepan G., E-mail: mashnik@lanl.gov [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Kerby, Leslie M. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); University of Idaho, Moscow, ID 83844 (United States)
2014-11-11
Fragmentation reactions induced on light target nuclei by protons and light nuclei of energies around 1 GeV/nucleon and below are studied with the latest Los Alamos Monte Carlo transport code MCNP6 and with its cascade-exciton model (CEM) and Los Alamos version of the quark-gluon string model (LAQGSM) event generators, version 03.03, used as stand-alone codes. Such reactions are involved in various applications, such as cosmic-ray-induced single event upsets (SEUs), radiation protection, and cancer therapy with proton and ion beams; therefore, it is important that MCNP6 simulate them as well as possible. CEM and LAQGSM assume that intermediate-energy fragmentation reactions on light nuclei generally occur in two stages. The first stage is the intranuclear cascade (INC), followed by the second stage, Fermi breakup disintegration of the light excited residual nuclei produced after the INC. Both CEM and LAQGSM also account for the coalescence of light fragments (complex particles) up to {sup 4}He from energetic nucleons emitted during the INC. We investigate the validity and performance of MCNP6, CEM, and LAQGSM in simulating fragmentation reactions at intermediate energies and discuss possible ways of further improving these codes.
DEFF Research Database (Denmark)
Christensen, Leif Højslet; Pind, Niels
1982-01-01
A matrix-independent fundamental parameter-based calibration model for an energy-dispersive X-ray fluorescence spectrometer has been developed. This model, which is part of a fundamental parameter approach quantification method, accounts for both the excitation and detection probability. For each...... secondary target a number of relative calibration constants are calculated on the basis of knowledge of the irradiation geometry, the detector specifications, and tabulated fundamental physical parameters. The absolute calibration of the spectrometer is performed by measuring one pure element standard per...
Influence of smoothing of X-ray spectra on parameters of calibration model
International Nuclear Information System (INIS)
Antoniak, W.; Urbanski, P.; Kowalska, E.
1998-01-01
The parameters of the calibration model before and after smoothing of X-ray spectra have been investigated. The calibration model was calculated using a multivariate procedure, namely partial least squares (PLS) regression. The investigation was performed on six sets of various standards used for the calibration of instruments based on the X-ray fluorescence principle. The following smoothing methods were compared: regression splines, Savitzky-Golay, and the Discrete Fourier Transform. The calculations were performed using the MATLAB software package and some home-made programs. (author)
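A minimal sketch of the kind of comparison the paper describes, assuming scikit-learn's PLS regression and SciPy's Savitzky-Golay filter as stand-ins for the MATLAB package and home-made programs; the spectra below are synthetic single-peak toys, not real XRF data.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(42)
channels = np.linspace(0, 20, 512)                     # energy axis, keV (synthetic)
conc = rng.uniform(0.1, 5.0, size=(40, 1))             # analyte concentrations
spectra = conc * np.exp(-(channels - 8.0) ** 2 / 0.5)  # one fluorescence peak each
spectra += 0.05 * rng.standard_normal(spectra.shape)   # counting noise

smoothed = savgol_filter(spectra, window_length=11, polyorder=3, axis=1)

for name, X in (("raw", spectra), ("Savitzky-Golay", smoothed)):
    pls = PLSRegression(n_components=3)
    pls.fit(X, conc)
    print(f"{name:15s} calibration R^2 = {pls.score(X, conc):.4f}")
```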
An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model
Tiernan, E. D.; Hodges, B. R.
2017-12-01
The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective function, genetic algorithm (modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
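The core of the routine's second stage, extracting non-dominated (Pareto-optimal) parameter sets from a population, can be sketched without the full genetic algorithm. Below is a hand-rolled non-dominated filter over invented SWMM-like parameters and objectives; it illustrates Pareto sorting only, not the modified NSGA-II itself.

```python
import numpy as np

def pareto_front(F):
    """Boolean mask of non-dominated rows of F (all objectives minimized)."""
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False
                break
    return keep

rng = np.random.default_rng(3)
# Candidate subcatchment parameters: imperviousness fraction, Manning roughness.
params = rng.uniform([0.2, 0.01], [0.8, 0.05], size=(200, 2))

def objectives(p):
    peak_err = np.abs(40 * p[:, 0] - 18)            # toy peak-flow error
    vol_err = np.abs(900 * p[:, 0] * p[:, 1] - 9)   # toy runoff-volume error
    return np.column_stack([peak_err, vol_err])

front = pareto_front(objectives(params))
print(f"{front.sum()} Pareto-optimal parameter sets out of {len(params)}")
```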
DEFF Research Database (Denmark)
Vangsgaard, Anna Katrine; Mutlu, Ayten Gizem; Gernaey, Krist
2013-01-01
BACKGROUND: A validated model describing the nitritation-anammox process in a granular sequencing batch reactor (SBR) system is an important tool for: a) design of future experiments and b) prediction of process performance during optimization, while applying process control, or during system scale......-up. RESULTS: A model was calibrated using a step-wise procedure customized for the specific needs of the system. The important steps in the procedure were initialization, steady-state and dynamic calibration, and validation. A fast and effective initialization approach was developed to approximate pseudo...... screening of the parameter space proposed by Sin et al. (2008) - to find the best fit of the model to dynamic data. Finally, the calibrated model was validated with an independent data set. CONCLUSION: The presented calibration procedure is the first customized procedure for this type of system...
Modeling, Calibration and Control for Extreme-Precision MEMS Deformable Mirrors, Phase I
National Aeronautics and Space Administration — Iris AO will develop electromechanical models and actuator calibration methods to enable open-loop control of MEMS deformable mirrors (DMs) with unprecedented...
Elsheikh, A. H.; Wheeler, M. F.; Hoteit, Ibrahim
2013-01-01
Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing of contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known
International Nuclear Information System (INIS)
Merino, J.; Cera, E.; Bruno, J.; Quinones, J.; Casas, I.; Clarens, F.; Gimenez, J.; Pablo, J. de; Rovira, M.; Martinez-Esparza, A.
2005-01-01
Calibration and testing are inherent aspects of any modelling exercise and consequently are key issues in developing a model for the oxidative dissolution of spent fuel. In the present work we present the outcome of the calibration process for the kinetic constants of a UO2 oxidative dissolution mechanism developed for use in a radiolytic model. Experimental data obtained in dynamic leaching experiments on unirradiated UO2 were used for this purpose. The iterative calibration process has provided some insight into the detailed mechanism taking place in the alteration of UO2, particularly the role of ·OH radicals and their interaction with the carbonate system. The results show that, although more simulations are needed for testing in different experimental systems, the calibrated oxidative dissolution mechanism could be included in radiolytic models to gain confidence in the prediction of the long-term alteration rate of spent fuel under repository conditions
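Schematically, calibrating a kinetic constant against dynamic leaching data reduces to a least-squares fit of an ODE solution. The sketch below uses a single first-order surrogate rate law and synthetic data (assumptions for illustration), not the actual multi-step radiolytic mechanism of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0, 100, 12)               # leaching times, hours (synthetic)
u_obs = 1.0 - np.exp(-2.0e-3 * t_obs)         # synthetic dissolved fraction
u_obs += 0.01 * np.random.default_rng(1).standard_normal(t_obs.size)

def model(k, t):
    # First-order surrogate for the oxidative-dissolution rate law.
    sol = solve_ivp(lambda t, y: k * (1.0 - y), (0.0, t[-1]), [0.0], t_eval=t)
    return sol.y[0]

fit = least_squares(lambda k: model(k[0], t_obs) - u_obs, x0=[1e-4],
                    bounds=([1e-6], [1e-1]))
print(f"calibrated rate constant k = {fit.x[0]:.3e} per hour")
```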
International Nuclear Information System (INIS)
Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi
2003-03-01
This study evaluates uncertainty in hydrogeological modelling and groundwater flow analysis. Three-dimensional groundwater flow at the Shobasama site in Tono was analyzed using two continuum models and one discontinuum model. The study domain covered an area of four kilometers in the east-west direction and six kilometers in the north-south direction. Moreover, to evaluate how the uncertainties in the hydrogeological structure model and in the groundwater simulation results decreased as the investigation progressed, the models were updated and calibrated for several hydrogeological modelling and groundwater flow analysis techniques, based on newly acquired information and knowledge. The acquired knowledge is as follows. When the models were updated with parameters and structures set as in the previous year, there was no large difference between the modelling methods. The model calibration matches the numerical simulation to observations of the pressure response caused by opening and closing a packer in the MIU-2 borehole. Each analysis technique reduces the residual sum of squares between observations and simulation results by adjusting hydrogeological parameters. However, the models adjust different parameters, such as hydraulic conductivity, effective porosity, specific storage, and anisotropy. When calibrating models, it is sometimes impossible to explain the phenomena only by adjusting parameters; in such cases, further investigation may be required to clarify details of the hydrogeological structure. Comparing the research from its beginning to this year leads to the following conclusions about the investigation. (1) Transient hydraulic data are an effective means of reducing the uncertainty of the hydrogeological structure. (2) Effective porosity for calculating pore water velocity of
International Nuclear Information System (INIS)
Wang Jianhua; Zhang Hualin
2008-01-01
A recently developed alternative brachytherapy seed, the Cs-1 Rev2 cesium-131, has begun to be used in clinical practice. The dosimetric characteristics of this source in various media, particularly in human tissues, have not been fully evaluated. The aim of this study was to calculate the dosimetric parameters of the Cs-1 Rev2 cesium-131 seed following the recommendations of the AAPM TG-43U1 report [Rivard et al., Med. Phys. 31, 633-674 (2004)] for new brachytherapy sources. Dose rate constants, radial dose functions, and anisotropy functions of the source in water, Virtual Water, and relevant human soft tissues were calculated using MCNP5 Monte Carlo simulations following the TG-43U1 formalism. The results yielded dose rate constants of 1.048, 1.024, 1.041, and 1.044 cGy h⁻¹ U⁻¹ in water, Virtual Water, muscle, and prostate tissue, respectively. The conversion factor for this new source between Virtual Water and water was 1.02, between muscle and water 1.006, and between prostate and water 1.004. The authors' calculation of anisotropy functions in a Virtual Water phantom agreed closely with Murphy's measurements [Murphy et al., Med. Phys. 31, 1529-1538 (2004)]. Our calculations of the radial dose function in water and Virtual Water are in good agreement with previous experimental and Monte Carlo studies. The TG-43U1 parameters for clinical applications in water, muscle, and prostate tissue are presented in this work
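The quantities listed (dose rate constant, radial dose function, anisotropy function) are the ingredients of the TG-43U1 dose-rate equation, which for the line-source formalism reads:

```latex
\dot{D}(r,\theta) = S_K \,\Lambda\,
\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\qquad r_0 = 1\ \mathrm{cm},\ \theta_0 = 90^\circ,
```

where S_K is the air-kerma strength, Λ the dose rate constant, G_L the line-source geometry function, g_L(r) the radial dose function, and F(r,θ) the 2D anisotropy function.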
Improvement, calibration and validation of a distributed hydrological model over France
Directory of Open Access Journals (Sweden)
P. Quintana Seguí
2009-02-01
The hydrometeorological model SAFRAN-ISBA-MODCOU (SIM) computes water and energy budgets on the land surface, river flows, and the level of several aquifers at the scale of France. SIM is composed of a meteorological analysis system (SAFRAN), a land surface model (ISBA), and a hydrogeological model (MODCOU). In this study, an exponential profile of hydraulic conductivity at saturation is introduced into the model and its impact analysed. The study also examines how calibration modifies the performance of the model. A very simple calibration method is implemented and applied to the parameters of hydraulic conductivity and subgrid runoff. The study shows that a better description of the hydraulic conductivity of the soil is important for simulating more realistic discharges. It also shows that the calibrated model is more robust than the original SIM. In fact, the calibration mainly affects the processes related to the dynamics of the flow (drainage and runoff), while the other relevant processes (like evaporation) remain stable. It is also shown that it is only worth introducing the new empirical parameterization of hydraulic conductivity if it is accompanied by a calibration of its parameters; otherwise the simulations can be degraded. In conclusion, the new parameterization is necessary to obtain good simulations. Calibration is a tool that must be used to improve the performance of distributed models like SIM that have some empirical parameters.
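The exponential profile referred to is commonly written in a form like the following (a generic parameterization assumed here; the paper's exact symbols and reference depth may differ):

```latex
K_{\mathrm{sat}}(z) = K_{\mathrm{sat},c}\, \exp\!\big[-f\,(z - d_c)\big],
```

where z is depth, d_c a reference (e.g. compacted) depth, K_sat,c the saturated hydraulic conductivity at that depth, and f a decay factor controlling how quickly conductivity decreases with depth.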
Predictive sensor based x-ray calibration using a physical model
International Nuclear Information System (INIS)
Fuente, Matias de la; Lutz, Peter; Wirtz, Dieter C.; Radermacher, Klaus
2007-01-01
Many computer-assisted surgery systems are based on intraoperative X-ray images. To achieve reliable and accurate results these images have to be calibrated for geometric distortions, which can be divided into constant distortions and distortions caused by magnetic fields. Instead of using an intraoperative calibration phantom that has to be visible within each image, resulting in overlaying markers, the presented approach directly takes advantage of the physical background of the distortions. Based on a computed physical model of an image intensifier and a magnetic field sensor, an online compensation of distortions can be achieved without the need for an intraoperative calibration phantom. The model has to be adapted once to each specific image intensifier through calibration, which is based on an optimization algorithm that systematically alters the physical model parameters until a minimal error is reached. Once calibrated, the model is able to predict the distortions caused by the measured magnetic field vector and build an appropriate dewarping function. The time needed for model calibration is not yet optimized and takes up to 4 h on a 3 GHz CPU. In contrast, the time needed for distortion correction is less than 1 s and therefore absolutely acceptable for intraoperative use. First evaluations showed that the model-based dewarping algorithm significantly reduced the distortions of an XRII with a 21 cm FOV. The model was able to predict and compensate approximately 80% of the distortions, to a remaining error of 0.45 mm (max) and 0.19 mm (rms)
The effects of model complexity and calibration period on groundwater recharge simulations
Moeck, Christian; Van Freyberg, Jana; Schirmer, Mario
2017-04-01
A significant number of groundwater recharge models exist that vary in terms of complexity (i.e., structure and parametrization). Typically, model selection and conceptualization are very subjective and can be a key source of uncertainty in recharge simulations. Another source of uncertainty is the implicit assumption that model parameters, calibrated over historical periods, are also valid for the simulation period. To the best of our knowledge there is no systematic evaluation of the effect of model complexity and calibration strategy on the performance of recharge models. To address this gap, we utilized a long-term recharge data set (20 years) from a large weighing lysimeter. We performed a differential split-sample test with four groundwater recharge models that vary in complexity. They were calibrated over six calibration periods with climatically contrasting conditions in a constrained Monte Carlo approach. Despite the climatically contrasting conditions, all models performed similarly well during calibration. However, during validation a clear effect of model structure on model performance was evident. The more complex, physically based models predicted recharge best, even when calibration and prediction periods had very different climatic conditions. In contrast, the simpler soil-water balance and lumped models performed poorly under such conditions. For these models we found a strong dependency on the chosen calibration period. In particular, our analysis showed that this can have relevant implications when recharge models are used as decision-making tools in a broad range of applications (e.g. water availability, climate change impact studies, water resource management, etc.).
Multivariate Calibration Models for Sorghum Composition using Near-Infrared Spectroscopy
Energy Technology Data Exchange (ETDEWEB)
Wolfrum, E.; Payne, C.; Stefaniak, T.; Rooney, W.; Dighe, N.; Bean, B.; Dahlberg, J.
2013-03-01
NREL developed calibration models based on near-infrared (NIR) spectroscopy coupled with multivariate statistics to predict compositional properties relevant to cellulosic biofuels production for a variety of sorghum cultivars. A robust calibration population was developed in an iterative fashion. The quality of models developed using the same sample geometry on two different types of NIR spectrometers and two different sample geometries on the same spectrometer did not vary greatly.
Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang
2012-11-01
Outlier samples strongly influence the precision of the calibration model in measuring the soluble solids content of melons with NIR spectra. According to the possible sources of outlier samples, three methods (the predicted-concentration residual test, the Chauvenet test, and the leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected in a calibration set of 85 fruit samples. Because the 9 suspicious outlier samples might contain some non-outlier samples, they were returned to the model one by one to see whether they influenced the model and its prediction precision. In this way, 5 samples which were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. This model performed better than the one developed with no outliers eliminated from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and was more representative and stable than the one with all 9 suspicious samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).
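Of the three outlier screens, the Chauvenet test is the easiest to state compactly: a sample is rejected when the expected number of equally extreme observations in the data set falls below one half. A minimal sketch on invented calibration residuals:

```python
import numpy as np
from scipy.special import erfc

def chauvenet_outliers(x):
    """Flag points whose expected count under a normal assumption is < 0.5."""
    mu, sigma = x.mean(), x.std(ddof=1)
    prob = erfc(np.abs(x - mu) / (sigma * np.sqrt(2.0)))  # two-sided tail probability
    return x.size * prob < 0.5

residuals = np.array([0.21, -0.35, 0.10, 0.05, -0.18, 2.90, 0.02, -0.27])
print(chauvenet_outliers(residuals))   # only the 2.90 residual is flagged
```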
Energy Technology Data Exchange (ETDEWEB)
Stewart, R [University of Washington, Seattle, WA (United States); Streitmatter, S [University of Utah Hospitals, Salt Lake City, UT (United States); Traneus, E [RAYSEARCH LABORATORIES AB, Stockholm (Sweden); Moskvin, V [St. Jude Children’s Hospital, Memphis, TN (United States); Schuemann, J [Massachusetts General Hospital, Boston, MA (United States)
2016-06-15
Purpose: Validate the implementation of a published RBE model for DSB induction (RBE_DSB) in several general-purpose Monte Carlo (MC) code systems and the RayStation™ treatment planning system (TPS). For protons and other light ions, DSB induction is a critical initiating molecular event that correlates well with the RBE for cell survival. Methods: An efficient algorithm to incorporate information on proton and light-ion RBE_DSB from the independently tested Monte Carlo Damage Simulation (MCDS) has now been integrated into MCNP (Stewart et al. PMB 60, 8249-8274, 2015), FLUKA, TOPAS and a research build of the RayStation™ TPS. To cross-validate the RBE_DSB model implementation, LET distributions, depth-dose and lateral (dose and RBE_DSB) profiles for monodirectional monoenergetic (100 to 200 MeV) protons incident on a water phantom are compared. The effects of recoil and secondary ion production ({sup 2}H{sup +}, {sup 3}H{sup +}, {sup 3}He{sup 2+}, {sup 4}He{sup 2+}), spot size (3 and 10 mm), and transport physics on beam profiles and RBE_DSB are examined. Results: Depth-dose and RBE_DSB profiles among all of the MC models are in excellent agreement using a 1 mm distance criterion (the width of a voxel). For a 100 MeV proton beam (10 mm spot), RBE_DSB = 1.2 ± 0.03 (2-3%) at the tip of the Bragg peak and increases to 1.59 ± 0.3 at 2 mm distal to the Bragg peak. RBE_DSB tends to decrease as the kinetic energy of the incident proton increases. Conclusion: The model for proton RBE_DSB has been accurately implemented into FLUKA, MCNP, TOPAS and the RayStation™ TPS. The transport of secondary light ions (Z > 1) has a significant impact on RBE_DSB, especially distal to the Bragg peak, although light ions have a small effect on (dose × RBE_DSB) profiles. The ability to incorporate spatial variations in proton RBE within a TPS creates new opportunities to individualize treatment plans and increase the therapeutic ratio. Dr. Erik Traneus is employed full-time as a Research Scientist
Our calibrated model has poor predictive value: An example from the petroleum industry
International Nuclear Information System (INIS)
Carter, J.N.; Ballester, P.J.; Tavassoli, Z.; King, P.R.
2006-01-01
It is often assumed that once a model has been calibrated to measurements it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability, the assumption is that the model needs to be improved in some way. Using an example from the petroleum industry, we show that cases can exist where calibrated models have limited predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability. We have been unable to find ways of identifying which calibrated models will have some predictive capacity and which will not
Our calibrated model has poor predictive value: An example from the petroleum industry
Energy Technology Data Exchange (ETDEWEB)
Carter, J.N. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom)]. E-mail: j.n.carter@ic.ac.uk; Ballester, P.J. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom); Tavassoli, Z. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom); King, P.R. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom)
2006-10-15
It is often assumed that once a model has been calibrated to measurements it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability, the assumption is that the model needs to be improved in some way. Using an example from the petroleum industry, we show that cases can exist where calibrated models have limited predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability. We have been unable to find ways of identifying which calibrated models will have some predictive capacity and which will not.
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and to supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state-space, and to efficiently search a large, complex parameter space for behavioural parameter sets that produce predictions falling within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The calibration problem is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
Calibration of Mine Ventilation Network Models Using the Non-Linear Optimization Algorithm
Directory of Open Access Journals (Sweden)
Guang Xu
2017-12-01
Effective ventilation planning is vital to underground mining. To ensure stable operation of the ventilation system and to avoid airflow disorder, mine ventilation network (MVN) models have been widely used in simulating and optimizing mine ventilation systems. However, one of the challenges for MVN model simulation is that the simulated airflow distribution does not match the measured data. To solve this problem, a simple and effective calibration method is proposed based on a non-linear optimization algorithm. The calibrated model not only brings the simulated airflow distribution into accordance with the on-site measured data, but also keeps the errors of other parameters within a minimum range. The proposed method was then applied to calibrate an MVN model in a real case, built from ventilation survey results and the Ventsim software. Finally, airflow simulation experiments were carried out using data from before and after calibration, and the results were compared and analyzed. The simulated airflows in the calibrated model agreed much better with the ventilation survey data, which verifies the effectiveness of the calibration method.
The ENSDF based radionuclide source for MCNP
International Nuclear Information System (INIS)
Berlizov, A.N.; Tryshyn, V.V.
2003-01-01
A utility for generating the source code of the Source subroutine of MCNP (a general Monte Carlo N-Particle transport code) on the basis of ENSDF (Evaluated Nuclear Structure Data File) is described. The generated code statistically simulates the processes accompanying the radioactive decay of a chosen radionuclide through a specified decay branch, providing the characteristics of the emitted correlated particles on its output. The following processes are taken into account in the modelling: emission of continuum-energy electrons in beta-minus decay to different excited levels of the daughter nucleus; annihilation photon emission accompanying beta-plus decay; gamma-ray emission; emission of discrete-energy electrons resulting from internal conversion on the atomic K and L(I,II,III) shells; and K and L X-ray emission from single and double fluorescence accompanying electron capture and internal conversion. The number of emitted particles, their types, energies and emission times are sampled according to the decay scheme of the particular radionuclide as well as the characteristics of the atomic shells of the mother and daughter nuclei. Angular correlations, calculated for a particular combination of nuclear level spins, mixing ratios and gamma-ray multipolarities, are taken into account when sampling the directional cosines of emitted gamma-rays. The paper contains examples of spectrometry system response simulation for measurements with real radionuclide sources. (authors)
More efficient evolutionary strategies for model calibration with watershed model for demonstration
Baggett, J. S.; Skahill, B. E.
2008-12-01
Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which, when combined, have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability, and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but which, when selected near a smooth local minimum, can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergistic effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present, but degrades, at worst, to an ordinary evolutionary strategy if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.). Towards a new evolutionary computation. Advances in estimation of
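The ask/tell pattern that such enhancements plug into can be sketched with the `cma` package. Here a toy Rosenbrock objective stands in for the watershed model, and a crude inverse-distance surrogate stands in for the ranking scheme of Kern et al. (2006); all details of the actual hybrid strategy are simplified away.

```python
import numpy as np
import cma  # pip install cma

def expensive_model(x):
    """Stand-in for a costly watershed-model run (Rosenbrock as a toy)."""
    return float((1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2)

def surrogate(x, X_arch, f_arch, k=5):
    """Cheap inverse-distance-weighted prediction from archived evaluations."""
    d = np.linalg.norm(np.asarray(X_arch) - x, axis=1) + 1e-12
    idx = np.argsort(d)[:k]
    w = 1.0 / d[idx]
    return float(w @ np.asarray(f_arch)[idx] / w.sum())

es = cma.CMAEvolutionStrategy([2.0, 2.0], 0.5,
                              {"seed": 7, "verbose": -9, "maxiter": 100})
X_arch, f_arch = [], []
while not es.stop():
    X = es.ask()
    warm = len(X_arch) >= 5
    guesses = [surrogate(x, X_arch, f_arch) for x in X] if warm else [0.0] * len(X)
    order = np.argsort(guesses)
    fitness = [0.0] * len(X)
    for rank, i in enumerate(order):
        if not warm or rank < len(X) // 2:   # full model runs for the best half
            fitness[i] = expensive_model(X[i])
            X_arch.append(X[i]); f_arch.append(fitness[i])
        else:                                # surrogate values for the rest
            fitness[i] = guesses[i]
    es.tell(X, fitness)
print("best parameters found:", es.result.xbest)
```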
Energy Technology Data Exchange (ETDEWEB)
Thornton, Peter E [ORNL; Wang, Weile [ORNL; Law, Beverly E. [Oregon State University; Nemani, Ramakrishna R [NASA Ames Research Center
2009-01-01
The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers with a set of equilibrium equations that are derived from Biome-BGC algorithms and based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate the carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and that the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrate and analyze Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
Using genetic algorithm and TOPSIS for Xinanjiang model calibration with a single procedure
Cheng, Chun-Tian; Zhao, Ming-Yan; Chau, K. W.; Wu, Xin-Yu
2006-01-01
Genetic Algorithm (GA) is globally oriented in searching and thus useful in optimizing multiobjective problems, especially where the objective functions are ill-defined. Conceptual rainfall-runoff models that aim at predicting streamflow from knowledge of the precipitation over a catchment have become a basic tool for flood forecasting. The parameter calibration of a conceptual model usually involves multiple criteria for judging performance against observed data. However, it is often difficult to derive all objective functions for the parameter calibration problem of a conceptual model. Thus, a new method for the multiple-criteria parameter calibration problem, which combines GA with TOPSIS (technique for order preference by similarity to ideal solution), is presented for the Xinanjiang model. This study is an immediate further development of the authors' previous research (Cheng, C.T., Ou, C.P., Chau, K.W., 2002. Combining a fuzzy optimal model with a genetic algorithm to solve multi-objective rainfall-runoff model calibration. Journal of Hydrology, 268, 72-86), whose main disadvantages are that it splits the whole procedure into two parts and makes it difficult to grasp the model's overall behaviour during calibration. The current method integrates the two parts of Xinanjiang rainfall-runoff model calibration, simplifying the procedures of model calibration and validation and demonstrating the intrinsic behaviour of the observed data as a whole. Comparison with the two-step procedure shows that the current methodology gives similar results to the previous method and is equally feasible and robust, but simpler and easier to apply in practice.
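The TOPSIS half of the method ranks GA candidates by closeness to an ideal solution. A minimal sketch follows, with invented criteria and weights (the paper's actual objective set and weighting are not reproduced here):

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives (rows) by similarity to the ideal solution.
    benefit[j] is True if a larger value of criterion j is better."""
    Z = scores / np.linalg.norm(scores, axis=0)   # vector normalization
    V = Z * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)           # closeness coefficient in [0, 1]

# Rows: candidate parameter sets from the GA; columns: Nash-Sutcliffe
# efficiency (maximize), volume error and peak-flow error (minimize).
scores = np.array([[0.85, 0.10, 0.20],
                   [0.90, 0.18, 0.15],
                   [0.80, 0.05, 0.30]])
closeness = topsis(scores, weights=np.array([0.5, 0.25, 0.25]),
                   benefit=np.array([True, False, False]))
print(np.argsort(closeness)[::-1])   # candidates ranked best to worst
```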
Radulescu, E G; Wójcik, J; Lewin, P A; Nowicki, A
2003-06-01
To facilitate the implementation and verification of the new ultrasound hydrophone calibration techniques described in the companion paper (elsewhere in this issue), a nonlinear propagation model was developed. A brief outline of the theoretical considerations is presented and the model's advantages and disadvantages are discussed. The results of simulations yielding spatial and temporal acoustic pressure amplitudes are also presented and compared with those obtained using the KZK and Field II models. Excellent agreement between all models is demonstrated. The applicability of the model to discrete wideband calibration of hydrophones is documented in the companion paper elsewhere in this volume.
MCNP-REN: a Monte Carlo tool for neutron detector design
International Nuclear Information System (INIS)
Abhold, M.E.; Baker, M.C.
2002-01-01
The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo code developed at Los Alamos National Laboratory, Monte Carlo N-Particle (MCNP), was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP-Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program, predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of mixed oxide fresh fuel were taken with the Underwater Coincidence Counter, and measurements of highly enriched uranium reactor fuel were taken with the active neutron interrogation Research Reactor Fuel Counter and compared to calculation. Simulations completed for other detector design applications are described. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions
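The essence of what MCNP-REN adds, a time-stamped pulse train analyzed with gated coincidence counting rather than point-model formulas, can be caricatured in a few lines. Everything below (rates, multiplicities, gate width, the omission of a pre-delay and of a separate accidentals gate) is an invented simplification:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 10.0          # measurement time, s
rate = 2.0e3      # fission-event rate, 1/s (invented)
gate = 4.0e-6     # coincidence gate width, s
die_away = 50e-6  # detector die-away time, s

# Source event times: homogeneous Poisson process.
n_events = rng.poisson(rate * T)
event_t = np.sort(rng.uniform(0.0, T, n_events))

# Each event yields 0-3 detected pulses, each delayed by an exponential
# die-away time -- the correlated pulse train a shift register would see.
mult = rng.integers(0, 4, n_events)
pulse_t = np.sort(np.repeat(event_t, mult) +
                  rng.exponential(die_away, mult.sum()))

# Shift-register-style counting: pulses inside a gate opened at each trigger,
# excluding the trigger pulse itself.
opens = np.searchsorted(pulse_t, pulse_t)
closes = np.searchsorted(pulse_t, pulse_t + gate, side="right")
doubles = (closes - opens - 1).sum()
print(f"{pulse_t.size} pulses, {doubles} gated coincidence counts")
```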
Modelling Machine Tools using Structure Integrated Sensors for Fast Calibration
Directory of Open Access Journals (Sweden)
Benjamin Montavon
2018-02-01
Monitoring of the relative deviation between commanded and actual tool tip position, which limits the volumetric performance of the machine tool, enables the use of contemporary compensation methods to reduce tolerance mismatch and the uncertainties of on-machine measurements. The development of a primarily optical sensor setup capable of being integrated into the machine structure without limiting its operating range is presented. The use of a frequency-modulating interferometer and photosensitive arrays in combination with a Gaussian laser beam allows fast and automated online measurement of the axes' motion errors and thermal conditions, with accuracy comparable to, and cost and dimensions smaller than, state-of-the-art optical measuring instruments for offline machine tool calibration. The development is tested through simulation of the sensor setup based on raytracing and Monte-Carlo techniques.
Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.
2012-04-01
Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and that could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) calibration of the seismic intensity attenuation model using macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers using past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by means of 'as-if' historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for a client market portfolio align with the
Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner
Directory of Open Access Journals (Sweden)
Chengyi Yu
2017-01-01
A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of an object needs to be collected, so a galvanometric laser scanner is designed using a one-mirror galvanometer element as the mechanical device that drives the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometric laser scanner without any position assumptions, and a model-driven calibration procedure is then proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately. Repeatability and accuracy of the galvanometric laser scanner are evaluated on an automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields measurement performance similar to a look-up-table calibration method.
CALIBRATING THE JOHNSON-HOLMQUIST CERAMIC MODEL FOR SIC USING CTH
International Nuclear Information System (INIS)
Cazamias, J. U.; Bilyk, S. R.
2009-01-01
The Johnson-Holmquist ceramic material model has been calibrated and successfully applied to numerically simulate ballistic events using the Lagrangian code EPIC. While the majority of the constants are "physics" based, two of the constants for the failed material response are calibrated using ballistic experiments conducted on a confined cylindrical ceramic target. The maximum strength of the failed ceramic is calibrated by matching the penetration velocity. The second constant refers to the equivalent plastic strain at failure under constant pressure and is calibrated using the dwell time. Use of these two constants in the CTH Eulerian hydrocode does not reproduce the ballistic response. This difference may be due to the phenomenological nature of the model and the different numerical schemes used by the codes. This paper determines the aforementioned material constants for SiC suitable for simulating ballistic events using CTH.
A case study on robust optimal experimental design for model calibration of ω-Transaminase
DEFF Research Database (Denmark)
Daele, Timothy, Van; Van Hauwermeiren, Daan; Ringborg, Rolf Hoffmeyer
Proper calibration of models describing enzyme kinetics can be quite challenging. This is especially the case for more complex models like transaminase models (Shin and Kim, 1998). The latter fitted model parameters, but the confidence on the parameter estimation was not derived. Hence... the experimental space. However, it is expected that more informative experiments can be designed to increase the confidence of the parameter estimates. Therefore, we apply Optimal Experimental Design (OED) to the calibrated model of Shin and Kim (1998). The total number of samples was retained to allow fair... "true" parameter values are not known before finishing the model calibration. However, it is important that the chosen parameter values are close to the real parameter values, otherwise the OED can possibly yield non-informative experiments. To counter this problem, one can use robust OED. The idea of robust OED...
Tian, Jialin; Smith, William L.; Gazarik, Michael J.
2008-12-01
The ultimate remote sensing benefits of high-resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and of pollutant and greenhouse gas constituents to be observed, for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data
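The regression step can be sketched as ordinary least squares on leading principal-component scores. The data below are synthetic stand-ins for GIFTS/AERI radiance pairs, with the "first four PC scores" retained as in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_spec, n_chan = 200, 64
raw = rng.standard_normal((n_spec, n_chan))            # uncalibrated spectra (synthetic)
true = raw @ rng.standard_normal((n_chan, n_chan)) * 0.01 + 5.0  # "reference" radiances

# Principal components of the mean-centered uncalibrated spectra via SVD.
mu = raw.mean(axis=0)
U, s, Vt = np.linalg.svd(raw - mu, full_matrices=False)
scores = (raw - mu) @ Vt[:4].T                         # first four PC scores

# Regress the reference radiances on the PC scores (plus an intercept).
A = np.column_stack([np.ones(n_spec), scores])
coef, *_ = np.linalg.lstsq(A, true, rcond=None)
calibrated = A @ coef
rms = np.sqrt(np.mean((calibrated - true) ** 2))
print(f"RMS residual after PC-based calibration: {rms:.4f}")
```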
CTEx Beowulf cluster for MCNP performance
International Nuclear Information System (INIS)
Gonzaga, Roberto N.; Amorim, Aneuri S. de; Balthar, Mario Cesar V.
2011-01-01
This work is an introduction to the CTEx Nuclear Defense Department's Beowulf cluster. Building a Beowulf cluster is a complex learning process that greatly depends upon your hardware and software requirements. The feasibility and efficiency of performing MCNP5 calculations with a small, heterogeneous computing cluster built from personal computers (PCs) running Red Hat's Fedora Linux operating system are explored. The performance increases that may be expected with such clusters are estimated for cases that typify general radiation transport calculations. Our results show that the speed increase from additional slave PCs is nearly linear up to 10 processors. The precompiled parallel binary version of MCNP uses the Message-Passing Interface (MPI) protocol. The use of this precompiled parallel version of MCNP5 with the MPI protocol on a small, heterogeneous computing cluster built from Fedora Linux PCs is the subject of this work. (author)
Energy Technology Data Exchange (ETDEWEB)
Ward, Anderson L.; Wittman, Richard S.
2009-08-01
Computation of soil moisture content from thermalized neutron counts for the T-Farm interim cover requires a calibration relationship, but none exists for 2-in tubes. A number of calibration options are available for the neutron probe, including vendor calibration and field calibration, but none of these methods was deemed appropriate for the configuration of interest. The objective of this work was to develop a calibration relation for converting neutron counts measured in 2-in access tubes to soil water content. The calibration method chosen for this study was a computational approach using the Monte Carlo N-Particle transport code (MCNP). Model calibration was performed using field measurements in the Hanford calibration models with 6-in access tubes, in air, and in the probe shield. The best-fit model relating known water content to measured neutron counts was an exponential model essentially equivalent to that currently used for 6-in steel-cased wells. The MCNP simulations successfully predicted the neutron count rate for the neutron shield and for the three calibration models for which field data were collected. However, predictions for air were about 65% lower than the measured counts. This discrepancy can be attributed to uncertainties in the configuration used for the air measurements. MCNP-simulated counts for the physical models were essentially equal to the measured counts. Accurate prediction of the response in 6-in casings in the three calibration models was the motivation for predicting the response in 2-in access tubes. Simulations were performed for six of the seven calibration models as well as four virtual models, with the entire set covering a moisture range of 0 to 40%. Predicted counts for the calibration models with 2-in access tubes were 40 to 50% higher than in the 6-in tubes. Predicted counts for water were about 60% higher in the 2-in tube than in the 6-in tube. The discrepancy between the 2-in and 6-in tube can be
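The exponential count-to-moisture relation mentioned is typically of the following form (a generic form assumed here; the report's fitted constants are not reproduced):

```latex
\theta = a \, e^{\, b\, C_r},
```

where θ is the volumetric water content, C_r the neutron count rate (often normalized to a reference such as the shield count), and a, b fitted calibration constants.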
Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates
Todorovic, Andrijana; Plavsic, Jasna
2015-04-01
A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which determines the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows, the Nash-Sutcliffe coefficient for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters
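Quantifying parameter variability across sliding calibration windows amounts to normalizing each parameter by its prior range and taking standard deviations across windows; a minimal sketch with invented HBV-like values:

```python
import numpy as np

# Optimised parameter values from five-year sliding calibration windows
# (rows: windows, columns: parameters; all values invented for illustration).
params = np.array([[120.0, 0.60, 2.1],
                   [135.0, 0.55, 2.4],
                   [110.0, 0.62, 2.0],
                   [150.0, 0.48, 2.9]])
lo = np.array([50.0, 0.3, 1.0])    # prior lower bounds
hi = np.array([300.0, 0.9, 5.0])   # prior upper bounds

normalised = (params - lo) / (hi - lo)         # map every parameter to [0, 1]
variability = normalised.std(axis=0, ddof=1)   # std across calibration windows
print("most variable parameter index:", int(np.argmax(variability)))
```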
The Wally plot approach to assess the calibration of clinical prediction models.
Blanche, Paul; Gerds, Thomas A; Ekstrøm, Claus T
2017-12-06
A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. Typically, the calibration assumption is assessed graphically but in practice it is often challenging to judge whether a "disappointing" calibration plot is the consequence of a departure from the calibration assumption, or alternatively just "bad luck" due to sampling variability. We propose a graphical approach which enables the visualization of how much a calibration plot agrees with the calibration assumption to address this issue. The approach is mainly based on the idea of generating new plots which mimic the available data under the calibration assumption. The method handles the common non-trivial situations in which the data contain censored observations and occurrences of competing events. This is done by building on ideas from constrained non-parametric maximum likelihood estimation methods. Two examples from large cohort data illustrate our proposal. The 'wally' R package is provided to make the methodology easily usable.
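The core idea, generating new calibration curves under the hypothesis that the predicted risks are correct, can be sketched for the simple uncensored case (the paper's handling of censoring and competing risks is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(8)
risk = rng.uniform(0.01, 0.60, 500)   # predicted risks for 500 subjects
outcome = rng.binomial(1, risk)       # observed events (here: well calibrated by construction)

def calibration_curve(risk, outcome, bins=10):
    """Mean predicted and observed event frequency per risk-quantile bin."""
    edges = np.quantile(risk, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(risk, edges[1:-1]), 0, bins - 1)
    pred = np.array([risk[idx == b].mean() for b in range(bins)])
    obs = np.array([outcome[idx == b].mean() for b in range(bins)])
    return pred, obs

# The observed curve, plus "wally"-style curves simulated under the hypothesis
# that the predicted risks are correct -- their scatter shows how far a
# calibration plot can stray from the diagonal by sampling variability alone.
pred, obs = calibration_curve(risk, outcome)
for _ in range(3):
    sim_out = rng.binomial(1, risk)            # new outcomes under the model
    _, sim_obs = calibration_curve(risk, sim_out)
    print(np.round(sim_obs - pred, 3))         # deviations expected by chance
```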
Analysis of JSI TRIGA MARK II reactor physical parameters calculated with TRIPOLI and MCNP.
Henry, R; Tiselj, I; Snoj, L
2015-03-01
A new computational model of the JSI TRIGA Mark II research reactor was built for the TRIPOLI computer code and compared with the existing MCNP model. The same modelling assumptions were used in order to isolate differences between the mathematical models of the two Monte Carlo codes. Differences between the TRIPOLI and MCNP predictions of keff were up to 100 pcm. Further validation was performed with analyses of the normalized reaction rates and computations of kinetic parameters for various core configurations.
Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation
Energy Technology Data Exchange (ETDEWEB)
Robertson, Joseph [National Renewable Energy Lab. (NREL), Golden, CO (United States); Polly, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States); Collis, Jon [Colorado School of Mines, Golden, CO (United States)
2013-09-01
This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al. 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960s-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
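Of the four methods, the simple output ratio calibration is concrete enough to sketch; a hedged Python version with illustrative monthly data, not the BEopt/DOE-2.2 API:

```python
# Sketch of output ratio calibration: scale the model's monthly predictions
# by the ratio of billed to simulated annual totals. Values are illustrative.
import numpy as np

def output_ratio_calibration(simulated_monthly, billed_monthly):
    """Return the scaled prediction and the single calibration ratio."""
    ratio = np.sum(billed_monthly) / np.sum(simulated_monthly)
    return ratio * np.asarray(simulated_monthly, float), ratio

sim = np.array([820, 760, 700, 610, 540, 630, 750, 790, 680, 600, 650, 780.0])
bill = np.array([900, 810, 720, 640, 560, 700, 830, 860, 700, 640, 700, 850.0])
calibrated, k = output_ratio_calibration(sim, bill)
print(f"output ratio k = {k:.3f}")
```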
Karagiannis, Georgios; Lin, Guang
2017-08-01
For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suitable for the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
Adjoint-Based Uncertainty Quantification with MCNP
Energy Technology Data Exchange (ETDEWEB)
Seifried, Jeffrey E. [Univ. of California, Berkeley, CA (United States)
2011-09-01
This work serves to quantify the instantaneous uncertainties in neutron transport simulations arising from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties with respect to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
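The sensitivity-times-covariance combination behind such estimates is conventionally written as the "sandwich rule", relative variance of a response R equal to S'CS; a hedged sketch with illustrative numbers, not the LIFE blanket data:

```python
# Sketch of the standard sandwich rule for nuclear-data uncertainty:
# rel_var(R) = S^T C S, with S the relative sensitivities of R to the data
# and C the relative covariance matrix of the data. Values are illustrative.
import numpy as np

S = np.array([0.30, -0.12, 0.05])          # relative sensitivities dR/R per dx/x
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    2.5e-3]])   # relative covariance of nuclear data

rel_var = S @ C @ S
print(f"relative uncertainty in R: {np.sqrt(rel_var):.3%}")
```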
Vansteenkiste, Thomas; Tavakoli, Mohsen; Ntegeka, Victor; De Smedt, Florimond; Batelaan, Okke; Pereira, Fernando; Willems, Patrick
2014-11-01
The objective of this paper is to investigate the effects of hydrological model structure and calibration on climate change impact results in hydrology. The uncertainty in the hydrological impact results is assessed by the relative change in runoff volumes and peak and low flow extremes between historical and future climate conditions. The effect of the hydrological model structure is examined through the use of five hydrological models with different spatial resolutions and process descriptions. These were applied to a medium-sized catchment in Belgium. The models range from the lumped conceptual NAM, PDM and VHM models over the intermediately detailed and distributed WetSpa model to the fully distributed MIKE SHE model. The latter model accounts for 3D groundwater processes and interacts bi-directionally with a full hydrodynamic MIKE 11 river model. After careful manual calibration of these models, accounting for the accuracy of the peak and low flow extremes and runoff subflows, and for the changes in these extremes under changing rainfall conditions, the five models respond in a similar way to the climate scenarios over Belgium. Future projections of peak flows are highly uncertain, with expected increases as well as decreases depending on the climate scenario. The projections of future low flows are more uniform; low flows decrease (by up to 60%) for all models and for all climate scenarios. However, the uncertainties in the impact projections are high, mainly in the dry season. With respect to the model structural uncertainty, the PDM model simulates significantly higher runoff peak flows under future wet scenarios, which is explained by its specific model structure. For the low flow extremes, the MIKE SHE model projects significantly lower low flows in dry scenario conditions than the other models, probably due to its markedly different process descriptions for the groundwater component and the groundwater-river interactions. The effect of the model
Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.
2013-01-01
DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional "trial and error" approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as for model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved model performance by reducing the sum of weighted squared residuals between measured and modelled outputs by up to 67%. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98%. After parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20% relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure, offering guidance for model improvement.
Wright, David; Thyer, Mark; Westra, Seth
2015-04-01
Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostics tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically oriented diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression with inherent assumptions about the data and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
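A hedged sketch of the analytical diagnostic named above: Cook's distance for a linear model, computed from residuals and leverages in one pass, in contrast to refitting once per deleted point. Data are synthetic:

```python
# Sketch: Cook's distance D_i = e_i^2 / (p * s^2) * h_i / (1 - h_i)^2,
# computed from a single fit, versus brute-force case deletion.
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.uniform(0, 10, 30)])   # design matrix
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 1.0, 30)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
p = X.shape[1]
s2 = resid @ resid / (len(y) - p)                # residual variance estimate
H = X @ np.linalg.inv(X.T @ X) @ X.T             # hat matrix
h = np.diag(H)                                   # leverages

cooks_d = resid**2 / (p * s2) * h / (1.0 - h)**2 # one pass, no refits
print("most influential point:", int(np.argmax(cooks_d)))
```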
International Nuclear Information System (INIS)
Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.
2011-01-01
We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that, when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to Markov chain Monte Carlo (MCMC) calibration methods with independent sampling, except that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that agree with intuition, improve the accuracy, and decrease the uncertainty of experimental predictions. (author)
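A hedged sketch of the sample-then-weight scheme described above, with a toy one-parameter model standing in for the BMARS emulator of Hyades 2D:

```python
# Sketch of importance-weighting calibration: draw prior samples of the
# uncertain input, weight each by the likelihood of the experimental datum
# given the emulated output, and form posterior estimates from the weights.
import numpy as np

rng = np.random.default_rng(42)
y_exp, sigma_exp = 3.2, 0.3                 # experimental response and error

def emulator(theta):
    """Toy stand-in for an emulator of the computer model."""
    return theta**2 + 0.1 * theta

theta = rng.normal(1.5, 0.5, size=20000)    # prior samples of the input
w = np.exp(-0.5 * ((emulator(theta) - y_exp) / sigma_exp) ** 2)
w /= w.sum()                                # normalized importance weights

post_mean = np.sum(w * theta)
post_std = np.sqrt(np.sum(w * (theta - post_mean) ** 2))
print(f"posterior: {post_mean:.3f} +/- {post_std:.3f}")
```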
A system-theory-based model for monthly river runoff forecasting: model calibration and optimization
Directory of Open Access Journals (Sweden)
Wu Jianhua
2014-03-01
River runoff is not only a crucial part of the global water cycle, but it is also an important source for hydropower and an essential element of water balance. This study presents a system-theory-based model for river runoff forecasting, taking the Hailiutu River as a case study. The forecasting model, designed for the Hailiutu watershed, was calibrated and verified by long-term precipitation observation data and groundwater exploitation data from the study area. Additionally, frequency analysis, taken as an optimization technique, was applied to improve prediction accuracy. Following model optimization, the overall relative prediction errors are below 10%. The system-theory-based prediction model is applicable to river runoff forecasting, and following optimization by frequency analysis, the prediction error is acceptable.
Utilization of new 150-MeV neutron and proton evaluations in MCNP
International Nuclear Information System (INIS)
Little, R.C.; Frankle, S.C.; Hughes, H.G. III; Prael, R.E.
1997-01-01
MCNP and LAHET are two of the codes included in the LARAMIE (Los Alamos Radiation Modeling Interactive Environment) code system. Both MCNP and LAHET are three-dimensional continuous-energy Monte Carlo radiation transport codes. The capabilities of MCNP and LAHET are currently being merged into one code for the Accelerator Production of Tritium (APT) program at Los Alamos National Laboratory. Concurrently, a significant effort is underway to improve the accuracy of the physics in the merged code. In particular, full nuclear-data evaluations (in ENDF6 format) for many materials of importance to APT are being produced for incident neutrons and protons up to an energy of 150 MeV. After processing, cross-section tables based on these new evaluations will be available for use in the merged code. In order to utilize these new cross-section tables, significant enhancements are required in the merged code. Neutron cross-section tables for MCNP currently specify emission data for neutrons and photons only; the new evaluations also include complete neutron-induced data for protons, deuterons, tritons, and alphas. In addition, no provision currently exists in either MCNP or LAHET for the use of incident charged-particle tables other than for electrons. To accommodate the new neutron-induced data, it was first necessary to expand the format definition of an MCNP neutron cross-section table. The authors have prepared a 150-MeV neutron cross-section library in this expanded format for 15 nuclides. Modifications to MCNP have been implemented so that this expanded neutron library can be utilized
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selection of the calibration model will degrade quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of replicate measurements at the upper and lower limits of quantification. When weighting was required, the choice between 1/x and 1/x2 was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, the model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone.
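The weighting decision lends itself to a short sketch: an F-test comparing replicate variances at the two ends of the calibration range. The data and the 0.05 threshold are illustrative, and the paper's full scheme (weighted-variance spread, partial F-test for order) is not reproduced:

```python
# Sketch: decide whether weighted regression is needed by F-testing the
# variances of LLOQ and ULOQ replicates. Replicate values are illustrative.
import numpy as np
from scipy import stats

lloq = np.array([0.92, 1.08, 0.97, 1.05, 0.99])      # replicates at LLOQ
uloq = np.array([98.0, 104.5, 101.2, 96.8, 103.1])   # replicates at ULOQ

F = np.var(uloq, ddof=1) / np.var(lloq, ddof=1)
p_value = 1.0 - stats.f.cdf(F, len(uloq) - 1, len(lloq) - 1)

if p_value < 0.05:
    print(f"F = {F:.1f}: variances differ, a 1/x or 1/x^2 weight is needed")
else:
    print(f"F = {F:.1f}: homoscedastic, unweighted regression is acceptable")
```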
Calibration of a distributed hydrologic model for six European catchments using remote sensing data
Stisen, S.; Demirel, M. C.; Mendiguren González, G.; Kumar, R.; Rakovec, O.; Samaniego, L. E.
2017-12-01
While observed streamflow has been the single reference for most conventional hydrologic model calibration exercises, the availability of spatially distributed remote sensing observations provides new possibilities for multi-variable calibration assessing both the spatial and the temporal variability of different hydrologic processes. In this study, we first identify the key transfer parameters of the mesoscale Hydrologic Model (mHM) controlling both the discharge and the spatial distribution of actual evapotranspiration (AET) across six central European catchments (Elbe, Main, Meuse, Moselle, Neckar and Vienne). These catchments are selected based on their limited topographical and climatic variability, which enables evaluation of the effect of spatial parameterization on the simulated evapotranspiration patterns. We develop a European-scale remote-sensing-based actual evapotranspiration dataset at a 1 km grid scale, driven primarily by land surface temperature observations from MODIS using the TSEB approach. Using the observed AET maps, we analyze the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mHM model. The model's unique structure and multi-parameter regionalization approach allow calibrating one basin at a time or all basins together. Results will indicate any trade-offs between spatial pattern and discharge simulation during model calibration and through validation against independent internal discharge locations. Moreover, the added value for internal water balances will be analyzed.
Elsheikh, A. H.
2013-12-01
Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to different models of different levels of complexity. In this work, we report the first successful application of nested sampling for calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
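A hedged, minimal nested-sampling loop for a one-parameter calibration problem; real NS implementations replace the naive rejection step with constrained samplers and run until the remaining live-point contribution is negligible:

```python
# Sketch of nested sampling: iteratively discard the worst live point,
# accumulate its evidence shell, and replace it with a prior draw subject to
# a rising likelihood constraint. Data and prior are synthetic.
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(2.0, 0.5, size=50)              # synthetic observations

def loglike(m):
    """Gaussian log-likelihood of the data for mean m (sigma = 0.5)."""
    return -0.5 * np.sum((data - m) ** 2 / 0.25 + np.log(2 * np.pi * 0.25))

n_live = 100
live = rng.uniform(-5, 5, n_live)                 # live points from a uniform prior
logL = np.array([loglike(m) for m in live])
logZ = -np.inf
log_shell = np.log(1.0 - np.exp(-1.0 / n_live))   # log of (1 - e^{-1/N})

for it in range(300):
    worst = np.argmin(logL)
    # shell weight ~ X_it * (1 - e^{-1/N}), prior volume X_it ~ e^{-it/N}
    logZ = np.logaddexp(logZ, log_shell - it / n_live + logL[worst])
    while True:                                   # naive rejection under L > L_worst
        m_new = rng.uniform(-5, 5)
        if loglike(m_new) > logL[worst]:
            break
    live[worst], logL[worst] = m_new, loglike(m_new)

print(f"partial log-evidence after 300 shells: {logZ:.2f}")
```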
Calibration and analysis of genome-based models for microbial ecology.
Louca, Stilianos; Doebeli, Michael
2015-10-16
Microbial ecosystem modeling is complicated by the large number of unknown parameters and the lack of appropriate calibration tools. Here we present a novel computational framework for modeling microbial ecosystems, which combines genome-based model construction with statistical analysis and calibration to experimental data. Using this framework, we examined the dynamics of a community of Escherichia coli strains that emerged in laboratory evolution experiments, during which an ancestral strain diversified into two coexisting ecotypes. We constructed a microbial community model comprising the ancestral and the evolved strains, which we calibrated using separate monoculture experiments. Simulations reproduced the successional dynamics in the evolution experiments, and pathway activation patterns observed in microarray transcript profiles. Our approach yielded detailed insights into the metabolic processes that drove bacterial diversification, involving acetate cross-feeding and competition for organic carbon and oxygen. Our framework provides a missing link towards a data-driven mechanistic microbial ecology.
S. Wang; Z. Zhang; G. Sun; P. Strauss; J. Guo; Y. Tang; A. Yao
2012-01-01
Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available for model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model, MIKE SHE, to contrast a lumped...
Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.
2015-01-01
The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate those uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification
Calibrated Blade-Element/Momentum Theory Aerodynamic Model of the MARIN Stock Wind Turbine: Preprint
Energy Technology Data Exchange (ETDEWEB)
Goupee, A.; Kimball, R.; de Ridder, E. J.; Helder, J.; Robertson, A.; Jonkman, J.
2015-04-02
In this paper, a calibrated blade-element/momentum theory aerodynamic model of the MARIN stock wind turbine is developed and documented. The model is created using open-source software and calibrated to closely emulate experimental data obtained by the DeepCwind Consortium using a genetic algorithm optimization routine. The provided model will be useful for those interested in validating floating wind turbine numerical simulators that rely on experiments utilizing the MARIN stock wind turbine, for example the International Energy Agency Wind Task 30's Offshore Code Comparison Collaboration Continued, with Correlation project.
Comparison of TITAN hybrid deterministic transport code and MCNP5 for simulation of SPECT
International Nuclear Information System (INIS)
Royston, K.; Haghighat, A.; Yi, C.
2010-01-01
Traditionally, Single Photon Emission Computed Tomography (SPECT) simulations use Monte Carlo methods. The hybrid deterministic transport code TITAN has recently been applied to the simulation of a SPECT myocardial perfusion study. The TITAN SPECT simulation uses the discrete ordinates formulation in the phantom region and a simplified ray-tracing formulation outside of the phantom. A SPECT model has been created in the Monte Carlo N-Particle (MCNP5) code for comparison. In MCNP5 the collimator is directly modeled, but TITAN instead simulates the effect of collimator blur using a circular ordinate splitting technique. Projection images created using the TITAN code are compared to results using MCNP5 for three collimator acceptance angles. Normalized projection images for 2.97 deg, 1.42 deg and 0.98 deg collimator acceptance angles had maximum relative differences of 21.3%, 11.9% and 8.3%, respectively. Visually, the images are in good agreement. Profiles through the projection images were plotted, showing that the TITAN results follow the shape of the MCNP5 results with some differences in magnitude. A timing comparison on 16 processors found that the TITAN code completed the calculation 382 to 2787 times faster than MCNP5. Both codes exhibit good parallel performance. (author)
Energy Technology Data Exchange (ETDEWEB)
Sun, Kaiyu; Yan, Da; Hong, Tianzhen; Guo, Siyue
2014-02-28
Overtime is a common phenomenon around the world. Overtime drives both internal heat gains from occupants, lighting and plug-loads, and HVAC operation during overtime periods. Overtime leads to longer occupancy hours and extended operation of building services systems beyond normal working hours, thus overtime impacts total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupants and by time. To address this gap in the literature, this study aims to develop a new stochastic model based on the statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building. The measured and simulated cooling energy use during the overtime period is compared in order to validate the overtime model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for the calibration of the energy model during normal working hours, and a proposed KS test for the calibration of the energy model during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results, and better understand the characteristics of overtime in office buildings.
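The two distributions named above translate directly into a schedule generator; a hedged sketch with illustrative parameter values, not the measured building's statistics:

```python
# Sketch of the stochastic overtime model: the number of occupants working
# overtime on a day is binomial, and each person's overtime duration is
# exponential. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_occupants, p_overtime = 40, 0.25       # building population and P(working late)
mean_duration_h = 1.5                    # mean overtime duration in hours

def daily_overtime_schedule(days=5):
    """Per day, return an array of individual overtime durations (hours)."""
    schedule = []
    for _ in range(days):
        k = rng.binomial(n_occupants, p_overtime)      # how many stay late
        schedule.append(rng.exponential(mean_duration_h, size=k))
    return schedule

# The durations would drive internal gains and HVAC run-time in an energy model
for day, durations in enumerate(daily_overtime_schedule(), start=1):
    longest = durations.max() if durations.size else 0.0
    print(f"day {day}: {durations.size} occupants overtime, longest {longest:.1f} h")
```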
Visible spectroscopy calibration transfer model in determining pH of Sala mangoes
International Nuclear Information System (INIS)
Yahaya, O.K.M.; MatJafri, M.Z.; Aziz, A.A.; Omar, A.F.
2015-01-01
The purpose of this study is to compare the efficiency of calibration transfer procedures between three spectrometers: two Ocean Optics Inc. spectrometers, namely the QE65000 and the Jaz, and an ASD FieldSpec 3, in measuring the pH of Sala mango by visible reflectance spectroscopy. This study evaluates the ability of these spectrometers to measure the pH of Sala mango by applying similar calibration algorithms through direct calibration transfer. This visible reflectance spectroscopy technique defines one spectrometer as a master instrument and another spectrometer as a slave. The multiple linear regression (MLR) calibration model generated using the QE65000 spectrometer is transferred to the Jaz spectrometer and vice versa for Set 1. The same technique is applied for Set 2, where the QE65000 calibration is transferred to the FieldSpec 3 spectrometer and vice versa. For Set 1, the results showed that the QE65000 spectrometer established a calibration model with higher accuracy than that of the Jaz spectrometer. In addition, the calibration model developed on the Jaz spectrometer successfully predicted the pH of Sala mango measured using the QE65000 spectrometer, with a root mean square error of prediction RMSEP = 0.092 pH and coefficient of determination R2 = 0.892. Moreover, the best prediction result was obtained for Set 2, when the calibration model developed on the QE65000 spectrometer was successfully transferred to the FieldSpec 3 with R2 = 0.839 and RMSEP = 0.16 pH.
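A hedged sketch of direct calibration transfer with MLR: fit on the master instrument, apply unchanged to the slave's spectra. Wavelength count, data, and the simulated instrument offset are illustrative:

```python
# Sketch: MLR model relating selected reflectance wavelengths to pH, fitted
# on the master spectrometer and applied directly to slave spectra.
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_wavelengths = 60, 4

X_master = rng.uniform(0.2, 0.8, (n_samples, n_wavelengths))  # reflectances
pH = 3.0 + X_master @ np.array([1.2, -0.8, 2.0, 0.5]) \
         + rng.normal(0, 0.05, n_samples)

# Fit MLR (intercept + wavelengths) on the master instrument
A = np.column_stack([np.ones(n_samples), X_master])
coef, *_ = np.linalg.lstsq(A, pH, rcond=None)

# Direct transfer: apply the master model to slave spectra of the same fruit
X_slave = X_master + rng.normal(0, 0.01, X_master.shape)  # instrument offset
pH_pred = np.column_stack([np.ones(n_samples), X_slave]) @ coef

rmsep = np.sqrt(np.mean((pH_pred - pH) ** 2))
print(f"RMSEP on slave instrument: {rmsep:.3f} pH")
```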
Innovative Calibration Method for System Level Simulation Models of Internal Combustion Engines
Directory of Open Access Journals (Sweden)
Ivo Prah
2016-09-01
The paper outlines a procedure for the computer-controlled calibration of a combined zero-dimensional (0D) and one-dimensional (1D) thermodynamic simulation model of a turbocharged internal combustion engine (ICE). The main purpose of the calibration is to determine the input parameters of the simulation model in such a way as to achieve the smallest difference between the results of the measurements and the results of the numerical simulations with minimum consumption of computing time. The innovative calibration methodology is based on a novel interaction between optimization methods and physically based methods for the selected ICE sub-systems. Physically based methods were used for steering the division of the integral ICE into several sub-models and for determining the parameters of selected components from their governing equations. The innovative multistage interaction between optimization methods and physically based methods allows, unlike well-established methods that rely only on optimization techniques, successful calibration of a large number of input parameters with low time consumption. The proposed method is therefore suitable for efficient calibration of simulation models of advanced ICEs.
Directory of Open Access Journals (Sweden)
Polomčić Dušan M.
2015-01-01
The calibration of a hydrodynamic model is usually done manually, by 'testing' different values of the hydrogeological parameters and the hydraulic characteristics of the boundary conditions. The PEST program introduced automatic model calibration, which has proved to significantly reduce the subjective influence of the model creator on the results. With the relatively new PEST approach of so-called 'pilot points', the concept of homogeneous zones of porous-media parameter values, or zones with given boundary conditions, has become outdated. However, the consequence of this kind of automatic calibration is that a significant amount of time is required to perform the calculation. The duration of calibration is measured in hours, sometimes even days. PEST contains two modules for shortening that process: Parallel PEST and BeoPEST. The paper presents experiments and analyses of different cases of PEST module usage, on the basis of which the time required to calibrate the model is reduced.
MOCUP, MCNP/ORIGEN Coupling Utility Programs
International Nuclear Information System (INIS)
SEIDL, Marcus
2003-01-01
1 - Description of program or function: MOCUP is a series of utility and data manipulation programs to solve time- and space-dependent coupled neutronics/isotopics problems. 2 - Methods: The neutronics calculation is performed by the Los Alamos National Laboratory MCNP code system, version 4a or later (CCC-200 or CCC-660), and the depletion and isotopics calculation is performed by CCC-371/ORIGEN2.1, developed at Oak Ridge National Laboratory. MCNP and ORIGEN2.1 are NOT included in this package. MOCUP consists of three utility programs (mcnpPRO, origenPRO, compPRO) that, respectively, search the MCNP output and tally files for relevant cell and tally parameters; prepare ORIGEN2.1 input files and execute the ORIGEN2.1 runs; and search ORIGEN2.1 punch files for relevant isotope concentrations and produce new MCNP input files. A graphical user interface is provided for execution convenience. 3 - Restrictions on the complexity of the problem: At present, no mechanism exists for automatic serial execution of the program modules. The user must interface with the GUI to run each of the modules
A multi-objective approach to improve SWAT model calibration in alpine catchments
Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele
2018-04-01
Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.
Wang, Ling; van Meerveld, Ilja; Seibert, Jan
2016-04-01
Streamflow isotope samples taken during rainfall-runoff events are very useful for multi-criteria model calibration because they can help decrease parameter uncertainty and improve internal model consistency. However, the number of samples that can be collected and analysed is often restricted by practical and financial constraints. It is, therefore, important to choose an appropriate sampling strategy and to obtain samples that have the highest information content for model calibration. We used the Birkenes hydrochemical model and synthetic rainfall, streamflow and isotope data to explore which samples are most informative for model calibration. Starting with error-free observations, we investigated how many samples are needed to obtain a certain model fit. Based on different parameter sets, representing different catchments, and different rainfall events, we also determined which sampling times provide the most informative data for model calibration. Our results show that simulation performance for models calibrated with the isotopic data from two intelligently selected samples was comparable to simulations based on isotopic data for all 100 time steps. The models calibrated with the intelligently selected samples also performed better than the model calibrations with two benchmark sampling strategies (random selection and selection based on hydrologic information). Surprisingly, samples on the rising limb and at the peak were less informative than expected and, generally, samples taken at the end of the event were most informative. The timing of the most informative samples depends on the proportion of different flow components (baseflow, slow response flow, fast response flow and overflow). For events dominated by baseflow and slow response flow, samples taken at the end of the event after the fast response flow has ended were most informative; when the fast response flow was dominant, samples taken near the peak were most informative. However when overflow
Diagnosing the impact of alternative calibration strategies on coupled hydrologic models
Smith, T. J.; Perera, C.; Corrigan, C.
2017-12-01
Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models is imperative. While extensive focus has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity / variability of parameterizations and its impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness / fidelity.
Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William
2017-09-01
Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration with DREAM results in a better model fit and predictive performance than the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions, while AM only identifies one mode. The application suggests that DREAM is well suited to calibrating complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, residual analysis in this effort justifies the assumptions of the error model used in the Bayesian calibration. The results indicate that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the likelihood function constructed from it can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
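A hedged sketch of the error model the residual analysis supports: Gaussian residuals with a heteroscedastic scale and lag-1 (AR(1)) autocorrelation, written as a log-likelihood conditioned on the first residual. Names are illustrative, not DALEC/DREAM internals:

```python
# Sketch of a heteroscedastic, AR(1)-correlated Gaussian log-likelihood:
# sigma_t = sigma0 + sigma1*|sim_t|, residual correlation phi.
import numpy as np

def log_likelihood(obs, sim, sigma0, sigma1, phi):
    """Log-likelihood conditioned on the first residual."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = obs - sim
    sigma = sigma0 + sigma1 * np.abs(sim)     # scale grows with flux magnitude
    e = r / sigma                             # standardized residuals
    nu = e[1:] - phi * e[:-1]                 # AR(1) prewhitened innovations
    n = nu.size
    return (-0.5 * n * np.log(2 * np.pi * (1 - phi**2))
            - np.sum(np.log(sigma[1:]))       # Jacobian of standardization
            - 0.5 * np.sum(nu**2) / (1 - phi**2))

# Toy usage with a synthetic daily NEE-like series
rng = np.random.default_rng(2)
sim = np.sin(np.linspace(0, 12, 365)) * 3
obs = sim + rng.normal(0, 0.3 + 0.1 * np.abs(sim))
print(f"logL = {log_likelihood(obs, sim, 0.3, 0.1, 0.4):.1f}")
```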
Calibration and validation of the SWAT model for a forested watershed in coastal South Carolina
Devendra M. Amatya; Elizabeth B. Haley; Norman S. Levine; Timothy J. Callahan; Artur Radecki-Pawlik; Manoj K. Jha
2008-01-01
Modeling the hydrology of low-gradient coastal watersheds on shallow, poorly drained soils is a challenging task due to the complexities in watershed delineation, runoff generation processes and pathways, flooding, and submergence caused by tropical storms. The objective of the study is to calibrate and validate a GIS-based spatially-distributed hydrologic model, SWAT...
Calibration of a user-defined mine blast model in LS-DYNA and comparison with ALE simulations
Verreault, J.; Leerdam, P.J.C.; Weerheijm, J.
2016-01-01
The calibration of a user-defined blast model implemented in LS-DYNA is presented using full-scale test rig experiments, partly according to the NATO STANAG 4569 AEP-55 Volume 2 specifications where the charge weight varies between 6 kg and 10 kg and the burial depth is 100 mm and deeper. The model
Methodology for converting CT medical images to MCNP input using the Scan2MCNP system
International Nuclear Information System (INIS)
Boia, L.S.; Silva, A.X.; Cardoso, S.C.; Castro, R.C.
2009-01-01
This paper develops a methodology for the application software Scan2MCNP, which converts DICOM (Digital Imaging and Communications in Medicine) medical images into MCNP input files. Scan2MCNP handles, processes and executes the medical images generated by CT equipment, allowing the user to select and parameterize the study area in question (tissues and organs). The anatomical detail worked up in the medical images is thereby translated into the input language of the MCNP radiation transport code through the generation of an input file. With this file, it is possible to simulate the type and level of radiation for the treatment chosen by the medical staff responsible for the patient. Within a computation-oriented process, Scan2MCNP can contribute, along with other software recently used in the area of medical physics, to improving the quality and precision of radiotherapy treatments. In this work, DICOM medical images of the anthropomorphic Rando phantom were used in the analysis and development of the Scan2MCNP software. It is emphasized, however, that the software succeeds in certain situations depending on a number of auxiliary procedures and programs that can help solve particular problems in the radiation treatment or speed the work of the medical physics team. (author)
AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)
Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...
Modelling and calibration with mechatronic blockset for Simulink
DEFF Research Database (Denmark)
Ravn, Ole; Szymkat, Maciej
1997-01-01
The paper describes the design considerations for a software tool for modelling and simulation of mechatronic systems. The tool is based on a concept enabling the designer to pick component models that match the physical components of the system to be modelled from a block library. Another... on the component level and for the whole model. The library, which can be extended by the user, contains all the standard components, DC-motors, potentiometers, encoders etc. The library is presently being tested in different projects and the response of these users is being incorporated in the code. The Mechatronic... Simulink Library blockset is implemented based on MATLAB and Simulink and has been used to model several mechatronic systems...
A simple topography-driven, calibration-free runoff generation model
Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.
2017-12-01
Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneities remains the focus of much hydrological research. In this study, we created a new method to estimate runoff generation, the HAND-based Storage Capacity curve (HSC), which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and the partially saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) method to estimate root zone storage capacity (SuMax), obtaining the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has an unusual time series of field-mapped saturation area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States. HBV and TOPMODEL were used as benchmarks. We found that the HSC performed better in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment than TOPMODEL, which is based on the topographic wetness index (TWI). The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model also performed comparably well with the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of the root zone storage capacity and the efficiency of the MCT method in estimating SuMax. Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader
Sensitivity analysis and calibration of a dynamic physically based slope stability model
Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens
2017-06-01
Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that
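A hedged sketch of the local one-at-a-time sensitivity step: perturb each parameter around a reference set and record the relative change in a scalar output. The toy stability function and the ±10% perturbation are illustrative:

```python
# Sketch of a local one-at-a-time (OAT) sensitivity analysis around a
# reference parameter set. The output function is a toy stand-in for the
# coupled hydrological-geomechanical slope stability model.
import numpy as np

ref = {"conductivity": 1e-5, "friction_angle": 30.0, "cohesion": 5.0}

def model(p):
    """Toy scalar output standing in for a stability measure."""
    return (np.tan(np.radians(p["friction_angle"]))
            + 0.02 * p["cohesion"]
            - 1e3 * p["conductivity"])

base = model(ref)
for name in ref:
    for factor in (0.9, 1.1):            # perturb one parameter at a time
        p = dict(ref)
        p[name] = ref[name] * factor
        rel_change = (model(p) - base) / base
        print(f"{name} x{factor}: output change {rel_change:+.2%}")
```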
Simulation of Photon energy Spectra Using MISC, SOURCES, MCNP and GADRAS
International Nuclear Information System (INIS)
Tucker, Lucas P.; Shores, Erik F.; Myers, Steven C.; Felsher, Paul D.; Garner, Scott E.; Solomon, Clell J. Jr.
2012-01-01
The detector response functions included in the Gamma Detector Response and Analysis Software (GADRAS) are a valuable resource for simulating radioactive source emission spectra. Application of these response functions to the results of three-dimensional transport calculations is a useful modeling capability. Using a 26.2 kg shell of depleted uranium (DU) as a simple test problem, this work illustrates a method for manipulating current tally results from MCNP into the GAM file format necessary for a practical link to GADRAS detector response functions. MISC (MCNP Intrinsic Source Constructor) and SOURCES 4C were used to develop photon and neutron source terms for subsequent MCNP transport, and the resultant spectrum is shown to be in good agreement with that from GADRAS. A 1 kg DU sphere was also modeled with the method described here and showed similarly encouraging results.
An alternative method for calibration of narrow band radiometer using a radiative transfer model
Energy Technology Data Exchange (ETDEWEB)
Salvador, J; Wolfram, E; D'Elia, R [Centro de Investigaciones en Laseres y Aplicaciones, CEILAP (CITEFA-CONICET), Juan B. de La Salle 4397 (B1603ALO), Villa Martelli, Buenos Aires (Argentina); Zamorano, F; Casiccia, C [Laboratorio de Ozono y Radiacion UV, Universidad de Magallanes, Punta Arenas (Chile); Rosales, A [Universidad Nacional de la Patagonia San Juan Bosco, UNPSJB, Facultad de Ingenieria, Trelew (Argentina); Quel, E, E-mail: jsalvador@citefa.gov.ar [Universidad Nacional de la Patagonia Austral, Unidad Academica Rio Gallegos, Avda. Lisandro de la Torre 1070, Rio Gallegos, Santa Cruz (Argentina)
2011-01-01
The continual monitoring of solar UV radiation is one of the major objectives proposed by many atmosphere research groups. The purpose of this task is to determine the status and degree of progress over time of the anthropogenic composition perturbation of the atmosphere. Such changes affect the intensity of the UV solar radiation transmitted through the atmosphere that then interacts with living organisms and all materials, causing serious consequences in terms of human health and durability of materials that interact with this radiation. One of the many challenges that need to be faced to perform these measurements correctly is the maintenance of periodic calibrations of these instruments. Otherwise, damage caused by the UV radiation received will render any one calibration useless after the passage of some time. This requirement makes the usage of these instruments unattractive, and the lack of frequent calibration may lead to the loss of large amounts of acquired data. Motivated by this need to maintain calibration or, at least, know the degree of stability of instrumental behavior, we have developed a calibration methodology that uses the potential of radiative transfer models to model solar radiation with 5% accuracy or better relative to actual conditions. Voltage values in each radiometer channel involved in the calibration process are carefully selected from clear sky data. Thus, tables are constructed with voltage values corresponding to various atmospheric conditions for a given solar zenith angle. Then we model with a radiative transfer model using the same conditions as for the measurements to assemble sets of values for each zenith angle. The ratio of each group (measured and modeled) allows us to calculate the calibration coefficient value as a function of zenith angle as well as the cosine response presented by the radiometer. The calibration results obtained by this method were compared with those obtained with a Brewer MKIII SN 80 located in the
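The ratio step described above reduces to grouping clear-sky measurements and modelled values by solar zenith angle; a hedged sketch with illustrative arrays and bin width:

```python
# Sketch: zenith-angle-dependent calibration coefficients as the ratio of
# radiative-transfer-modelled irradiance to measured channel voltage,
# grouped into zenith-angle bins. All numbers are illustrative.
import numpy as np

sza = np.array([30.2, 30.4, 45.1, 45.3, 60.0, 60.2])        # deg, clear sky
voltage = np.array([1.92, 1.90, 1.41, 1.43, 0.82, 0.81])    # measured, V
modelled = np.array([52.0, 51.8, 38.1, 38.5, 21.9, 21.7])   # RT model, W/m2

bins = np.round(sza / 5) * 5                                 # 5-degree bins
for angle in np.unique(bins):
    sel = bins == angle
    coeff = np.mean(modelled[sel] / voltage[sel])            # W/m2 per volt
    print(f"SZA {angle:.0f} deg: calibration coefficient {coeff:.1f} W/m2/V")
```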
Visualizing MCNP Tally Segment Geometry and Coupling Results with ABAQUS
International Nuclear Information System (INIS)
J. R. Parry; J. A. Galbraith
2007-01-01
The Advanced Graphite Creep test, AGC-1, is planned for irradiation in the Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant program. The experiment requires very detailed neutronics and thermal hydraulics analyses to show compliance with programmatic and ATR safety requirements. The MCNP model used for the neutronics analysis required hundreds of tally regions to provide the desired detail. A method for visualizing the hundreds of tally region geometries and the tally region results in three dimensions has been created to support the AGC-1 irradiation. Additionally, a method was created that allows ABAQUS to access the results directly for the thermal analysis of the AGC-1 experiment.
International Nuclear Information System (INIS)
Hussein, M.S.; Bonin, H.W.; Lewis, B.J.
2013-01-01
The theory of multipoint coupled reactors, developed from multi-group transport theory, is verified using the probabilistic transport code MCNP5. The verification was performed by calculating the multiplication factors (or criticality factors) and coupling coefficients for a two-region test reactor known as the Deuterium Critical Assembly (DCA). The variations of the criticality factors and the coupling coefficients were investigated by changing the water levels in the inner and outer cores. The numerical results of the model developed with the MCNP5 code were validated and verified against published results and against the mathematical model based on coupled reactor theory. (author)
Energy Technology Data Exchange (ETDEWEB)
Hussein, M.S.; Bonin, H.W.; Lewis, B.J., E-mail: mohamed.hussein@rmc.ca, E-mail: bonin-h@rmc.ca, E-mail: lewis-b@rmc.ca [Royal Military College of Canada, Dept. of Chemistry and Chemical Engineering, Kingston, Ontario (Canada)
2013-07-01
The theory of multipoint coupled reactors, developed from multi-group transport theory, is verified using the probabilistic transport code MCNP5. The verification was performed by calculating the multiplication factors (or criticality factors) and coupling coefficients for a two-region test reactor known as the Deuterium Critical Assembly (DCA). The variations of the criticality factors and the coupling coefficients were investigated by changing the water levels in the inner and outer cores. The numerical results of the model developed with the MCNP5 code were validated and verified against published results and against the mathematical model based on coupled reactor theory. (author)
Collis, Joe; Connor, Anthony J; Paczkowski, Marcin; Kannan, Pavitra; Pitt-Francis, Joe; Byrne, Helen M; Hubbard, Matthew E
2017-04-01
In this work, we present a pedagogical tumour growth example, in which we apply calibration and validation techniques to an uncertain, Gompertzian model of tumour spheroid growth. The key contribution of this article is the discussion and application of these methods (that are not commonly employed in the field of cancer modelling) in the context of a simple model, whose deterministic analogue is widely known within the community. In the course of the example, we calibrate the model against experimental data that are subject to measurement errors, and then validate the resulting uncertain model predictions. We then analyse the sensitivity of the model predictions to the underlying measurement model. Finally, we propose an elementary learning approach for tuning a threshold parameter in the validation procedure in order to maximize predictive accuracy of our validated model.
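A minimal sketch of such a calibration, fitting the deterministic Gompertz analogue V(t) = Vmax (V0/Vmax)^exp(-kt) to synthetic noisy volume data (all numbers are illustrative, not the paper's data):

import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, v0, vmax, k):
    # V(t) = Vmax * (V0/Vmax) ** exp(-k t)
    return vmax * (v0 / vmax) ** np.exp(-k * t)

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 30)                                # days (synthetic design)
v_obs = gompertz(t, 0.05, 2.0, 0.25)                      # synthetic truth, mm^3
v_obs = v_obs * (1 + 0.1 * rng.standard_normal(t.size))   # 10% measurement error

popt, pcov = curve_fit(gompertz, t, v_obs, p0=[0.1, 1.5, 0.2])
print("calibrated (V0, Vmax, k):", popt)
print("1-sigma uncertainties:", np.sqrt(np.diag(pcov)))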
International Nuclear Information System (INIS)
Bourauel, Peter; Nabbi, Rahim; Biel, Wolfgang; Forrest, Robin
2009-01-01
The MCNP 3D Monte Carlo computer code is used not only for criticality calculations of nuclear systems but also to simulate the transport of radiation and particles. The findings so obtained about the neutron flux distribution and the associated spectra allow information about material activation, nuclear heating, and radiation damage to be obtained by means of activation codes such as FISPACT. The stochastic character of particle and radiation transport processes normally ties the results to the material cells making up the MCNP geometry model. Where high spatial resolution is required for the activation calculations with FISPACT, fine segmentation of the MCNP geometry becomes compulsory, which implies considerable expense in the modeling process. For this reason, an alternative simulation technique has been developed in an effort to automate and optimize data transfer between MCNP and FISPACT. (orig.)
Multi-objective calibration of a reservoir model: aggregation and non-dominated sorting approaches
Huang, Y.
2012-12-01
Numerical reservoir models can be helpful tools for water resource management. These models are generally calibrated against historical measurement data made in reservoirs. In this study, two methods are proposed for the multi-objective calibration of such models: an aggregation method and a non-dominated sorting method. Both methods use a hybrid genetic algorithm as an optimization engine and differ in fitness assignment. In the aggregation method, a weighted sum of scaled simulation errors is designed as an overall objective function to measure the fitness of solutions (i.e. parameter values). The contribution of this study to the aggregation method is the correlation analysis and its implication for the choice of weight factors. In the non-dominated sorting method, a novel scheme based on non-dominated sorting and the method of minimal distance is used to calculate the dummy fitness of solutions. The proposed methods are illustrated using a water quality model that was set up to simulate the water quality of Pepacton Reservoir, which is located to the north of New York City and is used for the water supply of the city. The study also compares the aggregation and non-dominated sorting methods. The purpose of this comparison is not to evaluate the pros and cons of the two methods but to determine whether the parameter values, objective function values (simulation errors) and simulated results obtained differ significantly from each other. The final results (objective function values) from the two methods are a good compromise among all objective functions, and none of these results is the worst for any objective function. The calibrated model provides an overall good performance, and the simulated results with the calibrated parameter values match the observed data better than those with the uncalibrated parameters, which supports and justifies the use of multi-objective calibration. The results achieved in this study can be very useful for the calibration of water
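A minimal sketch of the aggregation idea (not the authors' hybrid genetic algorithm): scale each objective across candidate parameter sets, then combine with weights; all numbers are illustrative.

import numpy as np

errors = np.array([[0.8, 12.0, 0.05],    # rows: candidate parameter sets
                   [0.5, 15.0, 0.07],    # cols: simulation errors per objective
                   [0.9,  9.0, 0.04]])
weights = np.array([0.4, 0.3, 0.3])      # weights can be informed by correlation analysis

# scale each objective to [0, 1] across candidates, then take a weighted sum
scaled = (errors - errors.min(0)) / (errors.max(0) - errors.min(0))
fitness = scaled @ weights               # lower is better
print("best candidate:", int(fitness.argmin()))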
Presentation, calibration and validation of the low-order, DCESS Earth System Model
DEFF Research Database (Denmark)
Shaffer, G.; Olsen, S. Malskaer; Pedersen, Jens Olaf Pepke
2008-01-01
A new, low-order Earth system model is described, calibrated and tested against Earth system data. The model features modules for the atmosphere, ocean, ocean sediment, land biosphere and lithosphere and has been designed to simulate global change on time scales of years to millions of years...... remineralization. The lithosphere module considers outgassing, weathering of carbonate and silicate rocks and weathering of rocks containing old organic carbon and phosphorus. Weathering rates are related to mean atmospheric temperatures. A pre-industrial, steady state calibration to Earth system data is carried...
Using genetic algorithms for calibrating simplified models of nuclear reactor dynamics
International Nuclear Information System (INIS)
Marseguerra, Marzio; Zio, Enrico; Canetta, Raffaele
2004-01-01
In this paper the use of genetic algorithms for the estimation of the effective parameters of a model of nuclear reactor dynamics is investigated. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest (e.g., reactor power, average fuel and coolant temperatures) to the actual evolution profiles, here simulated by the Quandry based reactor kinetics (Quark) code available from the Nuclear Energy Agency. Alternative schemes of single- and multi-objective optimization are investigated. The efficiency of convergence of the algorithm with respect to the different effective parameters to be calibrated is studied with reference to the physical relationships involved
Using genetic algorithms for calibrating simplified models of nuclear reactor dynamics
Energy Technology Data Exchange (ETDEWEB)
Marseguerra, Marzio E-mail: marzio.marseguerra@polimi.it; Zio, Enrico E-mail: enrico.zio@polimi.it; Canetta, Raffaele
2004-07-01
In this paper the use of genetic algorithms for the estimation of the effective parameters of a model of nuclear reactor dynamics is investigated. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest (e.g., reactor power, average fuel and coolant temperatures) to the actual evolution profiles, here simulated by the Quandry based reactor kinetics (Quark) code available from the Nuclear Energy Agency. Alternative schemes of single- and multi-objective optimization are investigated. The efficiency of convergence of the algorithm with respect to the different effective parameters to be calibrated is studied with reference to the physical relationships involved.
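A minimal sketch of the genetic-algorithm calibration loop; a toy surrogate response stands in for the kinetics-code profiles, and the population size, mutation scale and model form are illustrative:

import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 50)

def response(p):
    a, b = p                              # two "effective parameters"
    return 1.0 + a * np.exp(-b * t)       # toy power transient

target = response([0.3, 0.8])             # reference evolution profile

def fitness(p):
    return -np.sum((response(p) - target) ** 2)   # higher is better

lo, hi = np.array([0.0, 0.1]), np.array([1.0, 2.0])
pop = rng.uniform(lo, hi, size=(40, 2))
for _ in range(60):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[-20:]]                    # keep best half
    children = parents[rng.integers(0, 20, 20)] \
               + rng.normal(0.0, 0.05, (20, 2))           # mutate copies
    pop = np.clip(np.vstack([parents, children]), lo, hi)

best = pop[np.argmax([fitness(p) for p in pop])]
print("calibrated effective parameters:", best)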
Calibration plots for risk prediction models in the presence of competing risks
DEFF Research Database (Denmark)
Gerds, Thomas A; Andersen, Per K; Kattan, Michael W
2014-01-01
A predicted risk of 17% can be called reliable if it can be expected that the event will occur in about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks...... prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves...
Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization
Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan
2017-01-01
Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, strong parameter correlations, and significant computational requirements. Therefore, only a limited number of simulations can be afforded in any attempt to find a near-optimal solution within an acceptable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation budget, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate of the actual RZWQM2, and then calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial that is fast to evaluate, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
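A minimal one-dimensional sketch of the surrogate idea, with an ordinary polynomial fit standing in for sparse-grid interpolation and a gradient-based optimizer standing in for QPSO; the "expensive model" is a stand-in function:

import numpy as np
from scipy.optimize import minimize

def expensive_model(x):                 # one call = one costly model run
    return (x - 0.62) ** 2 + 0.05 * np.sin(12 * x)

# 1) build a cheap surrogate from a small design of expensive runs
x_train = np.linspace(0.0, 1.0, 9)
surrogate = np.poly1d(np.polyfit(x_train, expensive_model(x_train), deg=5))

# 2) optimize the fast surrogate instead of the expensive model
res = minimize(lambda x: float(surrogate(x[0])), x0=[0.5], bounds=[(0.0, 1.0)])
print("surrogate optimum:", res.x[0], "| true value there:", expensive_model(res.x[0]))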
Calibration of controlling input models for pavement management system.
2013-07-01
The Oklahoma Department of Transportation (ODOT) is currently using the Deighton Total Infrastructure Management System (dTIMS) software for pavement management. This system is based on several input models which are computational backbones to dev...
Value of using remotely sensed evapotranspiration for SWAT model calibration
Hydrologic models are useful management tools for assessing water resources solutions and estimating the potential impact of climate variation scenarios. A comprehensive understanding of the water budget components and especially the evapotranspiration (ET) is critical and often overlooked for adeq...
Calibration of Chaboche Model with a Memory Surface
Directory of Open Access Journals (Sweden)
Radim HALAMA
2013-06-01
This paper points out that the Chaboche nonlinear kinematic hardening model gives a sufficient description of the stress-strain behaviour only for materials with Masing behaviour, regardless of the number of backstress parts. Subsequently, two of the most widely used memory surface concepts are presented: the Jiang-Sehitoglu concept (deviatoric plane) and the Chaboche concept (strain space). Based on experimental data for the steel ST52, the possibility of capturing hysteresis loops and the cyclic strain curve simultaneously, in the range usual for low-cycle fatigue calculations, is then shown. A new model for describing cyclic hardening/softening behaviour has also been developed based on the Jiang-Sehitoglu memory surface concept. Finally, recommendations for the use of the individual models and directions for further research are formulated in the conclusions.
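As a hedged illustration of the calibration task (not the paper's own fitting procedure), a single Armstrong-Frederick backstress yields the closed-form monotonic uniaxial curve sigma(ep) = sigma_y + (C/gamma)(1 - exp(-gamma*ep)), which can be least-squares fitted to tension data to initialize the parameters C and gamma; all data below are synthetic.

import numpy as np
from scipy.optimize import curve_fit

def af_curve(ep, sigma_y, C, gamma):
    # monotonic uniaxial response of one Armstrong-Frederick backstress
    return sigma_y + (C / gamma) * (1.0 - np.exp(-gamma * ep))

ep = np.linspace(0.0, 0.02, 25)                    # plastic strain (synthetic)
sigma = af_curve(ep, 320.0, 6.0e4, 250.0)          # MPa, stand-in for test data
sigma += np.random.default_rng(3).normal(0.0, 2.0, ep.size)

popt, _ = curve_fit(af_curve, ep, sigma, p0=[300.0, 4.0e4, 150.0])
print("sigma_y, C, gamma =", popt)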
Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)
2016-09-17
Modern plasticity models contain numerous parameters that can be difficult and time consuming to fit using current methods. Calibration, given this complexity, is a difficult and time consuming process that has historically been separate from the experimental testing. As such, additional
Embodying, calibrating and caring for a local model of obesity
DEFF Research Database (Denmark)
Winther, Jonas; Hillersdal, Line
Interdisciplinary research collaborations are increasingly made a mandatory 'standard' within strategic research grants. Collaborations between the natural, social and humanistic sciences are conceptualized as uniquely suited to study pressing societal problems. The obesity epidemic has been...... highlighted as such a problem. Within research communities disparate explanatory models of obesity exist (Ulijaszek 2008) and some of these models of obesity are brought together in the Copenhagen-based interdisciplinary research initiative; Governing Obesity (GO) with the aim of addressing the causes...
Calibration under uncertainty for finite element models of masonry monuments
Energy Technology Data Exchange (ETDEWEB)
Atamturktur, Sezer; Hemez, Francois; Unal, Cetin
2010-02-01
Historical unreinforced masonry buildings often include features such as load bearing unreinforced masonry vaults and their supporting framework of piers, fill, buttresses, and walls. The masonry vaults of such buildings are among the most vulnerable structural components and certainly among the most challenging to analyze. The versatility of finite element (FE) analyses in incorporating various constitutive laws, as well as practically all geometric configurations, has resulted in the widespread use of the FE method for the analysis of complex unreinforced masonry structures over the last three decades. However, an FE model is only as accurate as its input parameters, and there are two fundamental challenges while defining FE model input parameters: (1) material properties and (2) support conditions. The difficulties in defining these two aspects of the FE model arise from the lack of knowledge in the common engineering understanding of masonry behavior. As a result, engineers are unable to define these FE model input parameters with certainty, and, inevitably, uncertainties are introduced to the FE model.
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
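A minimal sketch of the Pareto-frontier test described above: an input set is kept if no other set fits all calibration targets as well or better (the error values are illustrative).

import numpy as np

def pareto_frontier(errors):
    # errors: rows = input sets, columns = per-target misfits (lower is better)
    n = errors.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        dominates_i = np.all(errors <= errors[i], axis=1) & \
                      np.any(errors < errors[i], axis=1)
        keep[i] = not dominates_i.any()
    return keep

errs = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
print(pareto_frontier(errs))   # -> [ True  True  True False]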
Calibration of HEC-Ras hydrodynamic model using gauged discharge data and flood inundation maps
Tong, Rui; Komma, Jürgen
2017-04-01
The estimation of floods is essential for disaster alleviation. Hydrodynamic models are implemented to predict the occurrence and variation of floods at different scales. In practice, the calibration of hydrodynamic models aims to find the best possible parameters to represent the natural flow resistance. Recent years have seen the calibration of hydrodynamic models become more practical and faster, following advances in earth observation products and computer-based optimization techniques. In this study, the Hydrologic Engineering River Analysis System (HEC-Ras) model was set up with a high-resolution digital elevation model from laser scanning for the river Inn in Tyrol, Austria. The 10 largest flood events from 19 hourly discharge gauges, together with flood inundation maps, were selected to calibrate the HEC-Ras model. Manning roughness values and lateral inflow factors were automatically optimized with the Shuffled Complex with Principal Component Analysis (SP-UCI) algorithm, developed from Shuffled Complex Evolution (SCE-UA). Different objective functions (Nash-Sutcliffe model efficiency coefficient, timing of peak, peak value and root-mean-square deviation) were used singly or in combination. The lateral inflow factor was found to be the most sensitive parameter. The SP-UCI algorithm could avoid local optima and achieve efficient and effective parameter sets when calibrating the HEC-Ras model with flood extent images. As the results showed, calibration by means of gauged discharge data and flood inundation maps, together with the Nash-Sutcliffe efficiency objective function, was very robust in obtaining more reliable flood simulations that also capture the peak value and the timing of peak.
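For reference, the Nash-Sutcliffe efficiency used as one of the objective functions above is straightforward to compute (the discharge values below are illustrative):

import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is perfect; below 0 is worse than the mean
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

q_obs = np.array([120.0, 340.0, 560.0, 410.0, 220.0, 150.0])   # m^3/s
q_sim = np.array([135.0, 310.0, 540.0, 430.0, 240.0, 140.0])
print(round(nse(q_obs, q_sim), 3))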
International Nuclear Information System (INIS)
Liu, Jianchun; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.
2002-01-01
In this study, porewater chloride data from Yucca Mountain, Nevada, are analyzed and modeled by 3-D chemical transport simulations and analytical methods. The simulation modeling approach is based on a continuum formulation of coupled multiphase fluid flow and tracer transport processes through fractured porous rock, using a dual-continuum concept. Infiltration rates were calibrated using the porewater chloride data, and with the calibrated rates the modeled chloride distributions matched the observed data better. Statistical analyses of the frequency distributions of overall percolation fluxes and chloride concentrations in the unsaturated zone system demonstrate that the use of the calibrated infiltration rates had an insignificant effect on the distribution of simulated percolation fluxes but significantly changed the predicted distribution of simulated chloride concentrations. An analytical method was also applied to model transient chloride transport; verification against the 3-D simulation results showed that it captures the major transient chemical behavior and trends. The effects of lateral flow in the Paintbrush nonwelded unit on percolation fluxes and chloride distribution were studied by 3-D simulations with increased horizontal permeability. The combined results from these model calibrations furnish important information for the UZ model studies, contributing to performance assessment of the potential repository
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
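A minimal sketch of the two basic treatments compared above, applied to synthetic heteroscedastic residuals: the explicit linear model standardizes residuals with a flow-dependent standard deviation, while Box-Cox transforms the flows before taking residuals (the coefficients are illustrative, not the study's values).

import numpy as np

rng = np.random.default_rng(7)
q_sim = np.linspace(5.0, 500.0, 200)             # simulated flows
resid = rng.normal(0.0, 1.0 + 0.1 * q_sim)       # errors grow with flow

# explicit linear modeling (LM): sigma_t = a + b * q_t
a, b = 1.0, 0.1
z_lm = resid / (a + b * q_sim)                   # standardized residuals

# implicit Box-Cox (BC): work on the transformed scale
def box_cox(q, lam=0.3):
    return (q ** lam - 1.0) / lam if lam != 0 else np.log(q)

q_obs = np.maximum(q_sim + resid, 1e-6)          # guard against negative flows
z_bc = box_cox(q_obs) - box_cox(q_sim)
print(z_lm.std(), z_bc.std())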
International Nuclear Information System (INIS)
Vogl, Gregory W.; Harper, Kari K.; Payne, Bev
2010-01-01
Piezoelectric shakers have been developed and used at the National Institute of Standards and Technology (NIST) for decades for high-frequency calibration of accelerometers. Recently, NIST researchers built new piezoelectric shakers in the hopes of reducing the uncertainties in the calibrations of accelerometers while extending the calibration frequency range beyond 20 kHz. The ability to build and measure piezoelectric shakers invites modeling of these systems in order to improve their design for increased performance, which includes a sinusoidal motion with lower distortion, lower cross-axial motion, and an increased frequency range. In this paper, we present a model of piezoelectric shakers and match it to experimental data. The equations of motion for all masses are solved along with the coupled state equations for the piezoelectric actuator. Finally, additional electrical elements like inductors, capacitors, and resistors are added to the piezoelectric actuator for matching of experimental and theoretical frequency responses.
Calibrated and Interactive Modelling of Form-Active Hybrid Structures
DEFF Research Database (Denmark)
Quinn, Gregory; Holden Deleuran, Anders; Piker, Daniel
2016-01-01
Form-active hybrid structures (FAHS) couple two or more different structural elements of low self weight and low or negligible bending flexural stiffness (such as slender beams, cables and membranes) into one structural assembly of high global stiffness. They offer high load-bearing capacity...... software packages which introduce interruptions and data exchange issues in the modelling pipeline. The mechanical precision, stability and open software architecture of Kangaroo has facilitated the development of proof-of-concept modelling pipelines which tackle this challenge and enable powerful...... materially-informed sketching. Making use of a projection-based dynamic relaxation solver for structural analysis, explorative design has proven to be highly effective....
Experimental validation and calibration of pedestrian loading models for footbridges
DEFF Research Database (Denmark)
Ricciardelli, Fransesco; Briatico, C; Ingólfsson, Einar Thór
2006-01-01
Different patterns of pedestrian loading of footbridges exist, whose occurrence depends on a number of parameters, such as the bridge span, frequency, damping and mass, and the pedestrian density and activity. In this paper analytical models for the transient action of one walker and for the stat...
An auto-calibration procedure for empirical solar radiation models
Bojanowski, J.S.; Donatelli, Marcello; Skidmore, A.K.; Vrieling, A.
2013-01-01
Solar radiation data are an important input for estimating evapotranspiration and modelling crop growth. Direct measurement of solar radiation is now carried out in most European countries, but the network of measuring stations is too sparse for reliable interpolation of measured values. Instead of
The Active Model: a calibration of material intent
DEFF Research Database (Denmark)
Ramsgaard Thomsen, Mette; Tamke, Martin
2012-01-01
created it. This definition suggests structural characteristics that are perhaps not immediately obvious when implemented within architectural models. It opens the idea that materiality might persist into the digital environment, as well as the digital lingering within the material. It implies questions...
Remote sensing estimation of evapotranspiration for SWAT Model Calibration
Hydrological models are used to assess many water resource problems from water quantity to water quality issues. The accurate assessment of the water budget, primarily the influence of precipitation and evapotranspiration (ET), is a critical first-step evaluation, which is often overlooked in hydro...
SWAT application in intensive irrigation systems: Model modification, calibration and validation
Dechmi, Farida; Burguete, Javier; Skhiri, Ahmed
2012-11-01
The Soil and Water Assessment Tool (SWAT) is a well-established, distributed, eco-hydrologic model. However, using the case study of an intensively irrigated agricultural watershed, it was shown that none of the model versions is able to appropriately reproduce the total streamflow in such a system when the irrigation source is outside the watershed. The objective of this study was to modify the SWAT2005 version to correctly simulate the main hydrological processes. Calibration and validation for crop yield, total streamflow, total suspended sediment (TSS) losses and phosphorus load were performed using field survey information and water quantity and quality data recorded during 2008 and 2009 in the Del Reguero irrigated watershed in Spain. The goodness of the calibration and validation results was assessed using five statistical measures, including the Nash-Sutcliffe efficiency (NSE). Results indicated that the average annual crop yield and actual evapotranspiration estimates were quite satisfactory. On a monthly basis, the NSE values were 0.90 (calibration) and 0.80 (validation), indicating that the modified model could accurately reproduce the observed streamflow. The TSS losses were also satisfactorily estimated (NSE = 0.72 and 0.52 for the calibration and validation steps). The monthly temporal patterns and all the statistical parameters indicated that the modified SWAT-IRRIG model adequately predicted the total phosphorus (TP) loading. Therefore, the model could be used to assess the impacts of different best management practices on nonpoint phosphorus losses in irrigated systems.
Verification of MCNP simulation of neutron flux parameters at TRIGA MK II reactor of Malaysia.
Yavar, A R; Khalafi, H; Kasesaz, Y; Sarmani, S; Yahaya, R; Wood, A K; Khoo, K S
2012-10-01
A 3-D model of the 1 MW TRIGA Mark II research reactor was simulated. Neutron flux parameters were calculated using the MCNP-4C code and compared with experimental results obtained by k(0)-INAA and the absolute method. The average values of φ(th), φ(epi), and φ(fast) by the MCNP code were (2.19±0.03)×10(12) cm(-2)s(-1), (1.26±0.02)×10(11) cm(-2)s(-1) and (3.33±0.02)×10(10) cm(-2)s(-1), respectively. These average values were consistent with the experimental results obtained by k(0)-INAA. The findings show a good agreement between the MCNP code results and the experimental results. Copyright © 2012 Elsevier Ltd. All rights reserved.
Calibration of Airframe and Occupant Models for Two Full-Scale Rotorcraft Crash Tests
Annett, Martin S.; Horta, Lucas G.; Polanco, Michael A.
2012-01-01
Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. Accelerations and kinematic data collected from the crash tests were compared to a system integrated finite element model of the test article. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate when evaluating more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the second full-scale crash test. This combination of heuristic and quantitative methods was used to identify modeling deficiencies, evaluate parameter importance, and propose required model changes. It is shown that the multi-dimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and co-pilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. This approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and test planning guidance. Complete crash simulations with validated finite element models can be used
Energy Technology Data Exchange (ETDEWEB)
Dowding, Kevin J.; Hills, Richard Guy (New Mexico State University, Las Cruces, NM)
2005-04-01
Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.
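For reference, the two equations involved are, in standard form,

\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2} \qquad \text{(Burgers equation)}

\frac{\partial c}{\partial t} + v\,\frac{\partial c}{\partial x} = D\,\frac{\partial^2 c}{\partial x^2} \qquad \text{(linear transient convection/dispersion)}

where the effective velocity v and dispersion coefficient D play the role of calibrated parameters in the linear surrogate.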
Calibration of a finite element composite delamination model by experiments
DEFF Research Database (Denmark)
Gaiotti, M.; Rizzo, C.M.; Branner, Kim
2013-01-01
This paper deals with the mechanical behavior under in-plane compressive loading of thick and mostly unidirectional glass fiber composite plates made with an initial embedded delamination. The delamination is rectangular in shape, causing the separation of the central part of the plate into two...... distinct sub-laminates. The work focuses on experimental validation of a finite element model built using the 9-noded MITC9 shell elements, which prevent locking effects, aiming to capture the highly nonlinear buckling features involved in the problem. The geometry has been numerically defined...
Calibration of a distributed hydrologic model using observed spatial patterns from MODIS data
Demirel, Mehmet C.; González, Gorka M.; Mai, Juliane; Stisen, Simon
2016-04-01
Distributed hydrologic models are typically calibrated against streamflow observations at the outlet of the basin. Along with these observations from gauging stations, satellite-based estimates offer independent evaluation data, such as remotely sensed actual evapotranspiration (aET) and land surface temperature. The primary objective of the study is to compare model calibrations against traditional downstream discharge measurements with calibrations against observed spatial patterns and combinations of both types of observations. While discharge-based model calibration typically improves the temporal dynamics of the model, it seems to yield minimal improvement in the simulated spatial patterns. In contrast, objective functions specifically targeting the spatial pattern performance could potentially increase the spatial model performance. However, most modeling studies, including the model formulations and parameterization, are not designed to actually change the simulated spatial pattern during calibration. This study investigates the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale hydrologic model (mHM). This model is selected as it allows the spatial distribution of key soil parameters to change through the optimization of pedo-transfer function parameters and includes options for using fully distributed daily Leaf Area Index (LAI) values directly as input. In addition, the simulated aET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed with MODIS data. To increase our control over the spatial calibration, we introduced three additional parameters to the model. These new parameters are part of an empirical equation that calculates the crop coefficient (Kc) from daily LAI maps, which is then used to update the potential evapotranspiration (PET) model inputs, instead of correcting PET with just a uniform (or aspect-driven) factor as in the standard mHM model.
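A minimal sketch of the PET-update step; the functional form of Kc(LAI) below and its three parameters are hypothetical stand-ins for the study's empirical equation, which is not reproduced here.

import numpy as np

def crop_coefficient(lai, a=0.4, b=1.2, c=0.5):
    # hypothetical saturating Kc(LAI) with three calibratable parameters
    return a + b * (1.0 - np.exp(-c * lai))

lai = np.array([[0.5, 2.0], [4.0, 6.0]])        # daily LAI grid (illustrative)
pet = np.array([[3.1, 3.0], [2.8, 2.7]])        # reference PET, mm/day
pet_in = crop_coefficient(lai) * pet            # distributed PET model input
print(np.round(pet_in, 2))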
Evaluation of Geometric Progression (GP) Buildup Factors using MCNP Codes (MCNP6.1 and MCNP5-1.60)
Directory of Open Access Journals (Sweden)
Kim Kyung-O
2016-01-01
The gamma-ray buildup factors of the three-dimensional point-kernel code QAD-CGGP are re-evaluated using the MCNP codes (MCNP6.1 and MCNP5-1.60) and ENDF/B-VI.8 photoatomic data, covering an energy range of 0.015–15 MeV and an iron thickness of 0.5–40 mean free paths (MFP). These new data are fitted to the Geometric Progression (GP) fitting function and then compared with the ANS standard data supplied with QAD-CGGP. In addition, a simple benchmark calculation was performed to compare QAD-CGGP results, obtained with the new and the existing buildup factors, against the MCNP codes. For the buildup factors of low-energy gamma-rays, the new data are evaluated to be about 5% higher than the existing data. In the other cases, the new data follow a similar trend up to a specific penetration depth, while the existing data continuously increase beyond that depth. In the simple benchmark, the calculations using the existing data slightly underestimated the reference data at deep penetration depths. On the other hand, the calculations with the new data remained stable with increasing penetration depth, despite a slight overestimation at shallow penetration depths.
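For reference, the standard GP form (per ANSI/ANS-6.4.3, to which such fits conform) expresses the buildup factor at a depth of x mean free paths as

B(E,x) = \begin{cases} 1 + \dfrac{(b-1)\,(K^{x}-1)}{K-1}, & K \neq 1 \\[1.5ex] 1 + (b-1)\,x, & K = 1 \end{cases}
\qquad
K(x) = c\,x^{a} + d\,\frac{\tanh\!\left(x/X_{k}-2\right) - \tanh(-2)}{1 - \tanh(-2)}

with the five fitting parameters b, c, a, X_k and d tabulated per photon energy.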
Optimal Operational Monetary Policy Rules in an Endogenous Growth Model: a calibrated analysis
Arato, Hiroki
2009-01-01
This paper constructs an endogenous growth New Keynesian model and considers the growth and welfare effects of Taylor-type (operational) monetary policy rules. The Ramsey equilibrium and the optimal operational monetary policy rule are also computed. In the calibrated model, the Ramsey-optimal volatility of the inflation rate is smaller than that in a standard exogenous growth New Keynesian model with physical capital accumulation. The optimal operational monetary policy rule makes the nominal interest rate respond s...
Uncertainty analyses of the calibrated parameter values of a water quality model
Rode, M.; Suhr, U.; Lindenschmidt, K.-E.
2003-04-01
For river basin management, water quality models are increasingly used for the analysis and evaluation of different management measures. However, substantial uncertainties exist in parameter values, depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented, which considers the impact of the available model calibration data and the variance of the input variables. The investigation was based on four extensive flow-time-related longitudinal surveys in the River Elbe in the years 1996 to 1999, with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model and uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis-Menten/Monod kinetics, which are used in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow-time-related measuring surveys, very good agreement between model calculations and measured values can be obtained. If these parameters are applied to deviating boundary conditions, substantial errors in the model calculation can occur. These uncertainties can be decreased with an enlarged calibration database: more reliable model parameters can be identified, which supply reasonable results for broader boundary conditions. Extending the application of the parameter set to a wider range of water quality conditions leads to a slight reduction of the model precision for any specific water quality situation. Moreover, the investigations show that highly variable water quality variables, like the algal biomass, always allow a lower forecast accuracy than variables with lower coefficients of variation, such as nitrate.
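For reference, the Monod (Michaelis-Menten) kinetics mentioned above relate the phytoplankton growth rate \mu to a limiting substrate concentration S as

\mu(S) = \mu_{\max}\,\frac{S}{K_S + S}

where \mu_{\max} is the maximum growth rate and K_S the half-saturation constant; these are typical of the parameters whose calibrated values carry the uncertainty studied here.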
Comparison between two calibration models of a measurement system for thyroid monitoring
International Nuclear Information System (INIS)
Venturini, Luzia
2005-01-01
This paper shows a comparison between two theoretical calibrations that use two mathematical models to represent the neck region. In the first model, the thyroid is considered to be simply a region limited by two concentric cylinders whose dimensions are those of the trachea and the neck. The second model uses fitted functions to obtain a better representation of the thyroid geometry. Efficiency values are obtained using Monte Carlo simulation. (author)
International Nuclear Information System (INIS)
Kotegawa, Hiroshi; Sasamoto, Nobuo; Tanaka, Shun-ichi
1987-02-01
Both the "measured radioactive inventory due to neutron activation in the shield concrete of JPDR" and the "measured intermediate- and low-energy neutron spectra penetrating through a graphite sphere" are analyzed using the continuous-energy Monte Carlo code MCNP, so as to estimate the calculational accuracy of the code for neutron transport in the thermal and epithermal energy regions. The analyses reveal that MCNP calculates thermal neutron spectra fairly accurately, while it apparently overestimates epithermal neutron spectra (of approximately 1/E distribution) as compared with the measurements. (author)
Implementation of a tree algorithm in MCNP code for nuclear well logging applications.
Li, Fusheng; Han, Xiaogang
2012-07-01
The goal of this paper is to develop some modeling capabilities that are missing in the current MCNP code. Those missing capabilities can greatly help with certain nuclear tool designs, such as a nuclear lithology/mineralogy spectroscopy tool. The new capabilities developed in this paper include the following: a zone tally, a neutron interaction tally, a gamma-ray index tally and an enhanced pulse-height tally. The patched MCNP code can also be used to compute the neutron slowing-down length and the thermal neutron diffusion length. Copyright © 2011 Elsevier Ltd. All rights reserved.
Analysis and classification of data sets for calibration and validation of agro-ecosystem models
DEFF Research Database (Denmark)
Kersebaum, K C; Boote, K J; Jorgenson, J S
2015-01-01
Experimental field data are used at different levels of complexity to calibrate, validate and improve agro-ecosystem models to enhance their reliability for regional impact assessment. A methodological framework and software are presented to evaluate and classify data sets into four classes regar...
DEFF Research Database (Denmark)
Mohanty, Sankhya; Staliulionis, Zygimantas; Shojaee Nasirabadi, Parizad
2016-01-01
the development of rigorous calibrated CFD models as well as simple predictive numerical tools, the current paper tackles the optimization of critical features of a typical two-chamber electronic enclosure. The progressive optimization strategy begins the design parameter selection by initially using simpler...
Calibration of the L-MEB model over a coniferous and a deciduous forest
DEFF Research Database (Denmark)
Grant, Jennifer P.; Saleh-Contell, Kauzar; Wigneron, Jean-Pierre
2008-01-01
In this paper, the L-band Microwave Emission of the Biosphere (L-MEB) model used in the Soil Moisture and Ocean Salinity (SMOS) Level 2 Soil Moisture algorithm is calibrated using L-band (1.4 GHz) microwave measurements over a coniferous (Pine) and a deciduous (mixed/Beech) forest. This resulted...
Displaced calibration of PM10 measurements using spatio-temporal models
Directory of Open Access Journals (Sweden)
Daniela Cocchi
2007-12-01
PM10 monitoring networks are equipped with heterogeneous samplers. Some of these samplers are known to underestimate true concentration levels (non-reference samplers). In this paper we propose a hierarchical spatio-temporal Bayesian model for the calibration of measurements recorded using non-reference samplers, borrowing strength from non-co-located reference sampler measurements.
A parameter for the selection of an optimum balance calibration model by Monte Carlo simulation
CSIR Research Space (South Africa)
Bidgood, Peter M
2013-09-01
Full Text Available The current trend in balance calibration-matrix generation is to use non-linear regression and statistical methods. Methods typically include Modified-Design-of-Experiment (MDOE), Response-Surface-Models (RSMs) and Analysis of Variance (ANOVA...
DEFF Research Database (Denmark)
Christensen, Steen; Doherty, John
2008-01-01
super parameters), and that the structural errors caused by using pilot points and super parameters to parameterize the highly heterogeneous log-transmissivity field can be significant. For the test case much effort is put into studying how the calibrated model's ability to make accurate predictions...
Calibration of a semi-distributed hydrological model using discharge and remote sensing data
Muthuwatta, Lal P.; Booij, Martijn J.; Rientjes, Tom H.M.; Bos, M.G.; Gieske, A.S.M.; Ahmad, Mobin-Ud-Din; Yilmaz, Koray; Yucel, Ismail; Gupta, Hoshin V.; Wagener, Thorsten; Yang, Dawen; Savenije, Hubert; Neale, Christopher; Kunstmann, Harald; Pomeroy, John
2009-01-01
The objective of this study is to present an approach to calibrate a semi-distributed hydrological model using observed streamflow data and actual evapotranspiration time series estimates based on remote sensing data. First, daily actual evapotranspiration is estimated using available MODIS
Performance and Model Calibration of R-D-N Processes in Pilot Plant
DEFF Research Database (Denmark)
de la Sota, A.; Larrea, L.; Novak, L.
1994-01-01
This paper deals with the first part of an experimental programme in a pilot plant configured for advanced biological nutrient removal processes treating domestic wastewater of Bilbao. The IAWPRC Model No.1 was calibrated in order to optimize the design of the full-scale plant. In this first phas...
The SWAT model is a helpful tool to predict hydrological processes in a study catchment and their impact on the river discharge at the catchment outlet. For reliable discharge predictions, a precise simulation of hydrological processes is required. Therefore, SWAT has to be calibrated accurately to ...
Mohamad Nabavi; Joseph Dahlen; Laurence Schimleck; Thomas L. Eberhardt; Cristian Montes
2018-01-01
This study developed regional calibration models for the prediction of loblolly pine (Pinus taeda) tracheid properties using near-infrared (NIR) spectroscopy. A total of 1842 pith-to-bark radial strips, aged 19–31 years, were acquired from 268 trees from 109 stands across the southeastern USA. Diffuse reflectance NIR spectra were collected at 10-mm...
Calibration of the model SMART2 in the Netherlands, using data available at the European scale
Mol-Dijkstra, J.P.; Kros, J.
1999-01-01
The soil acidification model SMART2 has been developed for application on a national to a continental scale. In this study SMART2 is applied at the European scale, which means that SMART2 was applied to the Netherlands with data that are available at the European scale. In order to calibrate SMART2,
Model independent approach to the single photoelectron calibration of photomultiplier tubes
Energy Technology Data Exchange (ETDEWEB)
Saldanha, R.; Grandi, L.; Guardincerri, Y.; Wester, T.
2017-08-01
The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and works under a wide range of light intensities, making it suitable for simultaneously calibrating large arrays of photomultiplier tubes.
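A minimal sketch of the moment-based idea on synthetic data (not the paper's exact estimator): with Poisson-distributed photoelectron counts, the LED-on minus LED-off differences of mean and variance give the SPE mean and variance without assuming any SPE shape; estimating the occupancy from the fraction of "empty" events assumes the pedestal and SPE charge are well separated.

import numpy as np

rng = np.random.default_rng(11)

# synthetic LED-off / LED-on charge spectra (electrons); all values illustrative
n, lam, q1, s1 = 200_000, 0.08, 1.6e6, 0.7e6      # occupancy, SPE mean and SD
led_off = rng.normal(0.0, 2.0e5, n)               # pedestal noise only
npe = rng.poisson(lam, n)
led_on = rng.normal(0.0, 2.0e5, n) + rng.normal(q1 * npe, s1 * np.sqrt(npe))

# E[Q_on]-E[Q_off] = lam*q1 ;  Var[Q_on]-Var[Q_off] = lam*(s1^2 + q1^2)
thr = 5.0 * led_off.std()
lam_hat = -np.log((led_on < thr).mean())          # from Poisson P(0) = exp(-lam)
q1_hat = (led_on.mean() - led_off.mean()) / lam_hat
s1_hat = np.sqrt((led_on.var() - led_off.var()) / lam_hat - q1_hat ** 2)
print(lam_hat, q1_hat, s1_hat)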
Biasing secondary particle interaction physics and production in MCNP6
International Nuclear Information System (INIS)
Fensin, M.L.; James, M.R.
2016-01-01
Highlights: • Biasing secondary production and interactions of charged particles in the tabular energy regime. • Examining lower weight window bounds for rare events when using Russian roulette. • The new biasing strategy can speed up calculations by a factor of 1 million or more. - Abstract: Though MCNP6 will transport elementary charged particles and light ions to low energies (i.e. less than 20 MeV), MCNP6 has historically relied on model physics with suggested minimum energies of ∼20 to 200 MeV. The capability to read and use light-ion libraries in the low-energy regime was developed for MCNP6 1.1.Beta. Thick-target yields of neutron production for alphas on fluoride amount to roughly 1 production event per million sampled alphas, depending on the energy of the alpha (for other isotopes the yield can be even rarer). Calculation times to achieve statistically significant and converged thick-target yields are quite laborious, needing over one hundred processor hours. The MUCEND code possesses a biasing technique for improving the sampling of secondary particle production by forcing a nuclear interaction to occur for each alpha transported. We present here a different biasing strategy for secondary particle production from charged particles. During each substep, as the charged particle slows down, we bias both a nuclear collision event to occur and the production of secondary particles at that collision event, while still progressing the charged particle until it reaches a region of zero importance or an energy/time cutoff. This biasing strategy is capable of speeding up calculations by a factor of a million or more compared to the unbiased calculation. Further presented here are both proof that the biasing strategy produces the same results as the unbiased calculation and the limitations to consider in order to achieve accurate results for secondary particle production. Though this strategy was developed for MCNP
Jackson-Blake, Leah; Helliwell, Rachel
2015-04-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data was used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval reduced markedly with the higher frequency monitoring data, to 6 μg/l. The number of parameters that could be reliably auto-calibrated
Potential of the MCNP computer code
International Nuclear Information System (INIS)
Kyncl, J.
1995-01-01
The MCNP code is designed for the numerical solution of neutron, photon, and electron transport problems by the Monte Carlo method. The code is based on linear transport theory for the differential flux of the particles, and directly uses data from pointwise cross-section libraries as input. Experience is outlined that was gained in applying the code to the calculation of the effective parameters of fuel assemblies and of the entire reactor core, to the determination of the effective parameters of the elementary fuel cell, and to the numerical solution of neutron diffusion and/or transport problems for the fuel assembly. The agreement between the calculated and observed data gives evidence that the MCNP code can be used with advantage for calculations involving WWER-type fuel assemblies. (J.B.). 4 figs., 6 refs
Using Machine Learning to Predict MCNP Bias
Energy Technology Data Exchange (ETDEWEB)
Grechanuk, Pavel Aleksandrovi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-01-09
For many real-world applications in radiation transport where simulations are compared to experimental measurements, like in nuclear criticality safety, the bias (simulated - experimental k_{eff}) in the calculation is an extremely important quantity used for code validation. The objective of this project is to accurately predict the bias of MCNP6 [1] criticality calculations using machine learning (ML) algorithms, with the intention of creating a tool that can complement the current nuclear criticality safety methods. In the latest release of MCNP6, the Whisper tool is available for criticality safety analysts and includes a large catalogue of experimental benchmarks, sensitivity profiles, and nuclear data covariance matrices. This data, coming from 1100+ benchmark cases, is used in this study of ML algorithms for criticality safety bias predictions.
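A minimal sketch of the learning setup under stated assumptions: the features per benchmark case (e.g. integrated sensitivity-profile values) and the observed biases are synthetic here, and the choice of gradient-boosted trees is illustrative rather than the project's chosen algorithm.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(1100, 40))                      # hypothetical per-case features
bias = 0.003 * X[:, 0] - 0.002 * X[:, 3] \
       + rng.normal(0.0, 5e-4, 1100)                 # synthetic bias in k_eff

model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
scores = cross_val_score(model, X, bias, cv=5,
                         scoring="neg_mean_absolute_error")
print("cross-validated |bias error|:", -scores.mean())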
Regional Calibration of SCS-CN L-THIA Model: Application for Ungauged Basins
Directory of Open Access Journals (Sweden)
Ji-Hong Jeon
2014-05-01
Estimating surface runoff for ungauged watersheds is an important issue. The Soil Conservation Service Curve Number (SCS-CN) method, developed from long-term experimental data, is widely used to estimate surface runoff from gauged or ungauged watersheds. Many modelers have used the documented SCS-CN parameters without calibration, sometimes resulting in significant errors in estimated surface runoff. Several methods for the regionalization of SCS-CN parameters were evaluated: (1) average; (2) land use area weighted average; (3) hydrologic soil group area weighted average; (4) combined land use and hydrologic soil group area weighted average; (5) spatial nearest neighbor; (6) inverse distance weighted average; and (7) global calibration. Model performance for each method was evaluated by application to 14 watersheds located in Indiana, with eight watersheds used for calibration and six for validation. For the validation results, the spatial nearest neighbor method provided the highest average Nash-Sutcliffe (NS) value, at 0.58 for the six watersheds, but it also included the lowest individual NS value, and the variance of its NS values was the highest. The global calibration method provided the second highest average NS value, at 0.56, with low variation of NS values. Although the spatial nearest neighbor method provided the highest average NS value, it was not statistically different from the other methods. The global calibration method, however, was significantly different from all other methods except the spatial nearest neighbor method. Therefore, we conclude that the global calibration method is appropriate for regionalizing SCS-CN parameters for ungauged watersheds.
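For reference, the SCS-CN method computes the storm runoff depth Q from precipitation P through the curve number CN (standard form, SI units):

Q = \begin{cases} \dfrac{(P - I_a)^2}{P - I_a + S}, & P > I_a \\[1.5ex] 0, & P \le I_a \end{cases}
\qquad I_a = 0.2\,S, \qquad S = \frac{25400}{CN} - 254 \;\; [\mathrm{mm}]

so regionalizing the method amounts to transferring calibrated CN (or S) values to the ungauged watershed.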
International Nuclear Information System (INIS)
Mashnik, Stepan G.
2011-01-01
MCNP6, the latest and most advanced LANL transport code representing a recent merger of MCNP5 and MCNPX, has been Validated and Verified (V and V) against a variety of intermediate and high-energy experimental data and against results by different versions of MCNPX and other codes. In the present work, we V and V MCNP6 using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators CEM03.02 and LAQGSM03.03. We found that MCNP6 describes reasonably well various reactions induced by particles and nuclei at incident energies from 18 MeV to about 1 TeV per nucleon, measured on thin and thick targets, and agrees very well with similar results obtained with MCNPX and with calculations by CEM03.02, LAQGSM03.01 (03.03), INCL4 + ABLA, and Bertini INC + Dresner evaporation, EPAX, ABRABLA, HIPSE, and AMD, used as stand-alone codes. Most of the several computational bugs and more serious physics problems observed in MCNP6/X during our V and V have been fixed; we continue our work to solve all the known problems before MCNP6 is distributed to the public. (author)
Improved photon production data for MCNP
International Nuclear Information System (INIS)
Adams, A.A.; Frankle, S.C.; Little, R.C.
1998-04-01
Computer simulations with MCNP are often used to obtain information from measurements of neutron-induced gamma-ray spectra. For such simulations to be useful, the complicated spectra produced by a wide variety of nuclides must be reproduced, requiring high-quality nuclear data. A previous assessment of the neutron-induced photon production data in the MCNP data libraries indicated a need for improvement. The photon production data were often based on outdated experiments and binned in such wide energy groups as to be of limited value for some applications. This paper describes the work underway at Los Alamos National Laboratory to improve the photon production data for thermal neutron capture reactions. To date, high-quality photon production data for each stable isotope of chlorine, chromium, iron, copper, and nickel have been obtained. The improved spectra have been incorporated into ENDF-formatted evaluations and processed into corresponding MCNP data files. Similar improvements for aluminum, manganese, silicon, calcium, and vanadium are also planned. The methodology used to produce the spectra is discussed, and sample results for chlorine are presented.
On Inertial Body Tracking in the Presence of Model Calibration Errors.
Miezal, Markus; Taetz, Bertram; Bleser, Gabriele
2016-07-22
In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments (the IMU-to-segment calibrations, subsequently called I2S calibrations) to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors, with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free-segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding-window optimization). In addition to the free-segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three-segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and
Directory of Open Access Journals (Sweden)
Bikić Siniša M.
2016-01-01
Full Text Available This paper focuses on the mathematical model of Air Torque Position dampers. The mathematical model establishes a link between the velocity of air in front of the damper, the position of the damper blade and the moment acting on the blade caused by the air flow. This research aims to experimentally verify the mathematical model for the damper type with non-cascading blades. Four different types of dampers with non-cascading blades were considered: single-blade dampers, dampers with two cross-blades, dampers with two parallel blades, and dampers with two blades of which one is fixed in the horizontal position. The case of a damper with a straight pipeline positioned in front of and behind the damper was taken into consideration. Calibration and verification of the mathematical model were conducted experimentally. The experiment was conducted on a laboratory facility for testing dampers used for regulation of the air flow rate in heating, ventilation and air conditioning systems. The design and setup of the laboratory facility, as well as the construction, adjustment and calibration of the laboratory damper, are presented in this paper. The mathematical model was calibrated using one set of data, while verification of the mathematical model was conducted using a second set of data. The mathematical model was successfully validated and can be used for accurate measurement of the air velocity on dampers with non-cascading blades under different operating conditions. [Project of the Ministry of Science of the Republic of Serbia, no. TR31058]
Calibration of a simple and a complex model of global marine biogeochemistry
Kriest, Iris
2017-11-01
The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types - a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) - for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters was optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and the processes involved in their uptake and release, render oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, which suggests that observations of this tracer - although difficult to measure - may be an important asset for model calibration.
Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua
2016-05-30
Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The proposed system is realized with a digital projector, and the general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with a subpixel pattern projecting technique.
Parallel Genetic Algorithms for calibrating Cellular Automata models: Application to lava flows
International Nuclear Information System (INIS)
D'Ambrosio, D.; Spataro, W.; Di Gregorio, S.; Calabria Univ., Cosenza; Crisci, G.M.; Rongo, R.; Calabria Univ., Cosenza
2005-01-01
Cellular Automata are highly nonlinear dynamical systems which are suitable for simulating natural phenomena whose behaviour may be specified in terms of local interactions. The Cellular Automata model SCIARA, developed for the simulation of lava flows, has been shown to reproduce the behaviour of Etnean events. However, in order to apply the model for the prediction of future scenarios, a thorough calibration phase is required. This work presents the application of Genetic Algorithms, general-purpose search algorithms inspired by natural selection and genetics, for the parameter optimisation of the SCIARA model. Difficulties due to the elevated computational time suggested the adoption of a Master-Slave Parallel Genetic Algorithm for the calibration of the model with respect to the 2001 Mt. Etna eruption. Results demonstrated the usefulness of the approach, both in terms of computing time and quality of the performed simulations.
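A master-slave parallel GA of the kind adopted here distributes only the fitness evaluations (one lava-flow simulation each) to worker processes, while a single master performs selection, crossover and mutation. The sketch below illustrates that structure with a toy fitness function standing in for a SCIARA run; the operator choices are illustrative.

```python
# Master-slave parallel GA sketch: the master evolves the population;
# fitness evaluations run in worker ("slave") processes.
import random
from concurrent.futures import ProcessPoolExecutor

N_PARAMS, POP, GENS = 6, 32, 20

def fitness(params):                     # placeholder for one CA simulation
    return -sum((p - 0.5) ** 2 for p in params)

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
            if random.random() < rate else p for p in ind]

if __name__ == "__main__":
    pop = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP)]
    with ProcessPoolExecutor() as pool:
        for gen in range(GENS):
            fits = list(pool.map(fitness, pop))   # parallel evaluation
            ranked = [ind for _, ind in sorted(zip(fits, pop), reverse=True)]
            elite = ranked[: POP // 2]            # truncation selection
            pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                           for _ in range(POP - len(elite))]
    print("best fitness:", max(map(fitness, pop)))
```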
Calibration of the heat balance model for prediction of car climate
Pokorný, Jan; Fišer, Jan; Jícha, Miroslav
2012-04-01
In this paper, the authors describe the development of a heat balance model to predict car cabin climate and heat load. The model is developed in the Modelica language using Dymola as the interpreter. It is a dynamical system that describes the heat exchange between the car cabin and the ambient environment. Inside the car cabin, heat exchange between the air zone, the interior and the air-conditioning system is considered. The model assumes 1D heat transfer with heat accumulation and accounts for the relative movement of the Sun with respect to the car cabin while the car is moving. Measurements under real operating conditions provided the data for model calibration. The model was calibrated for Škoda Felicia parking-summer scenarios.
Calibration of the heat balance model for prediction of car climate
Directory of Open Access Journals (Sweden)
Jícha Miroslav
2012-04-01
Full Text Available In this paper, the authors describe the development of a heat balance model to predict car cabin climate and heat load. The model is developed in the Modelica language using Dymola as the interpreter. It is a dynamical system that describes the heat exchange between the car cabin and the ambient environment. Inside the car cabin, heat exchange between the air zone, the interior and the air-conditioning system is considered. The model assumes 1D heat transfer with heat accumulation and accounts for the relative movement of the Sun with respect to the car cabin while the car is moving. Measurements under real operating conditions provided the data for model calibration. The model was calibrated for Škoda Felicia parking-summer scenarios.
Inverse modeling as a step in the calibration of the LBL-USGS site-scale model of Yucca Mountain
International Nuclear Information System (INIS)
Finsterle, S.; Bodvarsson, G.S.; Chen, G.
1995-05-01
Calibration of the LBL-USGS site-scale model of Yucca Mountain is initiated. Inverse modeling techniques are used to match the results of simplified submodels to the observed pressure, saturation, and temperature data. Hydrologic and thermal parameters are determined and compared to the values obtained from laboratory measurements and conventional field test analysis
Calibration of Linked Hydrodynamic and Water Quality Model for Santa Margarita Lagoon
2016-07-01
Technical Report 3015, July 2016: Calibration of Linked Hydrodynamic and Water Quality Model for Santa Margarita Lagoon, Final Report. Pei-Fang Wang, Chuck Katz, Ripan Barua, SSC Pacific, James ... The model was used to drive the transport and water quality kinetics for the simulation of 2007–2009. The sand berm, which controlled the opening/closure of ...
James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll
2003-01-01
This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
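In the simplest additive-error, single-covariate setting, the technique replaces the error-prone proxy by its best linear approximation to E[X|W] before fitting the GLM; a standard form (stated as background, not taken from the truncated text above) is

$$\hat{X} = \bar{W} + \frac{\hat{\sigma}_x^2}{\hat{\sigma}_x^2 + \hat{\sigma}_u^2}\,(W - \bar{W}),$$

where $W = X + U$ is the observed proxy, $\hat{\sigma}_u^2$ is the measurement error variance (estimated, for example, from replicate measurements), and the GLM is then fit with $\hat{X}$ in place of the unobserved $X$.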
DEFF Research Database (Denmark)
Aagaard Madsen, Helge; Larsen, Gunner Chr.; Larsen, Torben J.
2010-01-01
in an aeroelastic model. Calibration and validation of the different parts of the model are carried out by comparisons with actuator disk and actuator line (ACL) computations as well as with inflow measurements on a full-scale 2 MW turbine. It is shown that the load-generating part of the increased turbulence ... Finally, added turbulence characteristics are compared with correlation results from the literature. ©2010 American Society of Mechanical Engineers
Results for both sequential and simultaneous calibration of exchange flows between segments of a 10-box, one-dimensional, well-mixed, bifurcated tidal mixing model for Tampa Bay are reported. Calibrations were conducted for three model options with different mathematical expressi...
International Nuclear Information System (INIS)
Laundy, R.S.
1991-01-01
This report addresses the feasibility of the use of optimisation techniques to calibrate the models developed for the impact assessment of a radioactive waste repository. The maximum likelihood method for improving parameter estimates is considered in detail, and non-linear optimisation techniques for finding solutions are reviewed. Applications are described for the calibration of groundwater flow, radionuclide transport and biosphere models. (author)
Directory of Open Access Journals (Sweden)
D. A. Robinson
1998-01-01
Full Text Available Capacitance probes are a fast, safe and relatively inexpensive means of measuring the relative permittivity of soils, which can then be used to estimate soil water content. Initial experiments with capacitance probes used empirical calibrations between the frequency response of the instrument and soil water content. This has the disadvantage that the calibrations are instrument-dependent. A twofold calibration strategy is described in this paper: the instrument frequency is turned into relative permittivity (dielectric constant), which can then be calibrated against soil water content. This approach has the advantage of making the second calibration, from soil permittivity to soil water content, instrument-independent, and allows comparison with other dielectric methods, such as time domain reflectometry. A physically based model, used to calibrate capacitance probes in terms of relative permittivity (εr), is presented. The model, which was developed from circuit analysis, successfully predicts the frequency response of the instrument in liquids with different relative permittivities, using only measurements in air and water. It was used successfully to calibrate 10 prototype surface capacitance insertion probes (SCIPs) and a depth capacitance probe. The findings demonstrate that the geometry of the instrument electrodes was an important parameter in the model, the value of which could be fixed through measurement. The relationship between apparent soil permittivity and volumetric water content has been the subject of much research in the last 30 years. Two lines of investigation have developed: time domain reflectometry (TDR) and capacitance. Both methods claim to measure relative permittivity and should therefore be comparable. This paper demonstrates that the IH capacitance probe overestimates relative permittivity as the ionic conductivity of the medium increases. Electrically conducting ionic solutions were used to test the
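As an example of the instrument-independent second step (permittivity to water content), the widely used Topp et al. (1980) polynomial, which is the standard point of comparison rather than a result of this paper, can be applied directly:

```python
def topp_theta(eps_r: float) -> float:
    """Volumetric water content (m3/m3) from relative permittivity via the
    widely used Topp et al. (1980) polynomial; an instrument-independent
    permittivity-to-water-content calibration of the kind advocated above."""
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r ** 2 + 4.3e-6 * eps_r ** 3)

print(topp_theta(20.0))  # ~0.35 m3/m3 for eps_r = 20
```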
ANN-based calibration model of FTIR used in transformer online monitoring
Li, Honglei; Liu, Xian-yong; Zhou, Fangjie; Tan, Kexiong
2005-02-01
Recently, chromatography columns and gas sensors have been used in online monitoring devices for dissolved gases in transformer oil. However, some disadvantages still exist in these devices: consumption of carrier gas, the requirement for calibration, etc. Since FTIR has high accuracy, consumes no carrier gas and requires no calibration, the researchers studied the application of FTIR in such monitoring devices. "Flow gas method" experiments were designed, and the spectra of mixtures composed of different gases were collected with a BOMEM MB104 FTIR spectrometer. A key issue in the application of FTIR is that the absorbance spectra of three key fault gases, C2H4, CH4 and C2H6, overlap seriously at 2700~3400 cm-1. Because the absorbance law is no longer applicable, a nonlinear calibration model based on a BP ANN was set up for the quantitative analysis. The peak absorbance heights of C2H4, CH4 and C2H6 were adopted as quantitative features, and all the data were normalized before training the ANN. Computational results show that the calibration model can effectively eliminate the cross-disturbance in the measurement.
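To illustrate the kind of nonlinear calibration the abstract describes, the sketch below trains a small feed-forward network (a stand-in for the paper's BP ANN) to map overlapped absorbance features to the three gas concentrations; the response matrix and noise are synthetic assumptions.

```python
# Sketch of a nonlinear spectroscopic calibration: overlapped absorbances
# of C2H4/CH4/C2H6 -> concentrations. All data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
C = rng.uniform(0, 100, size=(300, 3))            # ppm of C2H4, CH4, C2H6
M = np.array([[1.0, 0.4, 0.3],                    # overlapping band response
              [0.3, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
A = (C @ M.T) * (1 + 0.002 * rng.normal(size=(300, 3)))  # mild noise

scaler = MinMaxScaler()                           # normalize, as in the paper
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(scaler.fit_transform(A), C)
print(net.predict(scaler.transform(A[:2])))       # recovered concentrations
```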
Calibration of a complex activated sludge model for the full-scale wastewater treatment plant.
Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw
2011-08-01
In this study, the results of the calibration of a complex activated sludge model implemented in BioWin software for a full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and of the fractions of carbonaceous substrate was performed. In the steady-state and dynamic calibrations, successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis revealed that, based on calculations of the normalized sensitivity coefficient S(i,j), 17 (steady-state) or 19 (dynamic conditions) kinetic and stoichiometric parameters are sensitive. Most of them are associated with the growth and decay of ordinary heterotrophic organisms and phosphorus-accumulating organisms. The rankings of the ten most sensitive parameters, established on the basis of calculations of the mean square sensitivity measure δ(msqr)j, indicate that irrespective of whether the steady-state or dynamic calibration was performed, the parameter sensitivities agree.
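For reference, the normalized sensitivity coefficient and the mean square sensitivity measure behind these rankings are commonly defined as

$$S_{i,j} = \frac{\partial y_i}{\partial x_j}\cdot\frac{x_j}{y_i}, \qquad \delta_j^{\mathrm{msqr}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} S_{i,j}^{2}},$$

where $y_i$ are the model outputs and $x_j$ the parameters; the exact normalization used in the paper may differ slightly.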
Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS: calibration and validation
Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.
2011-12-01
Sugarcane is currently the most efficient bioenergy crop with regard to the energy produced per hectare. With approximately half the global bioethanol production in 2005, and a devoted land area expected to expand globally in the years to come, sugarcane is at the heart of the biofuel debate. Dynamic global vegetation models coupled with agronomical models are powerful and novel tools to tackle many of the environmental issues related to biofuels, provided they are carefully calibrated and validated against field observations. Here we adapt the agro-terrestrial model ORCHIDEE-STICS for sugarcane simulations. Observed LAI data are used to evaluate the sensitivity of the model to parameters of nitrogen absorption and phenology, which are calibrated in a systematic way for six sites in Australia and La Reunion. We find that the optimal set of parameters is highly dependent on the sites' characteristics and that the model can satisfactorily reproduce the evolution of LAI. This careful calibration of ORCHIDEE-STICS for sugarcane biomass production at different locations and under different technical itineraries provides a strong basis for further analysis of the impacts of bioenergy-related land use change on carbon cycle budgets. As a next step, a sensitivity analysis is carried out to estimate the uncertainty of the model in biomass and carbon flux simulation due to its parameterization.
Newton, Adam J H; Wall, Mark J; Richardson, Magnus J E
2017-03-01
Microelectrode amperometric biosensors are widely used to measure concentrations of analytes in solution and tissue including acetylcholine, adenosine, glucose, and glutamate. A great deal of experimental and modeling effort has been directed at quantifying the response of the biosensors themselves; however, the influence that the macroscopic tissue environment has on biosensor response has not been subjected to the same level of scrutiny. Here we identify an important issue in the way microelectrode biosensors are calibrated that is likely to have led to underestimations of analyte tissue concentrations. Concentration in tissue is typically determined by comparing the biosensor signal to that measured in free-flow calibration conditions. In a free-flow environment the concentration of the analyte at the outer surface of the biosensor can be considered constant. However, in tissue the analyte reaches the biosensor surface by diffusion through the extracellular space. Because the enzymes in the biosensor break down the analyte, a density gradient is set up resulting in a significantly lower concentration of analyte near the biosensor surface. This effect is compounded by the diminished volume fraction (porosity) and reduction in the diffusion coefficient due to obstructions (tortuosity) in tissue. We demonstrate this effect through modeling and experimentally verify our predictions in diffusive environments. NEW & NOTEWORTHY Microelectrode biosensors are typically calibrated in a free-flow environment where the concentrations at the biosensor surface are constant. However, when in tissue, the analyte reaches the biosensor via diffusion and so analyte breakdown by the biosensor results in a concentration gradient and consequently a lower concentration around the biosensor. This effect means that naive free-flow calibration will underestimate tissue concentration. We develop mathematical models to better quantify the discrepancy between the calibration and tissue
Visualization of geometry and tally data using MCNP and Justine
International Nuclear Information System (INIS)
Cox, L.J.; Favorite, J.A.
1999-01-01
The Monte Carlo N-Particle (MCNP) transport code is a general-purpose code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for neutron-multiplying systems. The code treats an arbitrary three-dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and fourth-degree elliptical tori. Justine is the graphical user interface and problem setup tool for the Los Alamos Radiation Modeling Interactive Environment (LARAMIE). Its purpose is to serve as a convenient and very general interface for setting up physics calculations and linking together the disparate radiation transport codes under a single front-end. Currently, the LARAMIE system includes MCNP and the deterministic transport code suite DANTSYS (ONEDANT, TWODANT, and THREEDANT, for one-, two-, and three-dimensional geometries, respectively). Justine is currently available through the Radiation Safety Information Computational Center to members of the criticality safety community for evaluation and use. The authors will demonstrate the capabilities of both codes for visualization of geometries and results from a variety of criticality problems.
New calculations for critical assemblies using MCNP4B
International Nuclear Information System (INIS)
Adams, A.A.; Frankle, S.C.; Little, R.C.
1997-07-01
A suite of 41 criticality benchmarks has been modeled using MCNP (version 4B). Most of the assembly specifications were obtained from the Cross Section Evaluation Working Group (CSEWG) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) compendiums of experimental benchmarks. A few assembly specifications were obtained from experimental papers. The suite contains thermal and fast assemblies, bare and reflected assemblies, and emphasizes 233U, 235U, 238U, and 239Pu. The values of k_eff for each assembly in the suite were calculated using MCNP libraries derived primarily from release 2 of ENDF/B-V and release 2 of ENDF/B-VI. The results show that the new ENDF/B-VI.2 evaluations for H, O, N, B, 235U, 238U, and 239Pu can have a significant impact on the values of k_eff. In addition to the integral quantity k_eff, several additional experimental measurements were performed and documented. These experimental measurements include central fission and reaction-rate ratios for various isotopes, and neutron leakage and flux spectra. They provide more detailed information about the accuracy of the nuclear data than k_eff can. Comparison calculations were performed using both ENDF/B-V.2 and ENDF/B-VI.2-based data libraries. The purpose of this paper is to compare the results of these additional calculations with experimental data, and to use these results to assess the quality of the nuclear data.
The calibration of the MAST neutron yield monitors
International Nuclear Information System (INIS)
Stammers, Keith; Loughlin, M.J.
2006-01-01
Several neutron detectors have been installed on MAST to monitor the temporal production of neutrons during neutral beam injection. This paper describes the detectors, their calibration and applications of the data. The main neutron diagnostic is a guarded fission chamber, with processing electronics that allow data collection in three modes of operation, and covers the whole range of neutron production rates to be expected from current operations and future upgrades. The scalar mode of operation is calibrated with a 252Cf source inside the vacuum vessel, and MCNP modelling is then used to relate this calibration to an extended plasma source. Plasma neutron data are used to extend the calibration to the Campbell and ion-current modes, with final uncertainties of approximately 8% in each case. Corroborative evidence for the accuracy of the calibration, obtained from neutron activation, indicates that the method is satisfactory. The neutron data are used routinely to keep track of the radio-activation of key components of the MAST tokamak.
Development of visual platform of MCNP4B
International Nuclear Information System (INIS)
Fan Jiajin; Wang Yi; Cheng Jianping
2002-01-01
For convenience in using MCNP, the authors developed a new code named McnpClient. With a friendly man-machine interface, users can create input files very easily. If any error occurs during the run, McnpClient gives detailed "fatal error" or "bad trouble" messages. When the run is complete, all the data can be obtained and, at the same time, the curves associated with the data can be displayed.
Field Measurement and Calibration of HDM-4 Fuel Consumption Model on Interstate Highway in Florida
Directory of Open Access Journals (Sweden)
Xin Jiao
2015-03-01
Full Text Available Fuel consumption was measured by operating a passenger car and a tractor-trailer on two interstate roadway sites in Florida. Each site contains flexible pavement and rigid pavement with similar pavement, traffic and environmental conditions. The field test reveals that the average fuel consumption differences between vehicles operating on flexible pavement and rigid pavement under the given test conditions are 4.04% for the tractor-trailer and 2.50% for the passenger car, with a fuel saving on rigid pavement. The fuel consumption differences are statistically significant at the 95% confidence level for both vehicle types. The test data are then used to calibrate the Highway Development and Management IV (HDM-4) fuel consumption model, and model coefficients are obtained for three sets of observations. Field measurements and predictions by the calibrated model show generally good agreement. Nevertheless, verification and adjustment with more experiments or data sources would be expected in future studies.
Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L
2010-08-05
Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes, and increases with increasing depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
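A plausible three-parameter form of the single-target single-hit response with the named parameters (background, saturation, slope) is OD(D) = bg + sat(1 - exp(-slope·D)); the fit below uses SciPy on made-up readings, as the paper's exact parameterization and data are not reproduced here.

```python
# Hedged sketch: fitting a background/saturation/slope film-response model.
import numpy as np
from scipy.optimize import curve_fit

def single_hit(dose, bg, sat, slope):
    """Single-hit saturation response: optical density vs. dose (cGy)."""
    return bg + sat * (1.0 - np.exp(-slope * dose))

dose = np.array([16, 32, 64, 96, 128], dtype=float)   # cGy, example points
od = np.array([0.28, 0.50, 0.85, 1.08, 1.24])         # illustrative readings
params, _ = curve_fit(single_hit, dose, od, p0=[0.1, 1.5, 0.01])
print(dict(zip(["bg", "sat", "slope"], params)))
```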
International Nuclear Information System (INIS)
Cartier, J.; Casoli, P.; Chappert, F.
2013-01-01
In this paper, we present calibration methods for estimating the reactor neutron flux spectrum and its uncertainties using integral activation measurements. These techniques are carried out in a Bayesian and MCMC framework and are applied to integral activation experiments in the cavity of the CALIBAN reactor. We estimate the neutron flux and its related uncertainties. The originality of this work is that these uncertainties take into account measurement uncertainties, cross-section uncertainties and model error. In particular, our results give a very good approximation of the total flux and indicate that the neutron flux from the MCNP simulation for energies above about 5 MeV seems to overestimate the 'real' flux. (authors)
International Nuclear Information System (INIS)
Baggest, D.S.; Rothweil, D.A.; Pang, S.
1995-12-01
With the advent of more sophisticated techniques for control of tokamak plasmas comes the requirement for increasingly more accurate models of plasma processes and tokamak systems. Development of accurate models for DIII-D power systems, vessel, and poloidal coils is already complete, while work continues in development of general plasma response modeling techniques. Increased accuracy in estimates of parameters to be controlled is also required. It is important to ensure that errors in supporting systems such as diagnostic and command circuits do not limit the accuracy of plasma parameter estimates or inhibit the ability to derive accurate plasma/tokamak system models. To address this issue, we have developed more formal power systems change control and power system/magnetic diagnostics calibration procedures. This paper discusses our approach to consolidating the tasks in these closely related areas. This includes, for example, defining criteria for when diagnostics should be re-calibrated along with required calibration tolerances, and implementing methods for tracking power systems hardware modifications and the resultant changes to control models
Program for the Generation of MCNP Inputs from State Files of CAREM
International Nuclear Information System (INIS)
Leszczynski, Francisco; Lopasso, Edmundo; Villarino, E
2000-01-01
The objective of this work is the development and testing of detailed input data for the Monte Carlo program MCNP, to be able to model the core of the CAREM reactor with the detail included in the updated models, so as to have available a calculation system that allows producing reliable results to be compared with results obtained with the system used today for designing the CAREM reactor core (CONDOR-CITVAP). The model includes the possibility of varying coolant temperature and density, and fuel temperature and number densities. The detail consists of 21 different fuel elements (symmetry 3) and 14 axial zones. Results of comparisons of reactivity and power peaking factors between MCNP and CONDOR-CITVAP are presented. On average, these results show acceptable agreement for all the compared parameters. The CONDOR-CITVAP-MCNP interface program, developed to generate material inputs for MCNP from CONDOR and CITVAP outputs for different reactor states, is also described.
TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR
International Nuclear Information System (INIS)
Kurosawa, M.
2005-01-01
For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but has difficulties with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to give a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems, one in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method will be used in the same way as the DOT-DOMINO-MORSE code system. This TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data. (authors)
TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.
Kurosawa, Masahiko
2005-01-01
For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but has difficulties with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to give a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems, one in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method will be used in the same way as the DOT-DOMINO-MORSE code system. This TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data.
Optical modeling and polarization calibration for CMB measurements with ACTPol and Advanced ACTPol
Koopman, Brian; Austermann, Jason; Cho, Hsiao-Mei; Coughlin, Kevin P.; Duff, Shannon M.; Gallardo, Patricio A.; Hasselfield, Matthew; Henderson, Shawn W.; Ho, Shuay-Pwu Patty; Hubmayr, Johannes; Irwin, Kent D.; Li, Dale; McMahon, Jeff; Nati, Federico; Niemack, Michael D.; Newburgh, Laura; Page, Lyman A.; Salatino, Maria; Schillaci, Alessandro; Schmitt, Benjamin L.; Simon, Sara M.; Vavagiakis, Eve M.; Ward, Jonathan T.; Wollack, Edward J.
2016-07-01
The Atacama Cosmology Telescope Polarimeter (ACTPol) is a polarization-sensitive upgrade to the Atacama Cosmology Telescope, located at an elevation of 5190 m on Cerro Toco in Chile. ACTPol uses transition edge sensor bolometers coupled to orthomode transducers to measure both the temperature and polarization of the Cosmic Microwave Background (CMB). Calibration of the detector angles is a critical step in producing polarization maps of the CMB. Polarization angle offsets in the detector calibration can cause leakage in polarization from E to B modes and induce a spurious signal in the EB and TB cross-correlations, which eliminates our ability to measure potential cosmological sources of EB and TB signals, such as cosmic birefringence. We calibrate the ACTPol detector angles by ray tracing the designed detector angle through the entire optical chain to determine the projection of each detector angle on the sky. The distribution of calibrated detector polarization angles is consistent with a global offset angle from zero when compared to the EB-nulling offset angle, the angle required to null the EB cross-correlation power spectrum. We present the optical modeling process. The detector angles can be cross-checked through observations of known polarized sources, whether this be a galactic source or a laboratory reference standard. To cross-check the ACTPol detector angles, we use a thin film polarization grid placed in front of the receiver of the telescope, between the receiver and the secondary reflector. Making use of a rapidly rotating half-wave plate (HWP) mount, we spin the polarizing grid at a constant speed, polarizing and rotating the incoming atmospheric signal. The resulting sinusoidal signal is used to determine the detector angles. The optical modeling calibration was shown to be consistent with a global offset angle of zero when compared to EB nulling in the first ACTPol results and will continue to be a part of our calibration implementation. The first
Directory of Open Access Journals (Sweden)
Miguel A. Franesqui
2017-08-01
Full Text Available This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwave exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].
Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida
2017-08-01
This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwave exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].
Usability of Calibrating Monitor for Soft Proof According to CIE CAM02 Colour Appearance Model
Directory of Open Access Journals (Sweden)
Dragoljub Novakovic
2010-06-01
Full Text Available Colour appearance models describe viewing conditions and enable simulating the appearance of colours under different illuminants and illumination levels according to human perception. Since it is possible to predict how a colour will look when different illuminants are used, colour appearance models are incorporated in some monitor profiling software. Owing to such software, the tone reproduction curve can be defined taking into consideration the viewing conditions in which the display is observed. In this work, the use of the CIE CAM02 colour appearance model when calibrating an LCD monitor for soft proofing was assessed in order to determine which tone reproduction curve enables better colour reproduction. The luminance level was kept constant, whereas tone reproduction curves determined by gamma values and by parameters of the CIE CAM02 model were varied. Testing was conducted for the case where the physical print reference is observed under the illuminant with the colour temperature specified by the ISO standard for soft proofing (D50) and also under illuminant D65. Based on the assessment of the calibrations, subjective and objective assessment of the created profiles, and a perceptual test carried out with human observers, differences in image display were identified and conclusions were reached on the adequacy of CAM02 use in monitor calibration for each of the viewing conditions.
Calibration plots for risk prediction models in the presence of competing risks.
Gerds, Thomas A; Andersen, Per K; Kattan, Michael W
2014-08-15
A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test if a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest neighborhood smoother and a cross-validation approach to deal with all three problems. Copyright © 2014 John Wiley & Sons, Ltd.
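The jackknife pseudo-values on which the proposed calibration curves are built take the standard form (stated here for the cause-specific cumulative incidence; the smoothing and cross-validation steps are described in the paper):

$$\hat{\theta}_i(t) = n\,\hat{F}_k(t) - (n-1)\,\hat{F}_k^{(-i)}(t),$$

where $\hat{F}_k$ is the Aalen-Johansen estimate of the cumulative incidence of cause $k$ computed from all $n$ subjects and $\hat{F}_k^{(-i)}$ the same estimate with subject $i$ left out; the pseudo-values can then be plotted or smoothed against the predicted risks even under right censoring.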
Directory of Open Access Journals (Sweden)
K. Ichii
2010-07-01
Full Text Available Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.
Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.
2010-07-01
Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.
Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...
Directory of Open Access Journals (Sweden)
S. Wang
2012-12-01
Full Text Available Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available on model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model MIKE SHE to contrast a lumped calibration protocol, which used streamflow measured at a single watershed outlet, with a multi-site calibration method, which employed streamflow measurements at three stations within the large Chaohe River basin in northern China. Simulation results showed that the single-site calibrated model was able to sufficiently simulate the hydrographs for two of the three stations (Nash-Sutcliffe coefficient of 0.65-0.75 and correlation coefficient 0.81-0.87 during the testing period), but the model performed poorly for the third station (Nash-Sutcliffe coefficient only 0.44). Sensitivity analysis suggested that streamflow in the upstream area of the watershed is dominated by slow groundwater, whilst streamflow in the middle and downstream areas is dominated by relatively quick interflow. Therefore, a multi-site calibration protocol was deemed necessary. Due to the potential errors and uncertainties in the representation of spatial variability, performance measures from the multi-site calibration protocol decreased slightly for two of the three stations, whereas they improved greatly for the third station. We conclude that the multi-site calibration protocol reached a compromise in terms of model performance for the three stations, reasonably representing the hydrographs of all three stations with Nash-Sutcliffe coefficients ranging from 0.59 to 0.72. The multi-site calibration protocol generally has advantages over the single-site calibration protocol.
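The Nash-Sutcliffe coefficient quoted throughout is straightforward to compute; a minimal helper (illustrative, not taken from the paper):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations;
    1.0 is a perfect fit, 0.0 is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(nash_sutcliffe([3, 5, 9, 4], [2.8, 5.6, 8.4, 4.3]))  # ~0.96
```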
Development of interface between MCNP-FISPACT-MCNP (IPR-MFM) based on rigorous two step method
International Nuclear Information System (INIS)
Shaw, A.K.; Swami, H.L.; Danani, C.
2015-01-01
In this work we present the development of an interface tool between MCNP-FISPACT-MCNP (MFM) based on the Rigorous Two Step method for shutdown dose rate (SDDR) calculation. The MFM links the MCNP radiation transport code and the FISPACT inventory code through a coupling scheme with three steps. In the first step, it picks the neutron spectrum and total flux from the MCNP output file for use as input parameters for FISPACT. In the second step, it prepares the FISPACT input files using the irradiation history, neutron flux and neutron spectrum, and then executes them. The third step extracts the decay gammas from the FISPACT output file and prepares the MCNP input file for decay-gamma transport, followed by execution of this input file and estimation of the SDDR. The MFM methodology and flow scheme are described in detail. The Python programming language was chosen for the development of the coupling scheme. A complete MCNP-FISPACT-MCNP loop has been developed to handle simplified geometrical problems. For validation of the MFM interface, a manual cross-check was performed, which shows good agreement. The MFM interface has also been validated against the existing MCNP-D1S method for a simple geometry with a 14 MeV cylindrical neutron source. (author)
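The three-step flow described above maps naturally onto a small driver script. The sketch below mirrors that structure with placeholder parsers and synthetic data; none of the function names, file names or formats are from the actual IPR-MFM code.

```python
# Schematic of the three-step MCNP-FISPACT-MCNP (rigorous two-step) loop.
# Parsers and values are simplified placeholders, not the IPR-MFM code.

def read_flux_and_spectrum(mcnp_output: str):
    """Step 1: extract neutron spectrum and total flux from an MCNP tally
    (placeholder returning a flat 5-group spectrum)."""
    return [0.2] * 5, 1.0e14             # spectrum (normalized), flux n/cm2/s

def write_fispact_input(spectrum, flux, history: str) -> str:
    """Step 2: assemble a FISPACT input from spectrum, flux and the
    irradiation history (placeholder text format)."""
    return f"FLUX {flux:.3e}\nHISTORY {history}\nSPECTRUM {spectrum}\n"

def read_decay_gammas(fispact_output: str):
    """Step 3: pull the decay-gamma source out of the FISPACT output
    (placeholder group-wise emission rates, photons/s)."""
    return [1.0e9, 5.0e8, 1.0e8]

spec, flux = read_flux_and_spectrum("neutron.out")
fisp_inp = write_fispact_input(spec, flux, history="1 y irradiation, 1 d cooling")
gammas = read_decay_gammas("fispact.out")
print("decay-gamma source for the second MCNP run:", gammas)
# ...followed by writing the gamma-transport MCNP input and tallying SDDR.
```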
Nikzad-Langerodi, Ramin; Lughofer, Edwin; Cernuda, Carlos; Reischer, Thomas; Kantner, Wolfgang; Pawliczek, Marcin; Brandstetter, Markus
2018-07-12
The physico-chemical properties of Melamine Formaldehyde (MF) based thermosets are largely influenced by the degree of polymerization (DP) in the underlying resin. On-line supervision of the turbidity point by means of vibrational spectroscopy has recently emerged as a promising technique to monitor the DP of MF resins. However, spectroscopic determination of the DP relies on chemometric models, which are usually sensitive to drifts caused by instrumental and/or sample-associated changes occurring over time. In order to detect the time point when drifts start causing prediction bias, we here explore a universal drift detector based on a faded version of the Page-Hinkley (PH) statistic, which we test in three data streams from an industrial MF resin production process. We employ committee disagreement (CD), computed as the variance of model predictions from an ensemble of partial least squares (PLS) models, as a measure for sample-wise prediction uncertainty and use the PH statistic to detect changes in this quantity. We further explore supervised and unsupervised strategies for (semi-)automatic model adaptation upon detection of a drift. For the former, manual reference measurements are requested whenever statistical thresholds on Hotelling's T² and/or Q-Residuals are violated. Models are subsequently re-calibrated using weighted partial least squares in order to increase the influence of newer samples, which increases the flexibility when adapting to new (drifted) states. Unsupervised model adaptation is carried out exploiting the dual antecedent-consequent structure of a recently developed fuzzy systems variant of PLS termed FLEXFIS-PLS. In particular, antecedent parts are updated while maintaining the internal structure of the local linear predictors (i.e. the consequents). We found improved drift detection capability of the CD compared to Hotelling's T² and Q-Residuals when used in combination with the proposed PH test. Furthermore, we found that active
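A faded PH detector is compact; the sketch below applies one to a synthetic stream of CD values, with the fading factor, drift magnitude delta and alarm threshold chosen purely for illustration (the paper's tuning is not reproduced).

```python
# Minimal faded Page-Hinkley test on a stream of committee-disagreement
# values; all constants are illustrative.
def page_hinkley(stream, delta=0.005, threshold=0.05, fade=0.999):
    mean, cum, cum_min = 0.0, 0.0, 0.0
    for t, x in enumerate(stream, start=1):
        mean += (x - mean) / t                 # running mean of the stream
        cum = fade * cum + (x - mean - delta)  # faded cumulative deviation
        cum_min = min(cum_min, cum)
        if cum - cum_min > threshold:          # sustained upward shift
            return t                           # drift alarm at sample t
    return None                                # no drift detected

stable = [0.01] * 200                          # CD in the drift-free regime
drifted = [0.01 + 0.002 * i for i in range(100)]   # CD starts growing
print(page_hinkley(stable + drifted))
```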
Calibration of the k-ε model constants for use in CFD applications
Glover, Nina; Guillias, Serge; Malki-Epshtein, Liora
2011-11-01
The k-ε turbulence model is a popular choice in CFD modelling due to its robust nature and the fact that it has been well validated. However, it has been noted in previous research that the k-ε model has problems predicting flow separation as well as unconfined and transient flows. The model contains five empirical constants whose values were found through data fitting for a wide range of flows (Launder 1972), but ad-hoc adjustments are often made to these values depending on the situation being modeled. Here we use the example of flow within a regular street canyon to perform a Bayesian calibration of the model constants against wind tunnel data. This allows us to assess the sensitivity of the CFD model to changes in these constants, find the most suitable values for the constants, and quantify the uncertainty related to the constants and to the CFD model as a whole.
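For reference, the five constants in question enter through the eddy viscosity and the k and ε transport equations, and their commonly quoted standard values (attributed to Launder and co-workers) are

$$\nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad (C_\mu,\ C_{1\varepsilon},\ C_{2\varepsilon},\ \sigma_k,\ \sigma_\varepsilon) = (0.09,\ 1.44,\ 1.92,\ 1.0,\ 1.3);$$

the Bayesian calibration described above treats these as uncertain inputs rather than fixed values.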
Regional Calibration of SCS-CN L-THIA Model: Application for Ungauged Basins
Jeon, Ji-Hong; Lim, Kyoung; Engel, Bernard
2014-01-01
Estimating surface runoff for ungauged watershed is an important issue. The Soil Conservation Service Curve Number (SCS-CN) method developed from long-term experimental data is widely used to estimate surface runoff from gaged or ungauged watersheds. Many modelers have used the documented SCS-CN parameters without calibration, sometimes resulting in significant errors in estimating surface runoff. Several methods for regionalization of SCS-CN parameters were evaluated. The regionalization met...
(Pre-) calibration of a Reduced Complexity Model of the Antarctic Contribution to Sea-level Changes
Ruckert, K. L.; Guan, Y.; Shaffer, G.; Forest, C. E.; Keller, K.
2015-12-01
Understanding and projecting future sea-level changes poses nontrivial challenges. Sea-level changes are driven primarily by changes in the density of seawater as well as changes in the size of glaciers and ice sheets. Previous studies have demonstrated that a key source of uncertainties surrounding sea-level projections is the response of the Antarctic ice sheet to warming temperatures. Here we calibrate a previously published and relatively simple model of the Antarctic ice sheet over a hindcast period from the last interglacial period to the present. We apply and compare a range of (pre-) calibration methods, including a Bayesian approach that accounts for heteroskedasticity. We compare the model hindcasts and projections for different levels of model complexity and calibration methods. We compare the projections with the upper bounds from previous studies and find our projections have a narrower range in 2100. Furthermore, we discuss the implications for the design of climate risk management strategies.