Updated thermal model using simplified short-wave radiosity calculations
International Nuclear Information System (INIS)
Smith, J.A.; Goltz, S.M.
1994-01-01
An extension to a forest canopy thermal radiance model is described that computes the short-wave energy flux absorbed within the canopy by solving simplified radiosity equations describing flux transfers between canopy ensemble classes partitioned by vegetation layer and leaf slope. Integrated short-wave reflectance and transmittance factors obtained from measured leaf optical properties were found to be nearly equal for the canopy studied. Short-wave view factor matrices were approximated by combining the average leaf scattering coefficient with the long-wave view factor matrices already incorporated in the model. Both the updated and original models were evaluated for a dense spruce-fir forest study site in Central Maine. Canopy short-wave absorption coefficients estimated from detailed Monte Carlo ray tracing calculations were 0.60, 0.04, and 0.03 for the top, middle, and lower canopy layers, corresponding to leaf area indices of 4.0, 1.05, and 0.25. The simplified radiosity technique yielded analogous absorption values of 0.55, 0.03, and 0.01. The resulting root mean square error in modeled versus measured canopy temperatures for all layers was less than 1°C with either technique. Maximum error in predicted temperature using the simplified radiosity technique was approximately 2°C during peak solar heating. (author)
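The layered absorption scheme the abstract describes can be sketched as a small radiosity solve. Everything below is an invented placeholder, not the paper's data: the scattering coefficient, the long-wave view factor matrix, and the source terms are round numbers chosen only to show the mechanics of B = E + ωFB.

```python
import numpy as np

# Hypothetical 3-layer canopy sketch (not the authors' code): short-wave
# view factors are approximated by scaling the long-wave view factors with
# an average leaf scattering coefficient omega, then the radiosity balance
# B = E + omega * F @ B is solved for the flux leaving each layer.
omega = 0.13                       # assumed mean leaf scattering coefficient
F_lw = np.array([[0.0, 0.3, 0.1],  # assumed long-wave view factor matrix
                 [0.3, 0.0, 0.3],
                 [0.1, 0.3, 0.0]])
E = np.array([1.0, 0.2, 0.05])     # assumed incident short-wave source per layer

B = np.linalg.solve(np.eye(3) - omega * F_lw, E)  # radiosity of each layer
absorbed = (1.0 - omega) * B                      # flux absorbed per layer
```

As in the abstract, absorption is concentrated in the top layer, since the lower layers only receive what the upper ensemble classes scatter downward.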
The Updated BaSTI Stellar Evolution Models and Isochrones. I. Solar-scaled Calculations
Hidalgo, Sebastian L.; Pietrinferni, Adriano; Cassisi, Santi; Salaris, Maurizio; Mucciarelli, Alessio; Savino, Alessandro; Aparicio, Antonio; Silva Aguirre, Victor; Verma, Kuldeep
2018-04-01
We present an updated release of the BaSTI (a Bag of Stellar Tracks and Isochrones) stellar model and isochrone library for a solar-scaled heavy element distribution. The main input physics that have been changed from the previous BaSTI release include the solar metal mixture, electron conduction opacities, a few nuclear reaction rates, bolometric corrections, and the treatment of the overshooting efficiency for shrinking convective cores. The new model calculations cover a mass range between 0.1 and 15 M⊙, 22 initial chemical compositions between [Fe/H] = −3.20 and +0.45, with helium to metal enrichment ratio dY/dZ = 1.31. The isochrones cover an age range between 20 Myr and 14.5 Gyr, consistently take into account the pre-main-sequence phase, and have been translated to a large number of popular photometric systems. Asteroseismic properties of the theoretical models have also been calculated. We compare our isochrones with results from independent databases and with several sets of observations to test the accuracy of the calculations. All stellar evolution tracks, asteroseismic properties, and isochrones are made available through a dedicated web site.
ATHENA calculation model for the ITER-FEAT divertor cooling system. Final report with updates
International Nuclear Information System (INIS)
Eriksson, John; Sjoeberg, A.; Sponton, L.L.
2001-05-01
An ATHENA model of the ITER-FEAT divertor cooling system has been developed for the purpose of calculating and evaluating the consequences of different thermal-hydraulic accidents as specified in the Accident Analysis Specifications for the ITER-FEAT Generic Site Safety Report. The model is able to assess situations for a variety of conceivable operational transients, from small flow disturbances to more critical conditions such as a total blackout caused by a loss of offsite and emergency power. The main objective of analyzing these scenarios is to determine the margins against jeopardizing the integrity of the divertor cooling system components and piping. The model of the divertor primary heat transport system encompasses the divertor cassettes, the port limiter systems, the pressurizer, the heat exchanger, and all feed and return pipes of these components. The development was pursued according to the practices and procedures outlined in the ATHENA code manuals, using available modelling components such as volumes, junctions, heat structures and process controls.
International Nuclear Information System (INIS)
Hongo, Shozo; Yamaguchi, Hiroshi; Takeshita, Hiroshi; Iwai, Satoshi.
1994-01-01
A computer program named IDES was developed in BASIC for a personal computer and translated into C for an engineering workstation. IDES carries out the internal dose calculations described in ICRP Publication 30 and implements the transformation method, an empirical method for estimating absorbed fractions for physiques differing from the ICRP Reference Man. The program consists of three tasks: production of SAFs (specific absorbed fractions) for Japanese subjects, including children; production of SEEs (specific effective energies); and calculation of effective dose equivalents. Each task and its corresponding data file are implemented as a module so as to accommodate future revisions of the related data. The usefulness of IDES is demonstrated with the case of five Japanese age groups orally ingesting Co-60 or Mn-54. (author)
International Nuclear Information System (INIS)
Eckerman, K.F.; Watson, S.B.; Nelson, C.B.; Nelson, D.R.; Richardson, A.C.B.; Sullivan, R.E.
1984-12-01
This report presents revised values for the radioactivity concentration guides (RCGs), based on the 1960 primary radiation protection guides (RPGs) for occupational exposure (FRC 1960) and for underground uranium miners (EPA 1971a), using the updated dosimetric models developed to prepare ICRP Publication 30. Unlike the derived quantities presented in Publication 30, which are based on limitation of the weighted sum of doses to all irradiated tissues, these RCGs are based on the "critical organ" approach of the 1960 guidance, which applied a single limit to the most critically irradiated organ or tissue. This report provides revised guides for the 1960 Federal guidance which are consistent with current dosimetric relationships. 2 figs., 4 tabs.
Update on Light-Ion Calculations
International Nuclear Information System (INIS)
Schultz, David R.
2013-01-01
During the time span of the CRP, calculations were (1) initiated extending previous work regarding elastic and transport cross sections relevant to light-species impurity-ion transport modeling, (2) completed for total and state-selective charge transfer (C^5+, N^6+, O^6+, O^7+ + H; C^5+, C^6+, O^7+, O^8+ + He; and C^6+ + H, H_2) for diagnostics such as charge exchange recombination spectroscopy, and (3) completed for excitation of atomic hydrogen by ion impact (H^+, He^2+, Be^4+, C^6+) for diagnostics including beam emission spectroscopy and motional Stark effect spectroscopy. The first calculations undertaken were to continue work begun more than a decade ago providing plasma modelers with elastic total and differential cross sections, and related transport cross sections, used to model the transport of hydrogen ions, atoms, and molecules as well as other species including intrinsic and extrinsic impurities. This body of work was reviewed in the course of reporting new calculations in a recent paper (P.S. Krstic and D.R. Schultz, Physics of Plasmas 16, 053503 (2009)). After initial calculations for H^+ + O were completed, work was discontinued in light of other priorities. Charge transfer data for diagnostics provide important knowledge about the state of the plasma from the edge to the core and are therefore of significant interest to continually evaluate and improve. Further motivation for such calculations comes from recent and ongoing benchmark measurements of the total charge transfer cross section being made at Oak Ridge National Laboratory by C.C. Havener and collaborators. We have undertaken calculations using a variety of theoretical approaches, each applicable within a range of impact energies, that have led to the creation of a database of recommended state-selective and total cross sections composed of results from the various methods (MOCC, AOCC, CTMC, results from the literature) within their overlapping ranges of applicability.
Updates to In-Line Calculation of Photolysis Rates
How photolysis rates are calculated affects the ozone and aerosol concentrations predicted by the CMAQ model and the model's run-time. The standard configuration of CMAQ uses the inline option that calculates photolysis rates by solving the radiative transfer equation for the needed ...
SAM Photovoltaic Model Technical Reference 2016 Update
Energy Technology Data Exchange (ETDEWEB)
Gilman, Paul [National Renewable Energy Laboratory (NREL), Golden, CO (United States); DiOrio, Nicholas A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Freeman, Janine M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Janzou, Steven [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dobos, Aron [No longer NREL employee]; Ryberg, David [No longer NREL employee]
2018-03-19
This manual describes the photovoltaic performance model in the System Advisor Model (SAM) software, Version 2016.3.14 Revision 4 (SSC Version 160). It is an update to the 2015 edition of the manual, which describes the photovoltaic model in SAM 2015.1.30 (SSC 41). This new edition includes corrections of errors in the 2015 edition and descriptions of new features introduced in SAM 2016.3.14, including: a 3D shade calculator; a battery storage model; DC power optimizer loss inputs; a snow loss model; a plane-of-array irradiance input option from the weather file; support for sub-hourly simulations; self-shading that works with all four subarrays and uses the same algorithm for fixed arrays and one-axis tracking; a linear self-shading algorithm for thin-film modules; and loss percentages that replace derate factors. The photovoltaic performance model is one of the modules in the SAM Simulation Core (SSC), which is part of both SAM and the SAM SDK. SAM is a user-friendly desktop application for analysis of renewable energy projects. The SAM SDK (Software Development Kit) is for developers writing their own renewable energy analysis software based on SSC. This manual is written for users of both SAM and the SAM SDK who want to learn more about the details of SAM's photovoltaic model.
Equilibrium fission model calculations
International Nuclear Information System (INIS)
Beckerman, M.; Blann, M.
1976-01-01
In order to aid in understanding the systematics of heavy-ion fission and fission-like reactions in terms of the target-projectile system, bombarding energy, and angular momentum, fission widths are calculated using an angular-momentum-dependent extension of the Bohr-Wheeler theory, and particle emission widths using angular momentum coupling.
MARMOT update for oxide fuel modeling
Energy Technology Data Exchange (ETDEWEB)
Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schwen, Daniel [Idaho National Lab. (INL), Idaho Falls, ID (United States); Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, Chao [Idaho National Lab. (INL), Idaho Falls, ID (United States); Aagesen, Larry [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ahmed, Karim [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, Wen [Idaho National Lab. (INL), Idaho Falls, ID (United States); Biner, Bulent [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bai, Xianming [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Tonks, Michael [Pennsylvania State Univ., University Park, PA (United States); Millett, Paul [Univ. of Arkansas, Fayetteville, AR (United States)
2016-09-01
This report summarizes the lower-length-scale research and development progress in FY16 at Idaho National Laboratory in developing mechanistic materials models for oxide fuels, in parallel with the development of the MARMOT code, which will be summarized in a separate report. This effort is a critical component of the microstructure-based fuel performance modeling approach supported by the Fuels Product Line in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. The progress can be classified into three categories: 1) development of materials models to be used in engineering-scale fuel performance modeling regarding the effect of lattice defects on thermal conductivity, 2) development of modeling capabilities for mesoscale fuel behaviors including stage-3 gas release, grain growth, high-burnup structure, fracture and creep, and 3) improved understanding of materials science gained by calculating the anisotropic grain boundary energies in UO2 and obtaining thermodynamic data for solid fission products. Many of these topics are still under active development; they are covered in the report in appropriate detail. For some topics, separate reports have been generated in parallel, as stated in the text. These accomplishments have led to a better understanding of fuel behaviors and enhanced the capability of the MOOSE-BISON-MARMOT toolkit.
Model parameter updating using Bayesian networks
International Nuclear Information System (INIS)
Treml, C.A.; Ross, Timothy J.
2004-01-01
This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to that of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty, only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled cavity linac. The results are compared to experimental data.
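The adaptation loop described above, where each model execution's input/output pair refines the conditional probabilities of the Bayesian network, can be sketched with a simple count-based table. All names, bins, and the toy data here are hypothetical, not the authors' implementation:

```python
from collections import defaultdict

# Sketch: counts of (parameter bin, output-feature bin) pairs adapt the
# conditional probabilities P(output | parameter); Bayes' rule with a
# uniform prior then infers which parameter bin best explains the model
# reference (the experimentally observed feature).
counts = defaultdict(lambda: defaultdict(int))

def record_run(param_bin, output_bin):
    counts[param_bin][output_bin] += 1

def p_output_given_param(output_bin, param_bin):
    total = sum(counts[param_bin].values())
    return counts[param_bin][output_bin] / total if total else 0.0

def infer_param(reference_bin):
    # uniform prior over parameter bins; return the MAP parameter bin
    return max(counts, key=lambda p: p_output_given_param(reference_bin, p))

# toy usage: the "high" parameter bin reproduces the measured feature most often
for pb, ob in [("low", "miss"), ("low", "miss"), ("high", "hit"),
               ("high", "hit"), ("high", "miss")]:
    record_run(pb, ob)
```

Each further iteration refines the table, mirroring the paper's point that repeated executions progressively sharpen the inferred parameters.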
A Provenance Tracking Model for Data Updates
Directory of Open Access Journals (Sweden)
Gabriel Ciobanu
2012-08-01
For data-centric systems, provenance tracking is particularly important when the system is open and decentralised, such as the Web of Linked Data. In this paper, a concise but expressive calculus which models data updates is presented. The calculus is used to provide an operational semantics for a system where data and updates interact concurrently. The operational semantics of the calculus also tracks the provenance of data with respect to updates. This provides a new formal semantics extending provenance diagrams which takes into account the execution of processes in a concurrent setting. Moreover, a sound and complete model for the calculus based on ideals of series-parallel DAGs is provided. The notion of provenance introduced can be used as a subjective indicator of the quality of data in concurrent interacting systems.
Updated Collisional Ionization Equilibrium Calculated for Optically Thin Plasmas
Savin, Daniel Wolf; Bryans, P.; Badnell, N. R.; Gorczyca, T. W.; Laming, J. M.; Mitthumsiri, W.
2010-03-01
Reliably interpreting spectra from electron-ionized cosmic plasmas requires accurate ionization balance calculations for the plasma in question. However, much of the atomic data needed for these calculations have not been generated using modern theoretical methods, and their reliability is often highly suspect. We have carried out state-of-the-art calculations of dielectronic recombination (DR) rate coefficients for the hydrogenic through Na-like ions of all elements from He to Zn as well as for Al-like to Ar-like ions of Fe. We have also carried out state-of-the-art radiative recombination (RR) rate coefficient calculations for the bare through Na-like ions of all elements from H to Zn. Using our data and the recommended electron impact ionization data of Dere (2007), we present improved collisional ionization equilibrium calculations (Bryans et al. 2006, 2009). We compare our calculated fractional ionic abundances using these data with those presented by Mazzotta et al. (1998) for all elements from H to Ni. This work is supported in part by the NASA APRA and SHP SR&T programs.
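The ionization balance underlying such calculations can be illustrated with a toy equilibrium solve: in collisional ionization equilibrium each adjacent pair of stages satisfies n_i S_i = n_(i+1) α_(i+1), so fractional abundances follow from the rate-coefficient ratios. The rate coefficients below are invented round numbers, not the computed DR/RR data:

```python
import numpy as np

# Illustrative sketch, not the authors' data: S[i] is the electron-impact
# ionization rate coefficient out of stage i, alpha[i] the total (RR + DR)
# recombination rate coefficient into stage i from stage i+1. Equilibrium
# gives n_(i+1)/n_i = S_i / alpha_(i+1); cumulative products plus
# normalization yield the fractional ionic abundances.
S = np.array([1e-8, 4e-9, 5e-10])        # assumed ionization rates, stages 0..2
alpha = np.array([2e-10, 8e-10, 3e-9])   # assumed recombination rates, stages 1..3

ratios = S / alpha                        # n_(i+1) / n_i for each stage pair
n = np.concatenate(([1.0], np.cumprod(ratios)))
fractions = n / n.sum()                   # normalized ionic abundance fractions
```

Improving any S or α (e.g. with modern DR data) shifts these ratios directly, which is why updated rate coefficients change the recommended charge-state distributions.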
Updating of states in operational hydrological models
Bruland, O.; Kolberg, S.; Engeland, K.; Gragne, A. S.; Liston, G.; Sand, K.; Tøfte, L.; Alfredsen, K.
2012-04-01
Operationally, the main purpose of hydrological models is to provide runoff forecasts. The quality of the model state and the accuracy of the weather forecast, together with the model quality, define the runoff forecast quality. Input and model errors accumulate over time and may leave the model in a poor state. Usually model states can be related to observable conditions in the catchment. Updating these states, knowing their relation to observable catchment conditions, directly influences the forecast quality. Norway is internationally at the forefront of hydropower scheduling on both short and long terms. The inflow forecasts are fundamental to this scheduling. Their quality directly influences the producers' profit as they optimize hydropower production to market demand while minimizing spill of water and maximizing available hydraulic head. The quality of the inflow forecasts strongly depends on the quality of the models applied and the quality of the information they use. In this project the focus has been on improving the quality of the model states upon which the forecast is based. Runoff and snow storage are two observable quantities that reflect the model state and are used in this project for updating. Generally, the methods used can be divided into three groups: the first re-estimates the forcing data in the updating period; the second alters the weights in the forecast ensemble; and the third directly changes the model states. The uncertainty related to the forcing data through the updating period is due both to uncertainty in the actual observations and to how well the gauging stations represent the catchment with respect to temperature and precipitation. The project looks at methodologies that automatically re-estimate the forcing data and tests the results against the observed response. Model uncertainty is reflected in a joint distribution of model parameters estimated using the DREAM algorithm.
Adjustment or updating of models
Indian Academy of Sciences (India)
25, Part 3, June 2000, pp. 235-245 ... While the model is defined in terms of these spatial parameters, ... discussed in terms of `model order' with concern focused on whether or not the ..... In other words, it is not easy to justify what the required.
Model validation: Correlation for updating
Indian Academy of Sciences (India)
In this paper, a review is presented of the various methods which ... to make a direct and objective comparison of specific dynamic properties, measured ..... stiffness matrix is available from the analytical model, is that of reducing or condensing.
An updated pH calculation tool for new challenges
Energy Technology Data Exchange (ETDEWEB)
Crolet, J.L. [Consultant, 36 Chemin Mirassou, 64140 Lons (France)
2004-07-01
The time evolution of the in-situ pH concept is summarised, as well as the past and present challenges of pH calculations. Since the beginning of such calculations on spreadsheets, the tremendous progress in computer technology has progressively removed all of the past limitations. On the other hand, the development of artificial acetate buffering in standardized and non-standardized corrosion testing has raised quite a few new questions. In particular, a straightforward precautionary principle now requires limiting all that is artificial to situations where it is really necessary and, consequently, seriously considering the possibility of periodic pH readjustment as an alternative to useless or excessive artificial buffering, including in the case of an over-acidification at ambient pressure through HCl addition only (e.g. SSC testing of martensitic stainless steels). These new challenges require a genuine 'pH engineering' for the design of corrosion testing protocols under CO2 and H2S partial pressures, at ambient pressure or in an autoclave. To this aim, not only must a great many detailed pH data be automatically delivered to unskilled users, but this must be done in an experimental context which is most often new and much more complicated than before: e.g. pH adjustment of artificial buffers before saturation in the test gas and further pH evolution under acid gas pressure (pH shift before test beginning), and anticipation of the pH readjustment frequency from just a volume/surface ratio and an expected corrosion rate (pH drift during the test). Furthermore, in order to be really useful and reliable, such numerous pH data also have to be well understood. Therefore, their origin, significance and parametric sensitivity are backed up and explained through three self-explanatory graphical illustrations: 1. an 'anion - pH' nomogram shows the pH dependence of all the variable ions, H+, HCO3-, HS-, Ac- (and
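As a rough illustration of the kind of pH calculation discussed (not the author's tool), the pH of CO2-saturated water can be obtained from Henry's law and a proton balance solved by bisection. The equilibrium constants are assumed textbook values at 25 °C, and carbonate, H2S, and acetate species are deliberately ignored to keep the sketch minimal:

```python
# Minimal sketch of a CO2-saturated water pH calculation. Henry's law gives
# dissolved CO2, and the proton balance H+ = HCO3- + OH- (carbonate and
# other anions neglected) is solved for pH by bisection.
KH = 3.3e-2    # assumed Henry constant, mol/L/bar
K1 = 4.45e-7   # assumed first acidity constant of carbonic acid
KW = 1.0e-14   # water autoprotolysis

def ph_co2(p_co2_bar):
    co2 = KH * p_co2_bar                 # dissolved CO2 (mol/L)
    def balance(ph):
        h = 10.0 ** -ph
        return h - K1 * co2 / h - KW / h  # H+ minus (HCO3- + OH-)
    lo, hi = 2.0, 8.0                    # pH bracket for the root
    for _ in range(60):                  # bisection
        mid = 0.5 * (lo + hi)
        if balance(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Under these assumed constants, 1 bar of CO2 gives a pH near 3.9, and lowering the partial pressure raises the pH, which is the qualitative behaviour the 'anion - pH' nomogram visualises.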
Robot Visual Tracking via Incremental Self-Updating of Appearance Model
Directory of Open Access Journals (Sweden)
Danpei Zhao
2013-09-01
This paper proposes a target tracking method called Incremental Self-Updating Visual Tracking for robot platforms. Our tracker treats the tracking problem as a binary classification: the target and the background. Greyscale, HOG and LBP features are used in this work to represent the target and are integrated into a particle filter framework. To track the target over long time sequences, the tracker has to update its model to follow the most recent target. In order to deal with the problems of wasted calculation and the lack of a model-updating strategy in traditional methods, an intelligent and effective online self-updating strategy is devised to choose the optimal update opportunity. The appearance model is updated based on the change in discriminative capability between the current frame and the previously updated frame. By adjusting the update step adaptively, severe waste of calculation time on needless updates can be avoided while keeping the model stable. Moreover, the appearance model can be kept away from serious drift problems when the target undergoes temporary occlusion. The experimental results show that the proposed tracker can achieve robust and efficient performance in several challenging benchmark video sequences with various complex environment changes in posture, scale, illumination and occlusion.
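The update-opportunity logic described above can be sketched as a simple rule: refresh the appearance model only when the discriminative score has degraded enough since the last update, and freeze it entirely when the score is so low that occlusion is likely. The function name, thresholds, and values below are hypothetical, not taken from the paper:

```python
# Hypothetical sketch of an adaptive update-opportunity rule for a tracker:
# update only on meaningful degradation of the classifier's discriminative
# score, and never during likely occlusion (very low score), which limits
# both wasted computation and model drift.
def should_update(score_now, score_at_last_update,
                  degrade_thresh=0.2, occlusion_floor=0.1):
    if score_now < occlusion_floor:      # likely occlusion: freeze the model
        return False
    return (score_at_last_update - score_now) > degrade_thresh
```

A small degradation keeps the existing model (stability); a large one triggers relearning (adaptivity); a collapse to near zero is treated as occlusion rather than appearance change.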
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
International Nuclear Information System (INIS)
Pan, Yan; Dai, Xiaoying; Gironcoli, Stefano de; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-01-01
Highlights: • We propose three parallel orbital-updating based plane-wave basis methods for electronic structure calculations. • These new methods avoid generating large-scale eigenvalue problems and thereby reduce the computational cost. • These new methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. • Numerical experiments show that these new methods are reliable and efficient for large-scale calculations on modern supercomputers. - Abstract: Motivated by the recently proposed parallel orbital-updating approach in the real-space method, we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.
Two updating methods for dissipative models with non symmetric matrices
International Nuclear Information System (INIS)
Billet, L.; Moine, P.; Aubry, D.
1997-01-01
In this paper the feasibility of extending two updating methods to rotating machinery models is considered; the particularity of rotating machinery models is their use of non-symmetric stiffness and damping matrices. It is shown that the two methods described here, the inverse eigensensitivity method and the error in constitutive relation method, can be adapted to such models given some modifications. As far as the inverse sensitivity method is concerned, an error function based on the difference between calculated and measured right-hand eigenmode shapes and between calculated and measured eigenvalues is used. Concerning the error in constitutive relation method, the equation which defines the error has to be modified because the stiffness matrix is not positive definite. The advantage of this modification is that, in some cases, it is possible to focus the updating process on specific model parameters. Both methods were validated on a simple test model consisting of a two-bearing rotor-disc system. (author)
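The basic mechanics of an inverse eigensensitivity update can be sketched on a toy symmetric two-degree-of-freedom system; the paper's rotor models are non-symmetric and use measured mode shapes as well, so this is only the Gauss-Newton skeleton under invented stiffness values:

```python
import numpy as np

# Toy inverse eigensensitivity sketch (not the authors' rotor model):
# eigenvalues of a parameterized stiffness matrix are matched to
# "measured" eigenvalues by least-squares updates of the parameters,
# with sensitivities obtained by finite differences.
def eigvals(k):
    K = np.array([[k[0] + k[1], -k[1]],
                  [-k[1],        k[1]]])
    return np.sort(np.linalg.eigvalsh(K))

k_true = np.array([2.0, 1.0])            # "true" spring stiffnesses
measured = eigvals(k_true)               # stands in for test eigenvalues
k = np.array([1.5, 0.8])                 # initial (erroneous) parameters

for _ in range(10):                      # Gauss-Newton on the eigenvalue residual
    r = measured - eigvals(k)
    S = np.empty((2, 2))                 # finite-difference sensitivity matrix
    for j in range(2):
        dk = np.zeros(2)
        dk[j] = 1e-6
        S[:, j] = (eigvals(k + dk) - eigvals(k)) / 1e-6
    k = k + np.linalg.lstsq(S, r, rcond=None)[0]
```

With two eigenvalues and two parameters the problem is square and the iteration recovers the true stiffnesses; real updating problems are over- or under-determined and require weighting or regularization.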
Updating of a dynamic finite element model from the Hualien scale model reactor building
International Nuclear Information System (INIS)
Billet, L.; Moine, P.; Lebailly, P.
1996-08-01
The forces occurring at the soil-structure interface of a building generally have a large influence on the way the building reacts to an earthquake. One can be tempted to characterise these forces more accurately by updating a model of the structure. However, this procedure requires an updating method suitable for dissipative models, since significant damping can be observed at the soil-structure interface of buildings. Such a method is presented here. It is based on the minimization of a mechanical energy built from the difference between eigendata calculated by the model and eigendata obtained from experimental tests on the real structure. An experimental validation of this method is then proposed on a model of the HUALIEN scale-model reactor building. This scale model, built on the HUALIEN site in TAIWAN, is devoted to the study of soil-structure interaction. The updating concerned the soil impedances, modelled by a layer of springs and viscous dampers attached to the building foundation. Good agreement was found between the eigenmodes and dynamic responses calculated by the updated model and the corresponding experimental data. (authors). 12 refs., 3 figs., 4 tabs
Updating parameters of the chicken processing line model
DEFF Research Database (Denmark)
Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna
2010-01-01
A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. The recent availability of data allows updating parameters of the model to better describe processes observed in slaughterhouses. We propose Bayesian updating as a suitable technique to update expert judgment with microbiological data. Berrang and Dickens's data are used to demonstrate the performance of this method in updating parameters of the chicken processing line model.
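Bayesian updating of an expert-judgment parameter with count data can be sketched with a conjugate Beta-Binomial step. The prior and the counts below are invented for illustration, not the Berrang and Dickens data:

```python
# Minimal Beta-Binomial sketch of Bayesian updating for one model parameter
# (e.g. a contamination transfer probability). The Beta prior stands in for
# the expert-judgment estimate; the counts are hypothetical observations.
a_prior, b_prior = 2.0, 8.0      # expert prior: transfer probability around 0.2
transfers, trials = 30, 60       # hypothetical observed carcass data

# conjugate update: add successes to a, failures to b
a_post = a_prior + transfers
b_post = b_prior + (trials - transfers)
posterior_mean = a_post / (a_post + b_post)   # pulled from 0.2 toward 0.5
```

The posterior mean sits between the expert prior (0.2) and the empirical rate (0.5), weighted by the effective sample sizes, which is exactly the behaviour wanted when data begin to supplement expert judgment.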
Model Updating Nonlinear System Identification Toolbox, Phase II
National Aeronautics and Space Administration — ZONA Technology (ZONA) proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology that utilizes flight data with...
Updating river basin models with radar altimetry
DEFF Research Database (Denmark)
Michailovsky, Claire Irene B.
suited for use in data assimilation frameworks which combine the information content from models and current observations to produce improved forecasts and reduce prediction uncertainty. The focus of the second and third papers of this thesis was therefore the use of radar altimetry as update data...... of political unwillingness to share data which is a common problem in particular in transboundary settings. In this context, remote sensing (RS) datasets provide an appealing alternative to traditional in-situ data and much research effort has gone into the use of these datasets for hydrological applications...... response of a catchment to meteorological forcing. While river discharge cannot be directly measured from space, radar altimetry (RA) can measure water level variations in rivers at the locations where the satellite ground track and river network intersect called virtual stations or VS. In this PhD study...
2017 Updates: Earth Gravitational Model 2020
Barnes, D. E.; Holmes, S. A.; Ingalls, S.; Beale, J.; Presicci, M. R.; Minter, C.
2017-12-01
The National Geospatial-Intelligence Agency [NGA], in conjunction with its U.S. and international partners, has begun preliminary work on its next Earth Gravitational Model, to replace EGM2008. The new `Earth Gravitational Model 2020' [EGM2020] has an expected public release date of 2020, and will retain the same harmonic basis and resolution as EGM2008. As such, EGM2020 will be essentially an ellipsoidal harmonic model up to degree (n) and order (m) 2159, but will be released as a spherical harmonic model to degree 2190 and order 2159. EGM2020 will benefit from new data sources and procedures. Updated satellite gravity information from the GOCE and GRACE missions will better support the lower harmonics, globally. Multiple new acquisitions (terrestrial, airborne and shipborne) of gravimetric data over specific geographical areas (Antarctica, Greenland …) will provide improved global coverage and resolution over the land, as well as for coastal and some ocean areas. Ongoing accumulation of satellite altimetry data, as well as improvements in the treatment of these data, will better define the marine gravity field, most notably in polar and near-coastal regions. NGA and its partners are evaluating different approaches for optimally combining the new GOCE/GRACE satellite gravity models with the terrestrial data, including the latest methods employing a full covariance adjustment. NGA is also working to systematically assess the quality of its entire gravimetry database, with a view to correcting biases and other egregious errors. Public release number 15-564
Application of Real Time Models Updating in ABO Central Field
International Nuclear Information System (INIS)
Heikal, S.; Adewale, D.; Doghmi, A.; Augustine, U.
2003-01-01
ABO central field is the first deep offshore oil production in Nigeria, located in OML 125 (ex-OPL 316). The field was developed in a water depth of between 500 and 800 meters. Deep-water development requires much faster data handling and model updates in order to make the best possible technical decisions. This required an easy way to incorporate the latest information and dynamically update the reservoir model, enabling real time reservoir management. The paper aims at discussing the benefits of real time static and dynamic model updates and illustrates, with a horizontal well example, how these updates were beneficial prior to and during the drilling operation, minimizing the project CAPEX. Prior to drilling, a 3D geological model was built based on seismic and offset wells' data. The geological model was updated twice, once after the pilot hole drilling and then after reaching the landing point, prior to drilling the horizontal section. Forward modeling was performed along the planned trajectory. During the drilling process both geo-steering and LWD data were loaded in real time to the 3D modeling software. The data were analyzed and compared with the predicted model. The location of markers was changed as drilling progressed and the entire 3D geological model was rapidly updated. The target zones were re-evaluated in the light of the new model updates. Recommendations were communicated to the field, and the well trajectory was modified to take into account the new information. The combination of speed, flexibility and update-ability of the 3D modeling software enabled continuous geological model updates, on which the asset team based their trajectory modification decisions throughout the drilling phase. The well was geo-steered through 7 meters thickness of sand. After the drilling, testing showed excellent results; productivity and fluid properties data were used to update the dynamic model, revising the well production plateau and providing optimum reservoir management
General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data
Energy Technology Data Exchange (ETDEWEB)
Bagwell, L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bennett, P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-02-21
This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).
Fast model updating coupling Bayesian inference and PGD model reduction
Rubio, Paul-Baptiste; Louf, François; Chamoin, Ludovic
2018-04-01
The paper focuses on a coupled Bayesian-Proper Generalized Decomposition (PGD) approach for the real-time identification and updating of numerical models. The purpose is to use the most general case of Bayesian inference theory in order to address inverse problems and to deal with different sources of uncertainty (measurement and model errors, stochastic parameters). In order to do so at a reasonable CPU cost, the idea is to replace the direct model called during Monte-Carlo sampling by a PGD reduced model, and in some cases to compute the probability density functions directly from the obtained analytical formulation. This procedure is first applied to a welding control example with the updating of a deterministic parameter. In the second application, the identification of a stochastic parameter is studied through a glued assembly example.
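The speed-up argument above — replace the expensive direct solver inside Monte-Carlo sampling with a cheap reduced model — can be sketched with a toy Metropolis sampler. The linear "surrogate", the data and the noise levels below are invented, and no actual PGD decomposition is performed.

```python
import math
import random

def surrogate(theta):
    """Cheap stand-in for the reduced-order forward model u(theta)."""
    return 2.0 * theta

def log_posterior(theta, y=3.0, sigma=0.1, prior_sigma=10.0):
    """Gaussian likelihood around the surrogate plus a wide Gaussian prior."""
    log_like = -0.5 * ((y - surrogate(theta)) / sigma) ** 2
    log_prior = -0.5 * (theta / prior_sigma) ** 2
    return log_like + log_prior

def metropolis(logp, theta0=0.0, step=0.1, n=20000, burn=2000, seed=7):
    """Random-walk Metropolis; every step costs only a surrogate call."""
    rng = random.Random(seed)
    theta, lp = theta0, logp(theta0)
    samples = []
    for k in range(n):
        cand = theta + rng.gauss(0.0, step)
        lp_cand = logp(cand)
        if math.log(max(rng.random(), 1e-300)) < lp_cand - lp:
            theta, lp = cand, lp_cand
        if k >= burn:
            samples.append(theta)
    return samples

samples = metropolis(log_posterior)
mean = sum(samples) / len(samples)  # close to y / 2 = 1.5 for this toy model
```

With the weak prior, the posterior concentrates near theta = y/2; the point is that twenty thousand forward evaluations are affordable only because the surrogate is cheap.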
H TO Zn IONIZATION EQUILIBRIUM FOR THE NON-MAXWELLIAN ELECTRON κ-DISTRIBUTIONS: UPDATED CALCULATIONS
International Nuclear Information System (INIS)
Dzifčáková, E.; Dudík, J.
2013-01-01
New data for the calculation of ionization and recombination rates have been published in the past few years, most of which are included in the CHIANTI database. We used these data to calculate collisional ionization and recombination rates for the non-Maxwellian κ-distributions with an enhanced number of particles in the high-energy tail, which have been detected in the solar transition region and the solar wind. Ionization equilibria for elements H to Zn are derived. The κ-distributions significantly influence both the ionization and recombination rates and widen the ion abundance peaks. In comparison with the Maxwellian distribution, the ion abundance peaks can also be shifted to lower or higher temperatures. The updated ionization equilibrium calculations result in large changes for several ions, notably Fe VIII-Fe XIV. The results are supplied in electronic form compatible with the CHIANTI database.
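For reference, the κ-distribution of electron energies has a standard closed form; the sketch below (energies in units of kT, normalization from the Γ-function prefactor) shows the enhanced high-energy tail relative to the Maxwellian. This is the textbook form of the distribution, not code from the paper.

```python
import math

def kappa_pdf(E, kT=1.0, kappa=5.0):
    """Normalized kappa-distribution of electron energies (valid for
    kappa > 3/2); tends to the Maxwellian as kappa -> infinity."""
    A = math.gamma(kappa + 1.0) / (math.gamma(kappa - 0.5)
                                   * (kappa - 1.5) ** 1.5)
    return (A * 2.0 / math.sqrt(math.pi) * kT ** -1.5 * math.sqrt(E)
            * (1.0 + E / ((kappa - 1.5) * kT)) ** (-(kappa + 1.0)))

def maxwell_pdf(E, kT=1.0):
    """Maxwellian energy distribution for comparison."""
    return (2.0 / math.sqrt(math.pi) * kT ** -1.5 * math.sqrt(E)
            * math.exp(-E / kT))

# At E = 10 kT the kappa = 5 distribution carries roughly an order of
# magnitude more electrons than the Maxwellian -- the high-energy tail
# that modifies the ionization rates discussed above.
```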
Evaluation of two updating methods for dissipative models on a real structure
International Nuclear Information System (INIS)
Moine, P.; Billet, L.
1996-01-01
Finite element models are widely used to predict the dynamic behaviour of structures. Frequently, the model does not represent the structure with all the expected accuracy, i.e. the measurements made on the structure differ from the data predicted by the model. It is therefore necessary to update the model. Although many modeling errors come from an inadequate representation of the damping phenomena, most model updating techniques have so far been restricted to conservative models. In this paper, we present two updating methods for dissipative models using eigenmode shapes and eigenvalues as behavioural information from the structure. The first method, the modal output error method, directly compares the experimental eigenvectors and eigenvalues to the model eigenvectors and eigenvalues, whereas the second method, the error in constitutive relation method, uses an energy error derived from the equilibrium relation. The error function, in both cases, is minimized by a conjugate gradient algorithm and the gradient is calculated analytically. These two methods behave differently, which can be evidenced by updating a real structure consisting of a piece of pipe mounted on two viscoelastic suspensions. The updating of the model validates a strategy consisting of a preliminary updating with the error in constitutive relation method (fast to converge but difficult to control) followed by further updating with the modal output error method (slow to converge but reliable and easy to control). Moreover, the problems encountered during the updating process and their corresponding solutions are given. (authors)
Numerical model updating technique for structures using firefly algorithm
Sai Kubair, K.; Mohan, S. C.
2018-03-01
Numerical model updating is a technique for updating existing numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical models to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as an optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show a close agreement between the experimental and the numerical models.
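The firefly mechanics this kind of updating relies on can be sketched in a few lines. The cantilever "measurement" and every constant below are invented, and this is a generic implementation of Yang's firefly moves rather than the authors' MATLAB code.

```python
import math
import random

def firefly_minimize(cost, bounds, n_fireflies=15, n_iter=100,
                     beta0=1.0, gamma=1.0, alpha0=0.1, seed=1):
    """Canonical firefly moves: each firefly is attracted to every brighter
    one with strength beta0*exp(-gamma*r^2) plus a decaying random step.
    The search runs on the unit cube; points are rescaled to `bounds`
    before each cost evaluation."""
    rng = random.Random(seed)
    dim = len(bounds)
    scale = lambda u: [lo + x * (hi - lo) for x, (lo, hi) in zip(u, bounds)]
    pop = [[rng.random() for _ in range(dim)] for _ in range(n_fireflies)]
    costs = [cost(scale(u)) for u in pop]
    alpha = alpha0
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if costs[j] < costs[i]:  # firefly j is brighter: move i toward it
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [min(max(a + beta * (b - a)
                                      + alpha * (rng.random() - 0.5), 0.0), 1.0)
                              for a, b in zip(pop[i], pop[j])]
                    costs[i] = cost(scale(pop[i]))
        alpha *= 0.97  # shrink the random step as the swarm converges
    best = min(range(n_fireflies), key=costs.__getitem__)
    return scale(pop[best]), costs[best]

# Hypothetical updating problem: tune the bending stiffness EI of a
# cantilever so the computed tip deflection P*L^3/(3*EI) matches a
# "measured" value (all numbers invented; true EI is 4e6/3 N*m^2).
P, L, d_meas = 1000.0, 2.0, 0.002
cost = lambda x: (P * L ** 3 / (3.0 * x[0]) - d_meas) ** 2
best, err = firefly_minimize(cost, bounds=[(1e5, 1e7)])
```

Running the search on normalized coordinates keeps the single `gamma` meaningful regardless of the physical scale of the updating variables.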
Microbial Communities Model Parameter Calculation for TSPA/SR
International Nuclear Information System (INIS)
D. Jolley
2001-01-01
This calculation has several purposes. First, the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to useable parameters required as input to MING V1.0 (CRWMS M and O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M and O 2000c), with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs free energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second-order regression relationships that are used in the energy-limiting calculations of the potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M and O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in Section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M and O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.
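The "quadratic or second order regression relationships" mentioned in the record above are ordinary least-squares fits; a minimal sketch, with an invented data set over a reduced temperature variable (not the actual MING inputs), is:

```python
def quad_fit(ts, ys):
    """Least-squares fit y ~ c0 + c1*t + c2*t^2 via the normal equations,
    solved by Gaussian elimination with partial pivoting."""
    # Moment sums for the design matrix X = [1, t, t^2].
    s = [sum(t ** k for t in ts) for k in range(5)]
    A = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    b = [sum(y * t ** k for t, y in zip(ts, ys)) for k in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        coef[r] = (b[r] - sum(A[r][k] * coef[k]
                              for k in range(r + 1, 3))) / A[r][r]
    return coef

# Invented energy-vs-temperature data over a reduced variable x
# (e.g. x = (T - T0)/dT), generated from y = 3 - 0.5 x + 0.25 x^2.
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [5.0, 3.75, 3.0, 2.75, 3.0]
c = quad_fit(xs, ys)
```

Centering the temperature variable before fitting keeps the 3x3 normal equations well conditioned.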
Finite element model updating using bayesian framework and modal properties
CSIR Research Space (South Africa)
Marwala, T
2005-01-01
Full Text Available Finite element (FE) models are widely used to predict the dynamic characteristics of aerospace structures. These models often give results that differ from measured results and therefore need to be updated to match measured results. Some...
Reservoir management under geological uncertainty using fast model update
Hanea, R.; Evensen, G.; Hustoft, L.; Ek, T.; Chitu, A.; Wilschut, F.
2015-01-01
Statoil is implementing "Fast Model Update (FMU)," an integrated and automated workflow for reservoir modeling and characterization. FMU connects all steps and disciplines from seismic depth conversion to prediction and reservoir management taking into account relevant reservoir uncertainty. FMU
Recommendations for DSD model calculations
International Nuclear Information System (INIS)
Cvelbar, F.
1999-01-01
The latest achievements of the DSD (direct-semidirect) capture model, such as the extension to unbound final states or to densely distributed bound states, and the introduction of the consistent DSD model are reviewed. Recommendations for the future use of the model are presented. (author)
A comparison of updating algorithms for large N reduced models
Energy Technology Data Exchange (ETDEWEB)
Pérez, Margarita García [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); González-Arroyo, Antonio [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); Departamento de Física Teórica, C-XI Universidad Autónoma de Madrid,E-28049 Madrid (Spain); Keegan, Liam [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland); Okawa, Masanori [Graduate School of Science, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Core of Research for the Energetic Universe, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Ramos, Alberto [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland)
2015-06-29
We investigate Monte Carlo updating algorithms for simulating SU(N) Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole SU(N) matrix at once, or iterating through SU(2) subgroups of the SU(N) matrix. We find the same critical exponent in both cases, and only a slight difference between the two.
A comparison of updating algorithms for large $N$ reduced models
Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto
2015-01-01
We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix. We find the same critical exponent in both cases, and only a slight difference between the two.
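The "decorrelates faster" claim above is about autocorrelations of observables. The standalone sketch below estimates the integrated autocorrelation time of a Markov-chain time series; here it is illustrated on AR(1) surrogate chains with known tau_int = 1/2 + rho/(1 - rho), standing in for actual Monte Carlo histories of gauge observables.

```python
import random

def tau_int(series, window=80):
    """Integrated autocorrelation time with a fixed summation window."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    var = sum(d * d for d in dev) / n
    tau = 0.5
    for t in range(1, window + 1):
        tau += sum(dev[i] * dev[i + t] for i in range(n - t)) / ((n - t) * var)
    return tau

def ar1_chain(rho, n, seed=11):
    """Stationary AR(1) surrogate: exact tau_int = 1/2 + rho/(1 - rho)."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + (1.0 - rho * rho) ** 0.5 * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

fast = tau_int(ar1_chain(0.80, 50_000))  # exact value 4.5
slow = tau_int(ar1_chain(0.95, 50_000))  # exact value 19.5
```

Comparing `tau_int` on the same observable under two update schemes is exactly the comparison the abstract describes, independent of the gauge-theory details.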
Self-shielding models of MICROX-2 code: Review and updates
International Nuclear Information System (INIS)
Hou, J.; Choi, H.; Ivanov, K.N.
2014-01-01
Highlights: • The MICROX-2 code has been improved to expand its application to advanced reactors. • New fine-group cross section libraries based on ENDF/B-VII have been generated. • Resonance self-shielding and spatial self-shielding models have been improved. • The improvements were assessed by a series of benchmark calculations against MCNPX. - Abstract: MICROX-2 is a transport theory code that solves the neutron slowing-down and thermalization equations for a two-region lattice cell. The MICROX-2 code has been updated to expand its application to advanced reactor concepts and fuel cycle simulations, including the generation of new fine-group cross section libraries based on ENDF/B-VII. In continuation of previous work, the MICROX-2 methods are reviewed and updated in this study, focusing on the resonance self-shielding and spatial self-shielding models for neutron spectrum calculations. The improvement of the self-shielding method was assessed by a series of benchmark calculations against the Monte Carlo code, using homogeneous and heterogeneous pin cell models. The results show that the implementation of the updated self-shielding models is correct and that the accuracy of the physics calculation is improved. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by ∼0.1% and ∼0.2% for the homogeneous and heterogeneous pin cell models, respectively, considered in this study.
Updates to the Demographic and Spatial Allocation Models to ...
EPA announced the availability of the draft report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS), for a 30-day public comment period. The ICLUS version 2 (v2) modeling tool furthered land change modeling by providing nationwide housing development scenarios up to 2100. ICLUS v2 includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide describes the development of ICLUS v2 and the updates that were made to the original data sets and the demographic and spatial allocation models. [2017 UPDATE] Get the latest version of ICLUS and stay up to date by signing up to the ICLUS mailing list. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.
Transition Models for Engineering Calculations
Fraser, C. J.
2007-01-01
While future theoretical and conceptual developments may promote a better understanding of the physical processes involved in the latter stages of boundary layer transition, the designers of rotodynamic machinery and other fluid dynamic devices need effective transition models now. This presentation therefore centers on some transition models developed as design aids to improve the prediction codes used in the performance evaluation of gas turbine blading. All models are based on Narasimha's concentrated breakdown and spot growth.
Model Updating Nonlinear System Identification Toolbox, Phase I
National Aeronautics and Space Administration — ZONA Technology proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology by adopting the flight data with state-of-the-art...
Bacteriophages: update on application as models for viruses in water
African Journals Online (AJOL)
Bacteriophages: update on application as models for viruses in water. ... the resistance of human viruses to water treatment and disinfection processes. ... highly sensitive molecular techniques viruses have been detected in drinking water ...
Temperature Calculations in the Coastal Modeling System
2017-04-01
ERDC/CHL CHETN-IV-110, April 2017. Approved for public release; distribution is unlimited. ... (tide) and river discharge at model boundaries, wave radiation stress, and wind forcing over a model computational domain. Physical processes calculated ... in the CMS using the following meteorological parameters: solar radiation, cloud cover, air temperature, wind speed, and surface water temperature
Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model
Boone, Spencer
2017-01-01
This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.
Preconditioner Updates Applied to CFD Model Problems
Czech Academy of Sciences Publication Activity Database
Birken, P.; Duintjer Tebbens, Jurjen; Meister, A.; Tůma, Miroslav
2008-01-01
Roč. 58, č. 11 (2008), s. 1628-1641 ISSN 0168-9274 R&D Projects: GA AV ČR 1ET400300415; GA AV ČR KJB100300703 Institutional research plan: CEZ:AV0Z10300504 Keywords : finite volume methods * update preconditioning * Krylov subspace methods * Euler equations * conservation laws Subject RIV: BA - General Mathematics Impact factor: 0.952, year: 2008
Update of KASHIL-E6 library for shielding analysis and benchmark calculations
International Nuclear Information System (INIS)
Kim, D. H.; Kil, C. S.; Jang, J. H.
2004-01-01
For various shielding and reactor pressure vessel dosimetry applications, a pseudo-problem-independent neutron-photon coupled MATXS-format library based on the last release of ENDF/B-VI has been generated as a part of the update program for KASHIL-E6, which was based on ENDF/B-VI.5. It has the VITAMIN-B6 neutron and photon energy group structures, i.e., 199 groups for neutrons and 42 groups for photons. The neutron and photon weighting functions and the Legendre order of scattering are the same as in KASHIL-E6. The library has been validated through some benchmarks: the PCA-REPLICA and NESDIP-2 experiments for LWR pressure vessel facility benchmarking, the Winfrith Iron88 experiment for validation of iron data, and the Winfrith Graphite experiment for validation of graphite data. These calculations were performed with the TRANSX/DANTSYS code system. In addition, the substitution of the JENDL-3.3 and JEFF-3.0 data for Fe, Cr, Cu and Ni, which are very important nuclides for shielding analyses, was investigated to estimate the effects on the benchmark calculation results.
A PSO Driven Intelligent Model Updating and Parameter Identification Scheme for Cable-Damper System
Directory of Open Access Journals (Sweden)
Danhui Dan
2015-01-01
Full Text Available The precise measurement of the cable force is very important for monitoring and evaluating the operation status of cable structures such as cable-stayed bridges. The cable system should be installed with lateral dampers to reduce vibration, which affects the precise measurement of the cable force and other cable parameters. This paper suggests a cable model updating calculation scheme driven by the particle swarm optimization (PSO) algorithm. By first establishing a finite element model considering the static geometric nonlinearity and stress-stiffening effect, an automatic finite element model updating procedure powered by the PSO algorithm is proposed, with the aim of identifying the cable force and relevant parameters of the cable-damper system precisely. Both numerical case studies and full-scale cable tests indicated that, after two rounds of the updating process, the algorithm can accurately identify the cable force, moment of inertia, and damping coefficient of the cable-damper system.
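A PSO-driven identification loop of the kind described above can be sketched generically. The taut-string frequency relation and every number below are invented stand-ins for the paper's nonlinear finite element model; this is plain global-best PSO, not the authors' scheme.

```python
import math
import random

def pso_minimize(cost, bounds, n_particles=20, n_iter=150,
                 w=0.7, c1=1.5, c2=1.5, seed=3):
    """Plain global-best particle swarm optimization with box constraints."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = min(range(n_particles), key=pcost.__getitem__)
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(bounds):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# Hypothetical identification problem (all numbers invented): recover the
# cable tension T from one "measured" taut-string frequency
# f1 = sqrt(T/m) / (2 L); the true value here is T = 5e6 N.
L_c, m_c = 100.0, 50.0  # cable length [m], mass per unit length [kg/m]
f_meas = math.sqrt(5e6 / m_c) / (2.0 * L_c)
cost = lambda p: (math.sqrt(p[0] / m_c) / (2.0 * L_c) - f_meas) ** 2
T_best, err = pso_minimize(cost, bounds=[(1e6, 1e7)])
```

Replacing the one-line `cost` with a finite element response evaluation gives the structure of the paper's updating loop.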
Calculation models for a nuclear reactor
International Nuclear Information System (INIS)
Tashanii, Ahmed Ali
2010-01-01
Determination of different parameters of nuclear reactors requires neutron transport calculations. Due to the complexity of the geometry and material composition of the reactor core, neutron calculations were performed for simplified models of the real arrangement. In the frame of the present work two models were used for the calculations. First, an elementary cell model was used to prepare a cross section data set for a homogenized-core reactor model. The homogenized-core reactor model was then used to perform the neutron transport calculation. The nuclear reactor is a tank-shaped thermal reactor. The semi-cylindrical core arrangement consists of aluminum-made fuel bundles immersed in water, which acts as a moderator as well as a coolant. Each fuel bundle consists of aluminum-clad fuel rods arranged in square lattices. (author)
Precipitates/Salts Model Sensitivity Calculation
International Nuclear Information System (INIS)
Mariner, P.
2001-01-01
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.
Ontology Update in the Cognitive Model of Ontology Learning
Directory of Open Access Journals (Sweden)
Zhang De-Hai
2016-01-01
Full Text Available Ontology has been used in many hot-spot fields, but most ontology construction methods are semi-automatic, and the construction of an ontology is still a tedious and painstaking task. In this paper, a cognitive model is presented for ontology learning which can simulate how human beings learn from the world. In this model, the cognitive strategies are applied with the constrained axioms. Ontology update is a key step when new knowledge is added into the existing ontology and conflicts with old knowledge in the process of ontology learning. This proposal designs and validates a method of ontology update based on the axiomatic cognitive model, which includes the ontology update postulates, axioms and operations of the learning model. It is proved that these operators are subject to the established axiom system.
Information dissemination model for social media with constant updates
Zhu, Hui; Wu, Heng; Cao, Jin; Fu, Gang; Li, Hui
2018-07-01
With the development of social media tools and the pervasiveness of smart terminals, social media has become a significant source of information for many individuals. However, false information can spread rapidly, which may result in negative social impacts and serious economic losses. Thus, reducing the unfavorable effects of false information has become an urgent challenge. In this paper, a new competitive model called DMCU is proposed to describe the dissemination of information with constant updates in social media. In the model, we focus on the competitive relationship between the original false information and updated information, and then propose the priority of related information. To more effectively evaluate the effectiveness of the proposed model, data sets containing actual social media activity are utilized in experiments. Simulation results demonstrate that the DMCU model can precisely describe the process of information dissemination with constant updates, and that it can be used to forecast information dissemination trends on social media.
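The competitive mechanism described above can be caricatured with a small ODE sketch. This is a generic two-competitor SIR-style system with invented rates and a conversion term, not the actual DMCU equations: updated information I2 spreads like the false information I1 but can also convert its spreaders.

```python
def simulate(beta=0.5, conv=0.4, recov=0.1, dt=0.01, t_end=30.0):
    """Forward-Euler integration of a toy competitive dissemination model.
    S: susceptible, I1: spreaders of false info, I2: spreaders of the
    updated info, R: stiflers. All rates are illustrative."""
    S, I1, I2, R = 0.98, 0.01, 0.01, 0.0
    for _ in range(int(t_end / dt)):
        dS = -beta * S * (I1 + I2)               # both messages recruit from S
        dI1 = beta * S * I1 - conv * I1 * I2 - recov * I1
        dI2 = beta * S * I2 + conv * I1 * I2 - recov * I2  # conversion gain
        dR = recov * (I1 + I2)
        S += dS * dt
        I1 += dI1 * dt
        I2 += dI2 * dt
        R += dR * dt
    return S, I1, I2, R

S, I1, I2, R = simulate()
# With a positive conversion rate and otherwise symmetric parameters, the
# updated information ends up with more spreaders than the false one.
```

The conversion term `conv * I1 * I2` is the priority-of-updated-information effect in its simplest possible form; population is conserved by construction.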
Model cross section calculations using LAHET
International Nuclear Information System (INIS)
Prael, R.E.
1992-01-01
The current status of LAHET is discussed. The effect of a multistage preequilibrium exciton model following the INC is examined for neutron emission benchmark calculations, as is the use of a Fermi breakup model for light nuclei rather than an evaporation model. Comparisons are also made with recent fission cross section experiments, and a discussion of helium production cross sections is presented
Updated climatological model predictions of ionospheric and HF propagation parameters
International Nuclear Information System (INIS)
Reilly, M.H.; Rhoads, F.J.; Goodman, J.M.; Singh, M.
1991-01-01
The prediction performances of several climatological models, including the ionospheric conductivity and electron density model, RADAR C, and the Ionospheric Communications Analysis and Predictions Program, are evaluated for different regions and sunspot number inputs. Particular attention is given to the near-real-time (NRT) predictions associated with single-station updates. It is shown that a dramatic improvement can be obtained by using single-station ionospheric data to update the driving parameters of an ionospheric model for NRT predictions of foF2 and other ionospheric and HF circuit parameters. For middle latitudes, the improvement extends out thousands of kilometers from the update point to points of comparable corrected geomagnetic latitude. 10 refs
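The single-station update described above is, at its core, a one-parameter inversion: adjust an effective driving parameter until the model matches one observation, then reuse it nearby. The sketch below does this for a completely invented climatological foF2 stand-in; the model form, coefficients and latitudes are all hypothetical.

```python
import math

def model_foF2(ssn_eff, glat_deg):
    """Invented stand-in for a climatological foF2 model [MHz]; the real
    models referenced above are far more elaborate."""
    return (6.0 + 0.04 * ssn_eff) * math.cos(math.radians(glat_deg)) ** 0.3

def effective_ssn(fof2_obs, glat_deg, lo=0.0, hi=200.0, tol=1e-6):
    """The toy model is monotone in ssn_eff, so the single-station update
    reduces to a 1-D bisection root find."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model_foF2(mid, glat_deg) < fof2_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Observation at the update station (generated here with ssn_eff = 85):
obs = model_foF2(85.0, 40.0)
ssn = effective_ssn(obs, 40.0)
# Reuse the updated driver at a nearby station of comparable latitude:
fof2_nearby = model_foF2(ssn, 45.0)
```

The "thousands of kilometers" claim in the abstract corresponds to reusing the single fitted driver at other locations of comparable corrected geomagnetic latitude, as in the last line.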
Energy Technology Data Exchange (ETDEWEB)
Gorenflo, Dieter; Baumhoegger, Elmar; Herres, Gerhard [Paderborn Univ. (Germany). Thermodynamik und Energietechnik; Kotthoff, Stephan [Siemens AG, Goerlitz (Germany)
2012-07-01
For years, precise prediction of the heat transfer performance of evaporators has been a topical issue with regard to efficient energy utilization. An established calculation method was updated for the new edition of the Heat Atlas with regard to flooded evaporators, which are mainly used in air-conditioning and cooling systems. The contribution outlines this method and discusses the innovations in detail. The treatment of the influence of heat flux density and boiling pressure on the heat transfer during pool boiling on a single horizontal evaporator tube has been modified on the basis of measurements. Above all, the influence of the fluid can now be described more simply and more exactly. The authors compare the predicted results with experimental results regarding the finning of the heating surface and bundle effects. Furthermore, examples of close-boiling and near-azeotropic mixtures were added to the Heat Atlas. The authors also consider the positive effect of the rising bubble swarm when boiling mixtures in horizontal tube bundles.
Energy Technology Data Exchange (ETDEWEB)
Duro, L.; Grive, M.; Cera, E.; Domenech, C.; Bruno, J. (Enviros Spain S.L., Barcelona (ES))
2006-12-15
This report presents and documents the thermodynamic database used in the assessment of the radionuclide solubility limits within the SR-Can Exercise. It is a supporting report to the solubility assessment. Thermodynamic data are reviewed for 20 radioelements from Groups A and B, lanthanides and actinides. The development of this database is partially based on the one prepared by PSI and NAGRA. Several changes, updates and checks for internal consistency and completeness of the reference NAGRA-PSI 01/01 database have been conducted where needed. These modifications are mainly related to information from the various experimental programmes and the scientific literature available until the end of 2003. Some of the discussions also refer to a previous database selection conducted by Enviros Spain on behalf of ANDRA, where the reader can find additional information. Where possible, in order to optimize the robustness of the database, the solubilities of the different radionuclides calculated using the reported thermodynamic database are tested against experimental data available in the open scientific literature. Where necessary, different procedures to estimate gaps in the database have been followed, especially accounting for temperature corrections. All the methodologies followed are discussed in the main text
International Nuclear Information System (INIS)
Duro, L.; Grive, M.; Cera, E.; Domenech, C.; Bruno, J.
2006-12-01
This report presents and documents the thermodynamic database used in the assessment of radionuclide solubility limits within the SR-Can Exercise. It is a supporting report to the solubility assessment. Thermodynamic data are reviewed for 20 radioelements from Groups A and B, lanthanides and actinides. The development of this database is partially based on the one prepared by PSI and NAGRA. Several changes, updates and checks for internal consistency and completeness of the reference NAGRA-PSI 01/01 database have been conducted where needed. These modifications mainly reflect information from the various experimental programmes and scientific literature available up to the end of 2003. Some of the discussions also refer to a previous database selection conducted by Enviros Spain on behalf of ANDRA, where the reader can find additional information. Where possible, in order to optimize the robustness of the database, the solubility of the different radionuclides calculated with the reported thermodynamic database is tested against experimental data available in the open scientific literature. Where necessary, different estimation procedures have been followed to fill gaps in the database, especially accounting for temperature corrections. All the methodologies followed are discussed in the main text.
Real Time Updating in Distributed Urban Rainfall Runoff Modelling
DEFF Research Database (Denmark)
Borup, Morten; Madsen, Henrik
that are being updated from system measurements was studied. The results showed that the mere fact that it takes time for rainfall data to travel the distance between gauges and catchments has such a large negative effect on the forecast skill of updated models that it can justify the choice of even very...... as in a real-data case study. The results confirmed that the method is indeed suitable for DUDMs and that it can be used to utilise upstream as well as downstream water level and flow observations to improve model estimates and forecasts. Due to upper and lower sensor limits, many sensors in urban drainage......
Hybrid reduced order modeling for assembly calculations
International Nuclear Information System (INIS)
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; Mertyurek, Ugur
2015-01-01
Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems such as assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition of the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power, enabling a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits their full effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
Hybrid reduced order modeling for assembly calculations
Energy Technology Data Exchange (ETDEWEB)
Bang, Youngsuk, E-mail: ysbang00@fnctech.com [FNC Technology, Co. Ltd., Yongin-si (Korea, Republic of); Abdel-Khalik, Hany S., E-mail: abdelkhalik@purdue.edu [Purdue University, West Lafayette, IN (United States); Jessee, Matthew A., E-mail: jesseema@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Mertyurek, Ugur, E-mail: mertyurek@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)
2015-12-15
Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems such as assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition of the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power, enabling a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits their full effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
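The non-intrusive, random-sampling flavour of reduced order modeling described above can be illustrated with a toy sketch: sample random input perturbations of a black-box model and orthonormalize the responses to discover the dimension of the active output subspace. The model, dimensions, and tolerance below are all invented for illustration; a real coupled assembly calculation would stand in for `toy_model`.

```python
import random

def rom_basis(model, dim_in, n_samples, tol=1e-10):
    """Build an orthonormal basis for the model's active output subspace
    by sampling random input perturbations (non-intrusive ROM sketch)."""
    basis = []
    for _ in range(n_samples):
        x = [random.gauss(0.0, 1.0) for _ in range(dim_in)]
        y = model(x)
        # Gram-Schmidt: remove components already captured by the basis
        for b in basis:
            proj = sum(yi * bi for yi, bi in zip(y, b))
            y = [yi - proj * bi for yi, bi in zip(y, b)]
        norm = sum(yi * yi for yi in y) ** 0.5
        if norm > tol:          # a genuinely new direction was found
            basis.append([yi / norm for yi in y])
    return basis

# Toy "coupled code": 5 inputs, 4 outputs, but effective rank 2
def toy_model(x):
    s1 = x[0] + 2.0 * x[1]
    s2 = x[2] - x[3]
    return [s1, s2, 3.0 * s1, s1 + s2]

random.seed(1)
basis = rom_basis(toy_model, dim_in=5, n_samples=20)
print(len(basis))  # dimension of the active subspace
```

Once the basis is known, the expensive model only ever needs to be evaluated in this low-dimensional subspace, which is the source of the computational savings claimed above.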
A Novel Hybrid Similarity Calculation Model
Directory of Open Access Journals (Sweden)
Xiaoping Fan
2017-01-01
This paper addresses the problems of similarity calculation in traditional recommendation algorithms based on nearest-neighbor collaborative filtering, especially their failure to describe dynamic user preference. Starting from the problem of user interest drift, a new hybrid similarity calculation model is proposed. The model consists of two parts: on the one hand, it uses function fitting to describe users' rating behaviors and rating preferences; on the other hand, it employs the Random Forest algorithm to take user attribute features into account. The paper then combines the two parts to build a new hybrid similarity calculation model for user recommendation. Experimental results show that, for data sets of different sizes, the model's prediction precision is higher than that of traditional recommendation algorithms.
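As a rough illustration of the two-part idea above, the sketch below blends a "rating behaviour" similarity obtained by function fitting (here, a simple least-squares slope of ratings over time) with an attribute-based similarity (a plain attribute match standing in for the paper's Random Forest component). The blending weight, data, and both similarity formulas are hypothetical, not the paper's.

```python
def fit_slope(times, ratings):
    """Least-squares slope of a user's ratings over time: a crude
    stand-in for the paper's function fitting of rating behaviour."""
    n = len(times)
    mt = sum(times) / n
    mr = sum(ratings) / n
    num = sum((t - mt) * (r - mr) for t, r in zip(times, ratings))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

def hybrid_similarity(u, v, alpha=0.5):
    """Weighted blend of preference-trend similarity and attribute
    similarity (the paper uses a Random Forest for the latter)."""
    su = fit_slope(u["t"], u["r"])
    sv = fit_slope(v["t"], v["r"])
    trend_sim = 1.0 / (1.0 + abs(su - sv))      # closer trends -> higher
    shared = sum(a == b for a, b in zip(u["attrs"], v["attrs"]))
    attr_sim = shared / len(u["attrs"])
    return alpha * trend_sim + (1 - alpha) * attr_sim

u = {"t": [1, 2, 3, 4], "r": [2, 3, 4, 5], "attrs": ["f", "25", "US"]}
v = {"t": [1, 2, 3, 4], "r": [3, 4, 5, 5], "attrs": ["f", "25", "UK"]}
print(round(hybrid_similarity(u, v), 3))
```

The point of the hybrid is visible even in this toy: two users with similar rating trends but different attributes (or vice versa) still get partial credit, which a single-source similarity would miss.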
Crushed-salt constitutive model update
International Nuclear Information System (INIS)
Callahan, G.D.; Loken, M.C.; Mellegard, K.D.; Hansen, F.D.
1998-01-01
Modifications to the constitutive model used to describe the deformation of crushed salt are presented in this report. Two mechanisms--dislocation creep and grain boundary diffusional pressure solutioning--defined previously but used separately are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. New creep consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt to determine material parameters for the constitutive model. Nonlinear least-squares model fitting to data from the shear consolidation tests and a combination of the shear and hydrostatic consolidation tests produced two sets of material parameter values for the model. The change in material parameter values from test group to test group indicates the empirical nature of the model but demonstrates improvement over earlier work with the previous models. Key improvements are the ability to capture lateral strain reversal and to better resolve parameter values. To demonstrate the predictive capability of the model, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the model to predict the test data, the model appears to capture the creep consolidation behavior of crushed salt quite well.
Crushed-salt constitutive model update
Energy Technology Data Exchange (ETDEWEB)
Callahan, G.D.; Loken, M.C.; Mellegard, K.D. [RE/SPEC Inc., Rapid City, SD (United States); Hansen, F.D. [Sandia National Labs., Albuquerque, NM (United States)
1998-01-01
Modifications to the constitutive model used to describe the deformation of crushed salt are presented in this report. Two mechanisms--dislocation creep and grain boundary diffusional pressure solutioning--defined previously but used separately are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. New creep consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt to determine material parameters for the constitutive model. Nonlinear least-squares model fitting to data from the shear consolidation tests and a combination of the shear and hydrostatic consolidation tests produced two sets of material parameter values for the model. The change in material parameter values from test group to test group indicates the empirical nature of the model but demonstrates improvement over earlier work with the previous models. Key improvements are the ability to capture lateral strain reversal and to better resolve parameter values. To demonstrate the predictive capability of the model, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the model to predict the test data, the model appears to capture the creep consolidation behavior of crushed salt quite well.
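The parameter-fitting step described above can be sketched in miniature: fit a toy consolidation law to synthetic test data by least squares. The exponential law, the parameter grid, and the brute-force search below are illustrative stand-ins for the report's actual constitutive model and nonlinear least-squares solver.

```python
import math

def consolidation(t, eps_inf, tau):
    """Toy creep-consolidation law: strain approaches eps_inf with
    characteristic time tau (illustrative, not the report's model)."""
    return eps_inf * (1.0 - math.exp(-t / tau))

def fit(data):
    """Brute-force least squares over a parameter grid; the report uses
    a proper nonlinear least-squares solver instead."""
    best = None
    for eps_inf in [x * 0.01 for x in range(1, 31)]:
        for tau in [x * 0.5 for x in range(1, 61)]:
            sse = sum((e - consolidation(t, eps_inf, tau)) ** 2
                      for t, e in data)
            if best is None or sse < best[0]:
                best = (sse, eps_inf, tau)
    return best[1], best[2]

# Synthetic "test data" generated with eps_inf = 0.12, tau = 5.0
data = [(t, consolidation(t, 0.12, 5.0)) for t in range(0, 21, 2)]
eps_inf, tau = fit(data)
print(eps_inf, tau)
```

Fitting two parameter sets to two test groupings, as the report does, then amounts to running such a minimization twice on different subsets of the database and comparing the recovered values.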
Updating of a finite element model of the Cruas 2 cooling tower
International Nuclear Information System (INIS)
Billet, L.
1994-03-01
A method based on modal analysis and inversion of a dynamic FEM model is used to detect changes in the dynamic behavior of nuclear plant cooling towers. Prior to detection, it is necessary to build a representative model of the structure. This paper gives details of the modelling of the CRUAS No. 2 cooling tower and of the updating procedure used to match the model to on-site measurements. First, previous numerical and experimental studies of cooling tower vibrations were reviewed. We found that the first eigenfrequencies of cooling towers are very sensitive to the boundary conditions at the top and the bottom of the structure. We then built a beam-and-plate FEM model of the CRUAS No. 2 cooling tower. The first calculated modes were located in the proper frequency band (0.9 Hz - 1.30 Hz) but were not ordered according to the experimental sequence. We decided to update the numerical model with MADMACS, a model updating software package. To bring the difference between the measured and the corresponding calculated frequencies below 1%, it was necessary to: - decrease the shell stiffness by 30%; - increase the top ring stiffness by 300%; - modify the boundary conditions at the bottom by taking the soil impedance into account. The model was then judged to be realistic enough. (author). 23 figs., 13 refs., 1 annex
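The updating step can be caricatured with a one-parameter example: scale the stiffness of a single-degree-of-freedom surrogate until its computed natural frequency matches a "measured" one. All numbers are invented; the real procedure adjusts several quantities (shell stiffness, ring stiffness, soil impedance) against many measured modes.

```python
import math

def model_freq(k, m):
    """First natural frequency of a 1-DOF surrogate, in Hz."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def update_stiffness(k0, m, f_measured, tol=1e-6):
    """Bisection on a stiffness scale factor until the computed
    frequency matches the measured one (1-parameter updating sketch)."""
    lo, hi = 0.1, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model_freq(mid * k0, m) < f_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k0, m = 4.0e6, 1.0e5          # initial FE stiffness (N/m) and mass (kg)
f_meas = 0.95                 # hypothetical "measured" frequency, Hz
scale = update_stiffness(k0, m, f_meas)
print(round(scale, 3))        # stiffness correction factor
```

With many parameters and many target frequencies the same idea becomes a constrained optimization problem, which is what a tool like MADMACS automates.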
A revised calculational model for fission
Energy Technology Data Exchange (ETDEWEB)
Atchison, F
1998-09-01
A semi-empirical parametrization has been developed to calculate the fission contribution to the evaporative de-excitation of nuclei over a very wide range of charge, mass and excitation energy, as well as the nuclear states of the scission products. The calculational model reproduces measured values (cross-sections, mass distributions, etc.) for a wide range of fissioning systems: nuclei from Ta to Cf, and interactions involving nucleons up to medium energy as well as light ions. (author)
Construction and Updating of Event Models in Auditory Event Processing
Huff, Markus; Maurer, Annika E.; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank
2018-01-01
Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose changes in the sensory information to trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event…
An Update on Modifications to Water Treatment Plant Model
The water treatment plant (WTP) model is an EPA tool for informing regulatory options. WTP has a few versions: (1) WTP2.2 can help in regulatory analysis; (2) an updated version (WTP3.0) will allow plant-specific analysis (WTP-ccam) and thus help meet plant-specific treatment objectives...
Purchasing portfolio models: a critique and update
Gelderman, C.J.; Weele, van A.J.
2005-01-01
Purchasing portfolio models have spawned considerable discussion in the literature. Many advantages and disadvantages have been put forward, revealing considerable divergence in opinion on the merits of portfolio models. This study addresses the question of whether or not the use of purchasing
A Kriging Model Based Finite Element Model Updating Method for Damage Detection
Directory of Open Access Journals (Sweden)
Xiuming Yang
2017-10-01
Model updating is an effective means of damage identification, and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE) model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF) is rarely used, as it usually changes dramatically with the updating parameters. This paper presents a new surrogate-model-based model updating method that takes advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC) is used to build the objective function, whose nonlinear response surface is constructed by the Kriging model. Then, the efficient global optimization (EGO) algorithm is introduced to obtain the model updating results. The proposed method has good accuracy and robustness, as verified by a numerical simulation of a cantilever and by experimental test data from a laboratory three-story structure.
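The FDAC objective mentioned above compares a measured and an analytical FRF vector, frequency line by frequency line, much like a MAC value: a normalized squared inner product that is 1 for perfectly correlated FRFs. A minimal real-valued sketch (complex FRFs would use conjugate transposes); the sample vectors are invented:

```python
def fdac(h_exp, h_ana):
    """Frequency Domain Assurance Criterion between an experimental and
    an analytical FRF column at one frequency line (real-valued sketch)."""
    num = sum(a * b for a, b in zip(h_exp, h_ana)) ** 2
    den = sum(a * a for a in h_exp) * sum(b * b for b in h_ana)
    return num / den

h_exp = [1.0, 0.8, 0.3]                          # "measured" FRF column
print(round(fdac(h_exp, h_exp), 3))              # identical FRFs -> 1.0
print(round(fdac(h_exp, [0.9, 0.85, 0.25]), 3))  # slightly perturbed
```

The model updating method then minimizes an objective built from 1 - FDAC over the updating parameters, with the Kriging surrogate and EGO handling the expensive FE evaluations (not shown here).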
Updating the debate on model complexity
Simmons, Craig T.; Hunt, Randall J.
2012-01-01
As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”
Precipitates/Salts Model Sensitivity Calculation
Energy Technology Data Exchange (ETDEWEB)
P. Mariner
2001-12-20
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, "Calculations", in support of "Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities" (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), "In-Drift Precipitates/Salts Analysis" (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.
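To illustrate why the CO2 fugacity is a natural sensitivity parameter, consider a minimal open-system carbonate equilibrium sketch: for pure water in contact with CO2 gas, the charge balance gives [H+] ≈ sqrt(K1·KH·pCO2), so pH falls as the fugacity rises. The constants below are standard 25 °C values; the actual Precipitates/Salts model of course involves far more complete speciation and evaporative concentration.

```python
import math

K_H = 10 ** -1.47   # Henry's law constant for CO2, mol/(L*atm), 25 C
K_1 = 10 ** -6.35   # first dissociation constant of carbonic acid, 25 C

def ph_open_system(p_co2):
    """pH of pure water equilibrated with CO2 at fugacity p_co2 (atm),
    using the charge-balance approximation [H+] ~ [HCO3-]."""
    h = math.sqrt(K_1 * K_H * p_co2)
    return -math.log10(h)

for p in (10 ** -3.5, 10 ** -2.0, 10 ** -1.0):
    print(round(ph_open_system(p), 2))
```

Even in this stripped-down picture, raising the CO2 fugacity by two orders of magnitude shifts the pH by a full unit, which is why fCO2 is varied explicitly in the sensitivity calculation.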
International Nuclear Information System (INIS)
Liu Yang; Xu Dejian; Li Yan; Duan Zhongdong
2011-01-01
As a novel updating technique, the cross-model cross-mode (CMCM) method is highly efficient and allows flexible selection of updating parameters. However, the success of this method depends on the accuracy of the measured mode shapes. Usually, the measured mode shapes are inaccurate, since many kinds of measurement noise are inevitable. Furthermore, the CMCM method requires complete testing mode shapes, so calculation errors may be introduced into the measured mode shapes by modal expansion or model reduction techniques. Therefore, this algorithm faces challenges when updating the finite element (FE) models of practical complex structures. In this study, a fuzzy CMCM method is proposed in order to weaken the effect of errors in the measured mode shapes on the updated results. Two simulated examples are then used to compare the performance of the fuzzy CMCM method with the CMCM method. The test results show that the proposed method is more promising than the CMCM method for updating the FE models of practical structures.
A last updating evolution model for online social networks
Bu, Zhan; Xia, Zhengyou; Wang, Jiandong; Zhang, Chengcui
2013-05-01
As information technology has advanced, people turn to electronic media more frequently for communication, and social relationships are increasingly found on online channels. However, there is very limited knowledge about the actual evolution of online social networks. In this paper, we propose and study a novel evolving network model built on the new concept of "last updating time", which exists in many real-life online social networks. The last updating evolution network model can maintain the robustness of scale-free networks and can improve network resilience against intentional attacks. Moreover, we found that it exhibits the "small-world effect", an inherent property of most social networks. Simulation experiments based on this model show that the results are consistent with real-life data, which means that our model is valid.
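A hedged sketch of a growth rule in the spirit of the "last updating time" idea: new nodes attach to existing nodes with probability proportional to degree weighted by how recently the node was last active. The exact attachment rule below is invented for illustration and is not the paper's model.

```python
import random

def grow_network(n, seed=0):
    """Toy growth model: each new node links to an existing node chosen
    with probability proportional to degree / (1 + time since last
    update) -- an illustrative stand-in for a last-updating rule."""
    random.seed(seed)
    degree = {0: 1, 1: 1}
    last_update = {0: 0, 1: 0}
    edges = [(0, 1)]
    for t in range(2, n):
        weights = {v: degree[v] / (1 + t - last_update[v]) for v in degree}
        total = sum(weights.values())
        r, acc, target = random.random() * total, 0.0, 0
        for v, w in weights.items():
            acc += w
            if r <= acc:
                target = v
                break
        edges.append((t, target))
        degree[t] = 1
        degree[target] += 1
        last_update[t] = last_update[target] = t  # both nodes just "updated"
    return edges, degree

edges, degree = grow_network(200)
print(len(edges), max(degree.values()))
```

Because recency discounts the attachment weight, old hubs that stop being updated gradually lose their pull, which is the qualitative mechanism such models use to soften pure preferential attachment.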
The International Reference Ionosphere: Model Update 2016
Bilitza, Dieter; Altadill, David; Reinisch, Bodo; Galkin, Ivan; Shubin, Valentin; Truhlik, Vladimir
2016-04-01
The International Reference Ionosphere (IRI) is recognized as the official standard for the ionosphere (COSPAR, URSI, ISO) and is widely used for a multitude of different applications as evidenced by the many papers in science and engineering journals that acknowledge the use of IRI (e.g., about 11% of all Radio Science papers each year). One of the shortcomings of the model has been the dependence of the F2 peak height modeling on the propagation factor M(3000)F2. With the 2016 version of IRI, two new models will be introduced for hmF2 that were developed directly based on hmF2 measurements by ionosondes [Altadill et al., 2013] and by COSMIC radio occultation [Shubin, 2015], respectively. In addition IRI-2016 will include an improved representation of the ionosphere during the very low solar activities that were reached during the last solar minimum in 2008/2009. This presentation will review these and other improvements that are being implemented with the 2016 version of the IRI model. We will also discuss recent IRI workshops and their findings and results. One of the most exciting new projects is the development of the Real-Time IRI [Galkin et al., 2012]. We will discuss the current status and plans for the future. Altadill, D., S. Magdaleno, J.M. Torta, E. Blanch (2013), Global empirical models of the density peak height and of the equivalent scale height for quiet conditions, Advances in Space Research 52, 1756-1769, doi:10.1016/j.asr.2012.11.018. Galkin, I.A., B.W. Reinisch, X. Huang, and D. Bilitza (2012), Assimilation of GIRO Data into a Real-Time IRI, Radio Science, 47, RS0L07, doi:10.1029/2011RS004952. Shubin V.N. (2015), Global median model of the F2-layer peak height based on ionospheric radio-occultation and ground-based Digisonde observations, Advances in Space Research 56, 916-928, doi:10.1016/j.asr.2015.05.029.
An update of Leighton's solar dynamo model
Cameron, R. H.; Schüssler, M.
2017-03-01
In 1969, Leighton developed a quasi-1D mathematical model of the solar dynamo, building upon the phenomenological scenario of Babcock published in 1961. Here we present a modification and extension of Leighton's model. Using the axisymmetric component (longitudinal average) of the magnetic field, we consider the radial field component at the solar surface and the radially integrated toroidal magnetic flux in the convection zone, both as functions of latitude. No assumptions are made with regard to the radial location of the toroidal flux. The model includes the effects of (i) turbulent diffusion at the surface and in the convection zone; (ii) poleward meridional flow at the surface and an equatorward return flow affecting the toroidal flux; (iii) latitudinal differential rotation and the near-surface layer of radial rotational shear; (iv) downward convective pumping of magnetic flux in the shear layer; and (v) flux emergence in the form of tilted bipolar magnetic regions treated as a source term for the radial surface field. While the parameters relevant for the transport of the surface field are taken from observations, the model condenses the unknown properties of magnetic field and flow in the convection zone into a few free parameters (turbulent diffusivity, effective return flow, amplitude of the source term, and a parameter describing the effective radial shear). Comparison with the results of 2D flux transport dynamo codes shows that the model captures the essential features of these simulations. We make use of the computational efficiency of the model to carry out an extended parameter study. We cover an extended domain of the 4D parameter space and identify the parameter ranges that provide solar-like solutions. Dipole parity is always preferred and solutions with periods around 22 yr and a correct phase difference between flux emergence in low latitudes and the strength of the polar fields are found for a return flow speed around 2 m s-1, turbulent
Construction and updating of event models in auditory event processing.
Huff, Markus; Maurer, Annika E; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank
2018-02-01
Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose changes in the sensory information to trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event boundaries. Evidence from reading time studies (increased reading times with increasing amount of change) suggests that updating of event models is incremental. We present results from 5 experiments that studied event processing (including memory formation processes and reading times) using an audio drama as well as a transcript thereof as stimulus material. Experiments 1a and 1b replicated the event boundary advantage effect for memory. In contrast to recent evidence from studies using visual stimulus material, Experiments 2a and 2b found no support for incremental updating with normally sighted and blind participants for recognition memory. In Experiment 3, we replicated Experiment 2a using a written transcript of the audio drama as stimulus material, allowing us to disentangle encoding and retrieval processes. Our results indicate incremental updating processes at encoding (as measured with reading times). At the same time, we again found recognition performance to be unaffected by the amount of change. We discuss these findings in light of current event cognition theories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
EARTHWORK VOLUME CALCULATION FROM DIGITAL TERRAIN MODELS
Directory of Open Access Journals (Sweden)
JANIĆ Milorad
2015-06-01
Accurate calculation of cut and fill volume is of essential importance in many fields. This article presents a new method, free of approximation, based on Digital Terrain Models. A relatively new mathematical model is developed for this purpose and implemented in a software solution. Both have been tested and verified in practice on several large opencast mines. The application is developed in the AutoLISP programming language and works in the AutoCAD environment.
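For a TIN-based DTM, an approximation-free prism rule exists for piecewise-linear surfaces: the volume over each triangle equals its plan area times the mean of the three vertex height differences (the integral of a linear function over a triangle is exactly area times the mean of its vertex values). A minimal sketch of this idea in Python rather than AutoLISP; the triangulation and heights are made up:

```python
def tri_area(p1, p2, p3):
    """Plan (2D) area of a triangle given (x, y) vertices."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

def cut_fill_volume(triangles):
    """Cut/fill volume over a TIN: each prism's volume is plan area
    times the mean of the three vertex height differences
    (design minus terrain). Exact for piecewise-linear surfaces
    sharing one triangulation."""
    cut = fill = 0.0
    for pts, dz in triangles:          # dz: height differences at vertices
        v = tri_area(*pts) * sum(dz) / 3.0
        if v >= 0:
            fill += v
        else:
            cut -= v
    return cut, fill

# Two triangles forming a 10 m x 10 m square with a uniform 2 m of fill
square = [
    (((0, 0), (10, 0), (10, 10)), (2.0, 2.0, 2.0)),
    (((0, 0), (10, 10), (0, 10)), (2.0, 2.0, 2.0)),
]
print(cut_fill_volume(square))   # expect (0.0, 200.0)
```

In practice the two surfaces must first be merged into a common triangulation so that both are linear over every triangle; that merging step is where most of the implementation effort lies.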
Hybrid reduced order modeling for assembly calculations
Energy Technology Data Exchange (ETDEWEB)
Bang, Y.; Abdel-Khalik, H. S. [North Carolina State University, Raleigh, NC (United States); Jessee, M. A.; Mertyurek, U. [Oak Ridge National Laboratory, Oak Ridge, TN (United States)
2013-07-01
While the accuracy of assembly calculations has considerably improved due to the increase in computer power, enabling a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits their full effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system. (authors)
Model and calculations for net infiltration
International Nuclear Information System (INIS)
Childs, S.W.; Long, A.
1992-01-01
In this paper a conceptual model for calculating net infiltration is developed and implemented. It incorporates the following important factors: variability of climate over the next 10,000 years, areal variability of net infiltration, and important soil/plant factors that affect the soil water budget of desert soils. Model results are expressed in terms of occurrence probabilities for time periods. In addition, the variability of net infiltration is demonstrated, both as change over time and as differences among the three soil/hydrologic units present at the modeled site.
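Occurrence probabilities of the kind described above are naturally estimated by Monte Carlo sampling over the uncertain climate and soil/plant factors. The distributions, parameter values, and the 5 mm/yr threshold below are purely illustrative, not the paper's:

```python
import random

def simulate(n, seed=42):
    """Monte Carlo sketch: sample annual precipitation and an
    evapotranspiration fraction, and estimate the probability that
    net infiltration exceeds 5 mm/yr (all numbers illustrative)."""
    random.seed(seed)
    exceed = 0
    for _ in range(n):
        precip = random.gauss(150.0, 40.0)        # mm/yr
        et_fraction = random.uniform(0.90, 1.00)  # fraction lost to ET
        net = max(0.0, precip * (1.0 - et_fraction))
        if net > 5.0:
            exceed += 1
    return exceed / n

p = simulate(100_000)
print(round(p, 2))
```

Repeating the estimate for different soil/hydrologic units (different parameter distributions) and different time windows yields exactly the kind of occurrence-probability tables the abstract describes.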
Olkiluoto surface hydrological modelling: Update 2012 including salt transport modelling
International Nuclear Information System (INIS)
Karvonen, T.
2013-11-01
Posiva Oy is responsible for implementing a final disposal program for the spent nuclear fuel of its owners Teollisuuden Voima Oyj and Fortum Power and Heat Oy. The spent nuclear fuel is planned to be disposed of at a depth of about 400-450 meters in the crystalline bedrock at the Olkiluoto site. Leakages located at or close to the spent fuel repository may give rise to upconing of deep, highly saline groundwater, which is a concern with regard to the performance of the tunnel backfill material after the closure of the tunnels. Therefore a salt transport sub-model was added to the Olkiluoto surface hydrological model (SHYD). The other improvements include an update of the particle tracking algorithm and the possibility of estimating the influence of open drillholes in a case where overpressure in inflatable packers decreases, causing a hydraulic short-circuit between hydrogeological zones HZ19 and HZ20 along the drillhole. Four new hydrogeological zones, HZ056, HZ146, BFZ100 and HZ039, were added to the model. In addition, zones HZ20A and HZ20B intersect with each other in the new structure model, which influences salinity upconing caused by leakages in shafts. The aim of modelling the long-term influence of ONKALO, the shafts and the repository tunnels is to provide computational results that can be used to suggest limits for allowed leakages. The model input data included all the existing leakages into ONKALO (35-38 l/min) and the shafts under present-day conditions. The influence of the shafts was computed using eight different values for total shaft leakage: 5, 11, 20, 30, 40, 50, 60 and 70 l/min. The selection of the leakage criteria for the shafts was influenced by the fact that upconing of saline water increases TDS values close to the repository areas, although HZ20B does not intersect any deposition tunnels. The total limit for all leakages was suggested to be 120 l/min. The limit for the HZ20 zones was proposed to be 40 l/min: about 5 l/min for the present-day leakages into the access tunnel, 25 l/min from
Olkiluoto surface hydrological modelling: Update 2012 including salt transport modelling
Energy Technology Data Exchange (ETDEWEB)
Karvonen, T. [WaterHope, Helsinki (Finland)
2013-11-15
Posiva Oy is responsible for implementing a final disposal program for the spent nuclear fuel of its owners Teollisuuden Voima Oyj and Fortum Power and Heat Oy. The spent nuclear fuel is planned to be disposed of at a depth of about 400-450 meters in the crystalline bedrock at the Olkiluoto site. Leakages located at or close to the spent fuel repository may give rise to upconing of deep, highly saline groundwater, which is a concern with regard to the performance of the tunnel backfill material after the closure of the tunnels. Therefore a salt transport sub-model was added to the Olkiluoto surface hydrological model (SHYD). The other improvements include an update of the particle tracking algorithm and the possibility of estimating the influence of open drillholes in a case where overpressure in inflatable packers decreases, causing a hydraulic short-circuit between hydrogeological zones HZ19 and HZ20 along the drillhole. Four new hydrogeological zones, HZ056, HZ146, BFZ100 and HZ039, were added to the model. In addition, zones HZ20A and HZ20B intersect with each other in the new structure model, which influences salinity upconing caused by leakages in shafts. The aim of modelling the long-term influence of ONKALO, the shafts and the repository tunnels is to provide computational results that can be used to suggest limits for allowed leakages. The model input data included all the existing leakages into ONKALO (35-38 l/min) and the shafts under present-day conditions. The influence of the shafts was computed using eight different values for total shaft leakage: 5, 11, 20, 30, 40, 50, 60 and 70 l/min. The selection of the leakage criteria for the shafts was influenced by the fact that upconing of saline water increases TDS values close to the repository areas, although HZ20B does not intersect any deposition tunnels. The total limit for all leakages was suggested to be 120 l/min. The limit for the HZ20 zones was proposed to be 40 l/min: about 5 l/min for the present-day leakages into the access tunnel, 25 l/min from
Reactor Thermal Hydraulic Numerical Calculation And Modeling
International Nuclear Information System (INIS)
Duong Ngoc Hai; Dang The Ba
2008-01-01
In this paper, results are presented from analyses of reactor thermal hydraulic states using numerical codes such as COOLOD, EUREKA and RELAP5. The calculations and analyses of the reactor thermal hydraulic states and safety were implemented using the different codes. The numerical results, which were compared with each other, with experimental measurements from the Dalat (Vietnam) research reactor, and with published results, demonstrate the appropriateness and capability of these codes for analysing the relevant cases. (author)
Recent Updates to the System Advisor Model (SAM)
Energy Technology Data Exchange (ETDEWEB)
DiOrio, Nicholas A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]
2018-02-14
The System Advisor Model (SAM) is a mature suite of techno-economic models for many renewable energy technologies that can be downloaded for free as a desktop application or software development kit. SAM is used for system-level modeling, including generating performance and financial projections. Recent updates include the release of the code as an open source project on GitHub. Other additions that will be covered include the ability to download data directly into SAM from the National Solar Radiation Database (NSRDB) and updates to a user-interface macro that assists with PV system sizing. A brief update on SAM's battery model and its integration with the detailed photovoltaic model will also be discussed. Finally, an outline of planned work for the next year will be presented, including the addition of a bifacial model, support for multiple MPPT inputs for detailed inverter modeling, and the addition of a model for inverter thermal behavior.
The Potosi Reservoir Model 2013c, Property Modeling Update
Energy Technology Data Exchange (ETDEWEB)
Adushita, Yasmin; Smith, Valerie; Leetaru, Hannes
2014-09-30
property modeling workflows and layering. This model was retained as the base case. In the preceding Task [1], the Potosi reservoir model was updated to take into account the new data from Verification Well #2 (VW2), which was drilled in 2012. The porosity and permeability modeling was revised to take into account the log data from the new well. Revisions of the 2010 modeling assumptions were also made for relative permeability, capillary pressures, formation water salinity, and the maximum allowable well bottomhole pressure. Dynamic simulations were run using the injection target of 3.5 million tons per annum (3.2 MTPA) for 30 years. This dynamic model was named Potosi Dynamic Model 2013b. In this Task, a new property modeling workflow was applied, in which seismic inversion data guided the porosity mapping and geobody extraction. The static reservoir model was fully guided by PorosityCube interpretations and derivations coupled with petrophysical logs from three wells. The two main assumptions are that porosity features in the PorosityCube that correlate with lost circulation zones represent vugular zones, and that these vugular zones are laterally continuous. Extrapolation was done carefully to populate the vugular facies and their corresponding properties outside the seismic footprint up to the boundary of the 30 by 30 mi (48 by 48 km) model. Dynamic simulations were also run using the injection target of 3.5 million tons per annum (3.2 MTPA) for 30 years. This new dynamic model was named Potosi Dynamic Model 2013c. Reservoir simulation with the latest model gives a cumulative injection of 43 million tons (39 MT) in 30 years with a single well, which corresponds to 40% of the injection target. The injection rate is approx. 3.2 MTPA in the first six months while the well is injecting into the surrounding vugs, and declines rapidly to 1.8 million tons per annum (1.6 MTPA) in year 3 once the surrounding vugs are full and the CO2 starts to reach the matrix.
After, the injection
Matrix model calculations beyond the spherical limit
International Nuclear Information System (INIS)
Ambjoern, J.; Chekhov, L.; Kristjansen, C.F.; Makeenko, Yu.
1993-01-01
We propose an improved iterative scheme for calculating higher genus contributions to the multi-loop (or multi-point) correlators and the partition function of the hermitian one matrix model. We present explicit results up to genus two. We develop a version which gives directly the result in the double scaling limit and present explicit results up to genus four. Using the latter version we prove that the hermitian and the complex matrix model are equivalent in the double scaling limit and that in this limit they are both equivalent to the Kontsevich model. We discuss how our results away from the double scaling limit are related to the structure of moduli space. (orig.)
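For reference, the objects computed in this abstract are the hermitian one-matrix model partition function and its genus expansion; in standard (convention-dependent) notation:

```latex
Z_N = \int \mathrm{d}M \, \exp\!\bigl(-N \operatorname{tr} V(M)\bigr),
\qquad
V(M) = \sum_{j \ge 1} \frac{t_j}{j} M^j,
\qquad
F = \log Z_N = \sum_{g \ge 0} N^{2-2g} F_g ,
```

where M is an N x N hermitian matrix and F_g collects the genus-g contributions; the multi-loop correlators mentioned above follow by differentiating F with respect to the couplings t_j.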
Development of Dynamic Environmental Effect Calculation Model
International Nuclear Information System (INIS)
Jeong, Chang Joon; Ko, Won Il
2010-01-01
The short-term and long-term decay heat and the radioactivity are considered the main environmental parameters of spent fuel (SF) and high-level waste (HLW). In this study, dynamic calculation models for the radioactivity, short-term decay heat, and long-term heat load of SF are developed and incorporated into the Doneness code. Spent fuel accumulation has become a major issue for the sustainable operation of nuclear power plants. If a once-through fuel cycle is selected, the SF will be disposed of in a repository. Otherwise, in the case of a fast reactor or reuse cycle, the SF will be reprocessed and the high-level waste will be disposed of.
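A minimal sketch of this kind of dynamic decay-heat bookkeeping, using an invented two-term exponential fit (the coefficients are illustrative, not the data of the code described in the abstract):

```python
import math

# Hypothetical two-term exponential fit for spent-fuel decay heat:
# P(t) = P0 * sum(a_i * exp(-lam_i * t)); coefficients are illustrative only.
FIT_TERMS = [(0.06, 1.0e-1), (0.02, 1.0e-3)]  # (a_i, lam_i [1/day])

def decay_heat(p0_mw, t_days):
    """Decay heat (MW) t_days after shutdown of a reactor of power p0_mw."""
    return p0_mw * sum(a * math.exp(-lam * t_days) for a, lam in FIT_TERMS)

# Heat load declines monotonically with cooling time.
print(decay_heat(3000.0, 10.0) > decay_heat(3000.0, 100.0))  # True
```

A real calculation would track nuclide inventories (e.g. via Bateman equations) rather than a fixed fit, but the time-dependent heat-load query has this shape.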
Model Hosting for continuous updating and transparent Water Resources Management
Jódar, Jorge; Almolda, Xavier; Batlle, Francisco; Carrera, Jesús
2013-04-01
Numerical models have become a standard tool for water resources management. They are required for water volume bookkeeping and help in decision making. Nevertheless, numerical models are complex and can be used only by highly qualified technicians, who are often far from the decision makers. Moreover, they need to be maintained. That is, they require updating of their state by assimilation of measurements, natural and anthropic actions (e.g., pumping and weather data), and model parameters. Worse, their very complexity implies that they are viewed as obscure and remote, which hinders transparency and governance. We propose internet model hosting as an alternative to overcome these limitations. The basic idea is to keep the model hosted in the cloud. The model is updated as new data (measurements and external forcing) become available, which ensures continuous maintenance with a minimal human cost (only required to address modelling problems). Internet access facilitates model use not only by modellers, but also by people responsible for data gathering and by water managers. As a result, the model becomes an institutional tool shared by water agencies to help them not only in decision making for sustainable management of water resources, but also in generating a common discussion platform. By promoting intra-agency sharing, the model becomes the common official position of the agency, which facilitates commitment to its adopted decisions regarding water management. Moreover, by facilitating access to stakeholders and the general public, the state of the aquifer and the impacts of alternative decisions become transparent. We have developed a tool (GAC, Global Aquifer Control) to address the above requirements. The application has been developed using Cloud Computing technologies, which facilitate the above operations. That is, GAC automatically updates the numerical models with the new available measurements, and then simulates numerous management options.
Cost Calculation Model for Logistics Service Providers
Directory of Open Access Journals (Sweden)
Zoltán Bokor
2012-11-01
The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to obtain reliable and accurate costing information to attain efficient resource allocation within logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in the case of complex and heterogeneous logistics service structures. This paper therefore explores ways of improving the cost calculation regimes of logistics service providers and shows how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested using estimated input data. Based on the theoretical findings and the experiences of the pilot project, it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which enhances the effectiveness of logistics planning and controlling significantly.
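The core of a full cost allocation can be sketched as follows; the service names, cost pools and figures are invented for illustration, not taken from the paper:

```python
# Minimal sketch of full cost allocation: indirect costs are allocated
# to services in proportion to each service's usage of the cost centres.
direct_cost = {"warehousing": 400.0, "transport": 600.0}   # per service
indirect_pools = {"IT": 200.0, "admin": 100.0}             # cost centres
# Usage shares of each pool by each service (each row sums to 1).
usage = {
    "IT":    {"warehousing": 0.25, "transport": 0.75},
    "admin": {"warehousing": 0.50, "transport": 0.50},
}

def full_cost(service):
    allocated = sum(pool_cost * usage[pool][service]
                    for pool, pool_cost in indirect_pools.items())
    return direct_cost[service] + allocated

costs = {s: full_cost(s) for s in direct_cost}
print(costs)  # warehousing: 400+50+50=500, transport: 600+150+50=800
```

A multi-level scheme repeats this step down a hierarchy of cost centres (e.g. support centres allocated to primary centres first), but each level is the same proportional allocation shown here.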
Effective hamiltonian calculations using incomplete model spaces
International Nuclear Information System (INIS)
Koch, S.; Mukherjee, D.
1987-01-01
It appears that the danger of encountering "intruder states" is substantially reduced if an effective hamiltonian formalism is developed for incomplete model spaces (IMS). In a Fock-space approach, the proof of a "connected diagram theorem" is fairly straightforward with an exponential-type ansatz for the wave-operator W, provided the normalization chosen for W is separable. Operationally, one just needs a suitable categorization of the Fock-space operators into "diagonal" and "non-diagonal" parts that is a generalization of the corresponding procedure for the complete model space. The formalism is applied to prototypical 2-electron systems. The calculations have been performed on the Cyber 205 super-computer. The authors paid special attention to an efficient vectorization for the construction and solution of the resulting coupled non-linear equations.
Model for fission-product calculations
International Nuclear Information System (INIS)
Smith, A.B.
1984-01-01
Many fission-product cross sections remain unmeasurable, so considerable reliance must be placed upon calculational interpolation and extrapolation from the few available measured cross sections. The vehicle, particularly for the lighter fission products, is the conventional optical-statistical model. The applied goals generally are: capture cross sections to 7-10% accuracy and inelastic-scattering cross sections to 25-50%. Comparisons of recent evaluations and experimental results indicate that these goals too often are far from being met, particularly in the area of inelastic scattering, and some of the evaluated fission-product cross sections are simply physically unreasonable. It is difficult to avoid the conclusion that the models employed in many of the evaluations are inappropriate and/or inappropriately used. To alleviate this unfortunate situation, a regional optical-statistical model (OM) was sought with the goal of quantitative prediction of the cross sections of the lighter-mass (Z = 30-51) fission products. The first step toward that goal was the establishment of a reliable experimental data base consisting of energy-averaged neutron total and differential-scattering cross sections. The second step was the deduction of a regional model from the experimental data. It was assumed that a spherical OM is appropriate: a reasonable and practical assumption. The resulting OM was then verified against the measured data base. Finally, the physical character of the regional model is examined.
On-line Bayesian model updating for structural health monitoring
Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo
2018-03-01
Fatigue-induced cracking is a dangerous failure mechanism which affects mechanical components subject to alternating load cycles. System health monitoring should be adopted to identify cracks which can jeopardise the structure. Real-time damage detection may fail in the identification of cracks due to different sources of uncertainty which have been poorly assessed or even fully neglected. In this paper, a novel efficient and robust procedure is used for the detection of crack locations and lengths in mechanical components. A Bayesian model updating framework is employed, which allows accounting for relevant sources of uncertainty. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach, an emulator is adopted to replace the computationally costly Finite Element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises and imprecisions in the values of model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.
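A toy version of the Bayesian updating step described here, with a simple analytic function standing in for the finite element emulator and a random-walk Metropolis sampler (all names and numbers are illustrative, not the paper's setup):

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for the FE emulator: the measured natural
# frequency (Hz) drops linearly as the crack length a (mm) grows.
def emulator(a):
    return 120.0 * (1.0 - 0.02 * a)

MEASURED, SIGMA = 115.2, 0.5      # synthetic noisy measurement (Hz)

def log_post(a):
    if not 0.0 <= a <= 20.0:      # uniform prior on [0, 20] mm
        return -math.inf
    r = (MEASURED - emulator(a)) / SIGMA
    return -0.5 * r * r           # Gaussian likelihood

# Random-walk Metropolis over the crack length
a, samples = 5.0, []
for _ in range(20000):
    prop = a + random.gauss(0.0, 0.5)
    if math.log(random.random()) < log_post(prop) - log_post(a):
        a = prop
    samples.append(a)

post_mean = sum(samples[5000:]) / len(samples[5000:])
print(post_mean)  # posterior mean near 2.0 mm, the value consistent with MEASURED
```

The "most probable crack" of the abstract corresponds to the mode of this posterior; the emulator is what makes the many likelihood evaluations affordable.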
Acceleration methods and models in Sn calculations
International Nuclear Information System (INIS)
Sbaffoni, M.M.; Abbate, M.J.
1984-01-01
In some neutron transport problems solved by the discrete ordinates method, it is relatively common to observe particularities such as the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The models commonly used for neutron flux calculation, and the acceleration methods included in the most widely used codes, were analysed with regard to their use in problems characterized by a strong upscattering effect. Some conclusions derived from this analysis are presented, as well as a new method to perform the upscattering scaling for solving the aforementioned problems in such cases. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider application. (Author)
Finite element model updating in structural dynamics using design sensitivity and optimisation
Calvi, Adriano
1998-01-01
Model updating is an important issue in engineering. In fact a well-correlated model provides for accurate evaluation of the structure loads and responses. The main objectives of the study were to exploit available optimisation programs to create an error localisation and updating procedure of finite element models that minimises the "error" between experimental and analytical modal data, addressing in particular the updating of large scale finite element models with se...
Hydrogeological structure model of the Olkiluoto Site. Update in 2010
International Nuclear Information System (INIS)
Vaittinen, T.; Ahokas, H.; Nummela, J.; Paulamaeki, S.
2011-09-01
As part of the programme for the final disposal of spent nuclear fuel, a hydrogeological structure model containing the hydraulically significant zones on Olkiluoto Island has been compiled. The structure model describes the deterministic site-scale zones that dominate the groundwater flow. The main objective of the study is to provide the geometry and the hydrogeological properties related to the groundwater flow for the zones and the sparsely fractured bedrock, to be used in the numerical modelling of groundwater flow and geochemical transport and thereby in the safety assessment. These zones should also be taken into account in the repository layout and in the construction of the disposal facility, and they have a long-term impact on the evolution of the site and the safety of the disposal repository. The previous hydrogeological model was compiled in 2008 and this updated version is based on data available at the end of May 2010. The updating was based on new hydrogeological observations and a systematic approach covering all drillholes to assess measured fracture transmissivities typical of the site-scale hydrogeological zones. New data consisted of head observations and interpreted pressure and flow responses caused by field activities. Essential background data for the modelling included the ductile deformation model and the site-scale brittle deformation zones modelled in the geological model version 2.0. The GSM combines both geological and geophysical investigation data on the site. As a result of the modelling campaign, hydrogeological zones HZ001, HZ008, HZ19A, HZ19B, HZ19C, HZ20A, HZ20B, HZ21, HZ21B, HZ039, HZ099, OL-BFZ100, and HZ146 were included in the structure model. Compared with the previous model, zone HZ004 was replaced with zone HZ146 and zone HZ039 was introduced for the first time. Alternative zone HZ21B was included in the basic model. For the modelled zones, both the zone intersections, describing the fractures with dominating groundwater
International Nuclear Information System (INIS)
Fu, Y; Xu, O; Yang, W; Zhou, L; Wang, J
2017-01-01
To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of the input and output variables are used as training samples to construct the model, which can reduce the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To address the high updating frequency of the model, a confidence value is introduced, which can be updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information, and reflect the process characteristics accurately. (paper)
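The time-difference plus moving-window idea can be sketched as below, with ordinary least squares standing in for PLS to keep the example dependency-free (the simulated process and all figures are illustrative):

```python
from collections import deque

# Fit on differences x_t - x_{t-1}, y_t - y_{t-1} over a moving window;
# differencing suppresses slow drift in the process.
WINDOW = 50
dx_win, dy_win = deque(maxlen=WINDOW), deque(maxlen=WINDOW)

def refit():
    """Least-squares slope b in dy = b * dx over the current window."""
    sxx = sum(x * x for x in dx_win)
    sxy = sum(x * y for x, y in zip(dx_win, dy_win))
    return sxy / sxx if sxx else 0.0

# Simulated drifting process: y = 2*x plus a slow drift term.
xs = [0.1 * t for t in range(200)]
ys = [2.0 * x + 0.01 * t for t, x in enumerate(xs)]

b, preds = 0.0, []
for t in range(1, len(xs)):
    dx, dy = xs[t] - xs[t - 1], ys[t] - ys[t - 1]
    preds.append(ys[t - 1] + b * dx)   # predict from last output + difference model
    dx_win.append(dx)
    dy_win.append(dy)
    b = refit()                        # windowed (recursive-style) update

print(round(b, 2))  # 2.1 (true gain 2.0 plus the drift contribution)
```

The paper's confidence-triggered updating would wrap the `refit()` call in a performance check so the model is only re-estimated when prediction quality degrades.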
Body Dysmorphic Disorder: Neurobiological Features and an Updated Model
Li, Wei; Arienzo, Donatello; Feusner, Jamie D.
2013-01-01
Body Dysmorphic Disorder (BDD) affects approximately 2% of the population and involves misperceived defects of appearance along with obsessive preoccupation and compulsive behaviors. There is evidence of neurobiological abnormalities associated with symptoms in BDD, although research to date is still limited. This review covers the latest neuropsychological, genetic, neurochemical, psychophysical, and neuroimaging studies and synthesizes these findings into an updated (yet still preliminary) neurobiological model of the pathophysiology of BDD. We propose a model in which visual perceptual abnormalities, along with frontostriatal and limbic system dysfunction, may combine to contribute to the symptoms of impaired insight and obsessive thoughts and compulsive behaviors expressed in BDD. Further research is necessary to gain a greater understanding of the etiological formation of BDD symptoms and their evolution over time. PMID:25419211
Updates to Model Algorithms & Inputs for the Biogenic ...
We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and the simulations are evaluated against observations. This has resulted in improved model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.
Updated Reference Model for Heat Generation in the Lithosphere
Wipperfurth, S. A.; Sramek, O.; Roskovec, B.; Mantovani, F.; McDonough, W. F.
2017-12-01
Models integrating geophysics and geochemistry allow for characterization of the Earth's heat budget and geochemical evolution. Global lithospheric geophysical models are now constrained by surface and body wave data and are classified into several unique tectonic types. Global lithospheric geochemical models have evolved from petrological characterization of layers to a combination of petrologic and seismic constraints. Because of these advances regarding our knowledge of the lithosphere, it is necessary to create an updated chemical and physical reference model. We are developing a global lithospheric reference model based on LITHO1.0 (segmented into 1°lon x 1°lat x 9-layers) and seismological-geochemical relationships. Uncertainty assignments and correlations are assessed for its physical attributes, including layer thickness, Vp and Vs, and density. This approach yields uncertainties for the masses of the crust and lithospheric mantle. Heat producing element abundances (HPE: U, Th, and K) are ascribed to each volume element. These chemical attributes are based upon the composition of subducting sediment (sediment layers), composition of surface rocks (upper crust), a combination of petrologic and seismic correlations (middle and lower crust), and a compilation of xenolith data (lithospheric mantle). The HPE abundances are correlated within each voxel, but not vertically between layers. Efforts to provide correlation of abundances horizontally between each voxel are discussed. These models are used further to critically evaluate the bulk lithosphere heat production in the continents and the oceans. Cross-checks between our model and results from: 1) heat flux (Artemieva, 2006; Davies, 2013; Cammarano and Guerri, 2017), 2) gravity (Reguzzoni and Sampietro, 2015), and 3) geochemical and petrological models (Rudnick and Gao, 2014; Hacker et al. 2015) are performed.
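The per-voxel heat production computation described here reduces to mass times abundance-weighted heat-production constants. The sketch below uses standard per-element constants (e.g. Turcotte & Schubert) and illustrative upper-crust abundances, not the reference model's actual voxel values:

```python
# Heat production constants per kilogram of element.
H_U, H_TH, H_K = 9.46e-5, 2.64e-5, 3.48e-9   # W/kg of U, Th, natural K

def heat_production(mass_kg, u_ppm, th_ppm, k_wt_pct):
    """Total radiogenic power (W) of a voxel with the given HPE abundances."""
    return mass_kg * (u_ppm * 1e-6 * H_U
                      + th_ppm * 1e-6 * H_TH
                      + k_wt_pct * 1e-2 * H_K)

# Example voxel: 10^15 kg with 2.7 ppm U, 10.5 ppm Th, 2.32 wt% K
# (illustrative upper-crust averages).
p = heat_production(1e15, 2.7, 10.5, 2.32)
print(p)  # ~6.1e5 W
```

The full model sums this over all layers and voxels, with the abundance uncertainties and correlations discussed above propagated through the sum.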
Slab2 - Updated Subduction Zone Geometries and Modeling Tools
Moore, G.; Hayes, G. P.; Portner, D. E.; Furtney, M.; Flamme, H. E.; Hearne, M. G.
2017-12-01
The U.S. Geological Survey database of global subduction zone geometries (Slab1.0) is a highly utilized dataset that has been applied to a wide range of geophysical problems. In 2017, these models have been improved and expanded upon as part of the Slab2 modeling effort. With a new data-driven approach that can be applied to a broader range of tectonic settings and geophysical data sets, we have generated a model set that will serve as a more comprehensive, reliable, and reproducible resource for three-dimensional slab geometries at all of the world's convergent margins. The newly developed framework of Slab2 is guided by: (1) a large integrated dataset, consisting of a variety of geophysical sources (e.g., earthquake hypocenters, moment tensors, active-source seismic survey images of the shallow slab, tomography models, receiver functions, bathymetry, trench ages, and sediment thickness information); (2) a dynamic filtering scheme aimed at constraining incorporated seismicity to only slab-related events; (3) a 3-D data interpolation approach which captures both high-resolution shallow geometries and instances of slab rollback and overlap at depth; and (4) an algorithm which incorporates uncertainties of contributing datasets to identify the most probable surface depth over the extent of each subduction zone. Further layers will also be added to the base geometry dataset, such as historic moment release, earthquake tectonic provenance, and interface coupling. Along with access to several queryable data formats, all components have been wrapped into an open source library in Python, such that suites of updated models can be released as further data becomes available. This presentation will discuss the extent of Slab2 development, as well as the current availability of the model and modeling tools.
Updated Results from the Michigan Titan Thermospheric General Circulation Model (TTGCM)
Bell, J. M.; Bougher, S. W.; de Lahaye, V.; Waite, J. H.; Ridley, A.
2006-05-01
This paper presents updated results from the Michigan Titan Thermospheric General Circulation Model (TTGCM) that was recently unveiled in operational form (Bell et al 2005 Spring AGU). Since then, we have incorporated a suite of chemical reactions for the major neutral constituents in Titan's upper atmosphere (N2, CH4). Additionally, selected minor neutral constituents and major ionic species are also supported in the framework. At this time, HCN, which remains one of the critical thermally active species in the upper atmosphere, remains specified at all altitudes, utilizing profiles derived from recent Cassini-Huygens measurements. In addition to these improvements, a parallel effort is underway to develop a non-hydrostatic Titan Thermospheric General Circulation Model for further comparisons. In this work, we emphasize the impacts of self-consistent chemistry on the results of the updated TTGCM relative to its frozen-chemistry predecessor. Meanwhile, the thermosphere's thermodynamics remains determined by the interplay of solar EUV forcing and HCN rotational cooling, which is calculated by a full line-by-line radiative transfer routine along the lines of Yelle (1991) and Mueller-Wodarg (2000, 2002). In addition to these primary drivers, a treatment of magnetospheric heating is further tested. The model's results will be compared with both the Cassini INMS data and the model of Mueller-Wodarg (2000, 2002).
An Update to the NASA Reference Solar Sail Thrust Model
Heaton, Andrew F.; Artusio-Glimpse, Alexandra B.
2015-01-01
An optical model of solar sail material originally derived at JPL in 1978 has since served as the de facto standard for NASA and other solar sail researchers. The optical model includes terms for specular and diffuse reflection, thermal emission, and non-Lambertian diffuse reflection. The standard coefficients for these terms are based on tests of 2.5 micrometer Kapton sail material coated with 100 nm of aluminum on the front side and chromium on the back side. The original derivation of these coefficients was documented in an internal JPL technical memorandum that is no longer available. Additionally more recent optical testing has taken place and different materials have been used or are under consideration by various researchers for solar sails. Here, where possible, we re-derive the optical coefficients from the 1978 model and update them to accommodate newer test results and sail material. The source of the commonly used value for the front side non-Lambertian coefficient is not clear, so we investigate that coefficient in detail. Although this research is primarily designed to support the upcoming NASA NEA Scout and Lunar Flashlight solar sail missions, the results are also of interest to the wider solar sail community.
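A sketch of the kind of optical force model in question, using a commonly quoted coefficient set for aluminized Kapton and a McInnes-style flat-sail formulation; treat both the values and the formulation as nominal assumptions, not the re-derived coefficients of this paper:

```python
import math

# Commonly quoted optical coefficients for aluminized Kapton sail film.
R, S = 0.88, 0.94      # total reflectivity, specular fraction
EF, EB = 0.05, 0.55    # front / back emissivity
BF, BB = 0.79, 0.55    # front / back non-Lambertian coefficients

def sail_force_coeffs(alpha):
    """Normal and tangential force per unit P*A at cone angle alpha (rad)."""
    c, s = math.cos(alpha), math.sin(alpha)
    fn = c * ((1.0 + R * S) * c                       # specular reflection
              + BF * (1.0 - S) * R                    # diffuse reflection
              + (1.0 - R) * (EF * BF - EB * BB) / (EF + EB))  # thermal emission
    ft = c * (1.0 - R * S) * s
    return fn, ft

fn0, ft0 = sail_force_coeffs(0.0)
print(fn0, ft0)  # fn ~1.82 at normal incidence; tangential force is zero
```

An ideal perfectly reflecting sail would give fn = 2 cos^2(alpha) and ft = 0; the optical coefficients reduce the normal force and introduce the tangential component that matters for trajectory design.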
Concurrent algorithms for nuclear shell model calculations
International Nuclear Information System (INIS)
Mackenzie, L.M.; Macleod, A.M.; Berry, D.J.; Whitehead, R.R.
1988-01-01
The calculation of nuclear properties has proved very successful for light nuclei, but is limited by the power of the present generation of computers. Starting with an analysis of current techniques, this paper discusses how these can be modified to map parallelism inherent in the mathematics onto appropriate parallel machines. A prototype dedicated multiprocessor for nuclear structure calculations, designed and constructed by the authors, is described and evaluated. The approach adopted is discussed in the context of a number of generically similar algorithms. (orig.)
Prediction error, ketamine and psychosis: An updated model.
Corlett, Philip R; Honey, Garry D; Fletcher, Paul C
2016-11-01
In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.
Shell model calculations for exotic nuclei
International Nuclear Information System (INIS)
Brown, B.A.; Wildenthal, B.H.
1991-01-01
A review of the shell-model approach to understanding the properties of light exotic nuclei is given. Topics discussed include: binding energies in the p and p-sd model spaces and the sd and sd-pf model spaces; cross-shell excitations around 32Mg, including weak-coupling aspects and mechanisms for lowering the n(h-bar)omega excitations; beta decay properties of the neutron-rich sd model space, the p-sd and sd-pf model spaces, and the proton-rich sd model space; and Coulomb break-up cross sections. (G.P.) 76 refs.; 12 figs
Foothills model forest grizzly bear study : project update
Energy Technology Data Exchange (ETDEWEB)
NONE
2002-01-01
This report updates a five year study launched in 1999 to ensure the continued healthy existence of grizzly bears in west-central Alberta by integrating their needs into land management decisions. The objective was to gather better information and to develop computer-based maps and models regarding grizzly bear migration, habitat use and response to human activities. The study area covers 9,700 square km in west-central Alberta where 66 to 147 grizzly bears exist. During the first 3 field seasons, researchers captured and radio collared 60 bears. Researchers at the University of Calgary used remote sensing tools and satellite images to develop grizzly bear habitat maps. Collaborators at the University of Washington used trained dogs to find bear scat which was analyzed for DNA, stress levels and reproductive hormones. Resource Selection Function models are being developed by researchers at the University of Alberta to identify bear locations and to see how habitat is influenced by vegetation cover and oil, gas, forestry and mining activities. The health of the bears is being studied by researchers at the University of Saskatchewan and the Canadian Cooperative Wildlife Health Centre. The study has already advanced the scientific knowledge of grizzly bear behaviour. Preliminary results indicate that grizzlies continue to find mates, reproduce and gain weight and establish dens. These are all good indicators of a healthy population. Most bear deaths have been related to poaching. The study will continue for another two years. 1 fig.
Uncertainty calculation in transport models and forecasts
DEFF Research Database (Denmark)
Manzo, Stefano; Prato, Carlo Giacomo
Transport projects and policy evaluations are often based on transport model output, i.e. traffic flows and derived effects. However, the literature has shown that there is often a considerable difference between forecasted and observed traffic flows. This difference causes misallocation of (public...... implemented by using an approach based on stochastic techniques (Monte Carlo simulation and Bootstrap re-sampling) or scenario analysis combined with model sensitivity tests. Two transport models are used as case studies: the Næstved model and the Danish National Transport Model. The first paper...... in a four-stage transport model related to different variable distributions (to be used in a Monte Carlo simulation procedure), assignment procedures and levels of congestion, at both the link and the network level. The analysis used as case study the Næstved model, referring to the Danish town of Næstved
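The stochastic approach mentioned above can be illustrated with a minimal Monte Carlo sketch. The toy link-flow model, its parameter values and the demand distribution below are illustrative assumptions, not taken from the study itself:

```python
import random
import statistics

def link_flow(demand, capacity):
    # Toy link model: flow saturates at capacity (illustrative assumption).
    return min(demand, capacity)

def monte_carlo_flows(n_draws, demand_mean, demand_sd, capacity, seed=42):
    # Propagate input (demand) uncertainty through the model by sampling the
    # uncertain input repeatedly and collecting the model output distribution.
    rng = random.Random(seed)
    flows = []
    for _ in range(n_draws):
        demand = max(0.0, rng.gauss(demand_mean, demand_sd))
        flows.append(link_flow(demand, capacity))
    return flows

flows = monte_carlo_flows(5000, demand_mean=1800.0, demand_sd=300.0,
                          capacity=2000.0)
mean_flow = statistics.mean(flows)
spread = statistics.stdev(flows)
```

The spread of the resulting flow distribution is the forecast uncertainty attributable to the sampled input; a real four-stage model would replace `link_flow` and many more inputs would be sampled jointly.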
A review on model updating of joint structure for dynamic analysis purpose
Directory of Open Access Journals (Sweden)
Zahari S.N.
2016-01-01
Full Text Available Structural joints provide connections between structural elements (beams, plates, etc.) in order to construct a whole assembled structure. There are many types of structural joints, such as bolted, riveted and welded joints. Joint structures contribute significantly to the structural stiffness and dynamic behaviour of structures; hence the main objectives of this paper are to review methods of model updating for joint structures and to discuss guidelines for performing model updating for dynamic analysis purposes. This review first outlines some of the existing finite element modelling work on joint structures. Experimental modal analysis is the next step, used to obtain the modal parameters (natural frequencies and mode shapes) needed to validate the model and quantify the discrepancy between the experimental results and their simulation counterparts. Model updating is then carried out to minimize the differences between the two sets of results. There are two methods of model updating: the direct method and the iterative method. Sensitivity analysis is employed using SOL200 in NASTRAN, selecting suitable updating parameters to avoid ill-conditioning problems. It is best to consider both geometrical and material properties in the updating procedure rather than only a number of geometrical properties alone. The iterative method is identified as the preferred model updating procedure because the physical meaning of the updated parameters is guaranteed, although it requires more computational effort than the direct method.
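The iterative (sensitivity-based) method described above can be sketched for the simplest possible case: a single stiffness parameter updated so that the model reproduces one measured natural frequency. The numbers are arbitrary illustrations, not values from the paper:

```python
import math

def natural_freq_hz(k, m):
    # Natural frequency of a single-DOF spring-mass system.
    return math.sqrt(k / m) / (2.0 * math.pi)

def update_stiffness(k_initial, m, f_measured, n_iter=20):
    # One-parameter Gauss-Newton iteration: drive the frequency residual to
    # zero using the analytical sensitivity df/dk = f / (2k).
    k = k_initial
    for _ in range(n_iter):
        f = natural_freq_hz(k, m)
        residual = f_measured - f
        sensitivity = f / (2.0 * k)
        k += residual / sensitivity
    return k

# "Measured" frequency generated from a known target stiffness for the demo.
k_true, m = 4.0e6, 10.0
f_meas = natural_freq_hz(k_true, m)
k_updated = update_stiffness(k_initial=2.0e6, m=m, f_measured=f_meas)
```

In practice the residual vector holds many frequencies and mode-shape terms, the sensitivity becomes a Jacobian matrix computed by the FE solver (e.g. NASTRAN SOL200), and regularization guards against the ill-conditioning the paper warns about.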
Summary of Expansions, Updates, and Results in GREET 2017 Suite of Models
Energy Technology Data Exchange (ETDEWEB)
Wang, Michael [Argonne National Lab. (ANL), Argonne, IL (United States); Elgowainy, Amgad [Argonne National Lab. (ANL), Argonne, IL (United States); Han, Jeongwoo [Argonne National Lab. (ANL), Argonne, IL (United States); Benavides, Pahola Thathiana [Argonne National Lab. (ANL), Argonne, IL (United States); Burnham, Andrew [Argonne National Lab. (ANL), Argonne, IL (United States); Cai, Hao [Argonne National Lab. (ANL), Argonne, IL (United States); Canter, Christina [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Rui [Argonne National Lab. (ANL), Argonne, IL (United States); Dai, Qiang [Argonne National Lab. (ANL), Argonne, IL (United States); Kelly, Jarod [Argonne National Lab. (ANL), Argonne, IL (United States); Lee, Dong-Yeon [Argonne National Lab. (ANL), Argonne, IL (United States); Lee, Uisung [Argonne National Lab. (ANL), Argonne, IL (United States); Li, Qianfeng [Argonne National Lab. (ANL), Argonne, IL (United States); Lu, Zifeng [Argonne National Lab. (ANL), Argonne, IL (United States); Qin, Zhangcai [Argonne National Lab. (ANL), Argonne, IL (United States); Sun, Pingping [Argonne National Lab. (ANL), Argonne, IL (United States); Supekar, Sarang D. [Argonne National Lab. (ANL), Argonne, IL (United States)
2017-11-01
This report provides a technical summary of the expansions and updates to the 2017 release of Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET®) model, including references and links to key technical documents related to these expansions and updates. The GREET 2017 release includes an updated version of the GREET1 (the fuel-cycle GREET model) and GREET2 (the vehicle-cycle GREET model), both in the Microsoft Excel platform and in the GREET.net modeling platform. Figure 1 shows the structure of the GREET Excel modeling platform. The .net platform integrates all GREET modules together seamlessly.
"Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"
We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and the simulations are evaluated against observatio...
A revised model of Jupiter's inner electron belts: Updating the Divine radiation model
Garrett, Henry B.; Levin, Steven M.; Bolton, Scott J.; Evans, Robin W.; Bhattacharya, Bidushi
2005-02-01
In 1983, Divine presented a comprehensive model of the Jovian charged particle environment that has long served as a reference for missions to Jupiter. However, in situ observations by Galileo and synchrotron observations from Earth indicate the need to update the model in the inner radiation zone. Specifically, a review of the model for 1 MeV data. Further modifications incorporating observations from the Galileo and Cassini spacecraft will be reported in the future.
Shell model calculations at superdeformed shapes
International Nuclear Information System (INIS)
Nazarewicz, W.; Dobaczewski, J.; Van Isacker, P.
1991-01-01
Spectroscopy of superdeformed nuclear states opens up an exciting possibility to probe new properties of the nuclear mean field. In particular, the unusually deformed atomic nucleus can serve as a microscopic laboratory of quantum-mechanical symmetries of a three dimensional harmonic oscillator. The classifications and coupling schemes characteristic of weakly deformed systems are expected to be modified in the superdeformed world. The ''superdeformed'' symmetries lead to new quantum numbers and new effective interactions that can be employed in microscopic calculations. New classification schemes can be directly related to certain geometrical properties of the nuclear shape. 63 refs., 7 figs
Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping
2018-05-01
Aiming at providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode shape to enhance model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate between the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. The weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is then used as the objective function in the proposed dynamic FE model updating procedure. A hybrid genetic/pattern-search optimization algorithm is adopted to perform the updating. Numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method updates the uncertain parameters with good robustness, and that the updated dynamic FE model of the beam structure, which correctly predicts both the natural frequencies and the local dynamic strains, is reliable for subsequent dynamic analysis and dynamic strength evaluation.
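A minimal sketch of the kind of objective function described above, combining relative natural-frequency residuals with a mode-shape correlation residual. For brevity it uses the ordinary modal assurance criterion (MAC) on whole strain mode vectors rather than the paper's coordinate-wise variant, and the weights and mode data are illustrative:

```python
import numpy as np

def mac(phi_a, phi_e):
    # Modal assurance criterion between analytical and experimental vectors.
    return abs(phi_a @ phi_e) ** 2 / ((phi_a @ phi_a) * (phi_e @ phi_e))

def updating_objective(freqs_a, freqs_e, modes_a, modes_e, w_f=1.0, w_m=1.0):
    # Weighted sum of squared relative frequency residuals and (1 - MAC)
    # residuals over the paired strain mode shapes.
    freq_res = sum(((fa - fe) / fe) ** 2 for fa, fe in zip(freqs_a, freqs_e))
    mac_res = sum(1.0 - mac(pa, pe) for pa, pe in zip(modes_a, modes_e))
    return w_f * freq_res + w_m * mac_res

# A perfectly updated model gives a zero objective value.
modes = [np.array([0.1, 0.5, 1.0]), np.array([0.4, -0.2, 0.7])]
perfect = updating_objective([10.0, 25.0], [10.0, 25.0], modes, modes)
```

The genetic/pattern-search optimizer then minimizes this scalar over the uncertain FE parameters, re-running the eigensolution at each candidate point.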
Machine learning in updating predictive models of planning and scheduling transportation projects
1997-01-01
A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportations (KDOTs) internal management system is presented. The predictive models used...
Using temporal information to construct, update, and retrieve situation models of narratives
Rinck, M.; Hähnel, A.; Becker, G.
2001-01-01
Four experiments explored how readers use temporal information to construct and update situation models and retrieve them from memory. In Experiment 1, readers spontaneously constructed temporal and spatial situation models of single sentences. In Experiment 2, temporal inconsistencies caused
Models for Automated Tube Performance Calculations
International Nuclear Information System (INIS)
Brunkhorst, C.
2002-01-01
High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance
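As a stand-in for the graphically derived models in the abstract, a triode's plate current can be approximated with the textbook three-halves-power law; the perveance and amplification factor below are arbitrary example values, not fitted to any real tube:

```python
def plate_current(v_grid, v_plate, perveance=2.0e-3, mu=100.0):
    # Idealized triode model: I_p = K * (V_g + V_p / mu)^(3/2) above cutoff,
    # where K is the perveance and mu the amplification factor.
    v_eff = v_grid + v_plate / mu
    if v_eff <= 0.0:
        return 0.0  # tube is cut off
    return perveance * v_eff ** 1.5
```

A closed-form model of this kind can serve as the sort of subroutine the abstract describes, returning a tube current for a given set of element voltages inside an iterative solution of the amplifier operating point for a specific loading impedance.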
Marwala, Tshilidzi
2010-01-01
Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies into a sound statistical basis; and • response surface methods and expectation m...
Beyond standard model calculations with Sherpa
Energy Technology Data Exchange (ETDEWEB)
Hoeche, Stefan [SLAC National Accelerator Laboratory, Menlo Park, CA (United States); Kuttimalai, Silvan [Durham University, Institute for Particle Physics Phenomenology, Durham (United Kingdom); Schumann, Steffen [Universitaet Goettingen, II. Physikalisches Institut, Goettingen (Germany); Siegert, Frank [Institut fuer Kern- und Teilchenphysik, TU Dresden, Dresden (Germany)
2015-03-01
We present a fully automated framework as part of the Sherpa event generator for the computation of tree-level cross sections in Beyond Standard Model scenarios, making use of model information given in the Universal FeynRules Output format. Elementary vertices are implemented into C++ code automatically and provided to the matrix-element generator Comix at runtime. Widths and branching ratios for unstable particles are computed from the same building blocks. The corresponding decays are simulated with spin correlations. Parton showers, QED radiation and hadronization are added by Sherpa, providing a full simulation of arbitrary BSM processes at the hadron level. (orig.)
Prediction-error variance in Bayesian model updating: a comparative study
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: (1) setting constant values empirically, (2) estimating them based on the goodness-of-fit of the measured data, and (3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of these different strategies for treating the prediction error variances on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are represented by three FE models: a true model, a model with added complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model
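Treatment (3) above, updating the prediction-error variance as an uncertain parameter alongside the model parameter, can be sketched with a toy one-parameter model. A plain random-walk Metropolis sampler with flat priors is used here in place of Transitional MCMC, and all data values are synthetic:

```python
import math
import random

def log_likelihood(theta, sigma2, xs, ys):
    # Gaussian likelihood from stochastic embedding; sigma2 is the
    # prediction-error variance, itself treated as an uncertain parameter.
    sse = sum((y - theta * x) ** 2 for x, y in zip(xs, ys))
    return (-0.5 * len(xs) * math.log(2.0 * math.pi * sigma2)
            - 0.5 * sse / sigma2)

def metropolis(xs, ys, n_steps=20000, seed=1):
    rng = random.Random(seed)
    theta, sigma2 = 0.0, 1.0
    ll = log_likelihood(theta, sigma2, xs, ys)
    samples = []
    for _ in range(n_steps):
        theta_new = theta + rng.gauss(0.0, 0.1)
        sigma2_new = abs(sigma2 + rng.gauss(0.0, 0.1))  # reflect to stay > 0
        if sigma2_new > 0.0:
            ll_new = log_likelihood(theta_new, sigma2_new, xs, ys)
            if math.log(rng.random()) < ll_new - ll:  # flat priors assumed
                theta, sigma2, ll = theta_new, sigma2_new, ll_new
        samples.append((theta, sigma2))
    return samples

# Synthetic data from theta = 2 with fixed small "measurement" errors.
xs = list(range(1, 11))
errs = [0.2, -0.1, 0.3, -0.25, 0.1, -0.3, 0.15, 0.05, -0.2, 0.25]
ys = [2.0 * x + e for x, e in zip(xs, errs)]
posterior = metropolis(xs, ys)
half = len(posterior) // 2  # discard burn-in
theta_est = sum(t for t, _ in posterior[half:]) / half
```

The posterior over `sigma2` then reflects how much misfit the data support, which is exactly the data-fit versus information trade-off the abstract describes.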
International Nuclear Information System (INIS)
Yan Guanghua; Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G
2008-01-01
The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, the in-air fluence distribution was derived from each of the head scatter source models while considering the combination of jaw and MLC openings. Fluence perturbations due to the tongue-and-groove effect, rounded leaf ends and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array, and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similarly good agreement could be achieved when the monitor backscatter effect was incorporated explicitly. All three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than with the three-source model due to their simplicity.
Energy Technology Data Exchange (ETDEWEB)
Yan Guanghua [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G [Department of Radiation Oncology, University of Florida, Gainesville, FL 32610-0385 (United States)
2008-04-21
The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, the in-air fluence distribution was derived from each of the head scatter source models while considering the combination of jaw and MLC openings. Fluence perturbations due to the tongue-and-groove effect, rounded leaf ends and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array, and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similarly good agreement could be achieved when the monitor backscatter effect was incorporated explicitly. All three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than with the three-source model due to their simplicity.
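The planar dose step described above, convolving an in-air fluence map with a pencil-beam kernel and then scoring agreement against measurement, can be sketched as follows. The Gaussian kernel and the plain dose-difference pass rate are simplified stand-ins for the experimentally determined kernel and the full gamma (3%/3 mm) criterion:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Normalized Gaussian stand-in for a measured pencil-beam kernel.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def convolve2d(fluence, kernel):
    # Direct zero-padded 2-D convolution (no SciPy dependency).
    n, m = fluence.shape
    ks = kernel.shape[0]
    pad = ks // 2
    padded = np.pad(fluence, pad)
    flipped = kernel[::-1, ::-1]
    dose = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            dose[i, j] = np.sum(padded[i:i + ks, j:j + ks] * flipped)
    return dose

def dose_diff_pass_rate(calc, meas, tol=0.03):
    # Fraction of points whose dose difference is within tol of the maximum
    # measured dose (dose-difference only, not the full gamma test).
    return float(np.mean(np.abs(calc - meas) <= tol * meas.max()))

fluence = np.ones((10, 10))  # uniform open-field in-air fluence
dose = convolve2d(fluence, gaussian_kernel(5, 1.0))
```

In the actual algorithm the fluence map already carries the jaw/MLC shaping and the tongue-and-groove, rounded-leaf-end and transmission perturbations before the convolution is applied.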
Model calculations for electrochemically etched neutron detectors
International Nuclear Information System (INIS)
Pitt, E.; Scharmann, A.; Werner, B.
1988-01-01
Electrochemical etching has been established as a common method for visualisation of nuclear tracks in solid state nuclear track detectors. Usually the Mason equation, which describes the amplification of the electrical field strength at the track tip, is used to explain the treeing effect of electrochemical etching. The yield of neutron-induced tracks from electrochemically etched CR-39 track detectors was investigated with respect to the electrical parameters. A linear dependence on the response from the macroscopic field strength was measured which could not be explained by the Mason equation. It was found that the reality of a recoil proton track in the detector does not fit the boundary conditions which are necessary when the Mason equation is used. An alternative model was introduced to describe the track and detector geometry in the case of a neutron track detector. The field strength at the track tip was estimated with this model and compared with the experimental data, yielding good agreement. (author)
Model calculation of thermal conductivity in antiferromagnets
Energy Technology Data Exchange (ETDEWEB)
Mikhail, I.F.I., E-mail: ifi_mikhail@hotmail.com; Ismail, I.M.M.; Ameen, M.
2015-11-01
A theoretical study is given of thermal conductivity in antiferromagnetic materials. The study has the advantage that the three-phonon interactions as well as the magnon–phonon interactions have been represented by model operators that preserve the important properties of the exact collision operators. A new expression for thermal conductivity has been derived that involves the same terms obtained in our previous work in addition to two new terms. These two terms represent the conservation and quasi-conservation of wavevector that occur in the three-phonon Normal and Umklapp processes, respectively. They gave appreciable contributions to the thermal conductivity and have led to an excellent quantitative agreement with the experimental measurements of the antiferromagnet FeCl{sub 2}. - Highlights: • The Boltzmann equations of phonons and magnons in antiferromagnets have been studied. • Model operators have been used to represent the magnon–phonon and three-phonon interactions. • The models possess the same important properties as the exact operators. • A new expression for the thermal conductivity has been derived. • The results showed a good quantitative agreement with the experimental data of FeCl{sub 2}.
Standard Model theory calculations and experimental tests
International Nuclear Information System (INIS)
Cacciari, M.; Hamel de Monchenault, G.
2015-01-01
To present knowledge, all the physics at the Large Hadron Collider (LHC) can be described in the framework of the Standard Model (SM) of particle physics. Indeed, the newly discovered Higgs boson with a mass close to 125 GeV seems to confirm the predictions of the SM. Thus, besides looking for direct manifestations of physics beyond the SM, one of the primary missions of the LHC is to perform ever more stringent tests of the SM. This requires not only improved theoretical developments to produce testable predictions and provide experiments with reliable event generators, but also sophisticated analysis techniques to overcome the formidable experimental environment of the LHC and perform precision measurements. In the first section, we describe the state of the art of the theoretical tools and event generators that are used to provide predictions for the production cross sections of the processes of interest. In section 2, inclusive cross section measurements with jets, leptons and vector bosons are presented. Examples of differential cross sections, charge asymmetries and the study of lepton pairs are proposed in section 3. Finally, in section 4, we report studies on the multiple production of gauge bosons and constraints on anomalous gauge couplings.
National Research Council Canada - National Science Library
Yokota, Miyo
2004-01-01
...)) for individual variation and a metabolic rate (M) correction during downhill movements. This study evaluated the updated version of the model incorporating these new features, using a dataset collected during U.S. Marine Corps (USMC...
Swartjes F; ECO
2003-01-01
Twenty scenarios, differing with respect to land use, soil type and contaminant, formed the basis for calculating human exposure from soil contaminants with the use of models contributed by seven European countries (one model per country). Here, the human exposures to children and children
Precipitates/Salts Model Calculations for Various Drift Temperature Environments
International Nuclear Information System (INIS)
Marnier, P.
2001-01-01
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation within a repository drift. This work is developed and documented using procedure AP-3.12Q, Calculations, in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The primary objective of this calculation is to predict the effects of evaporation on the abstracted water compositions established in ''EBS Incoming Water and Gas Composition Abstraction Calculations for Different Drift Temperature Environments'' (BSC 2001c). A secondary objective is to predict evaporation effects on observed Yucca Mountain waters for subsequent cement interaction calculations (BSC 2001d). The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b)
DIGA/NSL new calculational model in slab geometry
International Nuclear Information System (INIS)
Makai, M.; Gado, J.; Kereszturi, A.
1987-04-01
A new calculational model is presented based on a modified finite-difference algorithm, in which the coefficients are determined by means of the so-called gamma matrices. The DIGA program determines the gamma matrices and the NSL program realizes the modified finite difference model. Both programs assume slab cell geometry, DIGA assumes 2 energy groups and 3 diffusive regions. The DIGA/NSL programs serve to study the new calculational model. (author)
Pankatz, Klaus; Kerkweg, Astrid
2015-04-01
The work presented is part of the joint project "DecReg" ("Regional decadal predictability"), which is in turn part of the project "MiKlip" ("Decadal predictions"), an effort funded by the German Federal Ministry of Education and Research to improve decadal predictions on a global and regional scale. In MiKlip, one big question is whether regional climate modeling shows "added value", i.e. whether regional climate models (RCMs) produce better results than the driving models. However, the scope of this study is to look more closely at the setup-specific details of regional climate modeling. As regional models only simulate a small domain, they have to inherit information about the state of the atmosphere at their lateral boundaries from external data sets. There are many unresolved questions concerning the setup of lateral boundary conditions (LBC). External data sets come from global models or from global reanalysis data sets. A temporal resolution of six hours is common for this kind of data, mainly because storage space is a limiting factor, especially for climate simulations. Theoretically, however, the coupling frequency could be as high as the time step of the driving model. Meanwhile, it is unclear whether a more frequent update of the LBCs has a significant effect on the climate in the domain of the RCM. The first study examines how the RCM reacts to a higher update frequency. The study is based on a 30 year time slice experiment for three update frequencies of the LBC, namely six hours, one hour and six minutes. The evaluation of means, standard deviations and statistics of the climate in the regional domain shows only small deviations, though some are statistically significant, in 2 m temperature, sea level pressure and precipitation. The second part of the first study assesses parameters linked to cyclone activity, which is affected by the LBC update frequency. Differences in track density and strength are found when comparing the simulations
A universal calculation model for the controlled electric transmission line
International Nuclear Information System (INIS)
Zivzivadze, O.; Zivzivadze, L.
2009-01-01
Difficulties associated with the development of calculation models are analyzed, and ways of resolving these problems are given. A version of the equivalent circuit in the form of a six-pole network, whose parameters do not depend on the angle of shift Θ between the voltage vectors of the circuits, is offered. The interrelation between the parameters of the equivalent circuit and the transmission constants of the line was determined. A universal calculation model for the controlled electric transmission line was elaborated. The model allows calculating the stationary modes of lines of such classes at any angle of shift Θ between the circuits. (author)
The updated geodetic mean dynamic topography model – DTU15MDT
DEFF Research Database (Denmark)
Knudsen, Per; Andersen, Ole Baltazar; Maximenko, Nikolai
An update to the global mean dynamic topography model DTU13MDT is presented. For DTU15MDT the newer gravity model EIGEN-6C4 has been combined with the DTU15MSS mean sea surface model to construct this global mean dynamic topography model. The EIGEN-6C4 is derived using the full series of GOCE data...
DEFF Research Database (Denmark)
Hansen, Lisbet Sneftrup; Borup, Morten; Moller, Arne
2014-01-01
drainage models and reduce a number of unavoidable discrepancies between the model and reality. The latter can be achieved partly by inserting measured water levels from the sewer system into the model. This article describes how deterministic updating of model states in this manner affects a simulation...
ECP evaluation by water radiolysis and ECP model calculations
Energy Technology Data Exchange (ETDEWEB)
Hanawa, S.; Nakamura, T.; Uchida, S. [Japan Atomic Energy Agency, Tokai-mura, Ibaraki (Japan); Kus, P.; Vsolak, R.; Kysela, J. [Nuclear Research Inst. Rez plc, Rez (Czech Republic)
2010-07-01
In-pile ECP measurement data were evaluated by water radiolysis calculations. The data were obtained using an in-pile loop in an experimental reactor, LVR-15, at the Nuclear Research Institute (NRI) in the Czech Republic. Three types of ECP sensors, a Pt electrode, an Ag/AgCl sensor and a zirconia membrane sensor containing Ag/Ag{sub 2}O, were used at several levels of the irradiation rig at various neutron flux and gamma dose rates. For the water radiolysis calculation, the in-pile loop was modeled as several nodes following the design specifications and operating conditions such as flow rates, the dose rate distributions of neutrons and gamma-rays, and so on. The concentration of chemical species along the water flow was calculated by a radiolysis code, WRAC-J. The radiolysis calculation results were transferred to an ECP model, in which anodic and cathodic current densities were calculated with a combination of an electrochemistry model and an oxide film growth model. The measured ECP data were compared with the radiolysis/ECP calculation results, and the applicability of the radiolysis model was confirmed. In addition, an anomalous phenomenon appearing in the in-pile loop was also investigated by radiolysis calculations. (author)
Comparative calculations and validation studies with atmospheric dispersion models
International Nuclear Information System (INIS)
Paesler-Sauer, J.
1986-11-01
This report presents the results of an intercomparison of different mesoscale dispersion models and measured data from tracer experiments. The types of models taking part in the intercomparison are Gaussian-type, numerical Eulerian, and Lagrangian dispersion models. They are suited for calculating the atmospheric transport of radionuclides released from a nuclear installation. For the model intercomparison, artificial meteorological situations were defined and corresponding arithmetical problems were formulated. For the purpose of model validation, real dispersion situations from tracer experiments were used as input data for model calculations; in these cases calculated and measured time-integrated concentrations close to the ground are compared. Finally, a valuation of the models concerning their efficiency in solving the problems is carried out with the aid of objective methods. (orig./HP) [de
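The simplest member of the Gaussian-type model family evaluated here is the ground-reflecting Gaussian plume formula for the concentration downwind of a point release; with the source strength taken as the total released activity, the same expression gives the time-integrated concentration compared in the validation. The parameter values in the demonstration are arbitrary:

```python
import math

def plume_concentration(q, u, sigma_y, sigma_z, y, z, h):
    # Gaussian plume with total ground reflection: q is the source strength,
    # u the wind speed, (y, z) the crosswind/vertical receptor position,
    # h the effective release height, sigma_y/sigma_z the dispersion widths.
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline concentration for a ground-level release.
c0 = plume_concentration(q=1.0, u=1.0, sigma_y=10.0, sigma_z=5.0,
                         y=0.0, z=0.0, h=0.0)
```

Eulerian and Lagrangian models replace this closed form with numerical transport, which is what the intercomparison contrasts.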
Model calculations of nuclear data for biologically-important elements
International Nuclear Information System (INIS)
Chadwick, M.B.; Blann, M.; Reffo, G.; Young, P.G.
1994-05-01
We describe calculations of neutron-induced reactions on carbon and oxygen for incident energies up to 70 MeV, the relevant clinical energy in radiation neutron therapy. Our calculations using the FKK-GNASH, GNASH, and ALICE codes are compared with experimental measurements, and their usefulness for modeling reactions on biologically-important elements is assessed
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2011
Energy Technology Data Exchange (ETDEWEB)
David W. Nigg; Devin A. Steuhm
2011-09-01
, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system is being implemented and initial computational results have been obtained. This capability will have many applications in 2011 and beyond as a tool for understanding the margins of uncertainty in the new models as well as for validation experiment design and interpretation. Finally we note that although full implementation of the new computational models and protocols will extend over a period 3-4 years as noted above, interim applications in the much nearer term have already been demonstrated. In particular, these demonstrations included an analysis that was useful for understanding the cause of some issues in December 2009 that were triggered by a larger than acceptable discrepancy between the measured excess core reactivity and a calculated value that was based on the legacy computational methods. As the Modeling Update project proceeds we anticipate further such interim, informal, applications in parallel with formal qualification of the system under the applicable INL Quality Assurance procedures and standards.
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2011
International Nuclear Information System (INIS)
Nigg, David W.; Steuhm, Devin A.
2011-01-01
Furthermore, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system is being implemented and initial computational results have been obtained. This capability will have many applications in 2011 and beyond as a tool for understanding the margins of uncertainty in the new models as well as for validation experiment design and interpretation. Finally, we note that although full implementation of the new computational models and protocols will extend over a period of 3-4 years as noted above, interim applications in the much nearer term have already been demonstrated. In particular, these demonstrations included an analysis that was useful for understanding the cause of some issues in December 2009 that were triggered by a larger than acceptable discrepancy between the measured excess core reactivity and a calculated value that was based on the legacy computational methods. As the Modeling Update project proceeds, we anticipate further such interim, informal applications in parallel with formal qualification of the system under the applicable INL Quality Assurance procedures and standards.
In-Drift Microbial Communities Model Validation Calculations
Energy Technology Data Exchange (ETDEWEB)
D. M. Jolley
2001-09-24
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow testing of the results from the MING software code against both scientific measurements of microbial populations at the site and in the laboratory, and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in Sections 6.7.2 and 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
In-Drift Microbial Communities Model Validation Calculation
Energy Technology Data Exchange (ETDEWEB)
D. M. Jolley
2001-10-31
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow testing of the results from the MING software code against both scientific measurements of microbial populations at the site and in the laboratory, and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in Sections 6.7.2 and 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
D.M. Jolley
2001-12-18
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow testing of the results from the MING software code against both scientific measurements of microbial populations at the site and in the laboratory, and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in Sections 6.7.2 and 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
In-Drift Microbial Communities Model Validation Calculations
International Nuclear Information System (INIS)
Jolley, D.M.
2001-01-01
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow testing of the results from the MING software code against both scientific measurements of microbial populations at the site and in the laboratory, and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in Sections 6.7.2 and 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS
International Nuclear Information System (INIS)
D.M. Jolley
2001-01-01
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow testing of the results from the MING software code against both scientific measurements of microbial populations at the site and in the laboratory, and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in Sections 6.7.2 and 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
Hualien forced vibration calculation with a finite element model
International Nuclear Information System (INIS)
Wang, F.; Gantenbein, F.; Nedelec, M.; Duretz, Ch.
1995-01-01
The forced vibration tests of the Hualien mock-up were useful to validate finite element models developed for soil-structure interaction. In this paper the two sets of tests, with and without backfill, are analysed. The methods used are based on finite element modelling of the soil. Two approaches were considered: calculation of the soil impedance followed by calculation of the transfer functions with a model taking into account the superstructure and the impedance; and direct calculation of the soil-structure transfer functions, with the soil and the structure being represented in the same model by finite elements. Blind predictions and post-test calculations are presented and compared with the test results. (author). 4 refs., 8 figs., 2 tabs
The accuracy of heavy ion optical model calculations
International Nuclear Information System (INIS)
Kozik, T.
1980-01-01
The sources and magnitude of numerical errors in heavy ion optical model calculations are investigated in detail, using the example of ²⁰Ne + ²⁴Mg scattering at E_LAB = 100 MeV. (author)
NLOM - a program for nonlocal optical model calculations
International Nuclear Information System (INIS)
Kim, B.T.; Kyum, M.C.; Hong, S.W.; Park, M.H.; Udagawa, T.
1992-01-01
A FORTRAN program NLOM for nonlocal optical model calculations is described. It is based on a method recently developed by Kim and Udagawa, which utilizes the Lanczos technique for solving integral equations derived from the nonlocal Schroedinger equation. (orig.)
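The Lanczos reduction at the heart of such integral-equation solvers can be illustrated on a tiny symmetric matrix. The sketch below is a generic full Lanczos tridiagonalization in pure Python, not NLOM's FORTRAN implementation; the 3x3 matrix is invented for illustration.

```python
# Generic Lanczos tridiagonalization: an n x n symmetric matrix A is
# reduced to a tridiagonal matrix T (diagonal alpha, off-diagonal beta)
# that is similar to A when all n steps complete.  Pure Python.

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lanczos(A, v0):
    n = len(A)
    norm = dot(v0, v0) ** 0.5
    q = [x / norm for x in v0]          # first Lanczos vector
    q_prev = [0.0] * n
    alpha, beta = [], []
    b = 0.0
    for _ in range(n):
        w = mat_vec(A, q)
        a = dot(q, w)
        alpha.append(a)
        # Orthogonalize against the two previous Lanczos vectors.
        w = [wi - a * qi - b * pi for wi, qi, pi in zip(w, q, q_prev)]
        b = dot(w, w) ** 0.5
        if b < 1e-12:                   # invariant subspace found
            break
        beta.append(b)
        q_prev, q = q, [wi / b for wi in w]
    return alpha, beta

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
alpha, beta = lanczos(A, [1.0, 0.0, 0.0])
```

Since a full run produces a T similar to A, the trace is preserved (here sum(alpha) equals 2 + 3 + 4), which makes a convenient sanity check.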
Experimental evaluation of analytical penumbra calculation model for wobbled beams
International Nuclear Information System (INIS)
Kohno, Ryosuke; Kanematsu, Nobuyuki; Yusa, Ken; Kanai, Tatsuaki
2004-01-01
The goal of radiotherapy is not only to apply a high radiation dose to a tumor, but also to avoid side effects in the surrounding healthy tissue. Therefore, it is important for carbon-ion treatment planning to calculate accurately the effects of the lateral penumbra. In this article, for wobbled beams under various irradiation conditions, we focus on the lateral penumbras at several aperture positions of one side leaf of the multileaf collimator. The penumbras predicted by an analytical penumbra calculation model were compared with the measured results. The results calculated by the model for various conditions agreed well with the experimental ones. In conclusion, we found that the analytical penumbra calculation model could accurately predict the measured results for wobbled beams and is useful for carbon-ion treatment planning.
A methodology for constructing the calculation model of scientific spreadsheets
Vos, de M.; Wielemaker, J.; Schreiber, G.; Wielinga, B.; Top, J.L.
2015-01-01
Spreadsheet models are frequently used by scientists to analyze research data. These models are typically described in a paper or a report, which serves as the single source of information on the underlying research project. As the calculation workflow in these models is not made explicit, readers are
Mathematical models for calculating radiation dose to the fetus
International Nuclear Information System (INIS)
Watson, E.E.
1992-01-01
Estimates of radiation dose from radionuclides inside the body are calculated on the basis of energy deposition in mathematical models representing the organs and tissues of the human body. Complex models may be used with radiation transport codes to calculate the fraction of emitted energy that is absorbed in a target tissue even at a distance from the source. Other models may be simple geometric shapes for which absorbed fractions of energy have already been calculated. Models of Reference Man, the 15-year-old (Reference Woman), the 10-year-old, the five-year-old, the one-year-old, and the newborn have been developed and used for calculating specific absorbed fractions (absorbed fractions of energy per unit mass) for several different photon energies and many different source-target combinations. The Reference Woman model is adequate for calculating energy deposition in the uterus during the first few weeks of pregnancy. During the course of pregnancy, the embryo/fetus increases rapidly in size and thus requires several models for calculating absorbed fractions. In addition, the increases in size and changes in shape of the uterus and fetus result in the repositioning of the maternal organs and in different geometric relationships among the organs and the fetus. This is especially true of the excretory organs such as the urinary bladder and the various sections of the gastrointestinal tract. Several models have been developed for calculating absorbed fractions of energy in the fetus, including models of the uterus and fetus for each month of pregnancy and complete models of the pregnant woman at the end of each trimester. In this paper, the available models and the appropriate use of each will be discussed. (Author) 19 refs., 7 figs
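The bookkeeping behind such absorbed-fraction dose estimates (cumulated activity times mean energy per decay times absorbed fraction, divided by target mass) can be sketched in a few lines. This is a generic MIRD-style schema, and every number below is hypothetical, chosen only to exercise the arithmetic.

```python
# Hedged sketch of the absorbed-fraction dose schema: dose to a target
# equals cumulated activity x sum over emission types of
# (mean energy per decay x absorbed fraction) / target mass.
# All values are hypothetical and for illustration only.

def absorbed_dose(cumulated_activity_bq_s, emissions, target_mass_kg):
    """emissions: list of (mean_energy_J_per_decay, absorbed_fraction)."""
    energy_per_decay = sum(delta * phi for delta, phi in emissions)
    return cumulated_activity_bq_s * energy_per_decay / target_mass_kg  # Gy

# Hypothetical two-component source irradiating a 0.5 kg target:
emissions = [(1.6e-14, 0.30),   # penetrating (photon) component
             (8.0e-15, 1.00)]   # non-penetrating (local) component
dose = absorbed_dose(1.0e12, emissions, 0.5)
```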
Model for calculating the boron concentration in PWR type reactors
International Nuclear Information System (INIS)
Reis Martins Junior, L.L. dos; Vanni, E.A.
1986-01-01
A PWR boron concentration model has been developed for use with the RETRAN code. The concentration model calculates the boron mass balance in the primary circuit as the injected boron mixes and is transported through the circuit. RETRAN control blocks are used to calculate the boron concentration in fluid volumes during steady-state and transient conditions. The boron reactivity worth is obtained from the core concentration and used in the RETRAN point kinetics model. An FSAR-type analysis of a Steam Line Break Accident in the Angra I plant was selected to test the model, and the results obtained indicate successful performance. (Author)
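The kind of single-volume boron mass balance described can be sketched as a simple mixing equation. This is a generic explicit-Euler balance, not RETRAN's control-block formulation, and the flow rate, coolant mass and concentrations are invented illustrative numbers.

```python
# Minimal sketch of a well-mixed boron mass balance: one primary volume
# with an injection flow, so dC/dt = (flow/mass) * (C_inj - C).
# Generic illustration; not RETRAN's control-block formulation.

def mix_boron(c0_ppm, c_inj_ppm, flow_kg_s, mass_kg, dt_s, steps):
    """Explicit-Euler integration of the mixing equation."""
    c = c0_ppm
    for _ in range(steps):
        c += dt_s * (flow_kg_s / mass_kg) * (c_inj_ppm - c)
    return c

# Injecting 2000 ppm borated water into an initially unborated loop
# for 10 hours (invented flow and inventory values):
c_end = mix_boron(c0_ppm=0.0, c_inj_ppm=2000.0,
                  flow_kg_s=50.0, mass_kg=2.0e5, dt_s=1.0, steps=36000)
```

With a mixing time constant of mass/flow = 4000 s, ten hours is nine time constants, so the concentration has essentially equilibrated at the injection value.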
Ambient modal testing of a double-arch dam: the experimental campaign and model updating
International Nuclear Information System (INIS)
García-Palacios, Jaime H.; Soria, José M.; Díaz, Iván M.; Tirado-Andrés, Francisco
2016-01-01
A finite element model updating of a double-curvature arch dam (La Tajera, Spain) is carried out here using the modal parameters obtained from an operational modal analysis. That is, the system modal dampings, natural frequencies and mode shapes have been identified using output-only identification techniques under environmental loads (wind, vehicles). A finite element model of the dam-reservoir-foundation system was initially created. Then, a testing campaign was carried out at the most significant test points using high-sensitivity, wirelessly synchronized accelerometers. Afterwards, the updating of the initial model was done using a Monte Carlo based approach in order to match it to the recorded dynamic behaviour. The updated model may be used within a structural health monitoring system for damage detection or, for instance, for the analysis of the seismic response of the coupled arch dam-reservoir-foundation system. (paper)
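The Monte Carlo updating step can be illustrated on a one-degree-of-freedom stand-in for the dam model: draw candidate stiffnesses at random and keep the one whose predicted natural frequency best matches the identified value. All numbers below are invented for illustration.

```python
import math
import random

# Toy Monte Carlo model updating: sample candidate model parameters and
# keep the sample whose predicted natural frequency best matches the
# identified (measured) one.  A 1-DOF spring-mass system stands in for
# the dam FE model; mass, frequency and stiffness range are invented.

random.seed(42)

def natural_freq_hz(k, m):
    return math.sqrt(k / m) / (2.0 * math.pi)

mass = 1.0e6                  # kg, assumed known
f_identified = 2.5            # Hz, from operational modal analysis

best_k, best_err = None, float("inf")
for _ in range(20000):
    k = random.uniform(1.0e8, 1.0e9)   # candidate stiffness, N/m
    err = abs(natural_freq_hz(k, mass) - f_identified)
    if err < best_err:
        best_k, best_err = k, err

# best_k now reproduces the identified frequency closely.
```

Real updating matches many modes and mode shapes at once, but the accept-the-best-sample loop is the same idea.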
Updating and prospective validation of a prognostic model for high sickness absence
Roelen, C.A.M.; Heymans, M.W.; Twisk, J.W.R.; van Rhenen, W.; Pallesen, S.; Bjorvatn, B.; Moen, B.E.; Mageroy, N.
2015-01-01
Objectives To further develop and validate a Dutch prognostic model for high sickness absence (SA). Methods Three-wave longitudinal cohort study of 2,059 Norwegian nurses. The Dutch prognostic model was used to predict high SA among Norwegian nurses at wave 2. Subsequently, the model was updated by
A model to calculate the burn of gadolinium in PWR
International Nuclear Information System (INIS)
Sannazzaro, L.R.
1983-01-01
A cell model to calculate the burnup of a PWR fuel element containing gadolinium as a burnable poison, designed by KWU, is presented. With the proposed model, the burnup of the gadolinium isotopes is analyzed, as well as the effect of these isotopes on the fuel element behaviour. The results obtained with this cell model are compared with those obtained with a conventional cell model. (E.G.)
Modelling of groundwater flow and solute transport in Olkiluoto. Update 2008
International Nuclear Information System (INIS)
Loefman, J.; Pitkaenen, P.; Meszaros, F.; Keto, V.; Ahokas, H.
2009-10-01
Posiva Oy is preparing for the final disposal of spent nuclear fuel in the crystalline bedrock in Finland. Olkiluoto in Eurajoki has been selected as the primary site for the repository, subject to further detailed characterisation which is currently focused on the construction of an underground rock characterisation and research facility (the ONKALO). An essential part of the site investigation programme is analysis of the deep groundwater flow by means of numerical flow modelling. This study is the latest update concerning the site-scale flow modelling and is based on all the hydrogeological data gathered from field investigations by the end of 2007. The work is divided into two separate modelling tasks: 1) characterization of the baseline groundwater flow conditions before excavation of the ONKALO, and 2) a prediction/outcome (P/O) study of the potential hydrogeological disturbances due to the ONKALO. The flow model was calibrated by using all the available data that was appropriate for the applied, deterministic, equivalent porous medium (EPM) / dual-porosity (DP) approach. In the baseline modelling, calibration of the flow model focused on improving the agreement between the calculated results and the undisturbed observations. The calibration resulted in a satisfactory agreement with the measured pumping test responses, a very good overall agreement with the observed pressures in the deep drill holes and a fairly good agreement with the observed salinity. Some discrepancies still remained in a few single drill hole sections, because the fresh water infiltration in the model tends to dilute the groundwater too much at shallow depths. In the P/O calculations the flow model was further calibrated by using the monitoring data on the ONKALO disturbances. Having significantly more information on the inflows to the tunnel (compared with the previous study) allowed better calibration of the model, which allowed it to capture very well the observed inflow, the
Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)
Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus
2017-07-01
The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect caused by the
[Purity Detection Model Update of Maize Seeds Based on Active Learning].
Tang, Jin-ya; Huang, Min; Zhu, Qi-bing
2015-08-01
Seed purity reflects the degree to which seed varieties show typical, consistent characteristics, so improving the reliability and accuracy of seed purity detection is important for guaranteeing seed quality. Hyperspectral imaging can reflect the internal and external characteristics of seeds at the same time, and has been widely used in nondestructive detection of agricultural products. The essence of nondestructive detection of agricultural products using hyperspectral imaging is to establish a mathematical model between the spectral information and the quality of the agricultural products. Since the spectral information is easily affected by the sample growth environment, the stability and generalization of the model weaken when test samples are harvested from a different origin or year. An active learning algorithm was investigated to add representative samples that expand the sample space of the original model, so as to enable rapid updating of the model. Random selection (RS) and the Kennard-Stone algorithm (KS) were performed to compare their model-update effect with that of the active learning algorithm. The experimental results indicated that, for different sample-set division ratios (1:1, 3:1, 4:1), the updated purity detection model for maize seeds from 2010, after adding 40 samples from 2011 selected by the active learning algorithm, increased its prediction accuracy for new 2011 samples from 47%, 33.75% and 49% to 98.89%, 98.33% and 98.33%. For the updated purity detection model of 2011, the prediction accuracy for new 2010 samples increased from 50.83%, 54.58% and 53.75% to 94.57%, 94.02% and 94.57% after adding 56 new samples from 2010. Meanwhile, the effect of the model updated by the active learning algorithm was better than that of RS and KS. Therefore, updating the purity detection model of maize seeds by an active learning algorithm is feasible.
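The sample-selection step that distinguishes these strategies can be sketched for the Kennard-Stone-style baseline: greedily pick the pool sample with the largest minimum distance to the samples already selected. The toy one-dimensional "spectra" below stand in for real hyperspectral feature vectors.

```python
# Greedy max-min (Kennard-Stone-flavoured) selection of representative
# samples from a pool.  Toy 1-D features stand in for hyperspectral
# data; with vectors, abs() would become a Euclidean distance.

def select_representative(pool, n_select):
    """Return n_select samples spread across the pool."""
    # Seed with the sample closest to the pool mean.
    mean = sum(pool) / len(pool)
    selected = [min(pool, key=lambda x: abs(x - mean))]
    while len(selected) < n_select:
        # Add the candidate farthest from everything already selected.
        candidate = max(
            (x for x in pool if x not in selected),
            key=lambda x: min(abs(x - s) for s in selected),
        )
        selected.append(candidate)
    return selected

pool = [0.0, 0.1, 0.2, 5.0, 5.1, 9.9, 10.0]
picked = select_representative(pool, 3)
# The three picks cover the low, middle and high clusters.
```

Active learning replaces the purely geometric criterion with one driven by the current model (e.g. prediction uncertainty), which is why it can outperform RS and KS.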
Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations
International Nuclear Information System (INIS)
Ehlert, Kurt; Loewe, Laurence
2014-01-01
To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases—with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise
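The core idea can be sketched with a toy two-reaction network sharing an ATP hub: cached propensities are refreshed in full only when the hub count has drifted past a relative threshold since the last refresh. This is an illustrative sketch with invented species counts and rate constants, not the paper's Sorting Direct Method implementation.

```python
import random

# Toy Lazy Updating inside a Gillespie-style direct method: propensities
# that depend on the hub (ATP) keep using a cached hub value atp_ref and
# are fully recomputed only when ATP drifts >5% from atp_ref.  Invented
# toy network; not the paper's Sorting Direct Method implementation.

random.seed(1)

atp, a, b = 10000, 500, 500
k1, k2 = 1e-6, 1e-6           # rate constants for A+ATP and B+ATP steps
threshold = 0.05

atp_ref = atp                 # hub value baked into cached propensities
prop = [k1 * a * atp_ref, k2 * b * atp_ref]
t, refreshes = 0.0, 0

for _ in range(5000):
    total = prop[0] + prop[1]
    if total <= 0.0:
        break
    t += random.expovariate(total)            # time to next reaction
    r = 0 if random.random() * total < prop[0] else 1
    if r == 0:
        a -= 1
    else:
        b -= 1
    atp -= 1                                  # both reactions consume ATP
    if abs(atp - atp_ref) > threshold * atp_ref:
        # Lazy step crossed the threshold: full hub-dependent refresh.
        atp_ref = atp
        prop = [k1 * a * atp_ref, k2 * b * atp_ref]
        refreshes += 1
    else:
        # Cheap update: only the fired reaction's non-hub factor changed.
        prop[r] = (k1 * a if r == 0 else k2 * b) * atp_ref

# Only a handful of full refreshes for ~1000 reaction events.
```

The saving is that a change in the hub no longer forces an immediate recomputation of every dependent propensity, at the cost of a small, bounded staleness.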
batman: BAsic Transit Model cAlculatioN in Python
Kreidberg, Laura
2015-11-01
I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman .
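The geometry that batman generalizes can be shown with a toy uniform-disk light curve (no limb darkening) built from the circle-overlap area. This is not batman's algorithm or API, just the textbook uniform-source case, which batman's integration scheme extends to arbitrary limb darkening laws.

```python
import math

# Toy uniform-source transit light curve.  rp is the planet-to-star
# radius ratio and d the sky-projected center separation, both in units
# of the stellar radius.  Illustrative only; not batman's algorithm.

def uniform_disk_flux(d, rp):
    if d >= 1.0 + rp:            # out of transit
        return 1.0
    if d <= 1.0 - rp:            # planet fully inside the stellar disk
        return 1.0 - rp * rp
    # Ingress/egress: lens-shaped intersection of the two circles.
    k0 = math.acos((d * d + rp * rp - 1.0) / (2.0 * d * rp))
    k1 = math.acos((d * d + 1.0 - rp * rp) / (2.0 * d))
    s = math.sqrt(max(0.0, (4.0 * d * d - (1.0 + d * d - rp * rp) ** 2) / 4.0))
    overlap = rp * rp * k0 + k1 - s
    return 1.0 - overlap / math.pi

# Mid-transit depth for a Jupiter-like rp = 0.1 is rp**2 = 1%:
depth = 1.0 - uniform_disk_flux(0.0, 0.1)
```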
Comparison of Calculation Models for Bucket Foundation in Sand
DEFF Research Database (Denmark)
Vaitkunaite, Evelina; Molina, Salvador Devant; Ibsen, Lars Bo
The possibility of fast and rather precise preliminary offshore foundation design is desirable. The ultimate limit state of the bucket foundation is investigated using three different geotechnical calculation tools: [Ibsen 2001], an analytical method; LimitState:GEO; and Plaxis 3D. The study has focused on the resultant bearing capacity of variously embedded foundations in sand. The 2D models, [Ibsen 2001] and LimitState:GEO, can be used for the preliminary design because they are fast and result in a rather similar bearing capacity calculation compared with the finite element models of Plaxis 3D. The 2D models...
Statistical Model Calculations for (n,γ) Reactions
Directory of Open Access Journals (Sweden)
Beard Mary
2015-01-01
Hauser-Feshbach (HF) cross sections are of enormous importance for a wide range of applications, from waste transmutation and nuclear technologies to medical applications and nuclear astrophysics. It is a well-observed result that different nuclear input models sensitively affect HF cross section calculations. Less well known, however, are the effects on calculations originating from model-specific implementation details (such as the level density parameter, matching energy, back-shift and giant dipole parameters), as well as effects from non-model aspects, such as experimental data truncation and transmission function energy binning. To investigate these various aspects, Maxwellian-averaged neutron capture cross sections have been calculated for approximately 340 nuclei. The relative effects of these model details will be discussed.
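The Maxwellian-averaged cross sections mentioned follow the standard definition MACS = (2/√π) (kT)⁻² ∫ σ(E) E exp(−E/kT) dE. A convenient numerical check is that a 1/v cross section yields a MACS exactly equal to σ evaluated at E = kT. The integrator and the cross-section scale below are illustrative, not taken from the calculations described.

```python
import math

# Numerical check of the standard MACS definition:
#   MACS = (2/sqrt(pi)) * (kT)**-2 * Integral sigma(E) E exp(-E/kT) dE
# For a 1/v cross section, MACS = sigma(kT) exactly, giving a built-in
# correctness test.  kT = 30 keV is the canonical s-process comparison
# energy; sigma0 is an invented scale in millibarns.

def macs(sigma, kT, n=200000, emax_factor=40.0):
    """Trapezoidal evaluation of the MACS integral."""
    emax = emax_factor * kT
    h = emax / n
    total = 0.0
    for i in range(n + 1):
        e = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * sigma(e) * e * math.exp(-e / kT)
    return (2.0 / math.sqrt(math.pi)) * total * h / kT**2

kT = 30.0                      # keV
sigma0 = 100.0                 # mb at E = kT (illustrative)

def sigma_1_over_v(e):
    return sigma0 * math.sqrt(kT / e) if e > 0 else 0.0

value = macs(sigma_1_over_v, kT)   # should be close to sigma0
```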
Directory of Open Access Journals (Sweden)
Tetsuya Sugiyama
2014-08-01
Laser speckle flowgraphy (LSFG) allows for quantitative estimation of blood flow in the optic nerve head (ONH), choroid and retina, utilizing the laser speckle phenomenon. The basic technology and clinical applications of LSFG-NAVI, the updated model of LSFG, are summarized in this review. For the development of a commercial version of LSFG, the special area sensor was replaced by an ordinary charge-coupled device camera. In LSFG-NAVI, the mean blur rate (MBR) has been introduced as a new parameter. Compared to the original LSFG model, LSFG-NAVI demonstrates a better spatial resolution of the blood flow map of the human ocular fundus. The observation area is 24 times larger than in the original system. The analysis software can separately calculate MBRs in the blood vessels and tissues (capillaries) of an entire ONH, and the measurements have good reproducibility. The absolute values of MBR in the ONH have been shown to correlate linearly with capillary blood flow. Analysis of the MBR pulse waveform provides parameters including skew, blowout score, blowout time, rising and falling rates, flow acceleration index, acceleration time index, and resistivity index for comparing different eyes. Recently, there has been an increasing number of reports on the clinical applications of LSFG-NAVI to ocular diseases, including glaucoma and retinal and choroidal diseases.
Interacting boson model: Microscopic calculations for the mercury isotopes
Energy Technology Data Exchange (ETDEWEB)
Druce, C.H.; Pittel, S.; Barrett, B.R.; Duval, P.D.
1987-05-15
Microscopic calculations of the parameters of the proton-neutron interacting boson model (IBM-2) appropriate to the even Hg isotopes are reported. The calculations are based on the Otsuka-Arima-Iachello boson mapping procedure, which is briefly reviewed. Renormalization of the parameters due to exclusion of the l = 4 g boson is treated perturbatively. The calculations employ a semi-realistic shell-model Hamiltonian with no adjustable parameters. The calculated parameters of the IBM-2 Hamiltonian are used to generate energy spectra and electromagnetic transition probabilities, which are compared with experimental data and with the results of phenomenological fits. The overall agreement is reasonable, with some notable exceptions, which are discussed. Particular attention is focused on the parameters of the Majorana interaction and on the F-spin character of low-lying levels.
The interacting boson model: Microscopic calculations for the mercury isotopes
Druce, C. H.; Pittel, S.; Barrett, B. R.; Duval, P. D.
1987-05-01
Microscopic calculations of the parameters of the proton-neutron interacting boson model (IBM-2) appropriate to the even Hg isotopes are reported. The calculations are based on the Otsuka-Arima-Iachello boson mapping procedure, which is briefly reviewed. Renormalization of the parameters due to exclusion of the l = 4 g boson is treated perturbatively. The calculations employ a semi-realistic shell-model Hamiltonian with no adjustable parameters. The calculated parameters of the IBM-2 Hamiltonian are used to generate energy spectra and electromagnetic transition probabilities, which are compared with experimental data and with the results of phenomenological fits. The overall agreement is reasonable, with some notable exceptions, which are discussed. Particular attention is focused on the parameters of the Majorana interaction and on the F-spin character of low-lying levels.
Optimal Height Calculation and Modelling of Noise Barrier
Directory of Open Access Journals (Sweden)
Raimondas Grubliauskas
2011-04-01
Transport is one of the main sources of noise, with a particularly strong negative impact on the environment. In cities, one of the best methods to reduce the spread of noise into residential areas is a noise barrier. The article presents the adaptation of a noise reduction barrier, with noise distribution both calculated from empirical formulas and modelled numerically. The simulation of noise dispersion was performed with the CadnaA program, which allows modelling the noise levels of various developments under changing conditions. Calculation and simulation results were obtained by assessing the level of noise reduction using the same variables, and the investigation results are presented as noise distribution isolines. To select a suitable barrier height, results were calculated for barrier heights of 1, 4 and 15 meters. At the maximum overlap between calculated and simulated data, the level of noise reduction reached about 10%. Article in Lithuanian.
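The empirical formulas are not reproduced in the abstract; as a hedged sketch of the kind of screening calculation involved (the Maekawa-type approximation 10·log10(3 + 20N) and all function names here are assumptions, not the article's method), barrier attenuation can be estimated from the Fresnel number for each candidate height:

```python
import math

def path_difference(src, rcv, barrier_x, barrier_h):
    """Extra path length (m) over a thin barrier top versus the direct path.
    src and rcv are (x, z) positions; the barrier is a vertical screen at
    x = barrier_x with its top edge at height barrier_h."""
    top = (barrier_x, barrier_h)
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(src, top) + dist(top, rcv) - dist(src, rcv)

def barrier_attenuation_db(delta, freq_hz, c=343.0):
    """Maekawa-type screening attenuation 10*log10(3 + 20*N) in dB,
    with Fresnel number N = 2*delta/lambda (an empirical approximation)."""
    n_fresnel = 2.0 * delta * freq_hz / c
    return 10.0 * math.log10(3.0 + 20.0 * n_fresnel)

# Attenuation at 500 Hz for the barrier heights considered in the study
src, rcv = (-10.0, 1.0), (30.0, 1.5)
attenuations = {h: barrier_attenuation_db(path_difference(src, rcv, 0.0, h), 500.0)
                for h in (1.0, 4.0, 15.0)}
```

Taller barriers give a larger path difference and hence a larger Fresnel number, so the attenuation grows monotonically with height in this sketch.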
Recoil corrected bag model calculations for semileptonic weak decays
International Nuclear Information System (INIS)
Lie-Svendsen, Oe.; Hoegaasen, H.
1987-02-01
Recoil corrections to various model results for strangeness-changing weak decay amplitudes have been developed. It is shown that the spurious reference-frame dependence of earlier calculations is reduced. The second-class currents are generally less important than obtained by calculations in the static approximation. Theoretical results are compared to observations. The agreement is quite good, although the values of the Cabibbo angle obtained by fits to the decay rates are somewhat too large.
Optical model calculations with the code ECIS95
Energy Technology Data Exchange (ETDEWEB)
Carlson, B V [Departamento de Fisica, Instituto Tecnologico da Aeronautica, Centro Tecnico Aeroespacial (Brazil)
2001-12-15
The basic features of elastic and inelastic scattering within the framework of the spherical and deformed nuclear optical models are discussed. The calculation of cross sections, angular distributions and other scattering quantities using J. Raynal's code ECIS95 is described. The use of the ECIS method (Equations Couplees en Iterations Sequentielles) in coupled-channels and distorted-wave Born approximation calculations is also reviewed. (author)
Use of nuclear reaction models in cross section calculations
International Nuclear Information System (INIS)
Grimes, S.M.
1975-03-01
The design of fusion reactors will require information about a large number of neutron cross sections in the MeV region. Because of the obvious experimental difficulties, it is probable that not all of the cross sections of interest will be measured. Current direct and pre-equilibrium models can be used to calculate non-statistical contributions to neutron cross sections from information available from charged particle reaction studies; these are added to the calculated statistical contribution. Estimates of the reliability of such calculations can be derived from comparisons with the available data. (3 tables, 12 figures) (U.S.)
Realistic shell-model calculations for Sn isotopes
International Nuclear Information System (INIS)
Covello, A.; Andreozzi, F.; Coraggio, L.; Gargano, A.; Porrino, A.
1997-01-01
We report on a shell-model study of the Sn isotopes in which a realistic effective interaction derived from the Paris free nucleon-nucleon potential is employed. The calculations are performed within the framework of the seniority scheme by making use of the chain-calculation method. This provides practically exact solutions while cutting down the amount of computational work required by a standard seniority-truncated calculation. The behavior of the energy of several low-lying states in the isotopes with A ranging from 122 to 130 is presented and compared with the experimental one. (orig.)
Approximate dynamic fault tree calculations for modelling water supply risks
International Nuclear Information System (INIS)
Lindhe, Andreas; Norberg, Tommy; Rosén, Lars
2012-01-01
Traditional fault tree analysis is not always sufficient when analysing complex systems. To overcome the limitations dynamic fault tree (DFT) analysis is suggested in the literature as well as different approaches for how to solve DFTs. For added value in fault tree analysis, approximate DFT calculations based on a Markovian approach are presented and evaluated here. The approximate DFT calculations are performed using standard Monte Carlo simulations and do not require simulations of the full Markov models, which simplifies model building and in particular calculations. It is shown how to extend the calculations of the traditional OR- and AND-gates, so that information is available on the failure probability, the failure rate and the mean downtime at all levels in the fault tree. Two additional logic gates are presented that make it possible to model a system's ability to compensate for failures. This work was initiated to enable correct analyses of water supply risks. Drinking water systems are typically complex with an inherent ability to compensate for failures that is not easily modelled using traditional logic gates. The approximate DFT calculations are compared to results from simulations of the corresponding Markov models for three water supply examples. For the traditional OR- and AND-gates, and one gate modelling compensation, the errors in the results are small. For the other gate modelling compensation, the error increases with the number of compensating components. The errors are, however, in most cases acceptable with respect to uncertainties in input data. The approximate DFT calculations improve the capabilities of fault tree analysis of drinking water systems since they provide additional and important information and are simple and practically applicable.
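The compensation gates are specific to the paper, but the standard Monte Carlo part can be sketched; as a hedged illustration (function names and the exponential-lifetime assumption are ours), the failure probability of plain OR- and AND-gates is estimated as:

```python
import random

def gate_failure_probability(rates, t, gate="OR", n=20000, seed=1):
    """Monte Carlo estimate of P(gate has failed by time t) for independent
    components with constant failure rates (exponential lifetimes).
    An OR-gate fails when any input has failed; an AND-gate only when all
    inputs have failed."""
    rng = random.Random(seed)
    failed = 0
    for _ in range(n):
        lifetimes = [rng.expovariate(r) for r in rates]
        system_life = min(lifetimes) if gate == "OR" else max(lifetimes)
        if system_life <= t:
            failed += 1
    return failed / n
```

The estimates can be checked against the closed forms 1 - exp(-Σλᵢ·t) for the OR-gate and Π(1 - exp(-λᵢ·t)) for the AND-gate; the paper's added value lies in also propagating failure rates and mean downtimes, and in the two extra compensation gates.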
A hierarchical updating method for finite element model of airbag buffer system under landing impact
Directory of Open Access Journals (Sweden)
He Huan
2015-12-01
In this paper, we propose an impact finite element (FE) model for an airbag landing buffer system. First, an impact FE model is formulated for a typical airbag landing buffer system, and the independence of the structural FE model from the full impact FE model is used to develop a hierarchical updating scheme for the recovery-module FE model and the airbag-system FE model. Second, impact responses at key points are defined for comparing the computational and experimental results, resolving the inconsistency between the experimental data sampling frequency and experimental triggering. To capture the typical characteristics of the impact dynamic response of the airbag landing buffer system, impact response confidence factors (IRCFs) are presented to evaluate how consistent the computational and experimental results are. An error function between the experimental and computational results at the key points of the impact response (KPIR) serves as a modified objective function. A radial basis function (RBF) is introduced to construct a surrogate model of the objective function in terms of the updating variables, thereby converting the FE model updating problem into a soluble optimization problem. Finally, the developed method is validated through an experimental and computational study of the impact dynamics of a classic airbag landing buffer system.
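The RBF surrogate is described only in outline; a minimal sketch of fitting an interpolating Gaussian RBF to sampled objective-function values (the Gaussian kernel, one-dimensional updating variable, and function names are assumptions) could look like:

```python
import numpy as np

def fit_rbf_surrogate(x, y, eps=1.0):
    """Fit an interpolating Gaussian radial-basis-function surrogate to
    scattered 1-D samples (x_i, y_i) of an expensive objective function
    (e.g. the error between measured and computed impact responses)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Interpolation system: phi_ij = exp(-(eps*(x_i - x_j))^2), phi @ w = y
    phi = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    weights = np.linalg.solve(phi, y)

    def surrogate(q):
        q = np.atleast_1d(np.asarray(q, dtype=float))
        return np.exp(-(eps * (q[:, None] - x[None, :])) ** 2) @ weights

    return surrogate
```

Inside the optimization loop, the cheap surrogate then stands in for the full impact FE simulation when evaluating candidate updating variables.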
Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.
Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y
2007-01-01
Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe the rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data on an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than those obtained with previously published models. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reduced bias and shorter credibility intervals). The updating of the pharmacokinetic model using body mass and caffeine concentration data is studied; it shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood after a given treatment, if data are collected on the treated neonate.
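The particle algorithm itself is not spelled out in the abstract; as a deliberately simplified sketch of one sequential update (the one-compartment model C(t) = (D/V)·e^(−kt), the Gaussian likelihood, and all names are our illustrative assumptions, not the authors' model), updating a particle cloud over the elimination rate k from one concentration measurement could look like:

```python
import math
import random

def particle_update(particles, dose_per_v, t, observed, sigma, rng):
    """One sequential Monte Carlo update of the elimination rate k of a
    one-compartment model C(t) = (D/V)*exp(-k*t): weight each particle by a
    Gaussian likelihood of the observed concentration, then resample."""
    weights = [math.exp(-0.5 * ((observed - dose_per_v * math.exp(-k * t)) / sigma) ** 2)
               for k in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Stratified resampling (positions are increasing, so one pass suffices)
    n = len(particles)
    cdf, c = [], 0.0
    for w in weights:
        c += w
        cdf.append(c)
    resampled, j = [], 0
    for i in range(n):
        p = (i + rng.random()) / n
        while j < n - 1 and cdf[j] < p:
            j += 1
        resampled.append(particles[j])
    return resampled
```

Each new blood sample tightens the particle cloud around the patient's own parameter values, which is what shortens the credibility intervals of subsequent concentration predictions.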
Update of the ITER MELCOR model for the validation of the Cryostat design
Energy Technology Data Exchange (ETDEWEB)
Martínez, M.; Labarta, C.; Terrón, S.; Izquierdo, J.; Perlado, J.M.
2015-07-01
Some transients can compromise the vacuum in the cryostat of ITER and cause significant loads. A MELCOR model has been updated in order to assess these loads. Transients have been run with this model, and their results will be used in the mechanical assessment of the cryostat. (Author)
Adapting to change: The role of the right hemisphere in mental model building and updating.
Filipowicz, Alex; Anderson, Britt; Danckert, James
2016-09-01
We recently proposed that the right hemisphere plays a crucial role in the processes underlying mental model building and updating. Here, we review the evidence we and others have garnered to support this novel account of right hemisphere function. We begin by presenting evidence from patient work that suggests a critical role for the right hemisphere in the ability to learn from the statistics in the environment (model building) and adapt to environmental change (model updating). We then provide a review of neuroimaging research that highlights a network of brain regions involved in mental model updating. Next, we outline specific roles for particular regions within the network such that the anterior insula is purported to maintain the current model of the environment, the medial prefrontal cortex determines when to explore new or alternative models, and the inferior parietal lobule represents salient and surprising information with respect to the current model. We conclude by proposing some future directions that address some of the outstanding questions in the field of mental model building and updating. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Finite element model updating of a small steel frame using neural networks
International Nuclear Information System (INIS)
Zapico, J L; González, M P; Alonso, R; González-Buelga, A
2008-01-01
This paper presents an experimental and analytical dynamic study of a small-scale steel frame. The experimental model was physically built and dynamically tested on a shaking table in a series of different configurations obtained from the original one by changing the mass and by causing structural damage. Finite element modelling and parameterization with physical meaning is iteratively tried for the original undamaged configuration. The finite element model is updated through a neural network, the natural frequencies of the model being the net input. The updating process is made more accurate and robust by using a regressive procedure, which constitutes an original contribution of this work. A novel simplified analytical model has been developed to evaluate the reduction of bending stiffness of the elements due to damage. The experimental results of the rest of the configurations have been used to validate both the updated finite element model and the analytical one. The statistical properties of the identified modal data are evaluated. From these, the statistical properties and a confidence interval for the estimated model parameters are obtained by using the Latin Hypercube sampling technique. The results obtained are successful: the updated model accurately reproduces the low modes identified experimentally for all configurations, and the statistical study of the transmission of errors yields a narrow confidence interval for all the identified parameters
Directory of Open Access Journals (Sweden)
Zhiyuan Xia
2017-02-01
Nowadays, ever more extra-wide bridges are needed for vehicle throughput. In order to obtain a precise finite element (FE) model of such complex bridge structures, a practical hybrid updating method integrating Gaussian mutation particle swarm optimization (GMPSO), a Kriging meta-model and Latin hypercube sampling (LHS) was proposed. After demonstrating the efficiency and accuracy of the hybrid method through the model updating of a damaged simply supported beam, the proposed method was applied to the model updating of an extra-wide self-anchored suspension bridge, the necessity of which was shown by the results of an ambient vibration test. The results of the bridge model updating showed that both the mode frequencies and the mode shapes agreed well between the updated model and the experimental structure. The successful model updating of this bridge fills a gap in the model updating of complex self-anchored suspension bridges, and the updating process can inform other model updating problems for complex bridge structures.
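LHS is named but not described; a minimal stand-alone sketch of Latin hypercube sampling on the unit cube (used to draw well-spread candidate sets of updating parameters before scaling to physical ranges; the function name is ours) is:

```python
import random

def latin_hypercube(n, dims, rng=None):
    """Draw n points in [0, 1)^dims such that, in every dimension, each of
    the n equal-width strata contains exactly one point (the Latin
    hypercube property)."""
    rng = rng or random.Random()
    columns = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)  # random pairing of strata across dimensions
        columns.append([(s + rng.random()) / n for s in strata])
    return [tuple(col[i] for col in columns) for i in range(n)]
```

Each coordinate is then mapped linearly onto its parameter's bounds; the stratification guarantees one-dimensional coverage with far fewer samples than a plain random design.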
Comparison of the performance of net radiation calculation models
DEFF Research Database (Denmark)
Kjærsgaard, Jeppe Hvelplund; Cuenca, R.H.; Martinez-Cob, A.
2009-01-01
Daily values of net radiation are used in many applications of crop-growth modeling and agricultural water management. Measurements of net radiation are not part of the routine measurement program at many weather stations, and net radiation is commonly estimated based on other meteorological parameters. Daily values of net radiation were calculated using three net outgoing long-wave radiation models and compared to measured values. The long-wave radiation models included a physically based model, an empirical model from the literature, and a new empirical model. Both empirical models used only solar radiation as required meteorological input. The long-wave radiation models were used with model calibration coefficients from … Four meteorological datasets representing two climate regimes, a sub-humid, high-latitude environment and a semi-arid mid-latitude environment, were used to test the models.
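As a hedged illustration of what an empirical net outgoing long-wave radiation model of this kind looks like (this is the FAO-56 Brunt-type formulation, not necessarily one of the three models tested in the paper), the daily value can be computed from temperature, humidity and the solar radiation ratio:

```python
def net_longwave_fao56(tmax_c, tmin_c, ea_kpa, rs, rso):
    """Daily net outgoing long-wave radiation (MJ m-2 day-1), FAO-56 style:
    a Stefan-Boltzmann term from daily max/min air temperature, a humidity
    (net emissivity) correction using actual vapour pressure ea (kPa), and a
    cloudiness correction from the ratio of measured (rs) to clear-sky (rso)
    solar radiation."""
    sigma = 4.903e-9  # Stefan-Boltzmann constant, MJ K-4 m-2 day-1
    tk4 = ((tmax_c + 273.16) ** 4 + (tmin_c + 273.16) ** 4) / 2.0
    emissivity_term = 0.34 - 0.14 * ea_kpa ** 0.5
    cloudiness_term = 1.35 * min(rs / rso, 1.0) - 0.35
    return sigma * tk4 * emissivity_term * cloudiness_term
```

Note that solar radiation enters only through rs/rso, which is why empirical models of this family need little meteorological input beyond solar radiation once calibrated.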
Finite element modelling and updating of friction stir welding (FSW) joint for vibration analysis
Directory of Open Access Journals (Sweden)
Zahari Siti Norazila
2017-01-01
Friction stir welding (FSW) of aluminium alloys is widely used in automotive and aerospace applications due to its advanced and lightweight properties. The behaviour of FSW joints plays a significant role in the dynamic characteristics of the structure; because of their complexities and uncertainties, representing these joints with an accurate finite element model has become a research issue. In this paper, various finite element (FE) modelling techniques for predicting the dynamic properties of sheet metal joined by friction stir welding are presented. First, nine sets of flat plates of different aluminium alloy series (AA7075 and AA6061) joined by FSW were used; the nine sets of specimens were fabricated using various welding parameters. To find the optimum set of FSW plates, a finite element model using an equivalence technique was developed and validated against experimental modal analysis (EMA) on the nine sets of specimens and finite element analysis (FEA). Three types of modelling were employed in this study: rigid body element Type 2 (RBE2), bar element (CBAR) and spot weld element connector (CWELD). The CBAR element was chosen to represent the weld model for FSW joints due to its accurate prediction of mode shapes, and because, unlike the other weld models, it contains an updating parameter for weld modelling. Model updating was performed to improve the correlation between EMA and FEA; before updating, a sensitivity analysis was done to select the most sensitive updating parameters. After model updating, the total error in the natural frequencies of the CBAR model improved significantly. Therefore, the CBAR element was selected as the most reliable element in FE modelling to represent an FSW weld joint.
EMPIRE-II statistical model code for nuclear reaction calculations
Energy Technology Data Exchange (ETDEWEB)
Herman, M [International Atomic Energy Agency, Vienna (Austria)
2001-12-15
EMPIRE II is a nuclear reaction code, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be any nucleon or Heavy Ion. The energy range starts just above the resonance region in the case of a neutron projectile, and extends up to a few hundred MeV for Heavy Ion induced reactions. The code accounts for the major nuclear reaction mechanisms, such as the optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full-featured Hauser-Feshbach model. Heavy Ion fusion cross sections can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and γ-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data; relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using the X4TOC4 and PLOTC4 codes, linked to the rest of the system through bash-shell (UNIX) scripts. A graphical user interface written in Tcl/Tk is provided. (author)
Evaluation of calculational and material models for concrete containment structures
International Nuclear Information System (INIS)
Dunham, R.S.; Rashid, Y.R.; Yuan, K.A.
1984-01-01
A computer code utilizing an appropriate finite element, material and constitutive model has been under development as a part of a comprehensive effort by the Electric Power Research Institute (EPRI) to develop and validate a realistic methodology for the ultimate load analysis of concrete containment structures. A preliminary evaluation of the reinforced and prestressed concrete modeling capabilities recently implemented in the ABAQUS-EPGEN code has been completed. This effort focuses on using a state-of-the-art calculational model to predict the behavior of large-scale reinforced concrete slabs tested under uniaxial and biaxial tension to simulate the wall of a typical concrete containment structure under internal pressure. This paper gives comparisons between calculations and experimental measurements for a uniaxially-loaded specimen. The calculated strains compare well with the measured strains in the reinforcing steel; however, the calculations gave diffused cracking patterns that do not agree with the discrete cracking observed in the experiments. Recommendations for improvement of the calculational models are given. (orig.)
Precipitates/Salts Model Calculations for Various Drift Temperature Environments
Energy Technology Data Exchange (ETDEWEB)
P. Marnier
2001-12-20
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation within a repository drift. This work is developed and documented using procedure AP-3.12Q, Calculations, in support of "Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities" (BSC 2001a). The primary objective of this calculation is to predict the effects of evaporation on the abstracted water compositions established in "EBS Incoming Water and Gas Composition Abstraction Calculations for Different Drift Temperature Environments" (BSC 2001c). A secondary objective is to predict evaporation effects on observed Yucca Mountain waters for subsequent cement interaction calculations (BSC 2001d). The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), "In-Drift Precipitates/Salts Analysis" (BSC 2001b).
An Updated Site Scale Saturated Zone Ground Water Transport Model For Yucca Mountain
International Nuclear Information System (INIS)
S. Kelkar; H. Viswanathan; A. Eddebbarrh; M. Ding; P. Reimus; B. Robinson; B. Arnold; A. Meijer
2006-01-01
The Yucca Mountain site scale saturated zone transport model has been revised to incorporate the updated flow model based on a hydrogeologic framework model using the latest lithology data, increased grid resolution that better resolves the geology within the model domain, updated Kd distributions for radionuclides of interest, and updated retardation factor distributions for colloid filtration. The resulting numerical transport model is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. The transport model results are validated by comparing the model transport pathways with those derived from geochemical data, and by comparing the transit times from the repository footprint to the compliance boundary at the accessible environment with those derived from 14C-based age estimates. The transport model includes the processes of advection, dispersion, fracture flow, matrix diffusion, sorption, and colloid-facilitated transport. The transport of sorbing radionuclides in the aqueous phase is modeled as a linear, equilibrium process using the Kd model. The colloid-facilitated transport of radionuclides is modeled using two approaches: colloids with irreversibly embedded radionuclides undergo reversible filtration only, while the migration of radionuclides that reversibly sorb to colloids is modeled with modified values for the sorption and matrix diffusion coefficients. Model breakthrough curves for various radionuclides at the compliance boundary are presented, along with their sensitivity to various parameters.
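The linear-equilibrium Kd sorption model mentioned above has a simple closed form; as a minimal sketch (the parameter values below are illustrative, not site data), the retardation factor and its effect on advective transit time are:

```python
def retardation_factor(bulk_density_kg_m3, porosity, kd_m3_kg):
    """Linear-equilibrium sorption: R = 1 + (rho_b / theta) * Kd, where
    rho_b is the bulk density, theta the porosity, and Kd the sorption
    (distribution) coefficient. R = 1 means no sorption."""
    return 1.0 + (bulk_density_kg_m3 / porosity) * kd_m3_kg

def retarded_transit_time(path_length_m, pore_velocity_m_yr, r_factor):
    """Advective transit time lengthened by sorption (dispersion and
    matrix diffusion ignored in this sketch)."""
    return path_length_m * r_factor / pore_velocity_m_yr
```

A sorbing radionuclide thus arrives at the compliance boundary R times later than a conservative tracer, which is why the Kd distributions are a key sensitivity in the breakthrough curves.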
The models of internal dose calculation in ICRP
International Nuclear Information System (INIS)
Nakano, Takashi
1995-01-01
There are many discussions about internal dose calculation in ICRP, and many efforts are devoted to improving models and parameters. In this report, we discuss what kinds of models and parameters are used in ICRP. The models are divided into two parts: the dosimetric model and the biokinetic model. The former is a mathematical phantom model, developed mainly at ORNL, whose results are used by many researchers. The latter is a compartment model, whose parameter values are difficult to determine because of their age dependency. ICRP officially sets values at the ages of 3 months, 1 year, 5 years, 10 years, 15 years and adult, and recommends obtaining values between these ages by linear interpolation. It is, however, very difficult to solve the basic equation with these values, so the calculations are done by computer; even then the equation has a complex shape and needs long CPU time, so approximate equations should be developed. The parameter values carry considerable uncertainty because of the scarcity of experimental data, especially for children. Moreover, these models and parameter values were derived for Caucasians, and we should ask whether they correctly describe other populations: body size affects the calculated SAF values, and differences in metabolism change the biokinetic pattern. (author)
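The linear interpolation between the tabulated reference ages can be sketched directly; the reference-age list and the parameter values below are made up for illustration (in particular, "adult" is arbitrarily taken as 20 years here):

```python
def interpolate_by_age(age, ages, values):
    """Piecewise-linear interpolation of an age-dependent parameter between
    tabulated reference ages, clamped to the end values outside the table."""
    if age <= ages[0]:
        return values[0]
    if age >= ages[-1]:
        return values[-1]
    for (a0, v0), (a1, v1) in zip(zip(ages, values), zip(ages[1:], values[1:])):
        if a0 <= age <= a1:
            return v0 + (v1 - v0) * (age - a0) / (a1 - a0)

# Reference ages: 3 months, 1, 5, 10, 15 years and adult (taken here as 20 y)
ages = [0.25, 1.0, 5.0, 10.0, 15.0, 20.0]
values = [1.0, 0.8, 0.5, 0.35, 0.25, 0.2]  # illustrative parameter values only
```

For example, a 7.5-year-old falls halfway between the 5-year and 10-year entries and receives the average of those two values.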
Interactions of model biomolecules. Benchmark CC calculations within MOLCAS
Energy Technology Data Exchange (ETDEWEB)
Urban, Miroslav [Slovak University of Technology in Bratislava, Faculty of Materials Science and Technology in Trnava, Institute of Materials Science, Bottova 25, SK-917 24 Trnava, Slovakia and Department of Physical and Theoretical Chemistry, Faculty of Natural Scie (Slovakia); Pitoňák, Michal; Neogrády, Pavel; Dedíková, Pavlína [Department of Physical and Theoretical Chemistry, Faculty of Natural Sciences, Comenius University, Mlynská dolina, SK-842 15 Bratislava (Slovakia); Hobza, Pavel [Institute of Organic Chemistry and Biochemistry and Center for Complex Molecular Systems and biomolecules, Academy of Sciences of the Czech Republic, Prague (Czech Republic)
2015-01-22
We present results using the OVOS (Optimized Virtual Orbital Space) approach, aimed at enhancing the effectiveness of Coupled Cluster calculations. This approach reduces the total computer time required for large-scale CCSD(T) calculations by about a factor of ten when the original full virtual space is reduced to about 50% of its original size, without affecting the accuracy. The method is implemented in the MOLCAS computer program. When combined with the Cholesky decomposition of the two-electron integrals and suitable parallelization, it allows calculations that were formerly prohibitively demanding. We focused on accurate calculations of the hydrogen-bonded and stacking interactions of model biomolecules. Interaction energies of the formaldehyde, formamide, benzene, and uracil dimers and the three-body contributions in the cytosine-guanine tetramer are presented. Other applications, such as the electron affinity of uracil as affected by solvation, are also briefly mentioned.
A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements.
Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J Douglas
2016-01-01
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, or how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM implemented through a recurrent radial-basis-function neural network in a dual extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks.
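The full model is a dual EKF over a recurrent RBF network; as a deliberately simplified scalar sketch of the underlying updating idea (gaze-centered remapping as the prediction step, visual feedback as the measurement step; the function names and linear/scalar form are our assumptions, not the authors' implementation):

```python
def predict_remap(x, p, eye_displacement, q):
    """Prediction step: remap the remembered gaze-centered target position
    by subtracting the eye displacement (deg); process noise q inflates the
    position uncertainty p, mimicking growing memory uncertainty."""
    return x - eye_displacement, p + q

def measurement_update(x, p, z, r):
    """Standard scalar Kalman measurement update: blend the prediction with
    a visual re-observation z whose noise variance is r."""
    gain = p / (p + r)
    return x + gain * (z - x), (1.0 - gain) * p
```

In this toy picture, the transient growth of p around a saccade plays the role of the predicted expansion of population activity and receptive fields when input uncertainty is incorporated.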
International Nuclear Information System (INIS)
Billet, L.
1994-01-01
The Research and Development Division of Electricite de France is developing a surveillance method for cooling towers based on on-site measurements of wind-induced response. The method is intended to detect structural damage in the tower. The damage is identified by tuning a finite element model of the tower on experimental mode shapes and eigenfrequencies. The sensitivity of the method was evaluated through numerical tests. First, the dynamic response of a damaged tower was simulated by varying the stiffness of some area of the model shell (from 1% to 24% of the total shell area). Second, the structural parameters of the undamaged cooling tower model were updated in order to make the output of the undamaged model as close as possible to the synthetic experimental data. The updating method, based on minimization of the differences between experimental modal energies and modal energies calculated by the model, did not detect a stiffness change over less than 3% of the shell area. Such a sensitivity is thought to be insufficient to detect tower cracks, which behave like highly localized defects. (author). 8 refs., 9 figs., 6 tabs
Laminated materials with plastic interfaces: modeling and calculation
International Nuclear Information System (INIS)
Sandino Aquino de los Ríos, Gilberto; Castañeda Balderas, Rubén; Diaz Diaz, Alberto; Duong, Van Anh; Chataigner, Sylvain; Caron, Jean-François; Ehrlacher, Alain; Foret, Gilles
2009-01-01
In this paper, a model of laminated plates called M4-5N, validated in a previous paper, is modified in order to take into account interlaminar plasticity by means of displacement discontinuities at the interfaces. These discontinuities are calculated by adapting a 3D plasticity model. In order to compute the model, a Newton-Raphson-like method is employed. In this method, two sub-problems are considered: one linear and the other non-linear. In the linear problem, the non-linear equations of the model are linearized and the calculations are performed using finite element software. By iterating the resolution of each sub-problem, one obtains after convergence the solution of the global problem. The model is then applied to the problem of a double-lap adhesively bonded joint subjected to a tensile load, where the adhesive layer is modeled by an elastic-plastic interface. The results of the M4-5N model are compared with those of a commercial finite element software package. Good agreement between the two computation techniques is obtained, validating the non-linear calculations proposed in this paper. Finally, the numerical tool and a delamination criterion are applied to predict delamination onset in composite laminates.
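The Newton-Raphson-like scheme is described only in outline; its core iteration, shown here for a scalar nonlinear residual purely as an illustration (in the M4-5N context the linearized sub-problem is a full FE solve, not a scalar division), is:

```python
def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Solve residual(x) = 0 by repeatedly solving the linearized problem
    jacobian(x) * dx = -residual(x) and updating x <- x + dx, until the
    residual falls below tol (scalar version of the iteration)."""
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            return x
        x -= r / jacobian(x)
    return x
```

The split mirrors the paper's structure: the "jacobian solve" is the linear sub-problem handled by the FE software, while re-evaluating the residual with the 3D plasticity law is the non-linear sub-problem; alternating the two to convergence yields the global solution.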
Liousse, C.; Penner, J. E.; Assamoi, E.; Xu, L.; Criqui, P.; Mima, S.; Guillaume, B.; Rosset, R.
2010-12-01
A regional fossil fuel and biofuel emission inventory for particulates has been developed for Africa at a resolution of 0.25° x 0.25° for the year 2005. The original database of Junker and Liousse (2008) was used after modification for updated regional fuel consumption and emission factors. Consumption data were corrected after direct inquiries conducted in Africa, including a new emitter category (two-wheel vehicles, including “zemidjans”) and a new activity sector (power plants), neither of which was considered in the previous emission inventory. Emission factors were measured during the 2005 AMMA campaign (Assamoi and Liousse, 2010) and in combustion chamber experiments. Two prospective inventories for 2030 are derived from this new regional inventory and two energy consumption forecasts by the Prospective Outlook on Long-term Energy Systems (POLES) model (Criqui, 2001). The first is a reference scenario in which no emission controls beyond those achieved in 2003 are taken into account; the second is a "clean" scenario in which possible and planned emission-control policies are assumed to be effective. BC and OCp emission budgets for these new inventories will be discussed and compared to the previous global dataset. These new inventories, along with the most recent open biomass burning inventory (Liousse et al., 2010), have been tested in the ORISAM-TM5 global chemistry-climate model with a focus over Africa at a 1° x 1° resolution. Global simulations for BC and primary OC for the years 2005 and 2030 are carried out, and the modelled particulate concentrations for 2005 are compared to available measurements in Africa. Finally, BC and OC radiative properties (aerosol optical depths and single scattering albedo) are calculated and the direct radiative forcing is estimated using an offline model (Wang and Penner, 2009). Results of sensitivity tests driven with different emission scenarios will be presented.
Use of results from microscopic methods in optical model calculations
International Nuclear Information System (INIS)
Lagrange, C.
1985-11-01
A concept of vectorization for coupled-channel programs based upon conventional methods is first presented; it has been implemented in our program for use on the CRAY-1 computer. In a second part we investigate the capabilities of a semi-microscopic optical model involving fewer adjustable parameters than phenomenological models. The two main ingredients of our calculations are, for spherical or well-deformed nuclei, the microscopic optical-model calculations of Jeukenne, Lejeune and Mahaux and nuclear densities from Hartree-Fock-Bogoliubov calculations using the density-dependent force D1. For transitional nuclei, deformation-dependent nuclear structure wave functions are employed to weight the scattering potentials for different shapes and channels
Precision calculations in supersymmetric extensions of the Standard Model
International Nuclear Information System (INIS)
Slavich, P.
2013-01-01
This dissertation is organized as follows: in the next chapter I will summarize the structure of the supersymmetric extensions of the Standard Model (SM), namely the MSSM (Minimal Supersymmetric Standard Model) and the NMSSM (Next-to-Minimal Supersymmetric Standard Model), provide a brief overview of different patterns of SUSY (supersymmetry) breaking, and discuss some issues in the renormalization of the input parameters that are common to all calculations of higher-order corrections in SUSY models. In chapter 3 I will review and describe computations of the production of MSSM Higgs bosons in gluon fusion. In chapter 4 I will review results on the radiative corrections to the Higgs boson masses in the NMSSM. In chapter 5 I will review the calculation of BR(B → Xsγ) in the MSSM with Minimal Flavor Violation (MFV). Finally, in chapter 6 I will briefly summarize the outlook of my future research. (author)
Do calculated conflicts in microsimulation model predict number of crashes?
Dijkstra, Atze; Marchesini, Paula; Bijleveld, Frits; Kars, Vincent; Drolenga, Hans; Maarseveen, Martin Van
2010-01-01
A microsimulation model and its calculations are described, and the results, which are subsequently used to determine indicators for traffic safety, are presented. The method demonstrates which changes occur at the level of traffic flow (number of vehicles per road section) and at the vehicle level
A shell-model calculation in terms of correlated subsystems
International Nuclear Information System (INIS)
Boisson, J.P.; Silvestre-Brac, B.
1979-01-01
A method for solving the shell-model equations in terms of a basis which includes correlated subsystems is presented. It is shown that the method allows drastic truncations of the basis to be made. The corresponding calculations are easy to perform and can be carried out rapidly
TTS-Polttopuu - cost calculation model for fuelwood
International Nuclear Information System (INIS)
Naett, H.; Ryynaenen, S.
1999-01-01
The TTS Institute's Forestry Department has developed a computer-based cost-calculation model, 'TTS-Polttopuu', for calculating unit costs and resource needs of harvesting systems for wood chips and split firewood. The model enables the user to determine the productivity and device cost per operating hour for each working stage of the harvesting system, and to find out how changes in the productivity and cost bases of different harvesting chains influence the unit cost of the whole system. The harvesting chain includes the cutting of delimbed and non-delimbed fuelwood, forest haulage, road transportation, chipping and chopping of longwood at storage. This stand-alone software was originally developed to serve research needs, but it also serves the needs of forestry and agricultural education, training and extension, as well as individual firewood producers. The system requirements for the model are at least a 486-level processor with the Windows 95/98 operating system, 16 MB of memory (RAM) and 5 MB of available hard-disk space. This development work was carried out in conjunction with the nation-wide BIOENERGY research programme. (orig.)
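The unit-cost arithmetic such a model performs — hourly cost divided by productivity, summed over the working stages of the chain — can be sketched as follows (all figures invented):

```python
def unit_cost(stages):
    """Unit cost (currency per solid m3) of a fuelwood harvesting chain.

    Each stage is a pair (total cost per operating hour, productivity
    in m3 per operating hour); the stages are assumed sequential and
    independent, as in a cutting -> haulage -> transport -> chipping
    chain."""
    return sum(cost_per_h / prod for cost_per_h, prod in stages)

# Illustrative (invented) figures: euros per hour, m3 per hour
chain = [(35.0, 2.0),    # cutting
         (45.0, 5.0),    # forest haulage
         (60.0, 10.0),   # road transportation
         (80.0, 16.0)]   # chipping at storage
cost = unit_cost(chain)  # 17.5 + 9 + 6 + 5 = 37.5 per m3
```

Varying one stage's productivity and recomputing shows how a single bottleneck propagates to the unit cost of the whole system, which is the kind of what-if analysis the abstract describes.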
Calculation of extreme wind atlases using mesoscale modeling. Final report
DEFF Research Database (Denmark)
Larsén, Xiaoli Guo; Badger, Jake
This is the final report of the project PSO-10240 "Calculation of extreme wind atlases using mesoscale modeling". The overall objective is to improve the estimation of extreme winds by developing and applying new methodologies to confront the many weaknesses in the current methodologies as explai...
Finite element model updating of natural fibre reinforced composite structure in structural dynamics
Directory of Open Access Journals (Sweden)
Sani M.S.M.
2016-01-01
Model updating is the process of adjusting certain parameters of a finite element model in order to reduce the discrepancy between finite element (FE) predictions and experimental results. Finite element model updating is considered an important field of study, as practical application of the finite element method often shows discrepancies with test results. The aim of this research is to perform a model updating procedure on a composite structure and to improve the presumed geometrical and material properties of the tested structure in the finite element prediction. The composite structure concerned in this study is a plate of kenaf-fibre-reinforced epoxy. Modal properties (natural frequencies, mode shapes and damping ratios) of the kenaf fibre structure will be determined using both experimental modal analysis (EMA) and finite element analysis (FEA). In EMA, modal testing will be carried out using an impact hammer test, while normal mode analysis in FEA will be carried out using MSC Nastran/Patran software. Correlation of the data will be carried out before optimizing the data from FEA. Several parameters will be considered and selected for the model updating procedure.
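Correlation between EMA and FEA mode shapes is commonly quantified with the Modal Assurance Criterion (MAC); the abstract does not name the metric used, so the following is a generic sketch with invented mode-shape data:

```python
def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical (FE) and an
    experimental mode shape; 1.0 means perfectly correlated shapes,
    values near 0 mean uncorrelated shapes."""
    dot = sum(a * e for a, e in zip(phi_a, phi_e))
    na = sum(a * a for a in phi_a)
    ne = sum(e * e for e in phi_e)
    return dot * dot / (na * ne)

# Invented first-bending-mode shapes at six measurement points
mode_fe  = [0.0, 0.31, 0.59, 0.81, 0.95, 1.0]
mode_exp = [0.0, 0.29, 0.62, 0.80, 0.97, 1.0]
m = mac(mode_fe, mode_exp)
```

Updating then adjusts the selected FE parameters until MAC values (and frequency differences) between paired modes improve.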
Overview of models allowing calculation of activity coefficients
Energy Technology Data Exchange (ETDEWEB)
Jaussaud, C.; Sorel, C.
2004-07-01
Activity coefficients must be estimated to accurately quantify the extraction equilibria involved in spent fuel reprocessing. For these calculations, binary data are required for each electrolyte over a concentration range that sometimes exceeds the maximum solubility, so the activity coefficients must be extrapolated to model the behavior of supersaturated binary aqueous solutions. According to the literature, the most suitable models are based on the local composition concept. (authors)
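The "local composition concept" mentioned above underlies models such as NRTL; a textbook binary-mixture sketch (parameter values are invented, not those fitted by the authors):

```python
import math

def nrtl_gamma(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) of a binary mixture from
    the NRTL local-composition model; tau12/tau21 are dimensionless
    interaction parameters and alpha is the non-randomness factor."""
    x2 = 1.0 - x1
    g12 = math.exp(-alpha * tau12)
    g21 = math.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (g21 / (x1 + x2 * g21)) ** 2
                       + tau12 * g12 / (x2 + x1 * g12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (g12 / (x2 + x1 * g12)) ** 2
                       + tau21 * g21 / (x1 + x2 * g21) ** 2)
    return math.exp(ln_g1), math.exp(ln_g2)

g1, g2 = nrtl_gamma(0.3, 1.2, 0.8)   # invented parameters
```

Fitting tau12/tau21 to measured binary data and then evaluating the expression beyond the solubility limit is the kind of extrapolation the abstract refers to.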
Klemans, Rob J B; Otte, Dianne; Knol, Mirjam; Knol, Edward F; Meijer, Yolanda; Gmelig-Meyling, Frits H J; Bruijnzeel-Koomen, Carla A F M; Knulst, André C; Pasmans, Suzanne G M A
2013-01-01
A diagnostic prediction model for peanut allergy in children was recently published, using 6 predictors: sex, age, history, skin prick test, peanut-specific immunoglobulin E (sIgE), and total IgE minus peanut sIgE. The aims were to validate this model and update it by adding allergic rhinitis, atopic dermatitis, and sIgE to the peanut components Ara h 1, 2, 3, and 8 as candidate predictors, and to develop a new model based only on sIgE to peanut components. Validation was performed by testing discrimination (diagnostic value) with the area under the receiver operating characteristic curve and calibration (agreement between predicted and observed frequencies of peanut allergy) with the Hosmer-Lemeshow test and a calibration plot. The performance of the updated models was analyzed in the same way. Validation of the model in 100 patients showed good discrimination (88%) but poor calibration. The updated model retained 4 predictors of the original model: sex, skin prick test, peanut sIgE, and total IgE minus sIgE. When a model was built with sIgE to peanut components only, Ara h 2 was the only predictor retained, with a discriminative ability of 90%. Cutoff values with 100% positive and negative predictive values could be calculated for both the updated model and sIgE to Ara h 2. In this way, the outcome of the food challenge could be predicted with 100% accuracy in 59% (updated model) and 50% (Ara h 2) of the patients. Discrimination of the validated model was good; however, calibration was poor. The discriminative ability of Ara h 2 was almost comparable to that of the updated model, which contains 4 predictors. With both models, the need for peanut challenges could be reduced by at least 50%. Copyright © 2012 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.
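The discrimination figures quoted (88%, 90%) are areas under the ROC curve, which can be computed from predicted probabilities via the Mann-Whitney statistic; a generic sketch with invented data, not the study's patients:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the Mann-Whitney U statistic: the
    probability that a randomly chosen allergic patient receives a
    higher predicted probability than a randomly chosen non-allergic
    patient (ties count one half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Invented predicted probabilities of peanut allergy
allergic     = [0.95, 0.88, 0.70, 0.66]
non_allergic = [0.40, 0.35, 0.66, 0.10]
a = auc(allergic, non_allergic)
```

Calibration, by contrast, compares the average predicted probability against the observed allergy frequency within risk groups; a model can discriminate well and still calibrate poorly, as in this study.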
International Nuclear Information System (INIS)
Van Wees, F.G.H.
1992-01-01
During the last few years, the Business Unit ESC-Energy Studies of the Netherlands Energy Research Foundation (ECN) has developed calculation programs to determine the economic efficiency of energy technologies, which support several studies for the Dutch Ministry of Economic Affairs. Together these form the so-called BRET programs. One of these programs is ERWIN (Economische Rentabiliteit WINdenergiesystemen, or in English: Economic Efficiency of Wind Energy Systems), for which an updated manual (ERWIN2) is presented in this report. An outline is given of the possibilities and limitations of carrying out calculations with the model
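A minimal sketch of the kind of economic-efficiency figure such a program computes — levelized cost per kWh from an annualized investment plus operation and maintenance (all figures and the discount-rate convention are invented, not ERWIN's actual method):

```python
def annuity_factor(rate, years):
    """Capital recovery factor: converts an up-front investment into
    an equivalent constant annual payment at the given discount rate."""
    return rate / (1.0 - (1.0 + rate) ** -years)

def cost_per_kwh(investment, om_per_year, annual_kwh, rate=0.05, years=15):
    """Levelized production cost of a wind energy system."""
    annual_cost = investment * annuity_factor(rate, years) + om_per_year
    return annual_cost / annual_kwh

# Invented example figures for a small turbine
c = cost_per_kwh(investment=450_000.0, om_per_year=9_000.0,
                 annual_kwh=1_000_000.0)
```

Comparing this cost against the electricity price (or feed-in tariff) gives the economic-efficiency verdict such studies report.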
Using radar altimetry to update a routing model of the Zambezi River Basin
DEFF Research Database (Denmark)
Michailovsky, Claire Irene B.; Bauer-Gottwein, Peter
2012-01-01
Satellite radar altimetry allows for the global monitoring of lakes and river levels. However, the widespread use of altimetry for hydrological studies is limited by the coarse temporal and spatial resolution provided by current altimetric missions and the fact that discharge rather than level...... is needed for hydrological applications. To overcome these limitations, altimetry river levels can be combined with hydrological modeling in a data-assimilation framework. This study focuses on the updating of a river routing model of the Zambezi using river levels from radar altimetry. A hydrological model...... of the basin was built to simulate the land phase of the water cycle and produce inflows to a Muskingum routing model. River altimetry from the ENVISAT mission was then used to update the storages in the reaches of the Muskingum model using the Extended Kalman Filter. The method showed improvements in modeled......
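One step of the Muskingum routing scheme referred to above can be sketched as follows (K, x and dt are invented; the Extended Kalman Filter update of the reach storages is omitted):

```python
def muskingum_step(inflow_prev, inflow_now, outflow_prev, K, x, dt):
    """One time step of Muskingum channel routing for a single reach.

    K  : storage time constant of the reach (h)
    x  : weighting factor (0 <= x <= 0.5)
    dt : time step (h)
    Reach storage is S = K * (x*I + (1-x)*O); the three routing
    coefficients sum to one, so a steady flow is conserved."""
    d  = 2.0 * K * (1.0 - x) + dt
    c0 = (dt - 2.0 * K * x) / d
    c1 = (dt + 2.0 * K * x) / d
    c2 = (2.0 * K * (1.0 - x) - dt) / d
    return c0 * inflow_now + c1 * inflow_prev + c2 * outflow_prev

q_out = muskingum_step(100.0, 120.0, 100.0, K=12.0, x=0.2, dt=6.0)
```

In the assimilation framework described above, the filter corrects the reach storages (equivalently the outflows) whenever an altimetric level observation becomes available, and the routing then propagates that correction downstream.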
The 2014 update to the National Seismic Hazard Model in California
Powers, Peter; Field, Edward H.
2015-01-01
The 2014 update to the U. S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.
Shell model calculations for stoichiometric Na β-alumina
International Nuclear Information System (INIS)
Wang, J.C.
1985-01-01
Walker and Catlow recently reported the results of their shell model calculations for the structure and transport of Na β-alumina (Naβ). The main computer programs used by Walker and Catlow are PLUTO and HADES III; the latter, a recent version of HADES II written for cubic crystals, is believed to be applicable to defects in crystals of both cubic and hexagonal symmetry, while PLUTO is usually used to calculate properties of perfect crystals before defects are introduced into the structure. Walker and Catlow claim that, in some respects, their models are superior to those of Wang et al., yet their results are quite different from those observed experimentally. In this work these differences are investigated using a computer program designed to calculate lattice energies for stoichiometric Naβ with the same shell model parameters adopted by Walker and Catlow. The core and shell positions of all ions, as well as the lattice parameters, were fully relaxed. The calculated energy difference between aBR and BR sites (0.33 eV) is about twice as large as that reported by Walker and Catlow. The present results also show that the relaxed positions of the oxygen ions next to the conduction plane are displaced from their reported observed sites. When the core-shell spring constant of the oxygen ion was adjusted to minimize these displacements, the above-mentioned energy difference increased to about 0.56 eV. These results cast doubt on the fluid conduction-plane structure suggested by Walker and Catlow and on the defect structure and activation energy obtained from their calculations
Evaluation of Lower East Fork Poplar Creek Mercury Sources - Model Update
Energy Technology Data Exchange (ETDEWEB)
Ketelle, Richard [East Tennessee Technology Park (ETTP), Oak Ridge, TN (United States); Brandt, Craig C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peterson, Mark J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevelhimer, Mark S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Watson, David B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Brooks, Scott C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mayes, Melanie [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); DeRolph, Christopher R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dickson, Johnbull O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Olsen, Todd A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2017-08-01
The purpose of this report is to assess new data that have become available and to provide an update to the evaluations and modeling presented in the Oak Ridge National Laboratory (ORNL) Technical Manuscript, Evaluation of Lower East Fork Poplar Creek (LEFPC) Mercury Sources (Watson et al., 2016). Primary sources of field and laboratory data for this update include multiple US Department of Energy (DOE) programs, including Environmental Management (EM; e.g., Biological Monitoring and Abatement Program, Mercury Remediation Technology Development [TD], and Applied Field Research Initiative), the Office of Science (Mercury Science Focus Areas [SFA] project), and the Y-12 National Security Complex (Y-12) Compliance Department.
Impact of time displaced precipitation estimates for on-line updated models
DEFF Research Database (Denmark)
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2012-01-01
When an online runoff model is updated from system measurements, the requirements on the precipitation estimates change. Using rain gauge data as precipitation input, there will be a displacement between the time when the rain intensity hits the gauge and the time when the rain hits the actual...
DEFF Research Database (Denmark)
Borup, Morten; Mikkelsen, Peter Steen
2013-01-01
When an online runoff model is updated from system measurements, the requirements of the precipitation input change. Using rain gauge data as precipitation input there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due...
Cluster model calculations of alpha decays across the periodic table
International Nuclear Information System (INIS)
Merchant, A.C.; Buck, B.
1988-10-01
The cluster model of Buck, Dover and Vary has been used to calculate partial widths for alpha decay from the ground states of all nuclei for which experimental measurements exist. The cluster-core potential is represented by a simple three-parameter form having a fixed diffuseness, a radius which scales as A^(1/3), and a depth which is adjusted to fit the Q-value of the particular decay. The calculations yield excellent agreement with the vast majority of the available data, and some typical examples are presented. (author)
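Adjusting the potential depth to reproduce the decay Q-value is a one-parameter root find; the sketch below uses bisection on an invented linear stand-in for the model, whereas the real calculation solves for the α-cluster quasibound state:

```python
def fit_depth(q_exp, q_model, v_lo, v_hi, tol=1e-9):
    """Find the potential depth V0 such that q_model(V0) == q_exp,
    assuming q_model is monotonic in V0 on [v_lo, v_hi]."""
    f_lo = q_model(v_lo) - q_exp
    for _ in range(200):
        v_mid = 0.5 * (v_lo + v_hi)
        f_mid = q_model(v_mid) - q_exp
        if abs(f_mid) < tol:
            return v_mid
        if (f_lo < 0) == (f_mid < 0):     # same sign: move lower bound
            v_lo, f_lo = v_mid, f_mid
        else:                              # sign change: move upper bound
            v_hi = v_mid
    return 0.5 * (v_lo + v_hi)

# Invented stand-in: a deeper well binds the cluster more, lowering Q
q_stub = lambda v0: 30.0 - 0.2 * v0
v0 = fit_depth(q_exp=5.0, q_model=q_stub, v_lo=50.0, v_hi=200.0)
```

With the depth fixed by the Q-value and the radius by the A^(1/3) scaling, the model has essentially no free parameters left per decay, which is what makes the systematic agreement notable.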
Modelling of Control Bars in Calculations of Boiling Water Reactors
International Nuclear Information System (INIS)
Khlaifi, A.; Buiron, L.
2004-01-01
The core of a nuclear reactor is generally composed of an array of fissile-material assemblies in which neutrons are produced. The fission energy is extracted by a fluid that cools the assemblies, and a reflector is arranged around the core to reduce neutron leakage. Different reactivity mechanisms are generally necessary to control the chain reaction. A boiling water reactor is manoeuvred by controlling the insertion of absorber rods at various locations in the core. While unrodded assembly calculations are well known and mastered, neutronic calculations of rodded assemblies are delicate and are often treated case by case in present studies [1]. Answering the question of how to model the cruciform control rod of a boiling water reactor requires choosing a representation level for every chain of variables, the physical model and its governing equations, etc. The aim of this study is to select the most suitable modelling parameters for calculating a rodded assembly of a boiling water reactor. This is done through a range of configurations representative of these reactors and of the absorbing environments used, in order to illustrate modelling strategies for an industrial calculation. (authors)
Modelling and parallel calculation of a kinetic boundary layer
International Nuclear Information System (INIS)
Perlat, Jean Philippe
1998-01-01
This research thesis addresses reliability and cost issues in the numerical simulation of flows in the transition regime. The first step was to reduce the calculation cost and memory footprint of the Monte Carlo method, which is known to be efficient and reliable for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instruction, multiple data) machine has been used, which implements parallel calculation at different levels of parallelization. Parallelization procedures were adapted, and results showed that parallelization by calculation-domain decomposition was far more efficient. Because of the reliability issues related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in the transition regime. New models and hyperbolic systems have therefore been studied, and one was chosen which allows the thermodynamic quantities (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined. The equations of evolution of these thermodynamic quantities are described for the mono-atomic case, and their numerical resolution is reported. A kinetic scheme is developed which complies with the structure of all the systems and which naturally expresses the boundary conditions. The validation of the resulting 14-moment model is performed on shock problems and on Couette flows
Diffusion theory model for optimization calculations of cold neutron sources
International Nuclear Information System (INIS)
Azmy, Y.Y.
1987-01-01
Cold neutron sources are becoming increasingly important and common experimental facilities at many research reactors around the world, owing to the high utility of cold neutrons in scattering experiments. The authors describe a simple two-group diffusion model of an infinite-slab liquid-deuterium (LD2) cold source. The simplicity of the model permits an analytical solution, from which one can deduce the reason for the optimum thickness based solely on diffusion-type phenomena. A second, more sophisticated model is also described and its results compared to a deterministic transport calculation. The good (particularly qualitative) agreement between the results suggests that diffusion theory methods can be used in parametric and optimization studies to avoid the generally more expensive transport calculations
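In generic two-group notation, the slab balance alluded to above couples a fast/epithermal group feeding a cold group; a schematic form (the group constants here are placeholders, not values from the paper):

```latex
-D_1 \frac{d^2\phi_1}{dx^2} + \Sigma_R\,\phi_1 = S(x), \qquad
-D_2 \frac{d^2\phi_2}{dx^2} + \Sigma_{a,2}\,\phi_2 = \Sigma_R\,\phi_1 ,
```

where $\Sigma_R$ is the removal (slowing-down) cross-section transferring neutrons into the cold group and $\Sigma_{a,2}$ is the cold-group absorption. The analytic sinh/cosh solution of this pair makes the trade-off explicit: a thicker slab moderates more neutrons into the cold group but also absorbs more of them, which is the diffusion-type origin of the optimum thickness the abstract mentions.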
Updating known distribution models for forecasting climate change impact on endangered species.
Muñoz, Antonio-Román; Márquez, Ana Luz; Real, Raimundo
2013-01-01
To plan endangered species conservation and to design adequate management programmes, it is necessary to predict their distributional response to climate change, especially under the current situation of rapid change. However, such predictions are customarily made by relating the distribution of a species to climatic conditions de novo, with no regard to previously available knowledge about the factors affecting its distribution. We propose to take advantage of known species distribution models, updating them with the variables yielded by climatic models before projecting them into the future. To exemplify our proposal, the availability of suitable habitat across Spain for the endangered Bonelli's Eagle (Aquila fasciata) was modelled by updating a pre-existing model based on current climate and topography with a combination of different general circulation models and Special Report on Emissions Scenarios. Our results suggest that the main threat to this endangered species is not climate change, since all forecasting models show that its distribution will be maintained and even increased in mainland Spain throughout the 21st century. We remark on the importance of linking conservation biology with distribution modelling by updating existing models, frequently available for endangered species, that consider all the known factors conditioning a species' distribution, instead of building new models based on climate change variables only.
Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin
Applications of data assimilation techniques have been widely used to improve the predictability of hydrologic modeling. Among these techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both the state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated by implementing the storage function model on a medium-sized Japanese catchment. We also compare the performance of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
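The sequential-importance-resampling (SIR) step at the core of such filters — weight each particle by its observation likelihood, then resample — can be sketched generically as follows (the observation model and particle values are invented; the kernel-smoothing parameter perturbation of the DUS is omitted):

```python
import math
import random

def sir_step(particles, likelihood):
    """One SIR step: weight each particle (a candidate state/parameter
    set) by its observation likelihood, then resample with replacement
    in proportion to the normalized weights (systematic resampling)."""
    weights = [likelihood(p) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    n = len(particles)
    u0 = random.random() / n              # one random offset, n strata
    resampled, cum, j = [], weights[0], 0
    for i in range(n):
        pos = u0 + i / n
        while pos > cum:
            j += 1
            cum += weights[j]
        resampled.append(particles[j])
    return resampled

random.seed(0)
obs = 2.0                      # invented observation
cloud = [0.0, 1.9, 2.1, 10.0]  # invented particle states
new_cloud = sir_step(cloud, lambda p: math.exp(-(p - obs) ** 2))
```

Particles far from the observation are discarded and those near it are duplicated, which is why the resampled cloud concentrates around the measurement; the DUS then perturbs the duplicated parameter particles with a smoothing kernel to keep the ensemble from collapsing.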
Near-Source Modeling Updates: Building Downwash & Near-Road
The presentation describes recent research efforts in near-source model development focusing on building downwash and near-road barriers. The building downwash section summarizes a recent wind tunnel study, ongoing computational fluid dynamics simulations and efforts to improve ...
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when an engineering analysis requires many executions of high-fidelity simulation codes. Examples of such analyses in nuclear reactor core calculations, the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis, used to determine important core attribute variations due to input parameter variations, and uncertainty quantification, employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, state variable, or output response spaces by projection onto so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm that renders the reduction with errors bounded by a user-defined tolerance, which addresses the main challenge of existing ROM techniques: bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications, and providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render the reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel
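The error-bounded reduction described above can be illustrated with the standard singular-value criterion: keep the smallest rank whose discarded spectral energy stays below the user tolerance (a generic sketch, not the dissertation's algorithm):

```python
import math

def truncation_rank(singular_values, tol):
    """Smallest rank r such that the l2 norm of the discarded singular
    values -- the Frobenius-norm error of the best rank-r approximation
    of the snapshot matrix -- does not exceed the user tolerance."""
    s = sorted(singular_values, reverse=True)
    for r in range(len(s) + 1):
        err = math.sqrt(sum(v * v for v in s[r:]))
        if err <= tol:
            return r
    return len(s)

sv = [10.0, 1.0, 0.1, 0.01]        # invented singular-value spectrum
r = truncation_rank(sv, tol=0.2)   # discards the [0.1, 0.01] tail
```

The point of such a criterion is exactly the one the abstract makes: the dimension of the active subspace is chosen by the tolerance, not guessed, so the surrogate's error is bounded by construction.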
TTS-Polttopuu - cost calculation model for fuelwood
International Nuclear Information System (INIS)
Naett, H.; Ryynaenen, S.
1998-01-01
The TTS Institute's Forestry Department has developed a computer-based cost-calculation model, 'TTS-Polttopuu', for calculating unit costs and resource needs of harvesting systems for wood chips and split firewood. The model enables the user to determine the productivity and device cost per operating hour for each working stage of the harvesting system, and to find out how changes in the productivity and cost bases of different harvesting chains influence the unit cost of the whole system. The harvesting chain includes the cutting of delimbed and non-delimbed fuelwood, forest haulage, road transportation, chipping and chopping of longwood at storage. This stand-alone software was originally developed to serve research needs, but it also serves the needs of forestry and agricultural education, training and extension, as well as individual firewood producers. The system requirements for the model are at least a 486-level processor with the Windows 95/98 operating system, 16 MB of memory (RAM) and 5 MB of available hard-disk space. This development work was carried out in conjunction with the nation-wide BIOENERGY Research Programme. (orig.)
Calculations of dose distributions using a neural network model
International Nuclear Information System (INIS)
Mathieu, R; Martin, E; Gschwind, R; Makovicka, L; Contassot-Vivier, S; Bahi, J
2005-01-01
The main goal of external beam radiotherapy is the treatment of tumours while sparing, as much as possible, the surrounding healthy tissues. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods, but in spite of all efforts to optimize speed (both in methods and computer hardware), Monte Carlo based methods remain painfully slow. A newer way to handle these problems is to approach dosimetric calculation with neural networks. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of the various approaches above while avoiding their main inconvenience, time-consuming calculation, permitting quick and accurate results during clinical treatment planning. Currently, results for a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) require hours of computing; by contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journees Scientifiques Francophones, SFRP) provides almost instant results with quite low errors (less than 2%) for a two-dimensional dosimetric map
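At inference time the kind of network described reduces to a few dot products; a minimal one-hidden-layer forward pass with invented, untrained weights (the two inputs standing in for, e.g., depth and field size):

```python
import math

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer perceptron: once trained,
    such a network replaces hours of Monte Carlo transport with a
    handful of dot products and tanh evaluations."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

# Invented 2-input (depth, field size) -> dose toy weights
w_h = [[0.8, -0.2], [-0.5, 0.4]]
b_h = [0.1, -0.1]
dose = mlp_forward([1.5, 0.3], w_h, b_h, w_out=[1.2, -0.7], b_out=0.05)
```

The speed advantage quoted in the abstract comes precisely from this shape: all the physics is paid for once during training against Monte Carlo data, and inference is constant-time arithmetic.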
Status Update: Modeling Energy Balance in NIF Hohlraums
Energy Technology Data Exchange (ETDEWEB)
Jones, O. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-07-22
We have developed a standardized methodology to model hohlraum drive in NIF experiments. We compare simulation results to experiments by (1) comparing hohlraum x-ray fluxes and (2) comparing capsule metrics, such as bang times. Long-pulse, high gas-fill hohlraums require a 20-28% reduction in simulated drive and inclusion of ~15% backscatter to match experiment through (1) and (2). Short-pulse, low-fill or near-vacuum hohlraums require a 10% reduction in simulated drive to match experiment through (2), and no reduction through (1). Ongoing work focuses on physical model modifications to improve these matches.
An updated summary of MATHEW/ADPIC model evaluation studies
International Nuclear Information System (INIS)
Foster, K.T.; Dickerson, M.H.
1990-05-01
This paper summarizes the major model evaluation studies conducted for the MATHEW/ADPIC atmospheric transport and diffusion models used by the US Department of Energy's Atmospheric Release Advisory Capability. These studies have taken place over the last 15 years and involve field tracer releases influenced by a variety of meteorological and topographical conditions. Neutrally buoyant tracers released both as surface and elevated point sources, as well as material dispersed by explosive, thermally buoyant release mechanisms, have been studied. Results from these studies show that the MATHEW/ADPIC models estimate the tracer air concentrations to within a factor of two of the measured values 20% to 50% of the time, and within a factor of five of the measurements 35% to 85% of the time, depending on the complexity of the meteorology and terrain and the release height of the tracer. Comparisons of model estimates to peak downwind deposition and air concentration measurements from explosive releases are generally within a factor of two to three. 24 refs., 14 figs., 3 tabs
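The "within a factor of two/five" statistics quoted above are a standard model-evaluation metric; a sketch of how such fractions are computed (concentration values invented):

```python
def fraction_within_factor(modeled, measured, factor):
    """Fraction of model/measurement pairs that agree within a
    multiplicative factor (both values assumed positive)."""
    hits = sum(1 for m, o in zip(modeled, measured)
               if o / factor <= m <= o * factor)
    return hits / len(modeled)

# Invented tracer air concentrations (arbitrary units)
model = [1.1, 3.0, 40.0, 0.5, 9.0]
obs   = [2.1, 2.8, 10.0, 0.9, 2.0]
f2 = fraction_within_factor(model, obs, 2.0)   # 3 of 5 pairs
f5 = fraction_within_factor(model, obs, 5.0)   # all 5 pairs
```

Because dispersion model errors are multiplicative rather than additive, this factor-based score (often reported as FAC2/FAC5) is preferred over raw differences for tracer comparisons.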
General equilibrium basic needs policy model, (updating part).
Kouwenaar A
1985-01-01
ILO pub-WEP pub-PREALC pub. Working paper, econometric model for the assessment of structural change affecting development planning for basic needs satisfaction in Ecuador - considers population growth, family size (households), labour force participation, labour supply, wages, income distribution, profit rates, capital ownership, etc.; examines nutrition, education and health as factors influencing productivity. Diagram, graph, references, statistical tables.
Recent Updates to the GEOS-5 Linear Model
Holdaway, Dan; Kim, Jong G.; Errico, Ron; Gelaro, Ronald; Mahajan, Rahul
2014-01-01
The Global Modeling and Assimilation Office (GMAO) is close to having a working 4DVAR system and has developed a linearized version of GEOS-5. This talk outlines a series of improvements made to the linearized dynamics, physics and trajectory. Of particular interest is the development of linearized cloud microphysics, which provides the framework for 'all-sky' data assimilation.
Update on Parametric Cost Models for Space Telescopes
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2011-01-01
Since the June 2010 Astronomy Conference, an independent review of our cost database discovered some inaccuracies and inconsistencies which modify our previously reported results. This paper reviews the changes to the database, our confidence in those changes, and their effect on various parametric cost models.
Model and calculation of in situ stresses in anisotropic formations
Energy Technology Data Exchange (ETDEWEB)
Yuezhi, W.; Zijun, L.; Lixin, H. [Jianghan Petroleum Institute (China)]
1997-08-01
In situ stresses in transversely isotropic material in relation to wellbore stability have been investigated. Equations for three horizontal in- situ stresses and a new formation fracture pressure model were described, and the methodology for determining the elastic parameters of anisotropic rocks in the laboratory was outlined. Results indicate significantly smaller differences between theoretically calculated pressures and actual formation pressures than results obtained by using the isotropic method. Implications for improvements in drilling efficiency were reviewed. 13 refs., 6 figs.
Perturbation theory instead of large scale shell model calculations
International Nuclear Information System (INIS)
Feldmeier, H.; Mankos, P.
1977-01-01
Results of large-scale shell model calculations for (sd)-shell nuclei are compared with perturbation theory, which provides an excellent approximation when the SU(3) basis is used as a starting point. The results indicate that a perturbation theory treatment in an SU(3) basis including 2ℏω excitations should be preferable to a full diagonalization within the (sd)-shell. (orig.) [de
Calculation of relativistic model stars using Regge calculus
International Nuclear Information System (INIS)
Porter, J.
1987-01-01
A new approach to the Regge calculus, developed in a previous paper, is used in conjunction with the velocity potential version of relativistic fluid dynamics due to Schutz [1970, Phys. Rev., D, 2, 2762] to calculate relativistic model stars. The results are compared with those obtained when the Tolman-Oppenheimer-Volkov equations are solved by other numerical methods. The agreement is found to be excellent. (author)
DEFF Research Database (Denmark)
Kristensen, Anders Ringgaard; Søllested, Thomas Algot
2004-01-01
Recent methodological improvements in replacement models, comprising multi-level hierarchical Markov processes and Bayesian updating, have hardly been implemented in any replacement model, and the aim of this study is to present a sow replacement model that really uses these methodological improvements. The biological model of the replacement model is described in a previous paper; in this paper the optimization model is described. The model is developed as a prototype for use under practical conditions. The application of the model is demonstrated using data from two commercial Danish sow herds. It is concluded that the Bayesian updating technique and the hierarchical structure decrease the size of the state space dramatically. Since parameter estimates vary considerably among herds, it is concluded that decision support concerning sow replacement only makes sense with parameters...
iTree-Hydro: Snow hydrology update for the urban forest hydrology model
Yang Yang; Theodore A. Endreny; David J. Nowak
2011-01-01
This article presents snow hydrology updates made to iTree-Hydro, previously called the Urban Forest Effects-Hydrology model. iTree-Hydro Version 1 was a warm climate model developed by the USDA Forest Service to provide a process-based planning tool with robust water quantity and quality predictions given data limitations common to most urban areas. Cold climate...
Chen, G. W.; Omenzetter, P.
2016-04-01
This paper presents the implementation of an updating procedure for the finite element model (FEM) of a prestressed concrete continuous box-girder highway off-ramp bridge. Ambient vibration testing was conducted to excite the bridge, assisted by linear chirp sweepings induced by two small electrodynamic shakers deployed to enhance the excitation levels, since the bridge was closed to traffic. The data-driven stochastic subspace identification method was executed to recover the modal properties from the measurement data. An initial FEM was developed, and the correlation between the experimental modal results and their analytical counterparts was studied. The modelling of the pier and abutment bearings was carefully adjusted to reflect the real operational conditions of the bridge. The subproblem approximation method was subsequently utilized to automatically update the FEM. For this purpose, the influences of bearing stiffness, and of the mass density and Young's modulus of materials, were examined as uncertain parameters using sensitivity analysis. The updating objective function was defined as a summation of squared relative errors of natural frequencies between the FEM and the experiments. All the identified modes were used as target responses with the purpose of putting more constraints on the optimization process and decreasing the number of potentially feasible combinations of parameter changes. The updated FEM of the bridge was able to produce sufficient improvements in natural frequencies in most modes of interest, and can serve for more precise dynamic response prediction or future investigation of the bridge's health.
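The objective function described above, a sum of squared relative natural-frequency errors between FEM and experiment, can be sketched in a few lines. The frequency values below are illustrative, not data from the bridge study:

```python
def updating_objective(f_fem, f_exp):
    """Sum of squared relative natural-frequency errors between FEM and test."""
    return sum(((fa - fe) / fe) ** 2 for fa, fe in zip(f_fem, f_exp))

# illustrative natural frequencies in Hz (not from the paper)
f_exp = [2.10, 3.45, 5.02, 6.80]
f_fem = [2.25, 3.30, 5.10, 6.55]
J = updating_objective(f_fem, f_exp)  # the model updater drives this toward zero
```

Using all identified modes, as the paper does, simply lengthens both lists, adding constraints on the parameter search.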
Persistent estrus rat models of polycystic ovary disease: an update.
Singh, Krishna B
2005-10-01
To critically review published articles on polycystic ovary (PCO) disease in rat models, with a focus on delineating its pathophysiology. Review of the English-language literature published from 1966 to March 2005 was performed through PubMed search. Keywords or phrases used were persistent estrus, chronic anovulation, polycystic ovary, polycystic ovary disease, and polycystic ovary syndrome. Articles were also located via bibliographies of published literature. University Health Sciences Center. Articles on persistent estrus and PCO in rats were selected and reviewed regarding the methods for induction of PCO disease. Changes in the reproductive cycle, ovarian morphology, hormonal parameters, and factors associated with the development of PCO disease in rat models were analyzed. Principal methods for inducing PCO in the rat include exposure to constant light, anterior hypothalamic and amygdaloidal lesions, and the use of androgens, estrogens, antiprogestin, and mifepristone. The validated rat PCO models provide useful information on morphologic and hormonal disturbances in the pathogenesis of chronic anovulation in this condition. These studies have aimed to replicate the morphologic and hormonal characteristics observed in the human PCO syndrome. The implications of these studies to human condition are discussed.
Structure-dynamic model verification calculation of PWR 5 tests
International Nuclear Information System (INIS)
Engel, R.
1980-02-01
Within reactor safety research project RS 16 B of the German Federal Ministry of Research and Technology (BMFT), blowdown experiments are conducted at Battelle Institut e.V. Frankfurt/Main using a model reactor pressure vessel with a height of 11.2 m and internals corresponding to those in a PWR. In the present report the dynamic loading on the pressure vessel internals (upper perforated plate and barrel suspension) during the DWR 5 experiment is calculated by means of vertical and horizontal dynamic models using the CESHOCK code. The equations of motion are resolved by direct integration. (orig./RW) [de
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2013-01-01
When an online runoff model is updated from system measurements, the requirements of the precipitation input change. Using rain gauge data as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present in the system measurements, the data assimilation scheme might already have updated the model to include the impact of a particular rain cell by the time the rain data are forced upon the model, which will therefore end up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected by a bias. The results show that for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60% and 100%, respectively, independent of the catchment's time of concentration.
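A minimal time-area model makes the displacement-versus-bias comparison concrete. The sketch below (illustrative storm and weights, not the paper's catchment) runs the same storm unshifted, displaced by one time step, and with a volume bias: the displaced input preserves rain volume but shifts the hydrograph, while the biased input changes the volume at the original timing:

```python
def time_area_runoff(rain, weights):
    """Runoff as the convolution of a rain series with time-area weights."""
    n = len(rain)
    return [sum(weights[k] * rain[t - k]
                for k in range(len(weights)) if 0 <= t - k < n)
            for t in range(n)]

rain = [0, 0, 5, 10, 5, 0, 0, 0]        # mm per step, illustrative storm
shifted = [0] + rain[:-1]               # same storm displaced by one step
biased = [1.6 * r for r in rain]        # same timing, +60% volume bias
weights = [0.2, 0.5, 0.3]               # time-area weights, sum to 1

q_ref = time_area_runoff(rain, weights)
q_shift = time_area_runoff(shifted, weights)
q_bias = time_area_runoff(biased, weights)
```

An assimilation scheme that has already absorbed the storm from system measurements sees `q_shift` as double-counted rain, which is the effect the paper quantifies.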
A Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-01
Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on a few or even a single aspect of the problem at hand to facilitate interpretation and to avoid compound errors compensating for each other, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling, the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important to faithfully simulate the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single-wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared at the photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare the results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insight into the application domain where this kind of approach can successfully replace Monte Carlo calculations.
Ordinary Mathematical Models in Calculating the Aviation GTE Parameters
Directory of Open Access Journals (Sweden)
E. A. Khoreva
2017-01-01
The paper presents the results of an analytical review of the ordinary mathematical models of the operating process used to study aviation GTE parameters and characteristics at all stages of engine creation and operation. It considers the mathematical models of the zero and first levels, which are mostly used when solving typical problems in calculating engine parameters and characteristics, and presents a number of practical problems arising in the design of aviation GTEs for various applications. The application of zero-level mathematical models of the engine can be quite appropriate when the engine is considered as a component in the aircraft system, to estimate its calculated individual flight performance or to model the flight cycle of aircraft of different purposes. The paper demonstrates that introducing correction functions into the first-level mathematical models when solving typical problems (influence of the Reynolds number, deterioration of unit characteristics during the engine overhaul period, influence of flow inhomogeneity at the inlet because of manufacturing tolerances, etc.) provides sufficient engineering estimate accuracy to reflect a realistic operating process in the engine and its elements.
Freeway travel speed calculation model based on ETC transaction data.
Weng, Jiancheng; Yuan, Rongliang; Wang, Ru; Wang, Chang
2014-01-01
The real-time traffic operation condition of a freeway is gradually becoming critical information for freeway users and managers. Electronic toll collection (ETC) transaction data effectively record the operational information of vehicles on a freeway, which provides a new method to estimate freeway travel speed. First, the paper analyzed the structure of ETC transaction data and presented the data preprocessing procedure. Then, a dual-level travel speed calculation model was established for different levels of sample size. In order to ensure a sufficient sample size, ETC data of different enter-leave toll plaza pairs containing more than one road segment were used to calculate the travel speed of every road segment. The reduction coefficient α and reliability weight θ for the sample vehicle speed were introduced into the model. Finally, the model was verified by specially designed field experiments conducted on several freeways in Beijing at different time periods. The experimental results demonstrated that the average relative error was about 6.5%, which means that freeway travel speed can be estimated accurately by the proposed model. The proposed model is helpful for promoting the level of freeway operation monitoring and freeway management, as well as for providing useful information to freeway travelers.
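The core of such a model is segment length divided by the enter-leave time difference, averaged over sampled vehicles. A minimal sketch with toy transaction data (the paper's reduction coefficient α and reliability weight θ are omitted here):

```python
def segment_speed(transactions, length_km):
    """Mean travel speed (km/h) on a segment from ETC enter/leave timestamps."""
    speeds = [length_km / ((leave - enter) / 3600.0)
              for enter, leave in transactions]
    return sum(speeds) / len(speeds)

# toy transactions: (entry time s, exit time s) for a 12 km segment
trips = [(0, 600), (30, 660), (60, 630)]
v = segment_speed(trips, length_km=12.0)  # roughly 72 km/h
```

For enter-leave pairs spanning several segments, per-segment travel times would first have to be apportioned, which is the role of the dual-level structure the abstract describes.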
Mathematical model of kinetostatithic calculation of flat lever mechanisms
Directory of Open Access Journals (Sweden)
A. S. Sidorenko
2016-01-01
Currently, the widely used graphical-analytical methods of analysis are largely obsolete, replaced by various analytical methods using computer technology. Of particular interest, therefore, is the development of a mathematical model for the kinetostatic calculation of mechanisms in the form of a library of calculation procedures covering all second-class Assur groups (AG) and the primary link. Before calling the procedure that computes all the forces in the kinematic pairs, one needs to compute the inertial forces, the moments of the inertia forces, and all external forces and moments acting on the given AG. To this end, the design diagram for the force analysis is shown for each type of second-class AG, as well as for the initial link. The reactions in the internal and external kinematic pairs are found from the equilibrium conditions, taking into account the inertia forces and the moments of the inertia forces (d'Alembert's principle). The resulting kinetostatic equations, for their versatility, are solved by Cramer's rule. Thus, for each second-class AG all six unknowns were found: the forces in the kinematic pairs, the directions of these forces, and their moment arms. If the kinetostatics of a mechanism with two AGs attached in parallel to the initial link is studied, the force is the geometric sum of the forces acting on the primary link from the discarded AGs. A mathematical model of the kinetostatic calculation of mechanisms is thus obtained in the form of a library of mathematical procedures for determining the reactions of all second-class AGs. The model is relatively simple to implement in software.
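The final step, solving the equilibrium equations by Cramer's rule, can be illustrated with a small self-contained solver. It is shown here for a 3x3 system with illustrative coefficients (not a real Assur group's equations); the 6x6 case is identical in structure:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    """Solve A x = b for three unknowns via Cramer's rule."""
    D = det3(A)
    x = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]      # replace column j with the right-hand side
        x.append(det3(Aj) / D)
    return x

# toy equilibrium system (illustrative coefficients only)
A = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]]
b = [3.0, 5.0, 5.0]
R = cramer3(A, b)
```

Cramer's rule suits this setting because every group yields the same small, well-conditioned system, so a closed-form solution is simpler than a general elimination routine.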
Hierarchical emotion calculation model for virtual human modelling - biomed 2010.
Zhao, Yue; Wright, David
2010-01-01
This paper introduces a new emotion generation method for virtual human modelling. The method includes a novel hierarchical emotion structure, a group of emotion calculation equations, and a simple heuristic decision-making mechanism, which enables virtual humans to perform emotionally in real time according to their internal and external factors. The emotion calculation equations used in this research were derived from psychological emotion measurements. Virtual humans can utilise the information in virtual memory and the emotion calculation equations to generate their own numerical emotion states within the hierarchical emotion structure. Those emotion states are important internal references for virtual humans to adopt appropriate behaviours and also key cues for their decision making. A simple heuristics theory is introduced and integrated into the decision-making process in order to make virtual humans' decision making more like a real human's. A data interface connecting the emotion calculation and the decision-making structure has also been designed and simulated to test the method in the Virtools environment.
Modeling and Calculation of Dent Based on Pipeline Bending Strain
Directory of Open Access Journals (Sweden)
Qingshan Feng
2016-01-01
The bending strain of long-distance oil and gas pipelines can be calculated from in-line inspection data gathered by a tool that uses an inertial measurement unit (IMU). The bending strain is used to evaluate the strain and displacement of the pipeline. During the bending strain inspection, a dent existing in the pipeline can affect the bending strain data as well. This paper presents a novel method to model and calculate a pipeline dent based on the bending strain. The technique takes inertial mapping data from in-line inspection and calculates the depth of the dent using Bayesian statistical theory and a neural network. To verify the accuracy of the proposed method, an in-line inspection tool was used to inspect a pipeline and gather data. The dent calculation shows that the method is accurate, with a mean relative error of 2.44%. The new method provides not only the strain of the pipeline dent but also its depth, which is beneficial for pipeline integrity management and safety.
U.S. Environmental Protection Agency — The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size...
Theoretical model for calculation of molecular stopping power
International Nuclear Information System (INIS)
Xu, Y.J.
1984-01-01
A modified local plasma model based on the work of Lindhard-Winther, Bethe, Brown, and Walske is established. The Gordon-Kim molecular charge density model is employed to obtain a formula to evaluate the stopping power of many useful molecular systems. The stopping power of H2 and He gas was calculated for incident proton energies ranging from 100 keV to 2.5 MeV. The stopping power of O2, N2, and water vapor was also calculated for incident proton energies ranging from 40 keV to 2.5 MeV. Good agreement with experimental data was obtained. A discussion of molecular effects leading to departure from Bragg's rule is presented. The equipartition rule and the effect of nuclear momentum recoil on stopping power are also discussed in the appendix. The calculation procedure presented can hopefully be extended to the most useful organic systems, such as molecules composed of carbon, nitrogen, hydrogen, and oxygen, which are of interest in the radiation protection field.
Total energy calculations from self-energy models
International Nuclear Information System (INIS)
Sanchez-Friera, P.
2001-06-01
Density-functional theory is a powerful method for calculating total energies of large systems of interacting electrons. The usefulness of this method, however, is limited by the fact that an approximation is required for the exchange-correlation energy. The currently used approximations (LDA and GGA) are not sufficiently accurate for many physical problems, for instance the study of chemical reactions. It has been shown that exchange-correlation effects can be accurately described via the self-energy operator in the context of many-body perturbation theory. This is, however, a computationally very demanding approach. In this thesis a new scheme for calculating total energies is proposed, which combines elements from many-body perturbation theory and density-functional theory. The exchange-correlation energy functional is built from a simplified model of the self-energy that nevertheless retains the main features of the exact operator. The model is built in such a way that the computational effort is not significantly increased with respect to that required in a typical density-functional theory calculation. (author)
Calculations of the electrostatic potential adjacent to model phospholipid bilayers.
Peitzsch, R M; Eisenberg, M; Sharp, K A; McLaughlin, S
1995-03-01
We used the nonlinear Poisson-Boltzmann equation to calculate electrostatic potentials in the aqueous phase adjacent to model phospholipid bilayers containing mixtures of zwitterionic lipids (phosphatidylcholine) and acidic lipids (phosphatidylserine or phosphatidylglycerol). The aqueous phase (relative permittivity, epsilon r = 80) contains 0.1 M monovalent salt. When the bilayers contain < 25% acidic lipid, the -25 mV equipotential surfaces are discrete domes centered over the negatively charged lipids and are approximately twice the value calculated using Debye-Hückel theory. When the bilayers contain > 25% acidic lipid, the -25 mV equipotential profiles are essentially flat and agree well with the values calculated using Gouy-Chapman theory. When the bilayers contain 100% acidic lipid, all of the equipotential surfaces are flat and agree with Gouy-Chapman predictions (including the -100 mV surface, which is located only 1 Å from the outermost atoms). Even our model bilayers are not simple systems: the charge on each lipid is distributed over several atoms, these partial charges are non-coplanar, there is a 2 Å ion-exclusion region (epsilon r = 80) adjacent to the polar headgroups, and the molecular surface is rough. We investigated the effect of these four factors using smooth (or bumpy) epsilon r = 2 slabs with embedded point charges: these factors had only minor effects on the potential in the aqueous phase.
EPA's announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land change modeling by providing nationwide housing developmen...
Improved SVR Model for Multi-Layer Buildup Factor Calculation
International Nuclear Information System (INIS)
Trontl, K.; Pevec, D.; Smuc, T.
2006-01-01
The accuracy of the point kernel method applied in gamma-ray dose rate calculations in shielding design and radiation safety analysis is limited by the accuracy of the buildup factors used in the calculations. Although buildup factors for single-layer shields are well defined and understood, buildup factors for stratified shields represent a complex physical problem that is hard to express in mathematical terms. The traditional approach to expressing buildup factors of multi-layer shields is through semi-empirical formulas obtained by fitting the results of transport theory or Monte Carlo calculations. Such an approach requires an ad hoc definition of the fitting function and often results in numerous, usually inadequately explained and defined, correction factors added to the final empirical formula. Moreover, the formulas finally obtained are generally limited to a small number of predefined combinations of materials within a relatively small range of gamma-ray energies and shield thicknesses. Recently, a new approach has been suggested by the authors involving one of the machine learning techniques, Support Vector Machines, i.e., Support Vector Regression (SVR). Preliminary investigations performed for double-layer shields revealed the great potential of the method, but also pointed out some drawbacks of the developed model, mostly related to the selection of one of the parameters describing the problem (the material atomic number) and the way in which the model was designed to evolve during the learning process. It is the aim of this paper to introduce a new parameter (the single-material buildup factor) to replace the existing material atomic number as an input parameter. A comparison of the two models generated by the different input parameters has been performed. The second goal is to improve the evolution process of learning, i.e., the experimental computational procedure that provides a framework for the automated construction of complex regression models of predefined...
Freight Calculation Model: A Case Study of Coal Distribution
Yunianto, I. T.; Lazuardi, S. D.; Hadi, F.
2018-03-01
Coal is known as one of the energy alternatives used as an energy source for several power plants in Indonesia. Its transportation from coal sites to power plant locations requires eligible shipping services able to provide the best freight rate. Therefore, this study aims to obtain standardized formulations for determining the ocean freight, especially for coal distribution, based on theoretical concepts. The freight calculation model considers three alternative transport modes commonly used in coal distribution: tug-barge, vessel, and self-propelled barge. The result shows that two cost components are very dominant in determining the freight, with a combined proportion reaching 90% or more: the time charter hire and the fuel cost. Moreover, three main factors have a significant impact on the freight calculation: waiting time at ports, the time charter rate, and the fuel oil price.
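The cost structure described can be sketched as a simple per-tonne freight build-up. The voyage figures below are hypothetical, chosen only to show that hire plus fuel dominates the total:

```python
def freight_per_tonne(hire_per_day, voyage_days, fuel_tpd, fuel_price,
                      port_costs, cargo_tonnes):
    """Ocean freight (currency/tonne) and the hire-plus-fuel share of total cost."""
    hire = hire_per_day * voyage_days
    fuel = fuel_tpd * voyage_days * fuel_price   # tonnes/day * days * price
    total = hire + fuel + port_costs
    return total / cargo_tonnes, (hire + fuel) / total

# hypothetical tug-barge voyage: 10 days round trip, 7,500 t of coal
rate, share = freight_per_tonne(6000, 10, 8, 450, 15000, 7500)
# hire and fuel together account for well over 80% of the total here
```

Longer waiting time at ports lengthens `voyage_days`, which raises both dominant components at once, consistent with the three sensitivity factors the abstract names.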
Nuclear model calculations on cyclotron production of 51Cr
Energy Technology Data Exchange (ETDEWEB)
Kakavand, Tayeb [Imam Khomeini International Univ., Qazvin (Iran, Islamic Republic of). Dept. of Physics]; Aboudzadeh, Mohammadreza [Nuclear Science and Technology Research Institute/AEOI, Karaj (Iran, Islamic Republic of). Agricultural, Medical and Industrial Research School]; Farahani, Zahra; Eslami, Mohammad [Zanjan Univ. (Iran, Islamic Republic of). Dept. of Physics]
2015-12-15
51Cr (T1/2 = 27.7 d), which decays via electron capture (100%) with 320 keV gamma emission (9.8%), is a radionuclide that still has wide application in biological studies. In this work, the ALICE/ASH and TALYS nuclear model codes, along with some adjustments, are used to calculate the excitation functions for proton-, deuteron-, α-particle- and neutron-induced reactions on various targets leading to the production of the 51Cr radioisotope. The production yields of 51Cr from the various reactions are determined using the excitation function calculations and stopping power data. The results are compared with the corresponding experimental data and discussed from the point of view of feasibility.
Optical model calculation of neutron-nucleus scattering cross sections
International Nuclear Information System (INIS)
Smith, M.E.; Camarda, H.S.
1980-01-01
A program to calculate the total, elastic, reaction, and differential cross section of a neutron interacting with a nucleus is described. The interaction between the neutron and the nucleus is represented by a spherically symmetric complex potential that includes spin-orbit coupling. This optical model problem is solved numerically, and is treated with the partial-wave formalism of scattering theory. The necessary scattering theory required to solve this problem is briefly stated. Then, the numerical methods used to integrate the Schroedinger equation, calculate derivatives, etc., are described, and the results of various programming tests performed are presented. Finally, the program is discussed from a user's point of view, and it is pointed out how and where the program (OPTICAL) can be changed to satisfy particular needs
Updating Linear Schedules with Lowest Cost: a Linear Programming Model
Biruk, Sławomir; Jaśkowski, Piotr; Czarnigowska, Agata
2017-10-01
Many civil engineering projects involve sets of tasks repeated in a predefined sequence in a number of work areas along a particular route. A useful graphical representation of schedules of such projects is the time-distance diagram, which clearly shows what process is conducted at a particular point in time and in a particular location. With repetitive tasks, the quality of project performance is conditioned by the ability of the planner to optimize workflow by synchronizing the works and resources, which usually means that resources are planned to be continuously utilized. However, construction processes are prone to risks, and a fully synchronized schedule may expire if a disturbance (bad weather, machine failure, etc.) affects even one task. In such cases, works need to be rescheduled, and another optimal schedule should be built for the changed circumstances. This typically means that, to meet the fixed completion date, durations of operations have to be reduced. A number of measures are possible to achieve such a reduction: working overtime, employing more resources, or relocating resources from less to more critical tasks, but they all come at a considerable cost and affect the whole project. The paper investigates the problem of selecting the measures that reduce the durations of tasks of a linear project so that the cost of these measures is kept to a minimum, and proposes an algorithm that could be applied to find optimal solutions as the need to reschedule arises. Considering that civil engineering projects, such as road building, usually involve fewer process types than construction projects, the complexity of the scheduling problems is lower, and precise optimization algorithms can be applied. Therefore, the authors put forward a linear programming model of the problem and illustrate its principle of operation with an example.
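In the simplest special case, a purely serial project where crashing any task shortens the project by the same amount, the optimal solution of such a cost-minimizing program reduces to buying the cheapest crash-days first. A sketch with hypothetical tasks (this greedy shortcut is not the paper's full linear programming model, which also handles resource relocation and richer schedule structure):

```python
def crash_cheapest_first(tasks, days_needed):
    """Greedy crashing for serial tasks: buy the cheapest crash-days first.

    tasks: list of (name, max_crash_days, cost_per_day).
    Returns (plan, total_cost). Optimal only for the serial special case;
    the general problem needs a linear programming formulation.
    """
    plan, remaining, total = {}, days_needed, 0
    for name, max_days, cost in sorted(tasks, key=lambda t: t[2]):
        take = min(max_days, remaining)
        if take > 0:
            plan[name] = take
            total += take * cost
            remaining -= take
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("required reduction exceeds available crash capacity")
    return plan, total

# hypothetical road-project tasks: (name, max crash days, cost per day)
tasks = [("earthworks", 3, 800), ("paving", 4, 500), ("drainage", 2, 1200)]
plan, cost = crash_cheapest_first(tasks, days_needed=5)
```

With parallel paths or continuity constraints between repetitive work areas, cheapest-first is no longer optimal, which is precisely why the authors formulate the problem as a linear program.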
A relativistic point coupling model for nuclear structure calculations
International Nuclear Information System (INIS)
Buervenich, T.; Maruhn, J.A.; Madland, D.G.; Reinhard, P.G.
2002-01-01
A relativistic point coupling model is discussed focusing on a variety of aspects. In addition to the coupling using various bilinear Dirac invariants, derivative terms are also included to simulate finite-range effects. The formalism is presented for nuclear structure calculations of ground state properties of nuclei in the Hartree and Hartree-Fock approximations. Different fitting strategies for the determination of the parameters have been applied and the quality of the fit obtainable in this model is discussed. The model is then compared more generally to other mean-field approaches both formally and in the context of applications to ground-state properties of known and superheavy nuclei. Perspectives for further extensions such as an exact treatment of the exchange terms using a higher-order Fierz transformation are discussed briefly. (author)
Model for calculation of electrostatic contribution into protein stability
Kundrotas, Petras; Karshikoff, Andrey
2003-03-01
Existing models of the denatured state of proteins consider only one possible spatial distribution of protein charges and are therefore applicable to a limited number of cases. In this presentation a more general framework for modeling the denatured state is proposed. It is based on the assumption that the titratable groups of an unfolded protein can adopt a quasi-random distribution, restricted by the protein sequence. The model was tested on two proteins, barnase and the N-terminal domain of the ribosomal protein L9. The calculated free energy of denaturation, ΔG(pH), reproduces the experimental data substantially better than the commonly used null approximation (NA). It was demonstrated that the seemingly good agreement with experimental data obtained by the NA originates from a compensatory effect between the pair-wise electrostatic interactions and the desolvation energy of the individual sites. It was also found that the ionization properties of denatured proteins are influenced by the protein sequence.
Physical model and calculation code for fuel coolant interactions
International Nuclear Information System (INIS)
Goldammer, H.; Kottowski, H.
1976-01-01
A physical model is proposed to describe fuel-coolant interactions in shock-tube geometry. Based on the experimental results, an interaction model is proposed that divides each cycle into three phases: the first phase is the fuel-coolant contact, the second is the ejection and re-entry of the coolant, and the third is the impact and fragmentation. The physical background of these phases is illustrated in the first part of this paper. Mathematical expressions of the model are presented in the second part. A principal feature of the computational method is the consistent application of the Fourier equation throughout the whole interaction process. The results of some calculations, performed for different conditions, are compiled in the attached figures. (Aoki, K.)
Intravascular brachytherapy: a model for the calculation of the dose
International Nuclear Information System (INIS)
Pirchio, Rosana; Martin, Gabriela; Rivera, Elena; Cricco, Graciela; Cocca, Claudia; Gutierrez, Alicia; Nunez, Mariel; Bergoc, Rosa; Guzman, Luis; Belardi, Diego
2002-01-01
In this study we present the radiation dose distribution for a theoretical model with Monte Carlo simulation, based on an experimental model developed for the study of the prevention of post-angioplasty restenosis employing intravascular brachytherapy. In the experimental in vivo model, atherosclerotic plaques were induced in the femoral arteries of male New Zealand rabbits through surgical intervention and later administration of a cholesterol-enriched diet. For the intravascular irradiation we employed a 32P source contained within the balloon used for the angioplasty. The radiation dose distributions were calculated using the Monte Carlo code MCNP4B for a segment of a simulated artery. We studied the radiation dose distribution in the axial and radial directions for different thicknesses of the atherosclerotic plaques. The results will be correlated with the biological effects observed by means of histological analysis of the irradiated arteries (Au)
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012
Energy Technology Data Exchange (ETDEWEB)
David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager
2012-09-01
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core
The prehistoric Vajont rockslide: An updated geological model
Paronuzzi, Paolo; Bolla, Alberto
2012-10-01
misinterpretations or even to erroneous engineering-geological and geotechnical models. Accurate fieldwork and modern technologies can be fundamental in solving such a very intriguing 'geological puzzle.'
Energy Technology Data Exchange (ETDEWEB)
Hwang, Ho-Ling [ORNL; Davis, Stacy Cagle [ORNL
2009-12-01
This report is designed to document the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate off-highway gasoline consumption and public-sector fuel consumption. An overview of the entire FHWA attribution process is provided along with specifics related to the latest update (2008) of the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first model update was conducted during 2002-2003) after they were originally developed in the mid-1990s. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to the removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components in the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating the estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitations, several components of the models (both off-highway and public-use models) that utilized regression modeling approaches were not recalibrated under the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update. Results generated from the NONROAD2005 model were analyzed, examined, and compared, to the extent that
Directory of Open Access Journals (Sweden)
Lei Qin
2014-05-01
Full Text Available We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated over a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up to date and is prevented from contamination even in the case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
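A region covariance descriptor of the kind described can be sketched as the covariance matrix of simple per-pixel features over the object region. The feature pool below ([x, y, intensity, |Ix|, |Iy|]) is an illustrative fixed choice, not the paper's adaptively extracted set:

```python
import numpy as np

def covariance_descriptor(patch):
    """Region covariance descriptor of a grayscale image patch (H x W).
    Per-pixel feature vector: [x, y, I, |Ix|, |Iy|] (illustrative pool)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)          # pixel coordinates
    iy, ix = np.gradient(patch.astype(float))          # first derivatives
    feats = np.stack([xs.ravel(), ys.ravel(),
                      patch.astype(float).ravel(),
                      np.abs(ix).ravel(), np.abs(iy).ravel()], axis=1)
    return np.cov(feats, rowvar=False)                 # 5 x 5 symmetric matrix
```

The descriptor is a symmetric positive semi-definite matrix, so distances between descriptors are usually measured on the manifold of SPD matrices rather than with a Euclidean norm.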
Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system
Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.
2017-05-01
We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced-and-damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely on the basis of simulated and/or experimentally measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the nonlinear element in the problem with a priori knowledge about its position.
MODEL OF FEES CALCULATION FOR ACCESS TO TRACK INFRASTRUCTURE FACILITIES
Directory of Open Access Journals (Sweden)
M. I. Mishchenko
2014-12-01
Full Text Available Purpose. The purpose of the article is to develop one- and two-element models of the fees calculation for the use of the track infrastructure of Ukrainian railway transport. Methodology. When planning planned preventive track repair works (PPTRW) and the amount of depreciation charges, the guiding criterion is not the traffic volume but the operating life of the track infrastructure facilities. The cost of the PPTRW is determined on the basis of the following: the classification of track repairs; typical technological processes for track repairs; technology-based time standards for the PPTRW; the costs for the work of the people performing the PPTRW and their hourly wage rates according to Order 98-Ts; the operating cost of machinery; the regulated list; norms of expenditures and costs of materials and products (these have the largest share of repair costs); railway rates; average distances for the transportation of materials used during repairs; and standards of general production expenses and administrative costs. Findings. The models offered in the article allow an objective accounting of track facility expenses for the purpose of calculating a justified amount of compensation and the profit necessary for the effective activity of the track infrastructure enterprises. Originality. The methodological bases for determining the fees (payments) for the use of track infrastructure on a one- and two-element basis were grounded, taking into account the experience of railways in the EC countries and the current transport legislation. Practical value. The article proposes one- and two-element models for calculating the fees (payments) for the TIF use that account for the applicable requirements of European transport legislation, which provides the expense compensation and income formation sufficient for economic incentives of the efficient operation of the TIE of Ukrainian railway transport.
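The two-element structure described (a fixed capacity element plus a usage-dependent element) can be illustrated with a toy calculation; the cost split, allocation basis, and rates below are hypothetical, not the article's tariff values:

```python
def infrastructure_fee(train_km, gross_tkm,
                       fixed_annual_cost, planned_train_km,
                       variable_rate_per_gross_tkm):
    """Two-element access charge: a capacity (fixed) element allocated per
    contracted train-km plus a usage (variable) element per gross tonne-km.
    All inputs are illustrative assumptions, not the article's tariffs."""
    fixed_element = fixed_annual_cost / planned_train_km * train_km
    variable_element = variable_rate_per_gross_tkm * gross_tkm
    return fixed_element + variable_element
```

Example: an operator running 1,000 train-km and 500,000 gross tonne-km against a 1,000,000-unit fixed cost spread over 100,000 planned train-km, at 0.002 per gross tonne-km, pays 10,000 (fixed) + 1,000 (variable). A one-element model would fold both parts into a single per-unit rate.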
Practical model for the calculation of multiply scattered lidar returns
International Nuclear Information System (INIS)
Eloranta, E.W.
1998-01-01
An equation to predict the intensity of the multiply scattered lidar return is presented. Both the scattering cross section and the scattering phase function can be specified as a function of range. This equation applies when the cloud particles are larger than the lidar wavelength. This approximation considers photon trajectories with multiple small-angle forward-scattering events and one large-angle scattering that directs the photon back toward the receiver. Comparisons with Monte Carlo simulations, exact double-scatter calculations, and lidar data demonstrate that this model provides accurate results. copyright 1998 Optical Society of America
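The multiple-scattering correction extends the familiar single-scatter lidar equation, in which the return falls off with the two-way transmission and the square of the range. A baseline sketch of that single-scatter form (not Eloranta's multiple-scattering model itself; the constant c lumps system factors together):

```python
import numpy as np

def single_scatter_return(r, beta, sigma, c=1.0):
    """Attenuated single-scatter lidar return, P(R) = c * beta(R) * T(R)^2 / R^2,
    with two-way transmission T^2 = exp(-2 * integral_0^R sigma dr).
    r: ranges (m), beta: backscatter coefficient, sigma: extinction coefficient."""
    dr = np.diff(r, prepend=r[0])          # range-bin widths (first bin: 0)
    tau = np.cumsum(sigma * dr)            # optical depth out to range R
    return c * beta * np.exp(-2.0 * tau) / np.maximum(r, 1e-9) ** 2
```

Multiple small-angle forward scattering effectively reduces the extinction seen by the beam, so the measured return in dense clouds exceeds this single-scatter prediction; that excess is what the model in the abstract quantifies.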
The calculation of exchange forces: General results and specific models
International Nuclear Information System (INIS)
Scott, T.C.; Babb, J.F.; Dalgarno, A.; Morgan, J.D. III
1993-01-01
In order to clarify questions about the calculation of the exchange energy of a homonuclear molecular ion, an analysis is carried out of a model problem consisting of the one-dimensional limit of H2+. It is demonstrated that the use of the infinite polarization expansion for the localized wave function in the Holstein--Herring formula yields an approximate exchange energy which at large internuclear distances R has the correct leading behavior to O(e^{-R}) and is close to but not equal to the exact exchange energy. The extension to the n-dimensional double-well problem is presented.
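The Holstein--Herring formula referenced above is commonly written as a surface integral over the median plane M between the nuclei, with φ the wave function localized on one center; the quoted asymptotic result is the well-known H2+ leading term in atomic units (standard forms from the literature, reproduced here for orientation):

```latex
\Delta E \;=\; E_{-} - E_{+}
\;=\; \frac{2\displaystyle\int_{M}\varphi\,\frac{\partial\varphi}{\partial z}\,dS}
          {\displaystyle\int_{z\le 0}\varphi^{2}\,dV},
\qquad
\Delta E_{\mathrm{H}_2^{+}} \sim \frac{4}{e}\,R\,e^{-R}
\quad (R\to\infty,\ \text{a.u.})
```

The abstract's point is that substituting the infinite polarization expansion for φ in this surface integral recovers the correct O(e^{-R}) leading behavior without being exactly equal to the true exchange splitting.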
Modeling Dynamic Objects in Monte Carlo Particle Transport Calculations
International Nuclear Information System (INIS)
Yegin, G.
2008-01-01
In this study, the Multi-Geometry geometry modeling technique was improved in order to handle moving objects in a Monte Carlo particle transport calculation. In the Multi-Geometry technique, the geometry is a superposition of objects, not surfaces. Using this feature, we developed a new algorithm which allows a user to enable or disable geometry elements during particle transport. A disabled object can be ignored at a certain stage of a calculation, and switching among identical copies of the same object located at adjacent points during a particle simulation corresponds to the movement of that object in space. We call this powerful feature the Dynamic Multi-Geometry (DMG) technique, which is used for the first time in the BrachyDose Monte Carlo code to simulate HDR brachytherapy treatment systems. Our results showed that having disabled objects in a geometry does not affect calculated dose values. This technique is also suitable for use in other areas such as IMRT treatment planning systems.
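The enable/disable mechanism can be illustrated with a minimal sketch: identical copies of an object sit at adjacent positions, one copy is toggled on at a time between transport stages, and the tracking routine simply skips disabled elements (an illustrative toy, not the actual DMG implementation):

```python
class GeomObject:
    """A geometry element that can be switched on or off during transport."""
    def __init__(self, name, contains):
        self.name, self.contains, self.enabled = name, contains, True

def locate(objects, point):
    """Return the first *enabled* object containing the point; disabled
    copies are invisible to the tracking routine."""
    for obj in objects:
        if obj.enabled and obj.contains(point):
            return obj.name
    return "void"

# Two identical copies of a brachytherapy seed at adjacent positions
# (1-D for brevity); enabling one copy at a time between transport
# stages represents the seed's movement through the geometry.
seed_a = GeomObject("seed@x=0", lambda p: abs(p - 0.0) < 0.5)
seed_b = GeomObject("seed@x=1", lambda p: abs(p - 1.0) < 0.5)
seed_b.enabled = False   # only the x=0 copy exists at this stage
```

Swapping the `enabled` flags between stages moves the seed without rebuilding the geometry, which is the essence of the DMG idea described above.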
Cuba "updates" its economic model: perspectives for cooperation with the European Union
Schmieg, Evita
2017-01-01
Following the thawing of relations with the United States under Obama, Cuba is now seeking closer integration into the global economy through a programme of "guidelines" for updating the country’s economic model adopted in 2011. The central goals are increasing exports, substituting imports and encouraging foreign direct investment in order to improve the country’s hard currency situation, increase domestic value creation and reduce dependency on Venezuela. The guidelines also expand the spac...
Atmospheric release model for the E-area low-level waste facility: Updates and modifications
International Nuclear Information System (INIS)
None, None
2017-01-01
The atmospheric release model (ARM) utilizes GoldSim® Monte Carlo simulation software (GTG, 2017) to evaluate the flux of gaseous radionuclides as they volatilize from E-Area disposal facility waste zones, diffuse into the air-filled soil pores surrounding the waste, and emanate at the land surface. This report documents the updates and modifications to the ARM for the next planned E-Area PA considering recommendations from the 2015 PA strategic planning team outlined by Butcher and Phifer.
Measuring online learning systems success: applying the updated DeLone and McLean model.
Lin, Hsiu-Fen
2007-12-01
Based on a survey of 232 undergraduate students, this study used the updated DeLone and McLean information systems success model to examine the determinants for successful use of online learning systems (OLS). The results provided an expanded understanding of the factors that measure OLS success. The results also showed that system quality, information quality, and service quality had a significant effect on actual OLS use through user satisfaction and behavioral intention to use OLS.
Atmospheric release model for the E-area low-level waste facility: Updates and modifications
Energy Technology Data Exchange (ETDEWEB)
None, None
2017-11-16
The atmospheric release model (ARM) utilizes GoldSim® Monte Carlo simulation software (GTG, 2017) to evaluate the flux of gaseous radionuclides as they volatilize from E-Area disposal facility waste zones, diffuse into the air-filled soil pores surrounding the waste, and emanate at the land surface. This report documents the updates and modifications to the ARM for the next planned E-Area PA considering recommendations from the 2015 PA strategic planning team outlined by Butcher and Phifer.
Energy Technology Data Exchange (ETDEWEB)
Te Buck, S.; Van Keulen, B.; Bosselaar, L.; Gerlagh, T.; Skelton, T.
2010-07-15
This is the fifth, updated edition of the Dutch Renewable Energy Monitoring Protocol. The protocol, compiled on behalf of the Ministry of Economic Affairs, can be considered a policy document that provides a uniform calculation method for determining the amount of energy produced in the Netherlands in a renewable manner. Because all governments and organisations use the calculation methods described in this protocol, developments in this field can be monitored well and consistently. The introduction of this protocol outlines its history and describes its set-up, validity and relationship with other similar documents and agreements. The Dutch Renewable Energy Monitoring Protocol is compiled by NL Agency, and all relevant parties were given the chance to provide input, which has been incorporated as far as possible. Statistics Netherlands (CBS) uses this protocol to calculate the amount of renewable energy produced in the Netherlands. These data are then used by the Ministry of Economic Affairs to gauge the realisation of policy objectives. In June 2009 the European Directive for energy from renewable sources was published, with renewable energy targets for the Netherlands. This directive uses a different calculation method - the gross end-use method - whilst the Dutch definition is based on the so-called substitution method. NL Agency was asked to add the calculation according to the gross end-use method, although this method is not clearly defined on a number of points. In describing the method, the unanswered questions become clear, as do, for example, the points the Netherlands should bring up in international discussions.
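The two accounting conventions differ in what is divided by what: the substitution method credits renewable electricity with the primary fossil energy it displaces, while the directive's gross end-use method takes the renewable share of gross final energy consumption. A toy comparison (the unit conversion and reference-plant efficiency are illustrative assumptions, not the protocol's prescribed figures):

```python
def renewable_share_substitution(renewable_elec_kwh, fossil_efficiency,
                                 primary_energy_pj):
    """Substitution method: renewable electricity is credited with the
    primary fossil energy it avoids (kWh / reference plant efficiency).
    1 kWh = 3.6e-9 PJ."""
    avoided_pj = renewable_elec_kwh * 3.6e-9 / fossil_efficiency
    return avoided_pj / primary_energy_pj

def renewable_share_gross_end_use(renewable_final_pj, gross_final_pj):
    """EU directive method: renewable share of gross final energy consumption."""
    return renewable_final_pj / gross_final_pj
```

With the same physical renewable output, the two methods generally give different percentages, which is why the protocol had to document both.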
Research of Cadastral Data Modelling and Database Updating Based on Spatio-temporal Process
Directory of Open Access Journals (Sweden)
ZHANG Feng
2016-02-01
Full Text Available The core of modern cadastre management is to renew the cadastre database and keep its currentness, topological consistency and integrity. This paper analyzed the changes and their linkage for various cadastral objects in the update process. Combining object-oriented modeling techniques with the expression of spatio-temporal object evolution, the paper proposes a cadastral data updating model based on the spatio-temporal process, in a manner consistent with human cognition. Change rules based on the spatio-temporal topological relations of evolving cadastral spatio-temporal objects are drafted, and furthermore cascade updating and history back-tracing of cadastral features, land use and buildings are realized. This model was implemented in the cadastral management system ReGIS. Cascade changes are triggered by the direct driving force or by perceived external events. The system records the evolution process of spatio-temporal objects to facilitate the reconstruction of history, change tracking, and the analysis and forecasting of future changes.
The 2018 and 2020 Updates of the U.S. National Seismic Hazard Models
Petersen, M. D.
2017-12-01
During 2018 the USGS will update the 2014 National Seismic Hazard Models by incorporating new seismicity models, ground motion models, site factors, and fault inputs, and by improving the weights of ground motion models using empirical and other data. We will update the earthquake catalog for the U.S. and introduce new rate models. Additional fault data will be used to improve rate estimates on active faults. New ground motion models (GMMs) and site factors for Vs30 have been released by the Pacific Earthquake Engineering Research Center (PEER) and we will consider these in assessing ground motions in craton and extended-margin regions of the central and eastern U.S. The USGS will also include basin-depth terms for selected urban areas of the western United States to improve long-period shaking assessments, using published depth estimates to the 1.0 and 2.5 km/s shear-wave velocities. We will produce hazard maps for input into the building codes that span a broad range of periods (0.1 to 5 s) and site classes (shear-wave velocity from 2000 m/s to 200 m/s in the upper 30 m of the crust, Vs30). In the 2020 update we plan to include: a new national crustal model that defines the basin depths required in the latest GMMs, new 3-D ground-motion simulations for several urban areas, new magnitude-area equations, and new fault geodetic and geologic strain-rate models. The USGS will also consider new 3-D ground-motion simulations for inclusion in the long-period maps. These new models are being evaluated and will be discussed at one or more regional and topical workshops held at the beginning of 2018.
Determination of appropriate models and parameters for premixing calculations
Energy Technology Data Exchange (ETDEWEB)
Park, Ik-Kyu; Kim, Jong-Hwan; Min, Beong-Tae; Hong, Seong-Wan
2008-03-15
The purpose of the present work is to use experiments that have been performed at Forschungszentrum Karlsruhe during about the last ten years for determining the most appropriate models and parameters for premixing calculations. The results of a QUEOS experiment are used to fix the parameters concerning heat transfer. The QUEOS experiments are especially suited for this purpose as they have been performed with small hot solid spheres. Therefore the area of heat exchange is known. With the heat transfer parameters fixed in this way, a PREMIX experiment is recalculated. These experiments have been performed with molten alumina (Al2O3) as a simulant of corium. Its initial temperature is 2600 K. With these experiments the models and parameters for jet and drop break-up are tested.
Determination of appropriate models and parameters for premixing calculations
International Nuclear Information System (INIS)
Park, Ik-Kyu; Kim, Jong-Hwan; Min, Beong-Tae; Hong, Seong-Wan
2008-03-01
The purpose of the present work is to use experiments that have been performed at Forschungszentrum Karlsruhe during about the last ten years for determining the most appropriate models and parameters for premixing calculations. The results of a QUEOS experiment are used to fix the parameters concerning heat transfer. The QUEOS experiments are especially suited for this purpose as they have been performed with small hot solid spheres. Therefore the area of heat exchange is known. With the heat transfer parameters fixed in this way, a PREMIX experiment is recalculated. These experiments have been performed with molten alumina (Al2O3) as a simulant of corium. Its initial temperature is 2600 K. With these experiments the models and parameters for jet and drop break-up are tested.
Calculating excess lifetime risk in relative risk models
International Nuclear Information System (INIS)
Vaeth, M.; Pierce, D.A.
1990-01-01
When assessing the impact of radiation exposure it is common practice to present the final conclusions in terms of excess lifetime cancer risk in a population exposed to a given dose. The present investigation is mainly a methodological study focusing on some of the major issues and uncertainties involved in calculating such excess lifetime risks and related risk projection methods. The age-constant relative risk model used in the recent analyses of the cancer mortality observed in the follow-up of the cohort of A-bomb survivors in Hiroshima and Nagasaki is used to describe the effect of the exposure on cancer mortality. In this type of model the excess relative risk is constant in age-at-risk but depends on the age-at-exposure. Calculation of excess lifetime risks usually requires rather complicated life-table computations. In this paper we propose a simple approximation to the excess lifetime risk; the validity of the approximation for low levels of exposure is justified empirically as well as theoretically. This approximation provides important guidance in understanding the influence of the various factors involved in risk projections. Among the further topics considered are the influence of a latent period, the additional problems involved in calculations of site-specific excess lifetime cancer risks, the consequences of a leveling off or a plateau in the excess relative risk, and the uncertainties involved in transferring results from one population to another. The main part of this study relates to the situation with a single, instantaneous exposure, but a brief discussion is also given of the problem with a continuous exposure at a low dose rate.
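The life-table computation referred to can be sketched for an age-constant excess relative risk: the cancer hazard is scaled by (1 + ERR) from the age at exposure onward, and the lifetime risk is accumulated year by year with competing mortality. A simplified sketch with synthetic hazards (not the A-bomb survivor rates):

```python
import numpy as np

def lifetime_cancer_risk(m_all, m_cancer, err=0.0, age_at_exposure=0):
    """Life-table lifetime cancer risk. m_all and m_cancer are arrays of
    annual all-cause and cancer mortality hazards by age. Under an
    age-constant relative-risk model, the cancer hazard is multiplied by
    (1 + err) from age_at_exposure onward."""
    risk, surv = 0.0, 1.0
    for age in range(len(m_all)):
        factor = 1.0 + err if age >= age_at_exposure else 1.0
        hc = m_cancer[age] * factor            # cancer hazard with excess
        h = m_all[age] - m_cancer[age] + hc    # total hazard with excess
        die = surv * (1.0 - np.exp(-h))        # deaths during this year
        risk += die * hc / h                   # fraction attributable to cancer
        surv *= np.exp(-h)                     # survive to next age
    return risk

def excess_lifetime_risk(m_all, m_cancer, err, age_at_exposure=0):
    """Excess lifetime risk = exposed lifetime risk minus baseline risk."""
    return (lifetime_cancer_risk(m_all, m_cancer, err, age_at_exposure)
            - lifetime_cancer_risk(m_all, m_cancer))
```

The simple approximation the abstract proposes replaces this full life-table loop for low exposures; the loop above shows what is being approximated.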
Recent Developments in No-Core Shell-Model Calculations
International Nuclear Information System (INIS)
Navratil, P.; Quaglioni, S.; Stetcu, I.; Barrett, B.R.
2009-01-01
We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this approach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary; if that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review we highlight, in particular, results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states, is given in the concluding part of the review.
Recent Developments in No-Core Shell-Model Calculations
Energy Technology Data Exchange (ETDEWEB)
Navratil, P; Quaglioni, S; Stetcu, I; Barrett, B R
2009-03-20
We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this approach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary; if that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review we highlight, in particular, results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states, is given in the concluding part of the review.
Modeling and calculation of open carbon dioxide refrigeration system
International Nuclear Information System (INIS)
Cai, Yufei; Zhu, Chunling; Jiang, Yanlong; Shi, Hong
2015-01-01
Highlights: • A model of an open refrigeration system is developed. • The state of CO2 has a great effect on the refrigeration capacity loss by heat transfer. • The refrigeration capacity loss by remaining CO2 has little relation to the state of CO2. • Calculation results are in agreement with the test results. - Abstract: Based on an analysis of the properties of carbon dioxide, an open carbon dioxide refrigeration system is proposed for situations where no external electricity supply is available. A model of the open refrigeration system is developed, and the relationship between the storage conditions of the carbon dioxide and the refrigeration capacity is derived. Meanwhile, a test platform is developed to simulate the performance of the open carbon dioxide refrigeration system. By comparing the theoretical calculations and the experimental results, several conclusions are obtained: the refrigeration capacity loss by heat transfer in the supercritical state is much greater than that in the two-phase region, and the refrigeration capacity loss by remaining carbon dioxide has little relation to the state of the carbon dioxide. The results will be helpful for the use of open carbon dioxide refrigeration.
Update on Small Modular Reactors Dynamic System Modeling Tool: Web Application
International Nuclear Information System (INIS)
Hale, Richard Edward; Cetiner, Sacit M.; Fugate, David L.; Batteh, John J; Tiller, Michael M.
2015-01-01
Previous reports focused on the development of component and system models as well as end-to-end system models using Modelica and Dymola for two advanced reactor architectures: (1) Advanced Liquid Metal Reactor and (2) fluoride high-temperature reactor (FHR). The focus of this report is the release of the first beta version of the web-based application for model use and collaboration, as well as an update on the FHR model. The web-based application allows novice users to configure end-to-end system models from preconfigured choices to investigate the instrumentation and controls implications of these designs and allows for the collaborative development of individual component models that can be benchmarked against test systems for potential inclusion in the model library. A description of this application is provided along with examples of its use and a listing and discussion of all the models that currently exist in the library.
Updating and prospective validation of a prognostic model for high sickness absence.
Roelen, C A M; Heymans, M W; Twisk, J W R; van Rhenen, W; Pallesen, S; Bjorvatn, B; Moen, B E; Magerøy, N
2015-01-01
To further develop and validate a Dutch prognostic model for high sickness absence (SA). Three-wave longitudinal cohort study of 2,059 Norwegian nurses. The Dutch prognostic model was used to predict high SA among Norwegian nurses at wave 2. Subsequently, the model was updated by adding person-related (age, gender, marital status, children at home, and coping strategies), health-related (BMI, physical activity, smoking, and caffeine and alcohol intake), and work-related (job satisfaction, job demands, decision latitude, social support at work, and both work-to-family and family-to-work spillover) variables. The updated model was then prospectively validated for predictions at wave 3. 1,557 (77%) nurses had complete data at wave 2 and 1,342 (65%) at wave 3. The risk of high SA was underestimated by the Dutch model, but discrimination between high-risk and low-risk nurses was fair after re-calibration to the Norwegian data. Gender, marital status, BMI, physical activity, smoking, alcohol intake, job satisfaction, job demands, decision latitude, support at the workplace, and work-to-family spillover were identified as potential predictors of high SA. However, these predictors did not improve the model's discriminative ability, which remained fair at wave 3. The prognostic model correctly identifies 73% of Norwegian nurses at risk of high SA, although additional predictors are needed before the model can be used to screen working populations for risk of high SA.
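The two steps reported, re-calibration to the new population and checking discrimination, can be sketched as updating the logistic-model intercept so that the mean predicted risk matches the observed rate, then computing the c-statistic. This is a generic illustration of those standard techniques, not the authors' exact procedure:

```python
import numpy as np

def recalibrate_intercept(lin_pred, outcomes, tol=1e-8):
    """Calibration-in-the-large: find the intercept shift delta that makes
    the mean predicted risk equal the observed event rate (Newton's method
    on the logistic log-likelihood with slope fixed at 1)."""
    delta = 0.0
    for _ in range(100):
        p = 1.0 / (1.0 + np.exp(-(lin_pred + delta)))
        grad = np.sum(outcomes - p)          # score for the intercept
        hess = -np.sum(p * (1.0 - p))        # second derivative
        step = grad / hess
        delta -= step
        if abs(step) < tol:
            break
    return delta

def auc(lin_pred, outcomes):
    """Discrimination (c-statistic): probability that a randomly chosen
    case is ranked above a randomly chosen non-case, with ties at 0.5."""
    pos = lin_pred[outcomes == 1]
    neg = lin_pred[outcomes == 0]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))
```

Re-calibration fixes systematic under- or over-estimation of risk (the problem found with the Dutch model in Norwegian data) without changing the ranking of individuals, which is why discrimination can remain only "fair" after the update.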
Updating representation of land surface-atmosphere feedbacks in airborne campaign modeling analysis
Huang, M.; Carmichael, G. R.; Crawford, J. H.; Chan, S.; Xu, X.; Fisher, J. A.
2017-12-01
An updated modeling system to support airborne field campaigns is being built at NASA Ames Pleiades, with a focus on adjusting the representation of land surface-atmosphere feedbacks. The main updates, drawing on previous experience with ARCTAS-CARB and CalNex in the western US studying air pollution inflows, include: 1) migrating the land surface model coupled to WRF (Weather Research and Forecasting) from Noah to improved/more complex models, especially Noah-MP and Rapid Update Cycle; 2) enabling WRF land initialization with suitably spun-up land model output; 3) incorporating satellite land cover, vegetation dynamics, and soil moisture data (i.e., assimilating Soil Moisture Active Passive data using the ensemble Kalman filter approach) into WRF. Examples are given of comparing the model fields with available aircraft observations from the spring-summer 2016 field campaigns that took place on the eastern sides of continents (KORUS-AQ in South Korea and ACT-America in the eastern US), the air pollution export regions. Under fair-weather and stormy conditions, air pollution vertical distributions and column amounts, as well as the impact from the land surface, are compared. These comparisons help identify challenges and opportunities for LEO/GEO satellite remote sensing and modeling of air quality in the northern hemisphere. Finally, we briefly show applications of this system to simulating Australian conditions, which would explore the needs for further development of the observing system in the southern hemisphere and inform the Clean Air and Urban Landscapes (https://www.nespurban.edu.au) modelers.
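The ensemble Kalman filter assimilation mentioned in update 3) reduces, for a single model variable, to the scalar analysis step below. This is a deliberately simplified sketch (deterministic shift, observation perturbations omitted; names are illustrative):

```python
def enkf_update(ensemble, obs, obs_err_var):
    """Scalar ensemble Kalman filter analysis step: nudge each ensemble
    member toward the observation with Kalman gain k = P / (P + R),
    where P is the ensemble variance and R the observation error variance."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    p = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # ensemble variance P
    k = p / (p + obs_err_var)                             # Kalman gain
    return [x + k * (obs - x) for x in ensemble]
```

With a confident observation (small R) the gain approaches 1 and the ensemble collapses toward the retrieval; with a noisy one the prior soil-moisture state dominates.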
Computational models for probabilistic neutronic calculation in TADSEA
International Nuclear Information System (INIS)
Garcia, Jesus A.R.; Curbelo, Jesus P.; Hernandez, Carlos R.G.; Oliva, Amaury M.; Lira, Carlos A.B.O.
2013-01-01
The Very High Temperature Reactor is one of the main candidates for the next generation of nuclear power plants. In pebble bed reactors, the fuel is contained within graphite pebbles in the form of TRISO particles, which form a randomly packed bed inside a graphite-walled cylindrical cavity. In previous studies, the conceptual design of a Transmutation Advanced Device for Sustainable Energy Applications (TADSEA) has been made. The TADSEA is a pebble-bed ADS cooled by helium and moderated by graphite. In order to simulate the TADSEA correctly, the double heterogeneity of the system must be considered: pebbles randomly located in the core and TRISO particles randomly located inside the fuel pebbles. These features are often neglected because they are difficult to model with the MCNP code, the main reason being the limited number of cells and surfaces that can be defined. In this paper a computational tool is presented that produces a new geometrical model of the fuel pebble for neutronic calculations with MCNPX. The heterogeneity of the system is considered, including the randomly located TRISO particles inside the pebble. Several neutronic computational models of TADSEA's fuel pebbles are also compared in order to study heterogeneity effects. On the other hand, the boundary effect given by the intersection between the pebble surface and the TRISO particles could be significant for the multiplicative properties; a model to study this effect is also presented. (author)
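The stochastic geometry behind the double heterogeneity can be illustrated by rejection-sampling non-overlapping TRISO centres inside a spherical fuel zone. This is a toy sketch of the idea, not the authors' tool; all dimensions and names are illustrative:

```python
import random, math

def sample_triso_centers(n, fuel_zone_r, triso_r, max_tries=100000, seed=1):
    """Rejection-sample n non-overlapping TRISO particle centres inside a
    spherical fuel zone: draw uniform points in the sphere, keep a candidate
    only if it clears every accepted particle by at least one diameter."""
    rng = random.Random(seed)
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        # uniform point in the unit ball, then scale so the particle fits
        while True:
            x, y, z = (rng.uniform(-1, 1) for _ in range(3))
            if x * x + y * y + z * z <= 1.0:
                break
        r = fuel_zone_r - triso_r
        c = (x * r, y * r, z * r)
        if all(math.dist(c, o) >= 2 * triso_r for o in centers):
            centers.append(c)
    return centers
```

At realistic TRISO packing fractions a plain rejection scheme stalls, which is one reason dedicated geometry tools are needed for codes like MCNPX.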
Volume-based geometric modeling for radiation transport calculations
International Nuclear Information System (INIS)
Li, Z.; Williamson, J.F.
1992-01-01
Accurate theoretical characterization of radiation fields is a valuable tool in the design of complex systems, such as linac heads and intracavitary applicators, and for generation of basic dose calculation data that are inaccessible to experimental measurement. Both Monte Carlo and deterministic solutions to such problems require a system for accurately modeling complex 3-D geometries that supports ray tracing, point and segment classification, and 2-D graphical representation. Previous combinatorial approaches to solid modeling, which involve describing complex structures as set-theoretic combinations of simple objects, are limited in their ease of use and place unrealistic constraints on the geometric relations between objects, such as excluding common boundaries. A new approach to volume-based solid modeling has been developed which is based upon topologically consistent definitions of boundary, interior, and exterior of a region. From these definitions, FORTRAN union, intersection, and difference routines have been developed that allow involuted and deeply nested structures to be described as set-theoretic combinations of ellipsoids, elliptic cylinders, prisms, cones, and planes that accommodate shared boundaries. Line segments between adjacent intersections on a trajectory are assigned to the appropriate region by a novel sorting algorithm that generalizes upon Siddon's approach. Two 2-D graphic display tools have been developed to aid the debugging of a given geometric model. In this paper, the mathematical basis of our system is described and contrasted with other approaches, and examples are discussed.
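Along a single ray, the set-theoretic combination of solids reduces to boolean operations on the 1-D intervals where the ray lies inside each solid. A sketch of that interval algebra (in Python rather than the authors' FORTRAN, and not their sorting algorithm):

```python
def combine(intervals_a, intervals_b, op):
    """Boolean-combine two sorted lists of disjoint 1-D intervals (the
    inside-segments of two solids along one ray). Classify each elementary
    span between consecutive endpoints by testing its midpoint, then merge
    adjacent kept spans."""
    events = sorted({e for iv in intervals_a for e in iv} |
                    {e for iv in intervals_b for e in iv})
    def inside(ivs, t):
        return any(lo <= t < hi for lo, hi in ivs)
    out = []
    for lo, hi in zip(events, events[1:]):
        mid = 0.5 * (lo + hi)
        a, b = inside(intervals_a, mid), inside(intervals_b, mid)
        keep = {"union": a or b,
                "intersection": a and b,
                "difference": a and not b}[op]
        if keep:
            if out and out[-1][1] == lo:        # merge with previous span
                out[-1] = (out[-1][0], hi)
            else:
                out.append((lo, hi))
    return out
```

Midpoint classification sidesteps the shared-boundary ambiguity that the abstract notes as a limitation of earlier combinatorial systems.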
Development of nuclear models for higher energy calculations
International Nuclear Information System (INIS)
Bozoian, M.; Siciliano, E.R.; Smith, R.D.
1988-01-01
Two nuclear models for higher energy calculations have been developed in the regions of high and low energy transfer, respectively. In the former, a relativistic hybrid-type preequilibrium model is compared with data ranging from 60 to 800 MeV. Also, the GNASH exciton preequilibrium-model code with higher energy improvements is compared with data at 200 and 318 MeV. In the region of low energy transfer, nucleon-nucleus scattering is predominantly a direct reaction involving quasi-elastic collisions with one or more target nucleons. We discuss various aspects of quasi-elastic scattering which are important in understanding features of cross sections and spin observables. These include (1) contributions from multi-step processes; (2) damping of the continuum response from 2p-2h excitations; (3) the ''optimal'' choice of frame in which to evaluate the nucleon-nucleon amplitudes; and (4) the effect of optical and spin-orbit distortions, which are included in a model based on the RPA, the DWIA, and the eikonal approximation. 33 refs., 15 figs
International Nuclear Information System (INIS)
2003-01-01
The fourth Research Co-ordination Meeting (RCM) of the Co-ordinated Research Project (CRP) on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effect' was held during 19-23 May, 2003 in Obninsk, Russian Federation. The general objective of the CRP is to validate, verify and improve methodologies and computer codes used for the calculation of reactivity coefficients in fast reactors, aiming at enhancing the utilization of plutonium and minor actinides. The first RCM took place in Vienna on 24-26 November 1999. The meeting was attended by 19 participants from 7 Member States and one from an international organization (France, Germany, India, Japan, Rep. of Korea, Russian Federation, the United Kingdom, and IAEA). The participants from two Member States (China and the U.S.A.) provided their results and presentation materials despite being absent from the meeting. The results for several relevant reactivity parameters obtained by the participants with their own state-of-the-art basic data and codes were compared in terms of calculational uncertainty, and their effects on the ULOF transient behavior of the hybrid BN-600 core were evaluated. Contributions of the participants to the benchmark analyses are shown. This report first addresses the benchmark definitions and specifications given for each Phase and briefly introduces the basic data, computer codes, and methodologies applied to the benchmark analyses by various participants. Then, the results obtained by the participants in terms of calculational uncertainty and their effect on the core transient behavior are intercompared. Finally, it addresses some conclusions drawn from the benchmarks
Energy Technology Data Exchange (ETDEWEB)
Kneur, J.L
2006-06-15
This document is divided into two parts. The first part describes a particular re-summation technique for perturbative series that can yield non-perturbative results in some cases. We detail some applications in field theory and in condensed matter, such as the calculation of the effective temperature of Bose-Einstein condensates. The second part deals with the minimal supersymmetric standard model. We present an accurate calculation of the mass spectrum of supersymmetric particles, a calculation of the relic density of supersymmetric dark matter, and the constraints that can be inferred from these models.
Updating Sea Spray Aerosol Emissions in the Community Multiscale Air Quality Model
Gantt, B.; Bash, J. O.; Kelly, J.
2014-12-01
Sea spray aerosols (SSA) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. In this study, the Community Multiscale Air Quality (CMAQ) model is updated to enhance fine-mode SSA emissions, include a sea surface temperature (SST) dependency, and revise surf zone emissions. Based on evaluation with several regional and national observational datasets in the continental U.S., the updated emissions generally improve surface-concentration predictions for primary aerosols composed of sea salt and for secondary aerosols affected by sea-salt chemistry at coastal and near-coastal sites. Specifically, the updated emissions lead to better predictions of the magnitude and coastal-to-inland gradient of sodium, chloride, and nitrate concentrations at Bay Regional Atmospheric Chemistry Experiment (BRACE) sites near Tampa, FL. Including the SST dependency in the SSA emission parameterization leads to increased sodium concentrations in the southeast U.S. and decreased concentrations along the Pacific coast and northeastern U.S., bringing predictions into closer agreement with observations at most Interagency Monitoring of Protected Visual Environments (IMPROVE) and Chemical Speciation Network (CSN) sites. Model comparison with California Research at the Nexus of Air Quality and Climate Change (CalNex) observations will also be discussed, with particular focus on the South Coast Air Basin, where clean marine air mixes with anthropogenic pollution in a complex environment. These SSA emission updates enable more realistic simulation of chemical processes in coastal environments, both in clean marine air masses and in mixtures of clean marine and polluted conditions.
Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.
2016-06-30
Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
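The likelihood equations described above take the standard logistic-regression form. A hedged sketch of that form (coefficient names and values here are placeholders, not the published USGS equations):

```python
import math

def debris_flow_likelihood(intercept, coeffs, predictors):
    """Logistic-regression likelihood of a post-fire debris flow:
    P = 1 / (1 + exp(-(b0 + b1*x1 + ... + bk*xk))), where the x's would be
    basin morphology, burn severity, soil, and rainfall-intensity terms."""
    z = intercept + sum(b * x for b, x in zip(coeffs, predictors))
    return 1.0 / (1.0 + math.exp(-z))
```

Logistic regression keeps the output in (0, 1), so the result reads directly as a probability that can be thresholded for hazard mapping.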
Updated and integrated modelling of the 1995 - 2008 Mise-a-la-masse survey data in Olkiluoto
International Nuclear Information System (INIS)
Ahokas, T.; Paananen, M.
2010-01-01
Posiva Oy is preparing for the disposal of spent nuclear fuel in bedrock, focusing on Olkiluoto, Eurajoki. This is in accordance with the Decision-in-Principle of the State Council in 2000 and its ratification by the Parliament in 2001. The ONKALO underground characterization premises have been under construction since 2004, and Posiva Oy aims to submit the construction licence application in 2012. To support the compilation of the safety case and the design and construction of the repository and ONKALO, an integrated Olkiluoto site description including geological, rock mechanics, hydrogeological and hydrogeochemical models will be compiled. Mise-a-la-masse (MAM) surveys have been carried out in the Olkiluoto area since 1995 to follow electric conductors from drillhole to drillhole, from drillhole to the ground surface, and also between the ONKALO access tunnel and drillholes or the ground surface. The data and some visualisation of the data have been presented as part of the reporting of the 1995 and 2008 surveys. The work presented in this paper includes modelling of all the measured data and combining single conductors modelled from different surveys into conductive zones. The results from this work will be used in updating the geological and hydrogeological models of the Olkiluoto site area. Several electrically conductive zones were modelled from the examined data, many of which coincide with known brittle deformation zones, and indications of several previously unknown zones were also detected. During the modelling, the Comsol Multiphysics software was tested for calculating theoretical potential-field anomalies of different models. The test calculations showed that this software is useful in confirming the modelling results, especially in complicated cases. (orig.)
Cluster model calculations of the solid state materials electron structure
International Nuclear Information System (INIS)
Pelikan, P.; Biskupic, S.; Banacky, P.; Zajac, A.; Svrcek, A.; Noga, J.
1997-01-01
Materials of the general composition ACuO2 are the parent compounds of the so-called infinite-layer superconductors. In the present paper the electron structures of the compounds CaCuO2, SrCuO2, Ca0.86Sr0.14CuO2 and Ca0.26Sr0.74CuO2 were calculated. Cluster models consisting of 192 atoms were computed using a quasi-relativistic version of the semiempirical INDO method. The obtained results indicate strong ionicity of the Ca/Sr-O bonds and high covalency of the Cu-O bonds. The width of the energy gap at the Fermi level increases from Ca0.26Sr0.74CuO2 through Ca0.86Sr0.14CuO2 to the unsubstituted compounds. This order correlates with the fact that materials of the composition CaxSr1-xCuO2 have high temperatures of the superconductive transition (up to 110 K). Materials partially substituted by Sr2+ also have a higher density of states in the close vicinity of the Fermi level, which is an additional condition for the possibility of a superconductive transition. A strong influence of the vibrational motions on the energy gap at the Fermi level was also calculated. (authors). 1 tab., 2 figs., 10 refs
Su, Chiu-Wen; Yen, Amy Ming-Fang; Lai, Hongmin; Chen, Hsiu-Hsi; Chen, Sam Li-Sheng
2017-12-01
The accuracy of a prediction model for periodontal disease using the community periodontal index (CPI) has been assessed using the area under the receiver operating characteristic (AUROC) curve. How the uncalibrated CPI, as measured by general dentists trained by periodontists in a large epidemiologic study, affects the performance of a prediction model has not yet been researched. A two-stage design was conducted: first, a validation study to calibrate the CPI between a senior periodontal specialist and the trained general dentists who measured CPIs in the main study of a nationwide survey. A Bayesian hierarchical logistic regression model was applied to estimate the non-updated and updated clinical weights used for building up risk scores. How the calibrated CPI affected performance of the updated prediction model was quantified by comparing AUROC curves between the original and updated models. Estimates regarding calibration of the CPI obtained from the validation study were 66% and 85% for sensitivity and specificity, respectively. After updating, the clinical weights of each predictor were inflated, and the risk score for the highest risk category was elevated from 434 to 630. This update improved the AUROC performance from 62.6% (95% confidence interval [CI]: 61.7% to 63.6%) for the non-updated model to 68.9% (95% CI: 68.0% to 69.6%) for the updated one, a statistically significant difference. The improved performance of the updated prediction model was thus demonstrated for periodontal disease as measured by the calibrated CPI derived from a large epidemiologic survey.
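The AUROC comparison central to this study can be computed directly from risk scores via the rank-sum (Mann-Whitney) identity; a minimal sketch (not the authors' Bayesian pipeline):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum identity: the probability
    that a randomly chosen positive case outscores a randomly chosen
    negative case, with ties counting half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.5 is chance discrimination; values around 0.63-0.69, as reported above, are conventionally described as fair.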
Selection of models to calculate the LLW source term
International Nuclear Information System (INIS)
Sullivan, T.M.
1991-10-01
Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab
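A common simplifying assumption in source-term models of the kind reviewed is first-order leaching of the inventory combined with radioactive decay, for which the fractional release has a closed form. A generic sketch under that assumption (not one of the reviewed models):

```python
import math

def fractional_release_rate(t, leach_rate, decay_const):
    """Release rate, as a fraction of initial inventory per unit time, for
    first-order leaching of a decaying radionuclide:
        dR/dt = k_leach * exp(-(k_leach + k_decay) * t)."""
    return leach_rate * math.exp(-(leach_rate + decay_const) * t)

def total_fraction_released(leach_rate, decay_const):
    """Integral of the rate over all time: k_leach / (k_leach + k_decay).
    Fast-decaying nuclides release a smaller total fraction."""
    return leach_rate / (leach_rate + decay_const)
```

The closed-form total makes the competition explicit: a nuclide decays away inside the wasteform in proportion to k_decay relative to k_leach.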
Calculational models of close-spaced thermionic converters
International Nuclear Information System (INIS)
McVey, J.B.
1983-01-01
Two new calculational models have been developed in conjunction with the SAVTEC experimental program. These models have been used to analyze data from experimental close-spaced converters, providing values for spacing, electrode work functions, and converter efficiency. They have also been used to make performance predictions for such converters over a wide range of conditions. Both models are intended for use in the collisionless (Knudsen) regime. They differ from each other in that the simpler one uses a Langmuir-type formulation which only considers electrons emitted from the emitter. This approach is implemented in the LVD (Langmuir Vacuum Diode) computer program, which has the virtue of being both simple and fast. The more complex model also includes both Saha-Langmuir emission of positive cesium ions from the emitter and collector back emission. Computer implementation is by the KMD1 (Knudsen Mode Diode) program. The KMD1 model derives the particle distribution functions from the Vlasov equation. From these the particle densities are found for various interelectrode motive shapes. Substituting the particle densities into Poisson's equation gives a second-order differential equation for the potential. This equation can be integrated once analytically. The second integration, which gives the interelectrode motive, is performed numerically by the KMD1 program. This is complicated by the fact that the integrand is often singular at one end point of the integration interval. The program performs a transformation on the integrand to make it finite over the entire interval. Once the motive has been computed, the output voltage, current density, power density, and efficiency are found. The program is presently unable to operate when the ion richness ratio β is between about 0.8 and 1.0, due to the occurrence of oscillatory motives
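The numerical difficulty mentioned above, an integrand singular at one endpoint, is classically removed by a change of variables. A sketch of the square-root substitution (illustrative, not the KMD1 transformation itself):

```python
import math

def integrate_sqrt_singularity(g, a, b, n=1000):
    """Integrate g(x)/sqrt(x - a) over [a, b], where the integrand blows up
    at x = a. Substituting x = a + u**2 removes the singularity:
        ∫ g(x)/sqrt(x - a) dx = ∫ 2*g(a + u**2) du,  u in [0, sqrt(b - a)],
    and the transformed integrand is finite, so a plain midpoint rule works."""
    umax = math.sqrt(b - a)
    h = umax / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h          # midpoint of the i-th subinterval
        total += 2.0 * g(a + u * u)
    return total * h
```

For g ≡ 1 this reproduces ∫₀¹ dx/√x = 2 exactly, since the transformed integrand is constant.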
International Nuclear Information System (INIS)
Mueller, R.G.
1987-06-01
Due to the strong influence of vapour bubbles on the nuclear chain reaction, an exact calculation of neutron physics and thermal hydraulics in light water reactors requires consideration of subcooled boiling. To this end, in the present study a dynamic model is derived from the time-dependent conservation equations. It contains new methods for the time-dependent determination of evaporation and condensation heat flow and for the heat transfer coefficient in subcooled boiling. Furthermore, it enables the complete two-phase flow region to be treated in a consistent manner. The calculation model was verified using measured data from experiments covering a wide range of thermodynamic boundary conditions. In all cases very good agreement was reached. The results from the coupling of the new calculation model with a neutron kinetics program proved its suitability for the steady-state and transient calculation of reactor cores. (orig.)
Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin
DEFF Research Database (Denmark)
Finsen, F.; Milzow, Christian; Smith, R.
2014-01-01
Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason, Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements improved model performance considerably. The Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet.
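The Nash-Sutcliffe efficiency used to quantify the improvement above compares model error against the variance of the observations; a minimal sketch:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency:
        NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))².
    NSE = 1 is a perfect fit; NSE = 0 means the model is no better than
    predicting the observed mean; negative values are worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den
```

An increase from 0.77 to 0.83, as reported, thus corresponds to roughly a quarter reduction in residual error variance relative to the observation variance.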
Experimental Update of the Overtopping Model Used for the Wave Dragon Wave Energy Converter
Directory of Open Access Journals (Sweden)
Erik Friis-Madsen
2013-04-01
An overtopping model specifically suited for the Wave Dragon is needed in order to improve the reliability of its performance estimates. The model shall be comprehensive of all relevant physical processes that affect overtopping and flexible to adapt to any local conditions and device configuration. An experimental investigation is carried out to update an existing formulation suited for 2D draft-limited, low-crested structures, in order to include the effects on the overtopping flow of the wave steepness, the 3D geometry of the Wave Dragon, the wing reflectors, the device motions and the non-rigid connection between platform and reflectors. The study is carried out in four phases, each of them specifically targeted at quantifying one of these effects through a sensitivity analysis and at modeling it through custom-made parameters. These depend on features of the wave or the device configuration, all of which can be measured in real time. Instead of using new fitting coefficients, this approach allows a broader applicability of the model beyond the Wave Dragon case, to any overtopping WEC or structure within the range of tested conditions. The reliability of overtopping predictions for the Wave Dragon increased, as the updated model offers improved accuracy and precision with respect to the former version.
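Overtopping formulations for low-crested structures of the kind being updated here typically take a non-dimensional exponential form. A generic sketch of that form (the coefficients A and B below are illustrative defaults, not the paper's fitted or real-time parameters):

```python
import math

def overtopping_discharge(hm0, rc, a=0.2, b=2.6, g=9.81):
    """Average overtopping discharge per metre of crest, q [m^3/s/m], from
    the generic exponential law for low-crested structures:
        q = A * sqrt(g * Hm0**3) * exp(-B * Rc / Hm0),
    where Hm0 is the significant wave height and Rc the crest freeboard.
    The paper's contribution is to replace fixed A, B with parameters that
    track wave steepness, geometry, and device motions in real time."""
    return a * math.sqrt(g * hm0 ** 3) * math.exp(-b * rc / hm0)
```

The exponential dependence on relative freeboard Rc/Hm0 is why a floating device's motions, which modulate the instantaneous freeboard, matter so much for predicted discharge.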
A Review of the Updated Pharmacophore for the Alpha 5 GABA(A) Benzodiazepine Receptor Model
Directory of Open Access Journals (Sweden)
Terry Clayton
2015-01-01
An updated model of the GABA(A) benzodiazepine receptor pharmacophore of the α5-BzR/GABA(A) subtype has been constructed, prompted by the synthesis of subtype-selective ligands in light of recent developments in ligand synthesis, behavioral studies, and molecular modeling studies of the binding site itself. A number of BzR/GABA(A) α5-subtype-selective compounds were synthesized, notably the α5-subtype-selective inverse agonist PWZ-029 (1), which is active in enhancing cognition in both rodents and primates. In addition, a chiral positive allosteric modulator (PAM), SH-053-2′F-R-CH3 (2), has been shown to reverse the deleterious effects in the MAM model of schizophrenia as well as alleviate constriction in airway smooth muscle. Presented here is an updated model of the pharmacophore for α5β2γ2 Bz/GABA(A) receptors, including a rendering of PWZ-029 docked within the α5-binding pocket showing specific interactions of the molecule with the receptor. Differences in the included volume as compared to α1β2γ2, α2β2γ2, and α3β2γ2 are illustrated for clarity. These new models enhance the ability to understand structural characteristics of ligands which act as agonists, antagonists, or inverse agonists at the Bz binding site of GABA(A) receptors.
Calculating ε'/ε in the standard model
International Nuclear Information System (INIS)
Sharpe, S.R.
1988-01-01
The ingredients needed in order to calculate ε' and ε are described. Particular emphasis is given to the non-perturbative calculations of matrix elements by lattice methods. The status of the electromagnetic contribution to ε' is reviewed. 15 refs
2016-11-03
This final rule updates the Home Health Prospective Payment System (HH PPS) payment rates, including the national, standardized 60-day episode payment rates, the national per-visit rates, and the non-routine medical supply (NRS) conversion factor, effective for home health episodes of care ending on or after January 1, 2017. This rule also: implements the last year of the 4-year phase-in of the rebasing adjustments to the HH PPS payment rates; updates the HH PPS case-mix weights using the most current, complete data available at the time of rulemaking; implements the 2nd year of a 3-year phase-in of a reduction to the national, standardized 60-day episode payment to account for estimated case-mix growth unrelated to increases in patient acuity (that is, nominal case-mix growth) between CY 2012 and CY 2014; finalizes changes to the methodology used to calculate payments made under the HH PPS for high-cost "outlier" episodes of care; implements changes in payment for furnishing Negative Pressure Wound Therapy (NPWT) using a disposable device for patients under a home health plan of care; discusses our efforts to monitor the potential impacts of the rebasing adjustments; includes an update on subsequent research and analysis as a result of the findings from the home health study; finalizes changes to the Home Health Value-Based Purchasing (HHVBP) Model, which was implemented on January 1, 2016; and updates the Home Health Quality Reporting Program (HH QRP).
Image processing of full-field strain data and its use in model updating
International Nuclear Information System (INIS)
Wang, W; Mottershead, J E; Sebastian, C M; Patterson, E A
2011-01-01
Finite element model updating is an inverse problem based on measured structural outputs, typically natural frequencies. Full-field responses such as static stress/strain patterns and vibration mode shapes contain valuable information for model updating, but within large volumes of highly redundant data. Pattern recognition and image processing provide feasible techniques to extract effective and efficient information, often known as shape features, from this data. For instance, the Zernike polynomials, having the properties of orthogonality and rotational invariance, are powerful decomposition kernels for a shape defined within a unit circle. In this paper, full-field strain patterns for a specimen, in the form of a square plate with a circular hole, under a tensile load are considered. Effective shape features can be constructed by a set of modified Zernike polynomials. The modification includes the application of a weighting function to the Zernike polynomials so that high strain magnitudes around the hole are well represented. The Gram-Schmidt process is then used to ensure orthogonality of the obtained decomposition kernels over the domain of the specimen. The difference between full-field strain patterns measured by digital image correlation (DIC) and reconstructed using 15 shape features (Zernike moment descriptors, ZMDs) at different steps in the elasto-plastic deformation of the specimen is found to be very small. It is significant that only a very small number of shape features are necessary and sufficient to represent the full-field data. Model updating of nonlinear elasto-plastic material properties is carried out by adjusting the parameters of a FE model until the FE strain pattern converges upon the measured strains as determined using ZMDs.
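The Zernike decomposition kernels mentioned above are built from the radial polynomials R_n^m; a minimal evaluation sketch of the standard finite-sum definition (the paper's weighted, Gram-Schmidt re-orthogonalized kernels are not reproduced here):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial part R_n^m(rho) of the Zernike polynomial, valid for
    |m| <= n with n - |m| even, via the standard finite sum:
        R_n^m(rho) = Σ_s (-1)^s (n-s)! /
                     [s! ((n+m)/2 - s)! ((n-m)/2 - s)!] * rho^(n-2s)."""
    m = abs(m)
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )
```

Multiplying by cos(mθ) or sin(mθ) gives the full 2-D kernels; their orthogonality over the unit disc is what makes a handful of moment descriptors sufficient to summarize a full-field strain map.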
Comparative analysis of calculation models of railway subgrade
Directory of Open Access Journals (Sweden)
I.O. Sviatko
2013-08-01
Purpose. In the design of transport engineering structures, the primary task is to determine the parameters of the foundation soil and the nuances of its behaviour under load. When calculating the interaction of the soil subgrade with the upper track structure, it is very important to determine the parameters of shear resistance and the parameters governing the development of deep deformations in foundation soils. The aim is to find generalized numerical modeling methods for embankment foundation soil behaviour that cover not only the analysis of the foundation stress state but also of its deformed state. Methodology. The analysis of existing modern and classical methods of numerical simulation of soil samples under static load was made. Findings. According to traditional methods of analysis of ground-mass behaviour, limiting and qualitatively estimating subgrade deformations is possible only indirectly, through the estimation of stresses and comparison of the obtained values with boundary ones. Originality. A new computational model is proposed that applies not only the classical analysis of the soil subgrade stress state but also takes its deformed state into account. Practical value. The analysis showed that for accurate analysis of ground-mass behaviour it is necessary to develop a generalized methodology for analyzing the rolling stock - railway subgrade interaction, which will use not only the classical approach of analyzing the soil subgrade stress state, but also take its deformed state into account.
Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.
Knudsen, Thomas; Aasbjerg Nielsen, Allan
2013-04-01
The Danish national elevation model, DK-DEM, was introduced in 2009 and is based on LiDAR data collected in the time frame 2005-2007. Hence, DK-DEM is aging, and it is time to consider how to integrate new data with the current model in a way that improves the representation of new landscape features, while still preserving the overall (very high) quality of the model. In LiDAR terms, 2005 is equivalent to some time between the palaeolithic and the neolithic. So evidently, when (and if) an update project is launched, we may expect some notable improvements due to the technical and scientific developments from the last half decade. To estimate the magnitude of these potential improvements, and to devise efficient and effective ways of integrating the new and old data, we currently carry out a number of case studies based on comparisons between the current terrain model (with a ground sample distance, GSD, of 1.6 m), and a number of new high resolution point clouds (10-70 points/m2). Not knowing anything about the terms of a potential update project, we consider multiple scenarios ranging from business as usual: A new model with the same GSD, but improved precision, to aggressive upscaling: A new model with 4 times better GSD, i.e. a 16-fold increase in the amount of data. Especially in the latter case speeding up the gridding process is important. Luckily recent results from one of our case studies reveal that for very high resolution data in smooth terrain (which is the common case in Denmark), using local mean (LM) as grid value estimator is only negligibly worse than using the theoretically "best" estimator, i.e. ordinary kriging (OK) with rigorous modelling of the semivariogram. The bias in a leave one out cross validation differs on the micrometer level, while the RMSE differs on the 0.1 mm level. This is fortunate, since a LM estimator can be implemented in plain stream mode, letting the points from the unstructured point cloud (i.e. no TIN generation) stream
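The local-mean estimator praised above for very high resolution point clouds can indeed be implemented in plain stream mode, one pass over the points, no TIN generation. A minimal sketch (grid-definition parameters are illustrative):

```python
def local_mean_grid(points, x0, y0, gsd, nx, ny):
    """Stream an unstructured (x, y, z) point cloud into a regular grid
    using the local-mean (LM) estimator: each cell's value is the mean z
    of the points falling in that cell. Single pass, constant memory per
    cell; cells with no points are left as None."""
    sums = [[0.0] * nx for _ in range(ny)]
    counts = [[0] * nx for _ in range(ny)]
    for x, y, z in points:
        i = int((x - x0) / gsd)    # column index
        j = int((y - y0) / gsd)    # row index
        if 0 <= i < nx and 0 <= j < ny:
            sums[j][i] += z
            counts[j][i] += 1
    return [[sums[j][i] / counts[j][i] if counts[j][i] else None
             for i in range(nx)] for j in range(ny)]
```

Unlike ordinary kriging, this needs no semivariogram and no neighbourhood search, which is what makes the 16-fold data increase of the aggressive-upscaling scenario tractable.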
Accurate Holdup Calculations with Predictive Modeling & Data Integration
Energy Technology Data Exchange (ETDEWEB)
Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering
2017-04-03
In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
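The Bayesian rating of candidate configurations can be illustrated with a toy discrete example; the linear detector response, candidate masses, and noise level below are hypothetical stand-ins for the transport-code forward model:

```python
import numpy as np

def posterior(candidates, response, measured, sigma, prior=None):
    """Rate candidate holdup masses by posterior probability via Bayes' theorem.
    candidates: source masses (g); response: counts per gram (toy linear
    forward model); measured: observed count rate; sigma: measurement std."""
    candidates = np.asarray(candidates, dtype=float)
    prior = np.ones_like(candidates) if prior is None else np.asarray(prior)
    predicted = response * candidates               # forward model prediction
    like = np.exp(-0.5 * ((measured - predicted) / sigma) ** 2)
    post = like * prior
    return post / post.sum()                        # normalise (Bayes' theorem)

masses = [0.0, 50.0, 100.0, 150.0, 200.0]           # candidate configurations
p = posterior(masses, response=2.0, measured=210.0, sigma=30.0)
best = masses[int(np.argmax(p))]                    # most plausible candidate
```

The normalised posterior also quantifies the confidence in the chosen configuration, resolving the multiple-solution ambiguity of the under-determined inverse problem.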
Glass operational file. Operational models and integration calculations
International Nuclear Information System (INIS)
Ribet, I.
2004-01-01
This document presents the operational choices of dominating phenomena, hypotheses, equations and numerical data of the parameters used in the two operational models elaborated for the calculation of the glass source terms with respect to the waste packages considered: existing packages (R7T7, AVM and CEA glasses) and future ones (UOX2, UOX3, UMo, others). The overall operational choices are justified and demonstrated, and a critical analysis of the approach is systematically proposed. The use of the operational model (OPM) V0→Vr, realistic, conservative and robust, is recommended for glasses with a high thermal and radioactive load, which represent the main part of the vitrified wastes. The OPM V0S, much more overestimating but faster to parameterize, can be used for the long-term behaviour forecasting of glasses with low thermal and radioactive load, considering today's lack of knowledge for the parameterization of a V0→Vr type OPM. Efficiency estimations have been made for R7T7 glasses (OPM V0→Vr) and AVM glasses (OPM V0S), which correspond to more than 99.9% of the vitrified waste packages' activity. The strongly contrasting results obtained illustrate the importance of the choice of operational models: in conditions representative of a geologic disposal, the estimated lifetime of R7T7-type packages exceeds several hundred thousand years. Even if the estimated lifetime of AVM packages is much shorter (because of the overestimating character of the OPM V0S), the released potential radiotoxicity is of the same order as that of R7T7 packages. (J.S.)
Groundwater flow modelling under ice sheet conditions. Scoping calculations
Energy Technology Data Exchange (ETDEWEB)
Jaquet, O.; Namar, R. (In2Earth Modelling Ltd (Switzerland)); Jansson, P. (Dept. of Physical Geography and Quaternary Geology, Stockholm Univ., Stockholm (Sweden))
2010-10-15
The potential impact of long-term climate changes has to be evaluated with respect to repository performance and safety. In particular, glacial periods of advancing and retreating ice sheets and prolonged permafrost conditions are likely to occur over the repository site. The growth and decay of ice sheets and the associated distribution of permafrost will affect the groundwater flow field and its composition. As large changes may take place, understanding groundwater flow patterns in connection with glaciations is an important issue for long-term geological disposal. During a glacial period, the performance of the repository could be weakened by some of the following conditions and associated processes: - Maximum pressure at repository depth (canister failure). - Maximum permafrost depth (canister failure, buffer function). - Concentration of groundwater oxygen (canister corrosion). - Groundwater salinity (buffer stability). - Glacially induced earthquakes (canister failure). Therefore, the GAP project aims at understanding key hydrogeological issues as well as answering specific questions: - Regional groundwater flow system under ice sheet conditions. - Flow and infiltration conditions at the ice sheet bed. - Penetration depth of glacial meltwater into the bedrock. - Water chemical composition at repository depth in the presence of glacial effects. - Role of the taliks, located in front of the ice sheet, likely to act as potential discharge zones of deep groundwater flow. - Influence of permafrost distribution on the groundwater flow system in relation to build-up and thawing periods. - Consequences of glacially induced earthquakes on the groundwater flow system. Some answers will be provided by the field data and investigations; the integration of the information and the dynamic characterisation of the key processes will be obtained using numerical modelling. Since most of the data are not yet available, some scoping calculations are performed using the
Groundwater flow modelling under ice sheet conditions. Scoping calculations
International Nuclear Information System (INIS)
Jaquet, O.; Namar, R.; Jansson, P.
2010-10-01
The potential impact of long-term climate changes has to be evaluated with respect to repository performance and safety. In particular, glacial periods of advancing and retreating ice sheets and prolonged permafrost conditions are likely to occur over the repository site. The growth and decay of ice sheets and the associated distribution of permafrost will affect the groundwater flow field and its composition. As large changes may take place, understanding groundwater flow patterns in connection with glaciations is an important issue for long-term geological disposal. During a glacial period, the performance of the repository could be weakened by some of the following conditions and associated processes: - Maximum pressure at repository depth (canister failure). - Maximum permafrost depth (canister failure, buffer function). - Concentration of groundwater oxygen (canister corrosion). - Groundwater salinity (buffer stability). - Glacially induced earthquakes (canister failure). Therefore, the GAP project aims at understanding key hydrogeological issues as well as answering specific questions: - Regional groundwater flow system under ice sheet conditions. - Flow and infiltration conditions at the ice sheet bed. - Penetration depth of glacial meltwater into the bedrock. - Water chemical composition at repository depth in the presence of glacial effects. - Role of the taliks, located in front of the ice sheet, likely to act as potential discharge zones of deep groundwater flow. - Influence of permafrost distribution on the groundwater flow system in relation to build-up and thawing periods. - Consequences of glacially induced earthquakes on the groundwater flow system. Some answers will be provided by the field data and investigations; the integration of the information and the dynamic characterisation of the key processes will be obtained using numerical modelling. Since most of the data are not yet available, some scoping calculations are performed using the
Directory of Open Access Journals (Sweden)
Indra Djati Sidi
2017-12-01
The model error N has been introduced to denote the discrepancy between the measured and predicted capacity of a pile foundation. This model error is recognized as epistemic uncertainty in pile capacity prediction. The statistics of N have been evaluated based on data gathered from various sites and may be considered only as a general-error trend in capacity prediction, providing crude estimates of the model error in the absence of more specific data from the site. The results of even a single load test to failure should provide direct evidence of the pile capacity at a given site. Bayes' theorem has been used as a rational basis for combining new data with previous data to revise the assessment of uncertainty and reliability. This study is devoted to the development of procedures for updating the model error (N), and subsequently the predicted pile capacity, with the results of a single failure test.
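A minimal sketch of such an update, assuming (for illustration only) a normal model for N and hypothetical numbers; the study's actual statistics are not reproduced here:

```python
import math

def update_model_error(mu0, sd0, n_obs, sd_obs):
    """Conjugate normal Bayes update of the model-error statistics:
    precision-weighted average of the prior and the site observation."""
    w0, w1 = 1.0 / sd0**2, 1.0 / sd_obs**2
    mu = (w0 * mu0 + w1 * n_obs) / (w0 + w1)
    sd = math.sqrt(1.0 / (w0 + w1))
    return mu, sd

mu0, sd0 = 1.0, 0.3                 # general-trend prior for N (hypothetical)
n_obs = 1.2                         # single load test: measured/predicted = 1.2
mu1, sd1 = update_model_error(mu0, sd0, n_obs, sd_obs=0.15)
updated_capacity = mu1 * 1000.0     # revised prediction for a 1000 kN nominal pile
```

The posterior mean shifts toward the site-specific test result, and the posterior standard deviation is always smaller than the prior one, reflecting the reduced epistemic uncertainty.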
A visual tracking method based on deep learning without online model updating
Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei
2018-02-01
The paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradients) feature are combined to select the tracking object. In the process of tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging tracking factors such as deformation, scale variation, rotation variation, illumination variation, and background clutter; moreover, its overall performance is better than that of the other six tracking methods.
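The feature-fusion step (selecting the tracked object among detector candidates by combining a colour histogram with a gradient histogram) can be sketched as follows; the toy histograms and the equal 0.5/0.5 weighting are assumptions, not the paper's values:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms; 1.0 = identical."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

def select_candidate(color_t, hog_t, candidates, w=0.5):
    """Score each candidate box by a weighted sum of colour-histogram and
    gradient-histogram similarity to the template; return the best index."""
    scores = [w * bhattacharyya(color_t, c) + (1 - w) * bhattacharyya(hog_t, h)
              for c, h in candidates]
    return int(np.argmax(scores))

color_t = np.array([8.0, 1.0, 1.0])      # template colour histogram (toy)
hog_t = np.array([2.0, 5.0, 3.0])        # template gradient histogram (toy)
cands = [(np.array([1.0, 8.0, 1.0]), np.array([5.0, 2.0, 3.0])),   # distractor
         (np.array([7.0, 2.0, 1.0]), np.array([2.0, 4.0, 4.0]))]   # true object
best = select_candidate(color_t, hog_t, cands)
```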
Finite element model updating of concrete structures based on imprecise probability
Biswal, S.; Ramaswamy, A.
2017-09-01
Imprecise probability based methods are developed in this study for the parameter estimation, in finite element model updating for concrete structures, when the measurements are imprecisely defined. Bayesian analysis using Metropolis Hastings algorithm for parameter estimation is generalized to incorporate the imprecision present in the prior distribution, in the likelihood function, and in the measured responses. Three different cases are considered (i) imprecision is present in the prior distribution and in the measurements only, (ii) imprecision is present in the parameters of the finite element model and in the measurement only, and (iii) imprecision is present in the prior distribution, in the parameters of the finite element model, and in the measurements. Procedures are also developed for integrating the imprecision in the parameters of the finite element model, in the finite element software Abaqus. The proposed methods are then verified against reinforced concrete beams and prestressed concrete beams tested in our laboratory as part of this study.
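A stripped-down Metropolis-Hastings loop of the kind generalized in the study, here for a single stiffness-like parameter of a toy model; the forward model, prior, and noise level are illustrative, and the imprecision treatment is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(theta):
    """Toy forward model: response grows like the square root of the
    parameter (e.g. a natural frequency vs. a stiffness)."""
    return np.sqrt(theta)

def log_post(theta, y, sigma, mu0, sd0):
    """Unnormalised log posterior: Gaussian likelihood + Gaussian prior."""
    if theta <= 0.0:
        return -np.inf                       # keep the parameter physical
    log_like = -0.5 * ((y - model(theta)) / sigma) ** 2
    log_prior = -0.5 * ((theta - mu0) / sd0) ** 2
    return log_like + log_prior

y_meas, sigma = 2.0, 0.05                    # "measured" response => theta near 4
theta, samples = 1.0, []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.3)      # random-walk proposal
    if np.log(rng.uniform()) < (log_post(prop, y_meas, sigma, 4.0, 2.0)
                                - log_post(theta, y_meas, sigma, 4.0, 2.0)):
        theta = prop                         # accept
    samples.append(theta)
post_mean = float(np.mean(samples[5000:]))   # discard burn-in
```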
Updating radon daughter bronchial dosimetry
International Nuclear Information System (INIS)
Harley, N.H.; Cohen, B.S.
1990-01-01
It is of value to update radon daughter bronchial dosimetry as new information becomes available. Measurements have now been performed using hollow casts of the human bronchial tree with a larynx to determine convective or turbulent deposition in the upper airways. These measurements allow a more realistic calculation of bronchial deposition by diffusion. Particle diameters of 0.15 and 0.2 μm were used, which correspond to the activity median diameters for radon daughters in both environmental and mining atmospheres. The total model incorporates Yeh/Schum bronchial morphometry, deposition of unattached and attached radon daughters, build-up and decay of the daughters, and mucociliary clearance. The alpha dose to target cells in the bronchial epithelium is calculated for the updated model and compared with previous calculations of bronchial dose.
Calculation of real optical model potential for heavy ions in the framework of the folding model
International Nuclear Information System (INIS)
Goncharov, S.A.; Timofeyuk, N.K.; Kazacha, G.S.
1987-01-01
The code for calculation of a real optical model potential in the framework of the folding model is realized. The program performs a numerical Fourier-Bessel transformation based on Filon's integration rule. The accuracy of the numerical calculations is ~10^-4 for a distance interval up to about (2.5-3) times the size of the nuclei. The potentials are calculated for interactions of 3,4He with nuclei from 9Be to 27Al with different effective NN interactions and densities obtained from electron scattering data. The calculated potentials are similar to phenomenological potentials of Woods-Saxon form. With the calculated potentials, the available elastic scattering data for the considered nuclei in the energy interval 18-56 MeV are analysed. The needed renormalizations for the folding potentials are less than or about 20%
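The folding calculation can be sketched numerically: for spherically symmetric inputs, the 3D convolution U = ρ⊛v reduces to a product of Fourier-Bessel transforms. The Gaussian density and interaction below are toy inputs (chosen so the result can be checked analytically), and plain trapezoidal integration stands in for Filon's rule:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule along the last axis."""
    dx = np.diff(x)
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * dx, axis=-1)

def fourier_bessel(f, r, q):
    """F(q) = 4*pi * int f(r) j0(q r) r^2 dr, with j0(x) = sin(x)/x."""
    qr = np.outer(q, r)
    j0 = np.where(qr > 1e-12, np.sin(qr) / np.where(qr > 1e-12, qr, 1.0), 1.0)
    return trap(f[None, :] * j0 * r**2, r) * 4.0 * np.pi

r = np.linspace(1e-6, 20.0, 2000)          # radial grid (fm)
q = np.linspace(1e-6, 10.0, 2000)          # momentum grid (fm^-1)
rho = np.exp(-r**2 / 2.0)                  # toy nuclear density
v = -50.0 * np.exp(-r**2 / 1.5)            # toy effective NN interaction (MeV)

# convolution theorem: U~(q) = rho~(q) * v~(q), then invert back to r-space
Uq = fourier_bessel(rho, r, q) * fourier_bessel(v, r, q)
U = fourier_bessel(Uq, q, r) / (2.0 * np.pi) ** 3
```

For these Gaussians the exact central value is U(0) = -50·(2π·0.75/1.75)^{3/2} ≈ -221 MeV, which the numerical transforms reproduce to well under a percent.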
Rakovec, O.; Weerts, A.; Hazenberg, P.; Torfs, P.; Uijlenhoet, R.
2012-12-01
This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model (Rakovec et al., 2012a). The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of an hourly spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property). Synthetic and real-world experiments are carried out for the Upper Ourthe (1600 km²), a relatively quickly responding catchment in the Belgian Ardennes. The uncertain precipitation model forcings were obtained using a time-dependent multivariate spatial conditional simulation method (Rakovec et al., 2012b), which is further made conditional on preceding simulations. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty. Rakovec, O., Weerts, A. H., Hazenberg, P., Torfs, P. J. J. F., and Uijlenhoet, R.: State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy, Hydrol. Earth Syst. Sci. Discuss., 9, 3961-3999, doi:10.5194/hessd-9-3961-2012, 2012a. Rakovec, O., Hazenberg, P., Torfs, P. J. J. F., Weerts, A. H., and Uijlenhoet, R.: Generating spatial precipitation ensembles: impact of
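The EnKF analysis step used for the state updating can be sketched as follows; the toy state vector and linear observation operator stand in for the gridded HBV-96 states and the discharge gauges:

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(X, y, H, r):
    """One EnKF analysis step with perturbed observations.
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
    H: (n_obs, n_state) observation operator; r: observation error variance."""
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (n - 1)                                  # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r * np.eye(H.shape[0]))
    yp = y[:, None] + rng.normal(0.0, np.sqrt(r), (H.shape[0], n))
    return X + K @ (yp - H @ X)                            # updated ensemble

n_state, n_ens = 5, 100
X = rng.normal(10.0, 2.0, (n_state, n_ens))                # forecast ensemble
H = np.zeros((1, n_state))
H[0, -1] = 1.0                                             # observe the last state only
Xa = enkf_update(X, np.array([12.0]), H, r=0.25)
```

Adding more gauges corresponds to adding rows to H (augmenting the observation vector), which is exactly the effect the study finds more valuable than filtering more often.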
Directory of Open Access Journals (Sweden)
O. Rakovec
2012-09-01
This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model. The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of an hourly spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property).
Synthetic and real-world experiments are carried out for the Upper Ourthe (1600 km²), a relatively quickly responding catchment in the Belgian Ardennes. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty.
Models for calculation of dissociation energies of homonuclear diatomic molecules
International Nuclear Information System (INIS)
Brewer, L.; Winn, J.S.
1979-08-01
The variation of the known dissociation energies of the transition metal diatomics across the Periodic Table is rather irregular, like that of the bulk sublimation enthalpy, suggesting that the valence-bond model for bulk metallic systems might be applicable to the gaseous diatomic molecules and the various intermediate clusters. Available dissociation energies were converted to valence-state bonding energies considering various degrees of promotion to optimize the bonding. The degree of promotion of electrons to increase the number of bonding electrons is smaller than for the bulk, but the trends in bonding energy parallel the behavior found for the bulk metals. Thus, using the established trends in bonding energies for the bulk elements, it was possible to calculate all unknown dissociation energies to provide a complete table of dissociation energies for all M2 molecules from H2 to Lr2. For solids such as Mg, Al, Si and most of the transition metals, large promotion energies are offset by strong bonding between the valence-state atoms. The main question is whether bonding in the diatomics is adequate to sustain extensive promotion. The most extreme example for which a considerable difference would be expected between the bulk and the diatomics would be that of the Group IIA and IIB metals. The first section of this paper, which deals with the alkaline earths Mg and Ca, demonstrates a significant influence of the excited valence state even for these elements. The next section then expands the treatment to the transition metals
Rotating shaft model updating from modal data by a direct energy approach : a feasibility study
International Nuclear Information System (INIS)
Audebert, S.
1996-01-01
Investigations to improve rotating machinery monitoring increasingly rely on numerical models. The aim is to obtain multi-fluid-bearing rotor models that are able to correctly represent their dynamic behaviour, of either modal or forced-response type. The possibility of extending the direct energy method, initially developed for undamped structures, to rotating machinery is studied. It is based on the minimization of the kinetic and strain energy gap between experimental and analytic modal data. The preliminary determination of the eigenmodes of a multi-linear bearing rotor system shows the complexity of the problem in comparison with undamped non-rotating structures: taking into account gyroscopic effects and bearing damping, which depend on rotor velocity, leads to eigenmodes with complex components; moreover, the non-symmetric matrices related to the stiffness and damping bearing contributions induce distinct left- and right-hand side eigenmodes (the left-hand side eigenmodes correspond to the adjoint structure). Theoretically, the extension of the energy method is studied by considering first the intermediate case of an undamped non-gyroscopic structure, and second the general case of a rotating shaft: the data used for the updating procedure are eigenfrequencies and left- and right-hand side mode shapes. Since left-hand side mode shapes cannot be directly measured, they are replaced by analytic ones. The method is tested on a two-bearing rotor system with an added mass; simulated data are used, relative to a non-compatible structure, i.e. one that is not part of the set of possible modified analytic structures. The parameters to be corrected are the mass density, the Young's modulus, and the linearized stiffness and damping characteristics of the bearings. If the parameters are influential with regard to the modes to be updated, the updating method permits a significant improvement of the gap between analytic and experimental modes, even for modes not involved in the procedure. Modal damping appears to be more
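The underlying energy-gap criterion can be illustrated on the simplest case mentioned above, an undamped non-gyroscopic system: the parameter is chosen to minimise the gap between modal strain energy and kinetic energy of a "measured" mode. The 2-DOF chain, the mode data, and the parameter grid are illustrative:

```python
import numpy as np

def energy_gap(k, phi, omega2, m):
    """Absolute gap between modal strain energy phi^T K phi and kinetic
    energy omega^2 * phi^T M phi for a 2-DOF spring-mass chain."""
    K = k * np.array([[2.0, -1.0], [-1.0, 1.0]])   # chain stiffness matrix
    M = m * np.eye(2)
    return abs(phi @ K @ phi - omega2 * (phi @ M @ phi))

phi = np.array([0.526, 0.851])     # "measured" first mode shape
omega2 = 0.382                     # "measured" first eigenvalue (true k = m = 1)
ks = np.linspace(0.5, 2.0, 301)    # candidate stiffness values
k_updated = float(ks[np.argmin([energy_gap(k, phi, omega2, 1.0) for k in ks])])
```

For the rotating-shaft case the same residual is built with the complex left- and right-hand side eigenvectors, which is what makes the extension non-trivial.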
Turnbull, Heather; Omenzetter, Piotr
2018-03-01
Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as an experimental specimen, with operational modal analysis techniques utilised to identify the modal properties of the system. Modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, and a numerical model of the blade was created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, this time in the solution of fuzzy objective functions. This enabled the uncertainty associated with the updating parameters to be quantified. Varying damage location and severity was simulated experimentally through the addition of small masses to the structure, intended to cause a structural alteration. A damaged model was created, modelling four variable-magnitude non-structural masses at predefined points, and updated to provide a deterministic damage prediction and information on the parameters' uncertainty via fuzzy updating.
The COST model for calculation of forest operations costs
Ackerman, P.; Belbo, H.; Eliasson, L.; Jong, de J.J.; Lazdins, A.; Lyons, J.
2014-01-01
Since the late nineteenth century, when high-cost equipment was introduced into forestry, there has been a need to calculate the cost of this equipment in more detail with respect to, for example, cost of ownership, cost per hour of production, and cost per production unit. Machine cost calculations
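A machine-rate calculation of the kind such a model formalises can be sketched as follows; the cost components and every figure are illustrative, not COST-model defaults:

```python
def machine_cost_per_hour(price, salvage, life_h, hours_per_year,
                          interest, insurance, fuel_h, repair_h, operator_h):
    """Ownership plus operating cost per scheduled machine hour."""
    depreciation = (price - salvage) / life_h            # straight-line, per hour
    avg_investment = (price + salvage) / 2.0
    capital = avg_investment * (interest + insurance) / hours_per_year
    return depreciation + capital + fuel_h + repair_h + operator_h

# hypothetical harvester: cost of ownership + cost per hour of production
rate = machine_cost_per_hour(price=400000.0, salvage=80000.0, life_h=10000.0,
                             hours_per_year=2000.0, interest=0.05,
                             insurance=0.03, fuel_h=25.0, repair_h=18.0,
                             operator_h=35.0)
cost_per_m3 = rate / 15.0        # cost per production unit at 15 m^3/h
```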
Daucourt, Mia C; Schatschneider, Christopher; Connor, Carol M; Al Otaiba, Stephanie; Hart, Sara A
2018-01-01
Recent achievement research suggests that executive function (EF), a set of regulatory processes that control both thought and action necessary for goal-directed behavior, is related to typical and atypical reading performance. This project examines the relation of EF, as measured by its components, Inhibition, Updating Working Memory, and Shifting, with a hybrid model of reading disability (RD). Our sample included 420 children who participated in a broader intervention project when they were in KG-third grade (age M = 6.63 years, SD = 1.04 years, range = 4.79-10.40 years). At the time their EF was assessed, using the parent-report Behavior Rating Inventory of Executive Function (BRIEF), they had a mean age of 13.21 years (SD = 1.54 years; range = 10.47-16.63 years). The hybrid model of RD was operationalized as a composite consisting of four symptoms, and set so that any child could have any one, any two, any three, any four, or none of the symptoms included in the hybrid model. The four symptoms include low word reading achievement, unexpected low word reading achievement, poorer reading comprehension compared to listening comprehension, and dual-discrepancy response-to-intervention, requiring both low achievement and low growth in word reading. The results of our multilevel ordinal logistic regression analyses showed a significant relation between all three components of EF (Inhibition, Updating Working Memory, and Shifting) and the hybrid model of RD, and that the strength of EF's predictive power for RD classification was highest when RD was modeled as having at least one or more symptoms. Importantly, the chances of being classified as having RD increased as EF performance worsened and decreased as EF performance improved. The question of whether any one EF component would emerge as a superior predictor was also examined, and the results showed that Inhibition, Updating Working Memory, and Shifting were equally valuable as predictors of the hybrid model of RD.
Directory of Open Access Journals (Sweden)
Mia C. Daucourt
2018-03-01
Recent achievement research suggests that executive function (EF), a set of regulatory processes that control both thought and action necessary for goal-directed behavior, is related to typical and atypical reading performance. This project examines the relation of EF, as measured by its components, Inhibition, Updating Working Memory, and Shifting, with a hybrid model of reading disability (RD). Our sample included 420 children who participated in a broader intervention project when they were in KG-third grade (age M = 6.63 years, SD = 1.04 years, range = 4.79–10.40 years). At the time their EF was assessed, using a parent-report Behavior Rating Inventory of Executive Function (BRIEF), they had a mean age of 13.21 years (SD = 1.54 years; range = 10.47–16.63 years). The hybrid model of RD was operationalized as a composite consisting of four symptoms, and set so that any child could have any one, any two, any three, any four, or none of the symptoms included in the hybrid model. The four symptoms include low word reading achievement, unexpected low word reading achievement, poorer reading comprehension compared to listening comprehension, and dual-discrepancy response-to-intervention, requiring both low achievement and low growth in word reading. The results of our multilevel ordinal logistic regression analyses showed a significant relation between all three components of EF (Inhibition, Updating Working Memory, and Shifting) and the hybrid model of RD, and that the strength of EF's predictive power for RD classification was the highest when RD was modeled as having at least one or more symptoms. Importantly, the chances of being classified as having RD increased as EF performance worsened and decreased as EF performance improved. The question of whether any one EF component would emerge as a superior predictor was also examined and results showed that Inhibition, Updating Working Memory, and Shifting were equally valuable as predictors of the
Experimental Update of the Overtopping Model Used for the Wave Dragon Wave Energy Converter
DEFF Research Database (Denmark)
Parmeggiani, Stefano; Kofoed, Jens Peter; Friis-Madsen, Erik
2013-01-01
An overtopping model specifically suited for Wave Dragon is needed in order to improve the reliability of its performance estimates. The model shall be comprehensive of all relevant physical processes that affect overtopping and flexible to adapt to any local conditions and device configuration....... An experimental investigation is carried out to update an existing formulation suited for 2D draft-limited, low-crested structures, in order to include the effects on the overtopping flow of the wave steepness, the 3D geometry of Wave Dragon, the wing reflectors, the device motions and the non-rigid connection...... of which can be measured in real-time. Instead of using new fitting coefficients, this approach allows a broader applicability of the model beyond the Wave Dragon case, to any overtopping WEC or structure within the range of tested conditions. Predictions reliability of overtopping over Wave Dragon...
Application of a Bayesian algorithm for the Statistical Energy model updating of a railway coach
DEFF Research Database (Denmark)
Sadri, Mehran; Brunskog, Jonas; Younesian, Davood
2016-01-01
The classical statistical energy analysis (SEA) theory is a common approach for vibroacoustic analysis of coupled complex structures, being efficient to predict high-frequency noise and vibration of engineering systems. There are however some limitations in applying the conventional SEA..... To demonstrate the performance of the proposed strategy, the SEA model updating of a railway passenger coach is carried out. First, a sensitivity analysis is carried out to select the most sensitive parameters of the SEA model. For the selected parameters of the model, prior probability density functions are then taken into account based on published data on comparison between experimental and theoretical results, so that the variance of the theory is estimated. The Monte Carlo Metropolis Hastings algorithm is employed to estimate the modified values of the parameters. It is shown that the algorithm can be efficiently used......
Summary of Expansions, Updates, and Results in GREET® 2016 Suite of Models
Energy Technology Data Exchange (ETDEWEB)
None, None
2016-10-01
This report documents the technical content of the expansions and updates in Argonne National Laboratory’s GREET® 2016 release and provides references and links to key documents related to these expansions and updates.
Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.
2018-03-01
Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
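A discrete Karhunen-Loève expansion of the kind used for the random-field parameters can be sketched as follows; the exponential covariance, its parameters, and the truncation order are illustrative:

```python
import numpy as np

# spatially correlated random field (e.g. bending rigidity along a beam)
n, L, corr_len, sigma = 200, 1.0, 0.3, 0.1
x = np.linspace(0.0, L, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

vals, vecs = np.linalg.eigh(C)               # covariance eigendecomposition
idx = np.argsort(vals)[::-1]                 # largest eigenvalues first
vals, vecs = vals[idx], vecs[:, idx]

m = 4                                        # KL truncation order
rng = np.random.default_rng(3)
xi = rng.normal(size=m)                      # the m coefficients to be updated
field = 1.0 + vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)   # mean 1 + KL terms
energy = vals[:m].sum() / vals.sum()         # variance captured by m terms
```

The handful of KL coefficients `xi` replace the full non-homogeneous field as updating parameters, which is what makes sensitivity-based updating of a distributed property tractable.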
From Risk Models to Loan Contracts: Austerity as the Continuation of Calculation by Other Means
Directory of Open Access Journals (Sweden)
Pierre Pénet
2014-06-01
This article analyses how financial actors sought to minimise financial uncertainties during the European sovereign debt crisis by employing simulations as legal instruments of market regulation. We first contrast two roles that simulations can play in sovereign debt markets: 'simulation-hypotheses', which work as bundles of constantly updated hypotheses with the goal of better predicting financial risks; and 'simulation-fictions', which provide fixed narratives about the present with the purpose of postponing the revision of market risks. Using rating reports published by Moody's on Greece and European Central Bank (ECB) regulations, we show that Moody's stuck to a simulation-fiction and displayed rating inertia on Greece's trustworthiness to prevent the destabilising effects that further downgrades would have on Greek borrowing costs. We also show that the multi-notch downgrade issued by Moody's in June 2010 followed the ECB's decision to remove ratings from its collateral eligibility requirements. Then, as regulators moved from 'regulation through model' to 'regulation through contract', ratings stopped functioning as simulation-fictions. Indeed, the conditions of the Greek bailout implemented in May 2010 replaced the CRAs' models as the main simulation-fiction, which market actors employed to postpone the prospect of a Greek default. We conclude by presenting austerity measures as instruments of calculative governance rather than ideological compacts.
Updated Life-Cycle Assessment of Aluminum Production and Semi-fabrication for the GREET Model
Energy Technology Data Exchange (ETDEWEB)
Dai, Qiang [Argonne National Lab. (ANL), Argonne, IL (United States); Kelly, Jarod C. [Argonne National Lab. (ANL), Argonne, IL (United States); Burnham, Andrew [Argonne National Lab. (ANL), Argonne, IL (United States); Elgowainy, Amgad [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-09-01
This report serves as an update to the life-cycle analysis (LCA) of aluminum production based on the most recent data representing the state of the art of the industry in North America. The 2013 Aluminum Association (AA) LCA report on the environmental footprint of semi-finished aluminum products in North America provides the basis for the update (The Aluminum Association, 2013). The scope of this study covers primary aluminum production and secondary aluminum production, as well as aluminum semi-fabrication processes including hot rolling, cold rolling, extrusion and shape casting. This report focuses on energy consumption, material inputs and criteria air pollutant emissions for each process from cradle to gate of aluminum, starting with bauxite extraction and ending with manufacturing of semi-fabricated aluminum products. The life-cycle inventory (LCI) tables compiled are to be incorporated into the vehicle-cycle model of Argonne National Laboratory's Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) Model for the release of its 2015 version.
Improvements in the model of neutron calculations for research reactors
International Nuclear Information System (INIS)
Calzetta, Osvaldo; Leszczynski, Francisco
1987-01-01
Within the research program in the field of neutron physics calculations being carried out in the Nuclear Engineering Division at the Centro Atomico Bariloche, the errors that typical approximations introduce into the final results are investigated. For MTR-type research reactors, two approximations are examined for both high and low enrichment: the treatment of the geometry and the method of calculating few-group cell cross sections, particularly in the resonance energy region. Commonly, the cell constants used for the full-reactor calculation are obtained by homogenizing the complete fuel elements through one-dimensional calculations. An improvement is made that explicitly includes the fuel element frames in the core calculation geometry. In addition, a detailed treatment in energy and space is used to find the resonance few-group cross sections, and the results of detailed and approximate calculations are compared. The minimum number and the best mesh of energy groups needed for cell calculations are also established. (Author)
DEFF Research Database (Denmark)
Kristensen, Anders Ringgaard; Søllested, Thomas Algot
2004-01-01
Several replacement models have been presented in literature. In other applicational areas like dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimations at herd level and standard software that have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model that really uses all these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, estimation of herd specific parameters is emphasized. The optimization model is described in a subsequent paper.
Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)
Directory of Open Access Journals (Sweden)
I. Wohltmann
2017-07-01
The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect.
International Nuclear Information System (INIS)
Ainsworth, T.L.
1983-01-01
The Δ(1232) plays an important role in determining the properties of nuclear and neutron matter. The effects of the Δ resonance are incorporated explicitly by using a coupled-channel formalism. A method for constraining a lowest-order variational calculation, appropriate when nucleon internal degrees of freedom are made explicit, is presented. Different N-N potentials were calculated and fit to phase-shift data and deuteron properties. The potentials were constructed to test the relative importance of the Δ resonance on nuclear properties. The symmetry energy and incompressibility of nuclear matter are generally reproduced by this calculation. Neutron matter results lead to appealing neutron star models. Fermi liquid parameters for ³He are calculated with a model that includes both direct and induced terms. A convenient form of the direct interaction is obtained in terms of the parameters. The form of the direct interaction ensures that the forward scattering sum rule (Pauli principle) is obeyed. The parameters are adjusted to fit the experimentally determined F₀ˢ, F₀ᵃ, and F₁ˢ Landau parameters. Higher-order Landau parameters are calculated by the self-consistent solution of the equations; comparison to experiment is good. The model also leads to a preferred value for the effective mass of ³He. Of the three parameters, only one shows any dependence on pressure. An exact sum rule is derived relating this parameter to a specific summation of Landau parameters.
Shope, Christopher L.; Angeroth, Cory E.
2015-01-01
Effective management of surface waters requires a robust understanding of spatiotemporal constituent loadings from upstream sources and the uncertainty associated with these estimates. We compared the total dissolved solids (TDS) loading into the Great Salt Lake (GSL) for water year 2013 with estimates from previously sampled periods in the early 1960s. We also provide updated results on GSL loading, quantitatively bounded by sampling uncertainties, which are useful for current and future management efforts. Our statistical loading results were more accurate than those from simple regression models. Our results indicate that TDS loading to the GSL in water year 2013 was 14.6 million metric tons, with uncertainty ranging from 2.8 to 46.3 million metric tons, which differs greatly from the previous regression estimate for water year 1964 of 2.7 million metric tons. Results also indicate that locations with increased sampling frequency are correlated with decreasing confidence intervals. Because time is incorporated into the LOADEST models, discrepancies are largely expected to be a function of temporally lagged salt storage delivery to the GSL associated with terrestrial and in-stream processes. By incorporating temporally variable estimates and statistically derived uncertainty of these estimates, we have provided quantifiable variability in the annual estimates of dissolved solids loading into the GSL. Further, our results support the need for increased monitoring of dissolved solids loading into saline lakes like the GSL by demonstrating the uncertainty associated with different levels of sampling frequency.
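The LOADEST-style approach referred to above regresses the log of constituent load on log discharge plus seasonal and trend terms, then retransforms to estimate annual loading. The sketch below is a simplified three-predictor version of that idea (the real LOADEST offers up to seven-parameter forms and a retransformation bias correction, omitted here); all variable names and values are illustrative.

```python
import numpy as np

def design(t, q, t0):
    """Rating-curve design matrix: intercept, ln(Q), seasonal pair, centered time."""
    return np.column_stack([np.ones_like(t), np.log(q),
                            np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                            t - t0])  # centering time aids numerical stability

def fit_load_model(t, q, load):
    """Fit ln(load) = b0 + b1*ln(Q) + b2*sin(2*pi*t) + b3*cos(2*pi*t) + b4*(t - t0)."""
    t0 = t.mean()
    beta, *_ = np.linalg.lstsq(design(t, q, t0), np.log(load), rcond=None)
    return beta, t0

def predict_load(beta, t0, t, q):
    # NOTE: simple exponentiation; LOADEST applies a retransformation bias correction.
    return np.exp(design(t, q, t0) @ beta)

# synthetic demonstration: daily-ish sampling over one water year
rng = np.random.default_rng(1)
t = np.linspace(2012.0, 2013.0, 120)           # decimal years
q = np.exp(rng.normal(2.0, 0.3, t.size))       # discharge
true_load = np.exp(0.5 + 1.2 * np.log(q) + 0.2 * np.sin(2 * np.pi * t))
beta, t0 = fit_load_model(t, q, true_load)
annual = predict_load(beta, t0, t, q).sum()    # summed sampled-day loads
```

Because time enters the model explicitly, fitted loads can lag or lead concentration-discharge changes, which is the mechanism the abstract invokes to explain discrepancies.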
International Nuclear Information System (INIS)
Pusateri, Elise N.; Morris, Heidi E.; Nelson, Eric M.; Ji, Wei
2015-01-01
Electromagnetic pulse (EMP) events produce low-energy conduction electrons from Compton electron or photoelectron ionizations with air. It is important to understand how conduction electrons interact with air in order to accurately predict EMP evolution and propagation. An electron swarm model can be used to monitor the time evolution of conduction electrons in an environment characterized by electric field and pressure. Here a swarm model is developed that is based on the coupled ordinary differential equations (ODEs) described by Higgins et al. (1973), hereinafter HLO. The ODEs characterize the swarm electric field, electron temperature, electron number density, and drift velocity. Important swarm parameters, the momentum transfer collision frequency, energy transfer collision frequency, and ionization rate, are calculated and compared to the previously reported fitted functions given in HLO. These swarm parameters are found using BOLSIG+, a two-term Boltzmann solver developed by Hagelaar and Pitchford (2005), which utilizes updated cross sections from the LXcat website created by Pancheshnyi et al. (2012). We validate the swarm model by comparing to experimental effective ionization coefficient data in Dutton (1975) and drift velocity data in Ruiz-Vargas et al. (2010). In addition, we report on electron equilibrium temperatures and times for a uniform electric field of 1 StatV/cm for atmospheric heights from 0 to 40 km. We show that the equilibrium temperature and time are sensitive to the modifications in the collision frequencies and ionization rate based on the updated electron interaction cross sections.
An update of the classical Bokhman’s dualistic model of endometrial cancer
Directory of Open Access Journals (Sweden)
Miłosz Wilczyński
2016-07-01
According to the classical dualistic model introduced by Bokhman in 1983, endometrial cancer (EC) is divided into two basic types. The prototypical histological type for type I and type II EC is endometrioid carcinoma and serous carcinoma, respectively. The traditional classification is based on clinical, endocrine and histopathological features; however, it sometimes does not reflect the full heterogeneity of EC. New molecular evidence, supported by the clinical diversity of the cancer, indicates that the classical dualistic model is valid only to some extent. The review updates the mutational diversity of EC, introducing a new molecular classification of the tumour in regard to data presented by The Cancer Genome Atlas Research Network (TCGA).
International Nuclear Information System (INIS)
2013-12-01
For those Member States that have or have had significant fast reactor development programmes, it is of utmost importance that they have validated up-to-date codes and methods for fast reactor physics analysis in support of R and D and core design activities in the area of actinide utilization and incineration. In particular, some Member States have recently focused on fast reactor systems for minor actinide transmutation and on cores optimized for consuming rather than breeding plutonium; the physics of the breeder reactor cycle having already been widely investigated. Plutonium burning systems may have an important role in managing plutonium stocks until the time when major programmes of self-sufficient fast breeder reactors are established. For assessing the safety of these systems, it is important to determine the prediction accuracy of transient simulations and their associated reactivity coefficients. In response to Member States' expressed interest, the IAEA sponsored a coordinated research project (CRP) on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects. The CRP started in November 1999 and, at the first meeting, the members of the CRP endorsed a benchmark on the BN-600 hybrid core for consideration in its first studies. Benchmark analyses of the BN-600 hybrid core were performed during the first three phases of the CRP, investigating different nuclear data and levels of approximation in the calculation of safety related reactivity effects and their influence on uncertainties in transient analysis prediction. In an additional phase of the benchmark studies, experimental data were used for the verification and validation of nuclear data libraries and methods in support of the previous three phases. The results of phases 1, 2, 3 and 5 of the CRP are reported in IAEA-TECDOC-1623, BN-600 Hybrid Core Benchmark Analyses, Results from a Coordinated Research Project on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects.
Standard Model updates and new physics analysis with the Unitarity Triangle fit
International Nuclear Information System (INIS)
Bevan, A.; Bona, M.; Ciuchini, M.; Derkach, D.; Franco, E.; Silvestrini, L.; Lubicz, V.; Tarantino, C.; Martinelli, G.; Parodi, F.; Schiavi, C.; Pierini, M.; Sordini, V.; Stocchi, A.; Vagnoni, V.
2013-01-01
We present the summer 2012 update of the Unitarity Triangle (UT) analysis performed by the UTfit Collaboration within the Standard Model (SM) and beyond. The increased accuracy of several of the fundamental constraints is now enhancing some of the tensions amongst and within the constraints themselves. In particular, the long-standing tension between exclusive and inclusive determinations of the V_ub and V_cb CKM matrix elements is now playing a major role. We then present the generalisation of the UT analysis to investigate new physics (NP) effects, updating the constraints on NP contributions to ΔF=2 processes. In the NP analysis, both CKM and NP parameters are fitted simultaneously to obtain the possible NP effects in any specific sector. Finally, based on the NP constraints, we derive upper bounds on the coefficients of the most general ΔF=2 effective Hamiltonian. These upper bounds can be translated into lower bounds on the scale of NP that contributes to these low-energy effective interactions.
Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui
2004-01-01
A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as Intel Fortran compiler (ifc/efc) 7.1 and PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about a 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when the original unmodified optimization options enclosed in the software are utilized. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction sets (SSE2) are also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler has performed better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is further accelerated by 2.6% compared to that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to fulfill significantly higher throughput than the IA32, which is consistent with the SpecFPrate2000 benchmarks.
Comparison of standard fast reactor calculations (Baker model)
Energy Technology Data Exchange (ETDEWEB)
Voropaev, A I; Van' kov, A A; Tsybulya, A M
1978-12-01
Compared are standard fast reactor calculations performed at different laboratories using several nuclear data files: BNAB-70 and OSKAR-75 (the USSR), CARNAVAL-4 (France), FD-5 (Great Britain), KFK-INR (West Germany), ENDF/B4 (the USA). Three fuel compositions were chosen: (1) ²³⁹Pu and ²³⁸U; (2) ²³⁹Pu, ²³⁸U and fission products; (3) ²³⁹Pu, ²⁴⁰Pu, ²³⁸U and fission products. Medium temperature was 300 K. The calculations have been conducted in the diffusion approximation. Data on critical masses and breeding ratios are tabulated. Discrepancies in the calculations of all the characteristics are small since all the countries possess practically the same nuclear data files.
Calculational advance in the modeling of fuel-coolant interactions
International Nuclear Information System (INIS)
Bohl, W.R.
1982-01-01
A new technique is applied to numerically simulate a fuel-coolant interaction. The technique is based on the ability to calculate separate space- and time-dependent velocities for each of the participating components. In the limiting case of a vapor explosion, this framework allows calculation of the pre-mixing phase of film boiling and interpenetration of the working fluid by hot liquid, which is required for extrapolating from experiments to a reactor hypothetical accident. Qualitative results are compared favorably to published experimental data where an iron-alumina mixture was poured into water. Differing results are predicted with LMFBR materials
Calculation of the 3D density model of the Earth
Piskarev, A.; Butsenko, V.; Poselov, V.; Savin, V.
2009-04-01
The study of the Earth's crust is part of an investigation aimed at extension of the Russian Federation continental shelf in the Sea of Okhotsk. Gathered data allow the Sea of Okhotsk area located outside the exclusive economic zone of the Russian Federation to be considered the natural continuation of Russian territory. The Sea of Okhotsk is an Epi-Mesozoic platform with a Pre-Cenozoic heterogeneous folded basement of polycyclic development and a sediment cover mainly composed of Paleocene-Neogene-Quaternary deposits. Results of processing and complex interpretation of seismic, gravity, and aeromagnetic data along profile 2-DV-M, as well as analysis of available geological and geophysical information on the Sea of Okhotsk region, allowed calculation of a model of the Earth's crust. Four layers are distinguished (bottom-up) in the crustal structure: granulite-basic (density 2.90 g/cm3), granite-gneiss (density 2.60-2.76 g/cm3), volcanogenic-sedimentary (2.45 g/cm3) and sedimentary (density 2.10 g/cm3). The last one is absent on the continent; it is observed only in the water area. The density of the upper mantle is taken as 3.30 g/cm3. The observed gravity anomalies are mostly related to the surface relief of the above-mentioned layers or to density variations of the granite-metamorphic basement, so outlining of the basement blocks of different constitution preceded the modeling. This operation was executed after double Fourier spectrum analysis of the gravity and magnetic anomalies and subsequent compilation of synthetic anomaly maps related to the basement density and magnetic heterogeneity. According to bathymetry data, the Sea of Okhotsk can be subdivided into three mega-blocks. Taking into consideration that the central Sea of Okhotsk area is aseismic, i.e. isostatically compensated, it is obvious that the Earth crust structure of these three blocks is different. The South-Okhotsk depression is characterized by sea depths of 3200-3300 m. Moho surface in this area is at
Martínez-López, Brais; Gontard, Nathalie; Peyron, Stéphane
2018-03-01
A reliable prediction of migration levels of plastic additives into food requires a robust estimation of diffusivity. Predictive modelling of diffusivity as recommended by the EU commission is carried out using a semi-empirical equation that relies on two polymer-dependent parameters. These parameters were determined for the polymers most used by the packaging industry (LLDPE, HDPE, PP, PET, PS, HIPS) from the diffusivity data available at that time. In the specific case of general-purpose polystyrene, the diffusivity data published since then show that use of the equation with the original parameters results in systematic underestimation of diffusivity. The goal of this study was, therefore, to propose an update of the aforementioned parameters for PS on the basis of up-to-date diffusivity data, so the equation can be used for a reasoned overestimation of diffusivity.
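The two-parameter semi-empirical equation referred to above is commonly cited in the Piringer form; the sketch below assumes that form (an assumption on my part, since the abstract does not spell the equation out). The two polymer-specific parameters `A_p_prime` and `tau` are exactly the quantities the study proposes to update for PS; their numerical values must be taken from the literature and are left as inputs here.

```python
import math

def piringer_diffusivity(M_r, T, A_p_prime, tau):
    """Upper-bound diffusion coefficient in cm^2/s, Piringer-type equation.

    Assumed form: D_P = 1e4 * exp(A_p - 0.1351*M_r**(2/3) + 0.003*M_r - 10454/T),
    with A_p = A_p_prime - tau / T. M_r is the migrant molecular mass (g/mol),
    T the temperature (K); (A_p_prime, tau) are the polymer-dependent parameters.
    """
    A_p = A_p_prime - tau / T
    return 1e4 * math.exp(A_p - 0.1351 * M_r ** (2.0 / 3.0)
                          + 0.003 * M_r - 10454.0 / T)

# placeholder call (zeros are NOT real PS parameters, just a demonstration)
d_example = piringer_diffusivity(400.0, 313.15, 0.0, 0.0)
```

The structure makes the abstract's point concrete: raising the polymer parameter shifts the whole diffusivity estimate upward, which is how an updated `A_p_prime` restores a reasoned overestimation for PS.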
[Social determinants of health and disability: updating the model for determination].
Tamayo, Mauro; Besoaín, Álvaro; Rebolledo, Jaime
Social determinants of health (SDH) are the conditions in which people live. These conditions impact their lives, health status and level of social inclusion. In line with the conceptual and comprehensive progression of disability, it is important to update the SDH model because of its broad implications for implementing health interventions in society. This proposal supports incorporating disability into the model as a structural determinant, as it would lead to the same social inclusion/exclusion of people described for other structural SDH. This proposal encourages giving importance to designing and implementing public policies to improve societal conditions and contribute to social equity. This will be an act of reparation, justice and fulfilment of the Convention on the Rights of Persons with Disabilities. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
Díaz, Verónica; Poblete, Alvaro
2017-07-01
This paper describes part of a research and development project carried out in public elementary schools. Its objective was to update the mathematical and didactic knowledge of teachers at two consecutive levels in urban and rural public schools of Region de Los Lagos and Region de Los Rios of southern Chile. To that effect, and by means of an advanced training project based on a professional competences model, didactic interventions based on types of problems and types of mathematical competences, with analysis of contents and learning assessment, were designed. The teachers' competence regarding the didactic strategy used and its results, as well as the students' learning achievements, are specified. The project made it possible to validate a strategy of lifelong improvement in mathematics, based on the professional competences of teachers and their didactic transposition in the classroom, as an alternative to consolidate learning in areas considered vulnerable in two regions of the country.
Metal-rich, Metal-poor: Updated Stellar Population Models for Old Stellar Systems
Conroy, Charlie; Villaume, Alexa; van Dokkum, Pieter G.; Lind, Karin
2018-02-01
We present updated stellar population models appropriate for old ages (>1 Gyr) and covering a wide range in metallicities (‑1.5 ≲ [Fe/H] ≲ 0.3). These models predict the full spectral variation associated with individual element abundance variation as a function of metallicity and age. The models span the optical–NIR wavelength range (0.37–2.4 μm), include a range of initial mass functions, and contain the flexibility to vary 18 individual elements including C, N, O, Mg, Si, Ca, Ti, and Fe. To test the fidelity of the models, we fit them to integrated light optical spectra of 41 Galactic globular clusters (GCs). The value of testing models against GCs is that their ages, metallicities, and detailed abundance patterns have been derived from the Hertzsprung–Russell diagram in combination with high-resolution spectroscopy of individual stars. We determine stellar population parameters from fits to all wavelengths simultaneously (“full spectrum fitting”), and demonstrate explicitly with mock tests that this approach produces smaller uncertainties at fixed signal-to-noise ratio than fitting a standard set of 14 line indices. Comparison of our integrated-light results to literature values reveals good agreement in metallicity, [Fe/H]. When restricting to GCs without prominent blue horizontal branch populations, we also find good agreement with literature values for ages, [Mg/Fe], [Si/Fe], and [Ti/Fe].
A new frequency matching technique for FRF-based model updating
Yang, Xiuming; Guo, Xinglin; Ouyang, Huajiang; Li, Dongsheng
2017-05-01
Frequency Response Function (FRF) residues have been widely used to update finite element models. They are a form of original measurement information and have the advantages of rich data, no extraction errors, etc. However, like other sensitivity-based methods, an FRF-based identification method must also face the ill-conditioning problem, which is even more serious here since the sensitivity of the FRF in the vicinity of a resonance is much greater than elsewhere. Furthermore, for a given frequency measurement, directly using the theoretical FRF at that frequency may lead to a huge difference between the theoretical FRF and the corresponding experimental FRF, which in turn amplifies the effects of measurement errors and damping. Hence, in the solution process, correct selection of the appropriate frequency at which to evaluate the theoretical FRF in every iteration of the sensitivity-based approach is an effective way to improve the robustness of an FRF-based algorithm. A primary tool for frequency selection based on the correlation of FRFs is the Frequency Domain Assurance Criterion. This paper presents a new frequency selection method which directly finds the frequency that minimizes the difference in order of magnitude between the theoretical and experimental FRFs. A simulated truss structure is used to compare the performance of different frequency selection methods. For the sake of realism, it is assumed that not all degrees of freedom (DoFs) are available for measurement. The minimum number of DoFs required in each approach to correctly update the analytical model is regarded as the identification standard.
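The selection rule the abstract describes, picking the analytical frequency whose FRF magnitude best matches the measured one in order of magnitude, can be sketched as a local search over the analytical frequency axis. The single-DoF receptance, the 20% search window, and all numerical values below are illustrative choices, not taken from the paper.

```python
import numpy as np

def match_frequency(freqs, H_analytical, f_measured, H_measured):
    """Pick the analytical frequency minimizing |log10|H_a(f)| - log10|H_e||.

    A window around the measured frequency keeps the search local
    (the 20% half-width is an illustrative choice).
    """
    window = (freqs > 0.8 * f_measured) & (freqs < 1.2 * f_measured)
    idx = np.flatnonzero(window)
    diff = np.abs(np.log10(np.abs(H_analytical[idx]))
                  - np.log10(np.abs(H_measured)))
    return freqs[idx[np.argmin(diff)]]

# single-DoF demonstration: receptance of a lightly damped oscillator
def receptance(f, fn=10.0, zeta=0.02):
    r = f / fn
    return 1.0 / (1.0 - r**2 + 2j * zeta * r)

freqs = np.linspace(0.1, 20.0, 2000)
H_a = receptance(freqs)                 # analytical model, fn = 10 Hz
f_exp = 9.0                             # measured frequency point
H_e = receptance(f_exp, fn=10.5)        # "experimental" FRF: stiffer structure
f_star = match_frequency(freqs, H_a, f_exp, H_e)
```

Because the model error shifts the resonance, the matched frequency `f_star` lands below the measured 9 Hz, where the analytical magnitude equals the measured one, instead of naively evaluating the theoretical FRF at 9 Hz.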
Experimental test of spatial updating models for monkey eye-head gaze shifts.
Directory of Open Access Journals (Sweden)
Tom J Van Grootel
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static) or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye and head positions rather than relative eye and head displacements.
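The three schemes the abstract dissociates (retinal input only, remapping with the planned displacement, and feedback of the actual displacement) reduce to simple vector arithmetic on gaze directions. The sketch below contrasts them with made-up 2-D angular values; only the feedback scheme recovers the space-fixed target when the executed gaze shift differs from the plan.

```python
import numpy as np

# All vectors are 2-D gaze directions in degrees (illustrative values only).
target = np.array([20.0, 10.0])                 # flashed target, space-fixed
gaze_at_flash = np.array([5.0, 0.0])            # gaze direction when flash occurred
planned_first_saccade = np.array([12.0, 2.0])   # intended gaze displacement
actual_displacement = np.array([14.0, 3.0])     # what the eye-head system really did
gaze_after = gaze_at_flash + actual_displacement

retinal_error_at_flash = target - gaze_at_flash

# Scheme 1: retinal input only (no updating). Aims from the new gaze
# position with the stale retinal vector; misses by the displacement.
aim_retinal_only = gaze_after + retinal_error_at_flash

# Scheme 2: remapping with the *planned* displacement (predictive
# corollary discharge). Wrong whenever plan differs from execution.
aim_planned = gaze_after + retinal_error_at_flash - planned_first_saccade

# Scheme 3: feedback of the *actual* displacement (the scheme the data
# support). Recovers the space-fixed target exactly in this toy setup.
aim_feedback = gaze_after + retinal_error_at_flash - actual_displacement
```

The dynamic double-step condition is precisely the case where `planned_first_saccade` and `actual_displacement` diverge, which is what lets the experiment separate schemes 2 and 3.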
Power plant reliability calculation with Markov chain models
International Nuclear Information System (INIS)
Senegacnik, A.; Tuma, M.
1998-01-01
In the paper power plant operation is modelled using continuous time Markov chains with discrete state space. The model is used to compute the power plant reliability and the importance and influence of individual states, as well as the transition probabilities between states. For comparison the model is fitted to data for coal and nuclear power plants recorded over several years. (orig.)
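The continuous-time Markov chain machinery described above can be sketched in a few lines: build a generator matrix, solve for the stationary distribution, and read off the long-run availability. This is a minimal two-state (operating/failed) illustration with made-up rates; the paper's actual state space is richer.

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary distribution pi of a CTMC: pi @ Q = 0 with sum(pi) = 1.

    Solved as a least-squares system with the normalization condition
    appended as an extra equation.
    """
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two-state plant: state 0 = operating, state 1 = failed.
# Rates are illustrative (per 1000 h): failure rate lam, repair rate mu.
lam, mu = 0.2, 2.0
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])
pi = stationary_distribution(Q)
availability = pi[0]        # two-state closed form: mu / (lam + mu)
```

With more states (e.g. partial-load or standby states), the same `stationary_distribution` call yields the importance of each state directly from its stationary probability.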
Calculation of single chain cellulose elasticity using fully atomistic modeling
Xiawa Wu; Robert J. Moon; Ashlie Martini
2011-01-01
Cellulose nanocrystals, a potential base material for green nanocomposites, are ordered bundles of cellulose chains. The properties of these chains have been studied for many years using atomic-scale modeling. However, model predictions are difficult to interpret because of the significant dependence of predicted properties on model details. The goal of this study is...
Model calculation for energy loss in ion-surface collisions
International Nuclear Information System (INIS)
Miraglia, J.E.; Gravielle, M.S.
2003-01-01
The so-called local plasma approximation is generalized to deal with projectiles colliding with surfaces of amorphous solids and with a specific crystalline structure (planar channeling). The energy loss of protons grazingly colliding with aluminum, SnTe alloy, and LiF surfaces is investigated. The calculations agree quite well with previous theoretical results and explain the experimental findings of energy loss for aluminum and SnTe alloy, but fall short of explaining the data for LiF surfaces.
Model Hamiltonian Calculations of the Nonlinear Polarizabilities of Conjugated Molecules.
Risser, Steven Michael
This dissertation advances the theoretical knowledge of the nonlinear polarizabilities of conjugated molecules. The unifying feature of these molecules is an extended delocalized pi electron structure. The pi electrons dominate the electronic properties of the molecules, allowing prediction of molecular properties based on the treatment of just the pi electrons. Two separate pi electron Hamiltonians are used in the research. The principal Hamiltonian used is the non-interacting single-particle Huckel Hamiltonian, which replaces the Coulomb interaction among the pi electrons with a mean field interaction. The simplification allows for exact solution of the Hamiltonian for large molecules. The second Hamiltonian used for this research is the interacting multi-particle Pariser-Parr-Pople (PPP) Hamiltonian, which retains explicit Coulomb interactions. This limits exact solutions to molecules containing at most eight electrons. The molecular properties being investigated are the linear polarizability, and the second and third order hyperpolarizabilities. The hyperpolarizabilities determine the nonlinear optical response of materials. These molecular parameters are determined by two independent approaches. The results from the Huckel Hamiltonian are obtained through first, second and third order perturbation theory. The results from the PPP Hamiltonian are obtained by including the applied field directly in the Hamiltonian and determining the ground state energy at a series of field strengths. By fitting the energy to a polynomial in field strength, the polarizability and hyperpolarizabilities are determined. The Huckel Hamiltonian is used to calculate the third order hyperpolarizability of polyenes. These calculations were the first to show the average hyperpolarizability of the polyenes to be positive, and also to show the saturation of the hyperpolarizability. Comparison of these Huckel results to those from the PPP Hamiltonian shows the lack of explicit Coulomb
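The finite-field procedure described above (include the applied field in the Hamiltonian, compute the ground-state energy at a series of field strengths, and fit a polynomial in the field) can be sketched with a synthetic energy curve. The expansion convention and all response coefficients below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def fit_polarizabilities(fields, energies, deg=4):
    """Recover dipole and (hyper)polarizabilities from E(F) by polynomial fit.

    Assumes the standard expansion (atomic units):
      E(F) = E0 - mu*F - (1/2)*alpha*F**2 - (1/6)*beta*F**3 - (1/24)*gamma*F**4
    so the fitted coefficients c_k map back via the factorial prefactors.
    """
    c = np.polynomial.polynomial.polyfit(fields, energies, deg)  # low-to-high
    mu = -c[1]
    alpha = -2.0 * c[2]
    beta = -6.0 * c[3]
    gamma = -24.0 * c[4]
    return mu, alpha, beta, gamma

# synthetic energies generated from assumed (illustrative) coefficients
E0, mu0, a0, b0, g0 = -1.0, 0.5, 10.0, 30.0, 2000.0
F = np.linspace(-0.1, 0.1, 9)
E = E0 - mu0 * F - 0.5 * a0 * F**2 - (1 / 6) * b0 * F**3 - (1 / 24) * g0 * F**4
mu, alpha, beta, gamma = fit_polarizabilities(F, E)
```

In practice the field range must balance two errors: too large contaminates the fit with higher-order terms, too small drowns the hyperpolarizability contributions in numerical noise.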
Gantt, B.; Kelly, J. T.; Bash, J. O.
2015-11-01
Sea spray aerosols (SSAs) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Model evaluations of SSA emissions have mainly focused on the global scale, but regional-scale evaluations are also important due to the localized impact of SSAs on atmospheric chemistry near the coast. In this study, SSA emissions in the Community Multiscale Air Quality (CMAQ) model were updated to enhance the fine-mode size distribution, include sea surface temperature (SST) dependency, and reduce surf-enhanced emissions. Predictions from the updated CMAQ model and those of the previous release version, CMAQv5.0.2, were evaluated using several coastal and national observational data sets in the continental US. The updated emissions generally reduced model underestimates of sodium, chloride, and nitrate surface concentrations for coastal sites in the Bay Regional Atmospheric Chemistry Experiment (BRACE) near Tampa, Florida. Including SST dependency to the SSA emission parameterization led to increased sodium concentrations in the southeastern US and decreased concentrations along parts of the Pacific coast and northeastern US. The influence of sodium on the gas-particle partitioning of nitrate resulted in higher nitrate particle concentrations in many coastal urban areas due to increased condensation of nitric acid in the updated simulations, potentially affecting the predicted nitrogen deposition in sensitive ecosystems. Application of the updated SSA emissions to the California Research at the Nexus of Air Quality and Climate Change (CalNex) study period resulted in a modest improvement in the predicted surface concentration of sodium and nitrate at several central and southern California coastal sites. This update of SSA emissions enabled a more realistic simulation of the atmospheric chemistry in coastal environments where marine air mixes with urban pollution.
Energy Technology Data Exchange (ETDEWEB)
Moeller, M. P.; Urbanik, II, T.; Desrosiers, A. E.
1982-03-01
This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response), which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies.
International Nuclear Information System (INIS)
Moeller, M.P.; Desrosiers, A.E.; Urbanik, T. II
1982-03-01
This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response) which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies. (author)
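The abstract notes that CLEAR computes segment travel speed as a function of vehicle density. The classic Greenshields linear speed-density relation sketched below illustrates that kind of law; CLEAR's actual functional form and the parameter values used here (free-flow speed, jam density) are illustrative assumptions, not figures from the paper.

```python
# Greenshields-type speed-density law: speed falls linearly from the
# free-flow speed to zero at jam density.
def greenshields_speed(density, free_flow_speed=88.0, jam_density=200.0):
    """Speed (km/h) on a segment given density (vehicles/km)."""
    density = min(max(density, 0.0), jam_density)
    return free_flow_speed * (1.0 - density / jam_density)

def segment_travel_time(length_km, density):
    """Hours to traverse a segment; infinite if traffic is jammed."""
    speed = greenshields_speed(density)
    return float("inf") if speed == 0.0 else length_km / speed

print(greenshields_speed(100.0))        # 44.0 km/h at half of jam density
print(segment_travel_time(2.0, 100.0))  # hours to cover a 2 km segment
```

An evacuation simulator applies such a law per road segment at each time step, which is how queuing delay emerges as densities rise near intersections.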
Carbon dioxide fluid-flow modeling and injectivity calculations
Burke, Lauri
2011-01-01
At present, the literature lacks a geologic-based assessment methodology for numerically estimating injectivity, lateral migration, and subsequent long-term containment of supercritical carbon dioxide that has undergone geologic sequestration into subsurface formations. This study provides a method for and quantification of first-order approximations for the time scale of supercritical carbon dioxide lateral migration over a one-kilometer distance through a representative volume of rock. These calculations provide a quantified foundation for estimating injectivity and geologic storage of carbon dioxide.
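A first-order lateral migration time scale of the kind described can be estimated from Darcy's law: the front speed is the Darcy velocity divided by porosity, and the time scale is the kilometre distance over that speed. The permeability, porosity, pressure gradient, and supercritical CO2 viscosity values below are illustrative assumptions, not figures from the assessment.

```python
# Darcy-law estimate of the time for a CO2 front to migrate a given
# lateral distance through a representative rock volume.
SECONDS_PER_YEAR = 3600.0 * 24.0 * 365.25

def migration_time_years(k_m2, porosity, dp_dx_pa_m,
                         mu_pa_s=5e-5, length_m=1000.0):
    darcy_velocity = (k_m2 / mu_pa_s) * dp_dx_pa_m   # m/s
    seepage_velocity = darcy_velocity / porosity      # front speed, m/s
    return length_m / seepage_velocity / SECONDS_PER_YEAR

# Example: 100 mD (1e-13 m^2) sand, 20% porosity, 1 kPa/m gradient:
print(round(migration_time_years(1e-13, 0.2, 1e3), 1))  # ~3.2 years
```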
Quark model calculations of current correlators in the nonperturbative domain
International Nuclear Information System (INIS)
Celenza, L.S.; Shakin, C.M.; Sun, W.D.
1995-01-01
The authors study the vector-isovector current correlator in this work, making use of a generalized Nambu-Jona-Lasinio (NJL) model. In their work, the original NJL model is extended to describe the coupling of the quark-antiquark states to the two-pion continuum. Further, a model for confinement is introduced that is seen to remove the nonphysical cuts that appear in various amplitudes when the quark and antiquark go on mass shell. Quite satisfactory results are obtained for the correlator. The authors also use the correlator to define a T-matrix for confined quarks and discuss a rho-dominance model for that T-matrix. It is also seen that the Bethe-Salpeter equation that determines the rho mass (in the absence of the coupling to the two-pion continuum) has more satisfactory behavior in the generalized model than in the model without confinement. That improved behavior is here related to the absence of the q bar q cut in the basic quark-loop integral of the generalized model. In this model, it is seen how one may work with both quark and hadron degrees of freedom, with only the hadrons appearing as physical particles. 12 refs., 16 figs., 1 tab
Real-time dispersion calculation using the Lagrange model LASAT
International Nuclear Information System (INIS)
Janicke, L.
1987-01-01
The LASAT (Lagrange Simulation of Aerosol Transport) dispersion model represents pollutant transport in the atmosphere by simulating the paths of representative random samples of pollutant particles on the computer as naturally as possible. The author presents the generated particle paths and refers to the literature for details of the model algorithm. (DG) [de
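The core of such a Lagrangian dispersion model is a stochastic step per particle: advection by the mean wind plus a random turbulent displacement. The minimal sketch below illustrates the idea only; the wind vector, turbulence intensities, and step scheme are placeholder assumptions, not LASAT's actual algorithm.

```python
import random

# One Lagrangian step: drift with the mean wind, diffuse with a Gaussian
# turbulent perturbation scaled by sqrt(dt).
def step(pos, wind=(5.0, 0.0, 0.0), sigma=(0.5, 0.5, 0.2), dt=1.0):
    return tuple(p + u * dt + random.gauss(0.0, s) * dt ** 0.5
                 for p, u, s in zip(pos, wind, sigma))

random.seed(1)
path = [(0.0, 0.0, 10.0)]      # release point 10 m above ground
for _ in range(60):            # one minute of 1 s steps
    path.append(step(path[-1]))
print(path[-1])                # final particle position (m)
```

Concentrations are then estimated by releasing many such particles and counting how many fall in each sampling volume.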
Long-Term Calculations with Large Air Pollution Models
DEFF Research Database (Denmark)
Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.
1999-01-01
Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.
A review of Higgs mass calculations in supersymmetric models
DEFF Research Database (Denmark)
Draper, P.; Rzehak, H.
2016-01-01
The discovery of the Higgs boson is both a milestone achievement for the Standard Model and an exciting probe of new physics beyond the SM. One of the most important properties of the Higgs is its mass, a number that has proven to be highly constraining for models of new physics, particularly those related to the electroweak hierarchy problem. Perhaps the most extensively studied examples are supersymmetric models, which, while capable of producing a 125 GeV Higgs boson with SM-like properties, do so in non-generic parts of their parameter spaces. We review the computation of the Higgs mass...
Directory of Open Access Journals (Sweden)
M. J. Alvarado
2013-07-01
Modern data assimilation algorithms depend on accurate infrared spectroscopy in order to make use of the information related to temperature, water vapor (H2O), and other trace gases provided by satellite observations. Reducing the uncertainties in our knowledge of spectroscopic line parameters and continuum absorption is thus important to improve the application of satellite data to weather forecasting. Here we present the results of a rigorous validation of spectroscopic updates to an advanced radiative transfer model, the Line-By-Line Radiative Transfer Model (LBLRTM), against a global dataset of 120 near-nadir, over-ocean, nighttime spectra from the Infrared Atmospheric Sounding Interferometer (IASI). We compare calculations from the latest version of LBLRTM (v12.1) to those from a previous version (v9.4+) to determine the impact of spectroscopic updates to the model on spectral residuals as well as retrieved temperature and H2O profiles. We show that the spectroscopy in the CO2 ν2 and ν3 bands is significantly improved in LBLRTM v12.1 relative to v9.4+, and that these spectroscopic updates lead to mean changes of ~0.5 K in the retrieved vertical temperature profiles between the surface and 10 hPa, with the sign of the change and the variability among cases depending on altitude. We also find that temperature retrievals using each of these two CO2 bands are remarkably consistent in LBLRTM v12.1, potentially allowing these bands to be used to retrieve atmospheric temperature simultaneously. The updated H2O spectroscopy in LBLRTM v12.1 substantially improves the a posteriori residuals in the P-branch of the H2O ν2 band, while the improvements in the R-branch are more modest. The H2O amounts retrieved with LBLRTM v12.1 are on average 14% lower between 100 and 200 hPa, 42% higher near 562 hPa, and 31% higher near the surface compared to the amounts retrieved with v9.4+ due to a combination of the different retrieved temperature profiles and the
An updated conceptual model of Delta Smelt biology: Our evolving understanding of an estuarine fish
Baxter, Randy; Brown, Larry R.; Castillo, Gonzalo; Conrad, Louise; Culberson, Steven D.; Dekar, Matthew P.; Dekar, Melissa; Feyrer, Frederick; Hunt, Thaddeus; Jones, Kristopher; Kirsch, Joseph; Mueller-Solger, Anke; Nobriga, Matthew; Slater, Steven B.; Sommer, Ted; Souza, Kelly; Erickson, Gregg; Fong, Stephanie; Gehrts, Karen; Grimaldo, Lenny; Herbold, Bruce
2015-01-01
The main purpose of this report is to provide an up-to-date assessment and conceptual model of factors affecting Delta Smelt (Hypomesus transpacificus) throughout its primarily annual life cycle and to demonstrate how this conceptual model can be used for scientific and management purposes. The Delta Smelt is a small estuarine fish that only occurs in the San Francisco Estuary. Once abundant, it is now rare and has been protected under the federal and California Endangered Species Acts since 1993. The Delta Smelt listing was related to a step decline in the early 1980s; however, population abundance decreased even further with the onset of the “pelagic organism decline” (POD) around 2002. A substantial, albeit short-lived, increase in abundance of all life stages in 2011 showed that the Delta Smelt population can still rebound when conditions are favorable for spawning, growth, and survival. In this report, we update previous conceptual models for Delta Smelt to reflect new data and information since the release of the last synthesis report about the POD by the Interagency Ecological Program for the San Francisco Estuary (IEP) in 2010. Specific objectives include:
2013-01-01
Liver fibrosis is defined as excessive extracellular matrix deposition and is based on complex interactions between matrix-producing hepatic stellate cells and an abundance of liver-resident and infiltrating cells. Investigation of these processes requires in vitro and in vivo experimental work in animals. However, the use of animals in translational research will be increasingly challenged, at least in countries of the European Union, because of the adoption of new animal welfare rules in 2013. These rules will create an urgent need for optimized standard operating procedures regarding animal experimentation and improved international communication in the liver fibrosis community. This review gives an update on current animal models, techniques and underlying pathomechanisms with the aim of fostering a critical discussion of the limitations and potential of up-to-date animal experimentation. We discuss potential complications in experimental liver fibrosis and provide examples of how the findings of studies in which these models are used can be translated to human disease and therapy. In this review, we want to motivate the international community to design more standardized animal models which might help to address the legally requested replacement, refinement and reduction of animals in fibrosis research. PMID:24274743
A simple model for calculating air pollution within street canyons
Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.
2014-04-01
This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full scale data measured in street canyons at four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows that there is a good agreement between estimated and observed hourly concentrations (e.g. fractional biases are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model shows a better performance for wind speeds >2 m s-1 than for lower wind speeds, and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
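The scaling idea behind a SEUS-type model can be sketched as: in-canyon concentration equals the background plus emission rate over (canyon width times dispersive velocity scale), with the dispersive scale combining wind- and traffic-induced turbulence. The quadrature form of the velocity scale and the values of the two dimensionless empirical parameters a and b below are illustrative assumptions, not the published SEUS fit.

```python
import math

# Street-canyon concentration scaling: C = C_b + Q / (W * u_d), where the
# dispersive velocity scale u_d blends wind- and traffic-induced turbulence.
def canyon_concentration(emission_rate, canyon_width, u_wind, u_traffic,
                         background=0.0, a=0.1, b=0.05):
    # a, b: dimensionless empirical parameters (placeholders)
    u_d = math.sqrt((a * u_wind) ** 2 + (b * u_traffic) ** 2)
    return background + emission_rate / (canyon_width * u_d)

# Higher wind speed -> stronger dispersion -> lower in-canyon concentration:
print(canyon_concentration(100.0, 20.0, 2.0, 10.0))
print(canyon_concentration(100.0, 20.0, 4.0, 10.0))
```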
Calculation of the fermionic determinant in the Schwinger model
International Nuclear Information System (INIS)
Dias, S.A.; Linhares, C.A.
1991-01-01
We compute explicitly the fermionic determinant and the effective action for the generalized Schwinger model in two dimensions and compare it with the respective results for the particular cases of the Schwinger, chiral Schwinger and axial Schwinger models. The parameters that signal the ambiguity in the regularization scheme of the determinant are introduced through the point-splitting method. The Wess-Zumino functional is also obtained and compared with the known expressions for the above-mentioned particular cases. (author)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for a VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by the variables' error. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a confidence interval based on truncation error (TE) is proposed to quantify the uncertainty of VCs. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
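The trapezoidal double rule mentioned above integrates the DEM surface over a regular grid with corner weight 1, edge weight 2, and interior weight 4, scaled by (dx*dy)/4. A minimal sketch, assuming uniform grid spacing:

```python
import numpy as np

# Volume under a regular-grid DEM by the trapezoidal double rule (TDR):
# weights 1 at corners, 2 on edges, 4 in the interior, times dx*dy/4.
def volume_tdr(z, dx, dy):
    w = np.full(z.shape, 4.0)
    w[0, :] = w[-1, :] = w[:, 0] = w[:, -1] = 2.0
    w[0, 0] = w[0, -1] = w[-1, 0] = w[-1, -1] = 1.0
    return float(np.sum(w * z) * dx * dy / 4.0)

# Flat surface of height 1 m over a 10 m x 10 m square -> volume 100 m^3.
z = np.ones((11, 11))
print(volume_tdr(z, 1.0, 1.0))  # 100.0
```

Simpson's double rule follows the same pattern with 1/4/2-style weights on a grid with an odd number of nodes per axis; the truncation error of either rule is what the paper's confidence interval quantifies.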
Directory of Open Access Journals (Sweden)
Velichkovsky B. B.
2017-09-01
Background. Working memory (WM) seems to be central to most forms of high-level cognition. This fact is fueling the growing interest in studying its structure and functional organization. The influential "concentric model" (Oberauer, 2002) suggests that WM contains a processing component and two storage components with different capacity limitations and sensitivity to interference. There is, to date, only limited support for the concentric model in the research literature, and it is limited to a number of specially designed tasks. Objective. In the present paper, we attempted to validate the concentric model by testing its major predictions using complex span and updating tasks in a number of experimental paradigms. Method. The model predictions were tested with the help of a review of data obtained primarily in our own experiments in several research domains, including Sternberg's additive factors method; the factor structure of WM; serial position effects in WM; and WM performance in a sample with episodic long-term memory deficits. Results. Predictions generated by the concentric model were shown to hold in all these domains. In addition, several new properties of WM were identified. In particular, we recently found that WM indeed contains a processing component which functions independently of storage components. In turn, the latter were found to form a storage hierarchy which balances fast access to selected items with the storing of large amounts of potentially relevant information. Processing and storage in WM were found to be dependent on shared cognitive resources which are dynamically allocated between WM components according to actual task requirements. The implications of these findings for the theory of WM are discussed. Conclusion. The concentric model was shown to be valid with respect to standard WM tasks. The concentric model offers promising research perspectives for the study of higher-order cognition, including underlying
An hydrodynamic model for the calculation of oil spills trajectories
Energy Technology Data Exchange (ETDEWEB)
Paladino, Emilio Ernesto; Maliska, Clovis Raimundo [Santa Catarina Univ., Florianopolis, SC (Brazil). Dept. de Engenharia Mecanica. Lab. de Dinamica dos Fluidos Computacionais]. E-mails: emilio@sinmec.ufsc.br; maliska@sinmec.ufsc.br
2000-07-01
The aim of this paper is to present a mathematical model and its numerical treatment to forecast oil spill trajectories in the sea. The knowledge of the trajectory followed by an oil slick spilled on the sea is of fundamental importance in the estimation of potential risks for pipeline and tanker route selection, and in combating the pollution using floating barriers, detergents, etc. In order to estimate these slick trajectories, a new model based on the mass and momentum conservation equations is presented. The model considers the spreading in the regimes where the inertial and viscous forces counterbalance gravity and takes into account the effects of winds and water currents. The inertial forces are considered for the spreading and the displacement of the oil slick, i.e., their effects on the movement of the mass center of the slick are considered. The mass loss caused by oil evaporation is also taken into account. The numerical model is developed in generalized coordinates, making the model easily applicable to complex coastal geographies. (author)
Uncertain hybrid model for the response calculation of an alternator
International Nuclear Information System (INIS)
Kuczkowiak, Antoine
2014-01-01
The complex structural dynamic behavior of alternators must be well understood in order to ensure their reliable and safe operation. The numerical model is, however, difficult to construct, mainly due to the presence of a high level of uncertainty. The objective of this work is to provide decision support tools in order to assess the vibratory levels in operation before restarting the alternator. Based on info-gap theory, a first decision support tool is proposed: the objective here is to assess the robustness of the dynamical response to the uncertain modal model. Based on real data, the calibration of an info-gap model of uncertainty is also proposed in order to enhance its fidelity to reality. Then, the extended constitutive relation error is used to expand identified mode shapes, which are used to assess the vibratory levels. A robust expansion process is proposed in order to obtain expanded mode shapes that are robust to parametric uncertainties. In the presence of lack of knowledge, the trade-off between fidelity-to-data and robustness-to-uncertainties, which expresses that robustness improves as fidelity deteriorates, is illustrated on an industrial structure using both reduced order model and surrogate model techniques. (author)
Model for calculation of concentration and load on behalf of accidents with radioactive materials
International Nuclear Information System (INIS)
Janssen, L.A.M.; Heugten, W.H.H. van
1987-04-01
In the project 'Information and calculation system for disaster response', commissioned by the Dutch government, a demonstration model has been developed for an accident diagnosis system. In this demonstration, a model is used to calculate the concentration and dose distributions caused by accidental emissions of limited duration. This model is described in this report. 4 refs.; 2 figs.; 3 tabs
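A hedged sketch of the kind of calculation such a diagnosis system performs: ground-level centreline concentration downwind of a continuous point release, using the standard Gaussian plume formula. The linear power-law dispersion coefficients below are crude placeholders, not the report's actual scheme.

```python
import math

# Gaussian plume, ground-level centreline concentration:
# C(x) = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 sigma_z^2))
# Q: source strength (e.g. Bq/s), u: wind speed (m/s), x: downwind
# distance (m), H: effective release height (m).
def plume_concentration(Q, u, x, H, sy_a=0.08, sz_a=0.06):
    sigma_y = sy_a * x   # placeholder linear growth of plume width
    sigma_z = sz_a * x
    return (Q / (math.pi * u * sigma_y * sigma_z)) * math.exp(
        -H * H / (2.0 * sigma_z ** 2))

# Concentration falls off with distance once the plume has grown past
# the release height:
print(plume_concentration(1.0, 5.0, 1000.0, 50.0))
print(plume_concentration(1.0, 5.0, 2000.0, 50.0))
```

Dose distributions then follow by integrating such concentrations over the (limited) emission duration with the relevant dose conversion factors.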
A calculation model for the noise from steel railway bridges
Janssens, M.H.A.; Thompson, D.J.
1996-01-01
The sound level of a train crossing a steel railway bridge is usually about 10 dB higher than on plain track. In the Netherlands there are many such bridges which, for practical reasons, cannot be replaced by more intrinsically quiet concrete bridges. A computational model is described for the
Reactor accident calculation models in use in the Nordic countries
International Nuclear Information System (INIS)
Tveten, U.
1984-01-01
The report relates to a subproject under a Nordic project called ''Large reactor accidents - consequences and mitigating actions''. In the first part of the report short descriptions of the various models are given. A systematic list by subject is then given. In the main body of the report chapter and subchapter headings are by subject. (Auth.)
A calculation model for a HTR core seismic response
International Nuclear Information System (INIS)
Buland, P.; Berriaud, C.; Cebe, E.; Livolant, M.
1975-01-01
The paper presents the experimental results obtained at Saclay on a HTGR core model and comparisons with analytical results. Two series of horizontal tests have been performed on the shaking table VESUVE: sinusoidal tests and time-history response tests. Acceleration of graphite blocks, forces on the boundaries, relative displacement of the core and PCRV model, and impact velocity of the blocks on the boundaries were recorded. These tests have shown the strongly non-linear dynamic behaviour of the core. The resonant frequency of the core is dependent on the level of the excitation. These phenomena have been explained by a computer code, which is a lumped-mass non-linear model. Good correlation between experimental and analytical results was obtained for impact velocities and forces on the boundaries. This comparison has shown that the damping of the core is a critical parameter for the estimation of forces and velocities. Time-history displacement at the level of the PCRV was reproduced on the shaking table. The analytical model was applied to this excitation and good agreement was obtained for forces and velocities. (orig./HP) [de
Glass viscosity calculation based on a global statistical modelling approach
Energy Technology Data Exchange (ETDEWEB)
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights in the mixed-alkali effect are provided.
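The "complete viscosity curve" of a glass melt is commonly parameterized by the Vogel-Fulcher-Tammann (VFT) equation, log10(eta) = A + B/(T - T0); a global statistical model then expresses A, B, T0 (or, equivalently, isokom temperatures) as functions of composition. The constants below are generic illustrative values, not coefficients from the model described above.

```python
# VFT parameterization of a glass viscosity curve and its inverse
# (the "isokom" temperature at which a given viscosity is reached).
def vft_log10_viscosity(T, A=-2.6, B=4500.0, T0=250.0):
    """log10 of viscosity at temperature T (deg C), T > T0."""
    return A + B / (T - T0)

def isokom_temperature(log10_eta, A=-2.6, B=4500.0, T0=250.0):
    """Temperature (deg C) at which the melt reaches a given log10 viscosity."""
    return T0 + B / (log10_eta - A)

# Round trip: the temperature of the log10(eta) = 12 isokom maps back to 12.
T12 = isokom_temperature(12.0)
print(round(vft_log10_viscosity(T12), 6))  # 12.0
```

Fitting isokom temperatures rather than raw viscosities is one way such a model can yield the quoted standard errors in degrees Celsius.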
Badhwar-O'Neill 2011 Galactic Cosmic Ray Model Update and Future Improvements
O'Neill, Pat M.; Kim, Myung-Hee Y.
2014-01-01
The Badhwar-O'Neill Galactic Cosmic Ray (GCR) Model, based on actual GCR measurements, is used by deep space mission planners for the certification of micro-electronic systems and the analysis of radiation health risks to astronauts in space missions. The BO GCR Model provides GCR flux in deep space (outside the earth's magnetosphere) for any given time from 1645 to present. The energy spectrum from 50 MeV/n-20 GeV/n is provided for ions from hydrogen to uranium. This work describes the most recent version of the BO GCR model (BO'11). BO'11 determines the GCR flux at a given time by applying an empirical time delay function to past sunspot activity. We describe the GCR measurement data used in the BO'11 update - modern data from BESS, PAMELA, CAPRICE, and ACE, emphasized over the older balloon data used for the previous BO model (BO'10). We look at the GCR flux for the last 24 solar minima and show how much greater the flux was for the cycle 24 minimum in 2010. The BO'11 Model uses the traditional, steady-state Fokker-Planck differential equation to account for particle transport in the heliosphere due to diffusion, convection, and adiabatic deceleration. It assumes a radially symmetrical diffusion coefficient derived from magnetic disturbances caused by sunspots carried onward by a constant solar wind. A more complex differential equation is now being tested to account for particle transport in the heliosphere in the next-generation BO model. This new model is time-dependent (no longer a steady-state model). In the new model, the dynamics and anti-symmetrical features of the actual heliosphere are accounted for, so empirical time delay functions will no longer be required. The new model will be capable of simulating the more subtle features of modulation - such as the Sun's polarity and modulation dependence on the gradient and curvature drift. This improvement is expected to significantly improve the fidelity of the BO GCR model. Preliminary results of its
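The BO model integrates a Fokker-Planck transport equation; a widely used lightweight stand-in for the same physics is the force-field approximation, which collapses heliospheric modulation into a single potential phi (in MV). The sketch below is for protons, and the power-law local interstellar spectrum is a placeholder, not the BO'11 fit.

```python
# Force-field approximation to solar modulation of a GCR spectrum:
# J(E, phi) = J_LIS(E + phi) * E (E + 2 E0) / [(E + phi)(E + phi + 2 E0)]
E0_PROTON = 938.0  # proton rest energy, MeV

def j_lis(E):
    # Placeholder LIS: unnormalized power law in kinetic energy (MeV)
    return E ** -2.7

def force_field(E, phi, rest_energy=E0_PROTON):
    """Modulated flux at kinetic energy E (MeV) for modulation potential phi (MV)."""
    Ep = E + phi
    return j_lis(Ep) * E * (E + 2.0 * rest_energy) / (
        Ep * (Ep + 2.0 * rest_energy))

# Stronger modulation (larger phi, i.e. solar maximum) suppresses the flux:
print(force_field(1000.0, 400.0) < force_field(1000.0, 0.0))  # True
```

The solar-cycle dependence the abstract describes enters through phi, which tracks (time-delayed) sunspot activity.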
Semiclassical calculation for collision induced dissociation. II. Morse oscillator model
International Nuclear Information System (INIS)
Rusinek, I.; Roberts, R.E.
1978-01-01
A recently developed semiclassical procedure for calculating collision induced dissociation probabilities P^diss is applied to the collinear collision between a particle and a Morse oscillator diatomic. The particle-diatom interaction is described with a repulsive exponential potential function. P^diss is reported for a system of three identical particles, as a function of collision energy E_t and initial vibrational state of the diatomic n_1. The results are compared with the previously reported values for the collision between a particle and a truncated harmonic oscillator. The two studies show similar features, namely: (a) there is an oscillatory structure in the P^diss energy profiles, which is directly related to n_1; (b) P^diss becomes noticeable (≳ 10^-3) for E_t values appreciably higher than the energetic threshold; (c) vibrational enhancement (inhibition) of collision induced dissociation persists at low (high) energies; and (d) good agreement between the classical and semiclassical results is found above the classical dynamic threshold. Finally, the convergence of P^diss for increasing box length is shown to be rapid and satisfactory
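The Morse oscillator underlying the model has potential V(r) = De(1 - exp(-a(r - re)))^2 and anharmonic bound levels E_n = hw(n + 1/2) - [hw(n + 1/2)]^2/(4 De); dissociation corresponds to excitation above the well depth De. The parameter values below are illustrative, not the paper's.

```python
import math

# Morse potential and its bound-level energies (all in consistent units).
def morse_potential(r, De=4.75, a=1.94, re=0.74):
    return De * (1.0 - math.exp(-a * (r - re))) ** 2

def morse_level(n, De=4.75, hw=0.54):
    x = hw * (n + 0.5)
    return x - x * x / (4.0 * De)

print(morse_potential(0.74))     # 0.0 at the equilibrium bond length
print(round(morse_level(0), 4))  # zero-point energy
```

Unlike the truncated harmonic oscillator of the earlier study, the Morse level spacing shrinks toward the dissociation limit, which is what makes the comparison between the two models informative.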
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2010
Energy Technology Data Exchange (ETDEWEB)
Rahmat Aryaeinejad; Douglas S. Crawford; Mark D. DeHart; George W. Griffith; D. Scott Lucas; Joseph W. Nielsen; David W. Nigg; James R. Parry; Jorge Navarro
2010-09-01
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance and, to some extent, experiment management are obsolete, inconsistent with the state of modern nuclear engineering practice, and are becoming increasingly difficult to properly verify and validate (V&V). Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In 2009 the Idaho National Laboratory (INL) initiated a focused effort to address this situation through the introduction of modern high-fidelity computational software and protocols, with appropriate V&V, within the next 3-4 years via the ATR Core Modeling and Simulation and V&V Update (or “Core Modeling Update”) Project. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF).
Rosa, Sarah N.; Hay, Lauren E.
2017-12-01
In 2014, the U.S. Geological Survey, in cooperation with the U.S. Department of Defense’s Strategic Environmental Research and Development Program, initiated a project to evaluate the potential impacts of projected climate-change on Department of Defense installations that rely on Guam’s water resources. A major task of that project was to develop a watershed model of southern Guam and a water-balance model for the Fena Valley Reservoir. The southern Guam watershed model provides a physically based tool to estimate surface-water availability in southern Guam. The U.S. Geological Survey’s Precipitation Runoff Modeling System, PRMS-IV, was used to construct the watershed model. The PRMS-IV code simulates different parts of the hydrologic cycle based on a set of user-defined modules. The southern Guam watershed model was constructed by updating a watershed model for the Fena Valley watersheds, and expanding the modeled area to include all of southern Guam. The Fena Valley watershed model was combined with a previously developed, but recently updated and recalibrated Fena Valley Reservoir water-balance model.Two important surface-water resources for the U.S. Navy and the citizens of Guam were modeled in this study; the extended model now includes the Ugum River watershed and improves upon the previous model of the Fena Valley watersheds. Surface water from the Ugum River watershed is diverted and treated for drinking water, and the Fena Valley watersheds feed the largest surface-water reservoir on Guam. The southern Guam watershed model performed “very good,” according to the criteria of Moriasi and others (2007), in the Ugum River watershed above Talofofo Falls with monthly Nash-Sutcliffe efficiency statistic values of 0.97 for the calibration period and 0.93 for the verification period (a value of 1.0 represents perfect model fit). In the Fena Valley watershed, monthly simulated streamflow volumes from the watershed model compared reasonably well with the
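The "very good" rating above refers to monthly Nash-Sutcliffe efficiency values of 0.97 (calibration) and 0.93 (verification). A minimal sketch of that statistic, which compares simulation error against the variance of the observations:

```python
import numpy as np

# Nash-Sutcliffe efficiency: 1.0 = perfect fit; 0.0 = no better than
# always predicting the observed mean; negative = worse than the mean.
def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = [10.0, 12.0, 8.0, 15.0, 11.0]          # e.g. monthly flow volumes
print(nash_sutcliffe(obs, obs))               # 1.0 for a perfect simulation
print(nash_sutcliffe(obs, [11.2] * 5))        # 0.0 for a mean-only forecast
```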
International Nuclear Information System (INIS)
Pique, Angels; Pekala, Marek; Molinero, Jorge; Duro, Lara; Trinchero, Paolo; Vries, Luis Manuel de
2013-02-01
The Forsmark area has been proposed for potential siting of a deep underground (geological) repository for radioactive waste in Sweden. Safety assessment of the repository requires radionuclide transport from the disposal depth to recipients at the surface to be studied quantitatively. The near-surface Quaternary deposits at Forsmark are considered a pathway for potential discharge of radioactivity from the underground facility to the biosphere, so radionuclide transport in this system has been extensively investigated in recent years. The most recent work of Pique and co-workers (reported in SKB report R-10-30) demonstrated that, in the case of a release of radioactivity, the near-surface sedimentary system at Forsmark would act as an important geochemical barrier, retarding the transport of reactive radionuclides through a combination of retention processes. In this report the conceptual model of radionuclide transport in the Quaternary till at Forsmark has been updated by considering recent revisions regarding the near-surface lithology. In addition, the impact of important conceptual assumptions made in the model has been evaluated through a series of deterministic and probabilistic (Monte Carlo) sensitivity calculations. The sensitivity study focused on the following effects: 1. Radioactive decay of Cs-135, Ni-59, Th-230 and Ra-226 and its effects on their transport. 2. Variability in key geochemical parameters, such as the composition of the deep groundwater, availability of sorbing materials in the till, and mineral equilibria. 3. Variability in hydraulic parameters, such as the definition of hydraulic boundaries and the values of hydraulic conductivity, dispersivity and the deep groundwater inflow rate. The overarching conclusion from this study is that the current implementation of the model is robust (the model is largely insensitive to variations in the parameters within the studied ranges) and conservative (the Base Case calculations have a tendency to
Energy Technology Data Exchange (ETDEWEB)
Pique, Angels; Pekala, Marek; Molinero, Jorge; Duro, Lara; Trinchero, Paolo; Vries, Luis Manuel de [Amphos 21 Consulting S.L., Barcelona (Spain)
2013-02-15
The Forsmark area has been proposed for potential siting of a deep underground (geological) repository for radioactive waste in Sweden. Safety assessment of the repository requires radionuclide transport from the disposal depth to recipients at the surface to be studied quantitatively. The near-surface Quaternary deposits at Forsmark are considered a pathway for potential discharge of radioactivity from the underground facility to the biosphere, so radionuclide transport in this system has been extensively investigated in recent years. The most recent work of Pique and co-workers (reported in SKB report R-10-30) demonstrated that, in the case of a release of radioactivity, the near-surface sedimentary system at Forsmark would act as an important geochemical barrier, retarding the transport of reactive radionuclides through a combination of retention processes. In this report the conceptual model of radionuclide transport in the Quaternary till at Forsmark has been updated by considering recent revisions regarding the near-surface lithology. In addition, the impact of important conceptual assumptions made in the model has been evaluated through a series of deterministic and probabilistic (Monte Carlo) sensitivity calculations. The sensitivity study focused on the following effects: 1. Radioactive decay of {sup 135}Cs, {sup 59}Ni, {sup 230}Th and {sup 226}Ra and its effects on their transport. 2. Variability in key geochemical parameters, such as the composition of the deep groundwater, availability of sorbing materials in the till, and mineral equilibria. 3. Variability in hydraulic parameters, such as the definition of hydraulic boundaries and the values of hydraulic conductivity, dispersivity and the deep groundwater inflow rate. The overarching conclusion from this study is that the current implementation of the model is robust (the model is largely insensitive to variations in the parameters within the studied ranges) and conservative (the Base Case calculations have a
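The kind of probabilistic sensitivity calculation described above can be sketched in a few lines: sample uncertain till parameters, propagate each draw through a simple retarded-advection model, and inspect the spread of the output. All parameter values and ranges below are invented for illustration and are not the report's calibrated values:

```python
import math
import random

random.seed(1)

# Illustrative till properties (invented, not from the report)
L = 5.0            # transport distance through the till, m
RHO_B = 1600.0     # bulk density, kg/m3
THETA = 0.3        # porosity
LAMBDA_CS135 = math.log(2.0) / 2.3e6   # Cs-135 decay constant, 1/yr

def surviving_fraction(kd_m3_per_kg, pore_velocity_m_per_yr):
    """Fraction of a Cs-135 pulse surviving decay during sorption-retarded transport."""
    retardation = 1.0 + RHO_B * kd_m3_per_kg / THETA   # linear-Kd retardation factor
    travel_time = L * retardation / pore_velocity_m_per_yr
    return math.exp(-LAMBDA_CS135 * travel_time)

# Monte Carlo sensitivity over sorption coefficient and pore velocity
samples = [surviving_fraction(random.uniform(0.05, 0.5), random.uniform(0.1, 1.0))
           for _ in range(10000)]
print(round(min(samples), 3), round(max(samples), 3))
```

A real assessment samples many more parameters jointly (boundary conditions, dispersivity, mineral equilibria), but the structure — sample, propagate, compare output spread against the Base Case — is the same.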
Approximate models for neutral particle transport calculations in ducts
International Nuclear Information System (INIS)
Ono, Shizuca
2000-01-01
The problem of neutral particle transport in evacuated ducts of arbitrary, but axially uniform, cross-sectional geometry, with isotropic reflection at the wall, is studied. The model makes use of basis functions to represent the transverse and azimuthal dependences of the particle angular flux in the duct. For the approximation in terms of two basis functions, an improvement in the method is implemented by decomposing the problem into uncollided and collided components. A new quadrature set, better suited to the problem, is developed and generated by one of the techniques of the constructive theory of orthogonal polynomials. The approximation in terms of three basis functions is developed and implemented to improve the precision of the results. For both the two- and three-basis-function models, the energy dependence of the problem is introduced through the multigroup formalism. The results for sample problems are compared with literature results and with results from the Monte Carlo code MCNP. (author)
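The uncollided/collided decomposition is easiest to appreciate against a brute-force reference; a small Monte Carlo for a 2-D slit duct with isotropically reflecting, partially absorbing walls (geometry and wall albedo invented for illustration, not one of the paper's sample problems):

```python
import math
import random

random.seed(0)

def transmit_prob(length, width, albedo, n=20000):
    """Monte Carlo transmission through a 2-D slit duct whose walls
    absorb with probability (1 - albedo) and re-emit isotropically."""
    transmitted = 0
    for _ in range(n):
        x, z = random.uniform(0.0, width), 0.0
        a = random.uniform(-math.pi / 2, math.pi / 2)   # entering directions
        ux, uz = math.sin(a), math.cos(a)
        while True:
            tx = (width - x) / ux if ux > 0 else (-x / ux if ux < 0 else math.inf)
            tz = (length - z) / uz if uz > 0 else (-z / uz if uz < 0 else math.inf)
            if tz < tx:                 # reaches an open end before a wall
                transmitted += uz > 0   # forward exit counts as transmitted
                break
            z += uz * tx                # advance to the wall strike point
            x = width if ux > 0 else 0.0
            if random.random() > albedo:
                break                   # absorbed in the wall
            normal = -1.0 if x == width else 1.0
            a = random.uniform(-math.pi / 2, math.pi / 2)
            ux, uz = normal * math.cos(a), math.sin(a)   # isotropic re-emission
    return transmitted / n

print(transmit_prob(10.0, 1.0, 0.5, 5000))
```

Particles that exit without touching a wall form the uncollided component; everything else is the collided flux, which the basis-function expansion approximates deterministically.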
Aeroelastic Calculations Using CFD for a Typical Business Jet Model
Gibbons, Michael D.
1996-01-01
Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center, where experimental flutter data were obtained from M(sub infinity) = 0.628 to M(sub infinity) = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). The analytical results are compared with one another and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects, while the TSD and Euler methods used here provide good results at the lower Mach numbers.
Model calculation of the scanned field enhancement factor of CNTs
International Nuclear Information System (INIS)
Ahmad, Amir; Tripathi, V K
2006-01-01
The field enhancement factor of a carbon nanotube (CNT) placed in a cluster of CNTs is smaller than that of an isolated CNT because the electric field on one tube is screened by neighbouring tubes. This screening depends on the length of the CNTs and the spacing between them. We have derived an expression to compute the field enhancement factor of CNTs under any positional distribution, using a model of a floating sphere between parallel anode and cathode plates. Using this expression we can compute the field enhancement factor of a CNT in a cluster (non-uniformly distributed CNTs). The expression is also used to compute the field enhancement factor of a CNT in an array (uniformly distributed CNTs). Comparisons with experimental results and with existing models are presented.
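For the uniform-array case, a commonly used empirical screening correction (a Nilsson-type exponential fit, not the floating-sphere expression derived in this paper; the constant and all numbers below are illustrative assumptions) shows the qualitative behaviour: enhancement is suppressed for dense arrays and recovers the isolated-tube value once the spacing well exceeds the tube height:

```python
import math

def enhancement(beta_isolated, spacing, height):
    """Screened field enhancement of a CNT in a uniform array
    (empirical exponential fit; beta_isolated is the isolated-tube factor)."""
    return beta_isolated * (1.0 - math.exp(-2.3172 * spacing / height))

beta0 = 1000.0   # illustrative isolated-tube enhancement factor
for s_over_h in (0.5, 1.0, 2.0, 4.0):
    print(s_over_h, round(enhancement(beta0, s_over_h, 1.0), 1))
```

This captures the screening trend the abstract describes; the paper's own expression additionally handles non-uniform (cluster) distributions.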
Quark model calculation of charmed baryon production by neutrinos
International Nuclear Information System (INIS)
Avilez, C.; Kobayashi, T.; Koerner, J.G.
1976-11-01
We study the neutrino production of 25 low-lying charmed baryon resonances in the four flavour quark model. The mass difference of ordinary and charmed quarks is explicitly taken into account. The quark model is used to determine the spectrum of the charmed baryon resonances and the q² = 0 values of the weak current transition matrix elements. These transition matrix elements are then continued to space-like q² values by a generalized meson dominance ansatz for a set of suitably chosen invariant form factors. We find that the production of the L = 0 states C₀, C₁ and C₁* is dominant, with the C₀ produced most copiously. For L = 1, 2 the Jᴾ = 3/2⁻ and 5/2⁺ charm states are dominant. We give differential cross sections, total cross sections and energy-integrated total cross sections using experimental neutrino fluxes. (orig./BJ)
Microscopic calculation of parameters of the sdg interacting boson model for 104-110Pd isotopes
International Nuclear Information System (INIS)
Liu Yong
1995-01-01
The parameters of the sdg interacting boson model Hamiltonian are calculated for the 104-110Pd isotopes. The calculations utilize the microscopic procedure based on the Dyson boson mapping proposed by Yang-Liu-Qi, extended to include the g-boson effects. The calculated parameters reproduce the values from the phenomenological fits, and the resulting spectra are compared with the experimental spectra.
An Updated Geophysical Model for AMSR-E and SSMIS Brightness Temperature Simulations over Oceans
Directory of Open Access Journals (Sweden)
Elizaveta Zabolotskikh
2014-03-01
In this study, we considered the geophysical model for microwave brightness temperature (BT simulation for the Atmosphere-Ocean System under non-precipitating conditions. The model is presented as a combination of atmospheric absorption and ocean emission models. We validated this model for two satellite instruments—for Advanced Microwave Sounding Radiometer-Earth Observing System (AMSR-E onboard Aqua satellite and for Special Sensor Microwave Imager/Sounder (SSMIS onboard F16 satellite of Defense Meteorological Satellite Program (DMSP series. We compared simulated BT values with satellite BT measurements for different combinations of various water vapor and oxygen absorption models and wind induced ocean emission models. A dataset of clear sky atmospheric and oceanic parameters, collocated in time and space with satellite measurements, was used for the comparison. We found the best model combination, providing the least root mean square error between calculations and measurements. A single combination of models ensured the best results for all considered radiometric channels. We also obtained the adjustments to simulated BT values, as averaged differences between the model simulations and satellite measurements. These adjustments can be used in any research based on modeling data for removing model/calibration inconsistencies. We demonstrated the application of the model by means of the development of the new algorithm for sea surface wind speed retrieval from AMSR-E data.
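The adjustment and error statistics described above reduce to simple channel-wise aggregates of simulated-minus-measured brightness temperatures; a minimal sketch (the BT values are invented for illustration):

```python
import math

def rmse(sim, meas):
    """Root mean square error between simulated and measured BTs, K."""
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(sim, meas)) / len(sim))

def adjustment(sim, meas):
    """Channel adjustment = mean (simulated - measured) BT, K."""
    return sum(s - m for s, m in zip(sim, meas)) / len(sim)

sim  = [182.1, 210.4, 250.7]   # invented simulated BTs for one channel (K)
meas = [181.5, 211.0, 250.2]   # invented collocated measurements (K)
print(round(rmse(sim, meas), 3), round(adjustment(sim, meas), 3))
```

In the study, the model combination minimizing the RMSE is selected per channel, and the mean difference is then subtracted as a calibration adjustment.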
International Nuclear Information System (INIS)
Thomas, D; O’Connell, D; Lamb, J; Cao, M; Yang, Y; Agazaryan, N; Lee, P; Low, D
2015-01-01
Purpose: To demonstrate real-time dose calculation of free-breathing MRI guided Co-60 treatments, using a motion model and Monte-Carlo dose calculation to accurately account for the interplay between irregular breathing motion and an IMRT delivery. Methods: ViewRay Co-60 dose distributions were optimized on ITVs contoured from free-breathing CT images of lung cancer patients. Each treatment plan was separated into 0.25s segments, accounting for the MLC positions and beam angles at each time point. A voxel-specific motion model derived from multiple fast-helical free-breathing CTs and deformable registration was calculated for each patient. 3D images for every 0.25s of a simulated treatment were generated in real time, here using a bellows signal as a surrogate to accurately account for breathing irregularities. Monte-Carlo dose calculation was performed every 0.25s of the treatment, with the number of histories in each calculation scaled to give an overall 1% statistical uncertainty. Each dose calculation was deformed back to the reference image using the motion model and accumulated. The static and real-time dose calculations were compared. Results: Image generation was performed in real time at 4 frames per second (GPU). Monte-Carlo dose calculation was performed at approximately 1 frame per second (CPU), giving a total calculation time of approximately 30 minutes per treatment. Results show both cold and hot spots in and around the ITV, and increased dose to the contralateral lung as the tumor moves in and out of the beam during treatment. Conclusion: An accurate motion model combined with a fast Monte-Carlo dose calculation allows almost real-time dose calculation of a free-breathing treatment. When combined with sagittal 2D-cine-mode MRI during treatment to update the motion model in real time, this will allow the true delivered dose of a treatment to be calculated, providing a useful tool for adaptive planning and assessing the effectiveness of gated treatments.
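The accumulation step can be sketched independently of the dose engine: compute a dose for each 0.25 s segment on the moving anatomy, map each voxel back to the reference image with the motion model, and sum. Everything below is invented for illustration; in particular the random permutation is only a stand-in for the deformable-registration mapping:

```python
import random

random.seed(3)

N_VOXELS, N_STEPS = 4, 8
accumulated = [0.0] * N_VOXELS    # dose on the reference (planning) image

for step in range(N_STEPS):
    # hypothetical Monte Carlo dose for this 0.25 s segment, on the moving anatomy
    dose = [random.random() for _ in range(N_VOXELS)]
    # motion model: voxel v at this breathing phase maps to reference voxel mapping[v]
    mapping = list(range(N_VOXELS))
    random.shuffle(mapping)       # stand-in for deformable registration
    for v, ref in enumerate(mapping):
        accumulated[ref] += dose[v]   # deform each segment dose back and sum

print([round(d, 2) for d in accumulated])
```

Because the mapping changes per phase, a moving voxel samples different parts of the delivered fluence over time, which is exactly the interplay effect the abstract quantifies.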
Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model
International Nuclear Information System (INIS)
Taylor, G. A.; Hiergesell, R. A.
2013-01-01
The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously, thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow
Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model
Energy Technology Data Exchange (ETDEWEB)
Taylor, G. A.; Hiergesell, R. A.
2013-11-12
The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously, thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow
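Simulating all radionuclides simultaneously matters chiefly for decay chains, where the daughter inventory depends on the full parent history. A minimal two-member Bateman solution (decay constants invented for illustration) shows the coupling that a nuclide-by-nuclide run can miss:

```python
import math

def bateman_pair(n1_0, lam1, lam2, t):
    """Atoms of parent (n1) and daughter (n2) in a two-member decay chain,
    starting from a pure parent inventory n1_0 (requires lam1 != lam2)."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

# a slowly decaying parent feeding a faster-decaying daughter (invented rates, 1/yr)
for t in (0.0, 1.0, 10.0, 100.0):
    n1, n2 = bateman_pair(1.0, 0.01, 0.1, t)
    print(t, round(n1, 4), round(n2, 4))
```

A transport code that carries the whole chain through the same flow field gets this ingrowth for free; treating each nuclide in isolation requires stitching such solutions together by hand.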
Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P
2017-05-22
PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status, resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials, and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear, with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) were good and comparable to those of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age
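The fractional-polynomial idea can be illustrated with a toy log-hazard curve in age; the powers and coefficients below are invented to mimic the described shape (steeply elevated risk at young ages, a minimum in midlife, rising again in older patients) and are not the fitted PREDICT v2 coefficients:

```python
import math

def fp_log_hazard(age, b1, b2):
    """Second-degree fractional polynomial in x = age/10 with repeated
    power -2: f(x) = b1*x^-2 + b2*x^-2*ln(x). Coefficients are invented."""
    x = age / 10.0
    return (b1 + b2 * math.log(x)) / x ** 2

for age in (30, 40, 55, 70, 85):
    print(age, round(fp_log_hazard(age, 35.0, -30.0), 3))
```

A smooth function of age (and, analogously, of tumour size and node count) replaces the discrete categories, which is what removes the 'step' changes in risk estimates mentioned above.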
Static model calculation of pion-nucleon scattering
International Nuclear Information System (INIS)
Itoh, Takashi
1975-01-01
The p-wave pion-nucleon scattering phase shifts are computed with the Chew-Low static model for pion incident energies of 0-300 MeV. The square of the unrenormalized coupling constant is taken to be f² = 0.2, and the cutoff is made at k_max = 6μ. The computed 3,3 phase shift passes through 90 deg at about the right energy. The other computed phase shifts are small, in rough agreement with experiment. (auth.)
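The resonant behaviour of the 3,3 channel can be sketched with the textbook Chew-Low effective-range parametrization; note that this form uses the renormalized coupling f² ≈ 0.08 and a resonance energy ω_r ≈ 2.1μ, as assumptions for illustration, rather than the paper's unrenormalized f² = 0.2 and explicit cutoff calculation:

```python
import math

F2 = 0.08        # renormalized piN coupling (assumed; paper quotes f^2 = 0.2 unrenormalized)
OMEGA_R = 2.1    # resonance energy in units of the pion mass (mu = c = 1, assumed)

def delta33(omega):
    """3,3 phase shift (degrees) from the Chew-Low effective-range form
    (k^3 / omega) * cot(delta) = (3 / (4*F2)) * (1 - omega / OMEGA_R)."""
    k = math.sqrt(omega ** 2 - 1.0)   # pion momentum for total energy omega
    cot = (3.0 / (4.0 * F2)) * (1.0 - omega / OMEGA_R) * omega / k ** 3
    return math.degrees(math.atan2(1.0, cot))

for omega in (1.2, 1.6, 2.1, 2.6):
    print(omega, round(delta33(omega), 1))
```

The phase shift passes through 90 deg exactly at ω = ω_r, which is the behaviour the abstract reports for the full static-model calculation.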
Simplified models for calculating radiation losses in a tokamak plasma
International Nuclear Information System (INIS)
Arutiunov, A.B.; Krasheninnikov, S.I.; Prokhorov, D.Yu.
1990-01-01
To determine the magnitudes and profiles of radiation losses in a tokamak plasma, particularly at high plasma densities when MARFE or detached-plasma formation takes place, it is necessary to know the impurity distribution over the ionization states. The equations describing the time evolution of this distribution are rather cumbersome; moreover, the transport coefficients and the rate constants of processes involving complex ions are at present known only with a high degree of uncertainty. It is therefore considered necessary to develop simplified, semi-analytical models that describe the time evolution of the impurities and allow analysis of the physical processes taking place in a tokamak plasma on the basis of experimental data. (author) 6 refs., 2 figs
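A minimal example of the equations being simplified: the coupled rate equations for impurity charge-state populations, here just three states with invented ionization and recombination rates, stepped with explicit Euler:

```python
def evolve(populations, ionize, recomb, dt, steps):
    """Explicit-Euler evolution of impurity charge-state populations.
    ionize[i]: rate for state i -> i+1; recomb[i]: rate for i+1 -> i (1/s)."""
    n = list(populations)
    for _ in range(steps):
        # net upward flux between each adjacent pair of charge states
        flux = [ionize[i] * n[i] - recomb[i] * n[i + 1] for i in range(len(n) - 1)]
        for i, f in enumerate(flux):
            n[i] -= f * dt
            n[i + 1] += f * dt
    return n

# start fully neutral; rates are invented for illustration
n = evolve([1.0, 0.0, 0.0], ionize=[5.0, 2.0], recomb=[1.0, 4.0], dt=1e-3, steps=5000)
print([round(x, 3) for x in n])
```

Total impurity density is conserved and the populations relax to the balance set by the rate ratios; radiated power then follows by weighting each state with its emissivity, which is where the uncertain rate data enter.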
Gantt, B.; Kelly, J. T.; Bash, J. O.
2015-01-01
Sea spray aerosols (SSAs) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Model evaluations of SSA emissions have mainly focused on the global scale, but regional-scale evaluations are also important due to the localized impact of SSAs on atmospheric chemistry near the coast. In this study, SSA emissions in the Community Multiscale Air Quality (CMAQ) model were updated to enhance the...
Nuclear matter calculations with a pseudoscalar-pseudovector chiral model
Energy Technology Data Exchange (ETDEWEB)
Niembro, R.; Marcos, S.; Bernardos, P. [University of Cantabria, Faculty of Sciences, Department of Modern Physics, 39005 Santander (Spain); Fomenko, V.N. [St Petersburg University for Railway Engineering, Department of Mathematics, 197341 St Petersburg (Russian Federation); Savushkin, L.N. [St Petersburg University for Telecomunications, Department of Physics, 191065 St Petersburg (Russian Federation); Lopez-Quelle, M. [University of Cantabria, Faculty of Sciences, Department of Applied Physics, 39005 Santander, Spain (Spain)
1998-10-01
A mixed pseudoscalar-pseudovector {pi}N coupling relativistic Lagrangian is obtained from a pure pseudoscalar chiral one, by transforming the nucleon field according to a generalized Weinberg transformation, which depends on a mixing parameter. The interaction is generated by the {sigma}, {omega} and {pi} meson exchanges. Within the Hartree-Fock context, pion polarization effects, including the {delta} isobar, are considered in the random phase approximation in nuclear matter. These effects are interpreted, in a non-relativistic framework, as a modification of the range and intensity of a Yukawa-type potential by means of a simple function which takes into account the nucleon-hole and {delta}-hole excitations. Results show stability of relativistic nuclear matter against pion condensation. Compression modulus is diminished by the combined effects of the nucleon and {delta} polarization towards the usually accepted experimental values. The {pi}N interaction strength used in this paper is less than the conventional one to ensure the viability of the model. The fitting parameters of the model are the scalar meson mass m{sub {sigma}} and the {omega}-N coupling constant g{sub {omega}}. (author)
Predictive Modelling Risk Calculators and the Non Dialysis Pathway.
Robins, Jennifer; Katz, Ivor
2013-04-16
This guideline will review the current prediction models and survival/mortality scores available for decision making in patients with advanced kidney disease who are being considered for a non-dialysis treatment pathway. Risk prediction is gaining increasing attention with emerging literature suggesting improved patient outcomes through individualised risk prediction (1). Predictive models help inform the nephrologist and the renal palliative care specialists in their discussions with patients and families about suitability or otherwise of dialysis. Clinical decision making in the care of end stage kidney disease (ESKD) patients on a non-dialysis treatment pathway is currently governed by several observational trials (3). Despite the paucity of evidence based medicine in this field, it is becoming evident that the survival advantages associated with renal replacement therapy in these often elderly patients with multiple co-morbidities and limited functional status may be negated by loss of quality of life (7) (6), further functional decline (5, 8), increased complications and hospitalisations.
McAllister, M.; Gochis, D.; Dugger, A. L.; Karsten, L. R.; McCreight, J. L.; Pan, L.; Rafieeinasab, A.; Read, L. K.; Sampson, K. M.; Yu, W.
2017-12-01
The community WRF-Hydro modeling system is publicly available and provides researchers and operational forecasters a flexible and extensible capability for performing multi-scale, multi-physics options for hydrologic modeling that can be run independently of, or fully interactively with, the WRF atmospheric model. The core WRF-Hydro physics model contains very high-resolution descriptions of terrestrial hydrologic process representations such as land-atmosphere exchanges of energy and moisture, snowpack evolution, infiltration, terrain routing, channel routing, basic reservoir representation and hydrologic data assimilation. Complementing the core physics components of WRF-Hydro are an ecosystem of pre- and post-processing tools that facilitate the preparation of terrain and meteorological input data, an open-source hydrologic model evaluation toolset (Rwrfhydro), hydrologic data assimilation capabilities with DART and advanced model visualization capabilities. The National Center for Atmospheric Research (NCAR), through collaborative support from the National Science Foundation and other funding partners, provides community support for the entire WRF-Hydro system through a variety of mechanisms. This presentation summarizes the enhanced user support capabilities that are being developed for the community WRF-Hydro modeling system. These products and services include a new website, open-source code repositories, documentation and user guides, test cases, online training materials, live, hands-on training sessions, an email list serve, and individual user support via email through a new help desk ticketing system. The WRF-Hydro modeling system and supporting tools which now include re-gridding scripts and model calibration have recently been updated to Version 4 and are merging toward capabilities of the National Water Model.
Pankatz, K.; Kerkweg, A.
2014-12-01
The work presented is part of the joint project "DecReg" ("Regional decadal predictability"), which is in turn part of the project "MiKlip" ("Decadal predictions"), an effort funded by the German Federal Ministry of Education and Research to improve decadal predictions on global and regional scales. In regional climate modeling it is common to update the lateral boundary conditions (LBC) of the regional model every six hours. This is mainly due to the fact that reference data sets like ERA are only available every six hours. Additionally, for offline coupling procedures it would be too costly to store LBC data at higher temporal resolution for climate simulations. However, theoretically, the coupling frequency could be as high as the time step of the driving model. Meanwhile, it is unclear whether a more frequent update of the LBC has a significant effect on the climate in the domain of the regional model (RCM). This study uses the RCM COSMO-CLM/MESSy (Kerkweg and Jöckel, 2012) to couple COSMO-CLM offline to the GCM ECHAM5. One study examines a 30-year time-slice experiment for three update frequencies of the LBC, namely six hours, one hour and six minutes. The evaluation of means, standard deviations and statistics of the climate in the regional domain shows only small deviations, though some are statistically significant, in 2 m temperature, sea level pressure and precipitation. The second scope of the study assesses parameters linked to cyclone activity, which is affected by the LBC update frequency. Differences in track density and strength are found when comparing the simulations. The second study examines the quality of decadal hindcasts of the decade 2001-2010 when the horizontal resolution of the driving model, namely T42, T63, T85, T106, from which the LBC are calculated, is altered. Two sets of simulations are evaluated. For the first set of simulations, the GCM simulations are performed at different resolutions using the same boundary conditions for GHGs and SSTs, thus
The curvature calculation mechanism based on simple cell model.
Yu, Haiyang; Fan, Xingyu; Song, Aiqi
2017-07-20
A conclusion has not yet been reached on how exactly the human visual system detects curvature. This paper demonstrates how orientation-selective simple cells can be used to construct curvature-detecting neural units. Through fixed arrangements, multiple plurality cells were constructed to simulate curvature cells with an output proportional to their curvature. In addition, this paper offers a solution to the problem of the narrow detection range under fixed resolution by selecting among output values computed at multiple resolutions. Curvature cells can be treated as concrete models of an end-stopped mechanism, and they can be used to further understand "curvature-selective" characteristics and to explain basic psychophysical findings and perceptual phenomena in current studies.
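A numerical stand-in for a curvature-detecting unit (not the paper's simple-cell circuitry): estimate curvature from three nearby contour samples, as if reported by spatially offset orientation detectors, via the circle through them. Evaluating the same contour at several sample spacings mirrors the multi-resolution output selection described above:

```python
import math

def curvature(p0, p1, p2):
    """Curvature 1/R of the circle through three points (0 if collinear):
    kappa = 4 * triangle_area / (|p0p1| * |p1p2| * |p2p0|)."""
    a = math.dist(p0, p1)
    b = math.dist(p1, p2)
    c = math.dist(p2, p0)
    cross = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
    return 2.0 * abs(cross) / (a * b * c)   # |cross| = 2 * triangle area

# three samples of a radius-5 circle recover kappa = 1/5 at any sample spacing
pt = lambda t: (5.0 * math.cos(t), 5.0 * math.sin(t))
for spacing in (0.1, 0.3, 1.0):
    print(spacing, round(curvature(pt(0.0), pt(spacing), pt(2 * spacing)), 6))
```

For noisy contours, small spacings resolve sharp curvature but amplify noise while large spacings do the reverse, which is the narrow-detection-range trade-off the multi-resolution selection addresses.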
Research in Model-Based Change Detection and Site Model Updating
National Research Council Canada - National Science Library
Nevatia, R
1998-01-01
.... Some of these techniques also are applicable to automatic site modeling and some of our change detection techniques may apply to detection of larger mobile objects, such as airplanes. We have implemented an interactive modeling system that works in conjunction with our automatic system to minimize the need for tedious interaction.
Influence of FRAPCON-1 evaluation models on fuel behavior calculations for commercial power reactors
International Nuclear Information System (INIS)
Chambers, R.; Laats, E.T.
1981-01-01
A preliminary set of nine evaluation models (EMs) was added to the FRAPCON-1 computer code, which is used to calculate fuel rod behavior in a nuclear reactor during steady-state operation. The intent was to provide an audit code to be used in United States Nuclear Regulatory Commission (NRC) licensing activities when calculations of conservative fuel rod temperatures are required. The EMs place conservatisms on the calculation of rod temperature by modifying the calculation of rod power history, the fuel and cladding behavior models, and the materials properties correlations. Three of the nine EMs provide either input or model specifications, or set the reference temperature for stored energy calculations. The remaining six EMs were intended to add thermal conservatism through model changes. To determine the relative influence of these six EMs on fuel behavior calculations for commercial power reactors, a sensitivity study was conducted. That study is the subject of this paper.
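The structure of such a sensitivity study can be shown with a one-at-a-time comparison on a toy steady-state rod model; the conduction formula is the standard centerline estimate T_c = T_coolant + q'/(4πk), and all numbers and conservatism factors below are invented, not FRAPCON values:

```python
import math

T_COOLANT = 560.0   # K, illustrative coolant temperature

def center_temp(q_lin, k_fuel):
    """Toy steady-state conduction integral: T_c = T_cool + q'/(4*pi*k)."""
    return T_COOLANT + q_lin / (4.0 * math.pi * k_fuel)

base = {"q_lin": 20000.0, "k_fuel": 3.0}   # W/m and W/(m*K), illustrative
t_best_estimate = center_temp(**base)

# one-at-a-time sensitivity: apply each conservatism separately, compare to base
evaluation_models = [("power uplift", "q_lin", 1.07),
                     ("degraded conductivity", "k_fuel", 0.90)]
for name, key, factor in evaluation_models:
    params = dict(base)
    params[key] *= factor
    print(name, round(center_temp(**params) - t_best_estimate, 1), "K")
```

Ranking the resulting temperature increments identifies which EMs dominate the overall conservatism, which is the question the FRAPCON-1 study addresses with full fuel-rod physics.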
Life cycle reliability assessment of new products—A Bayesian model updating approach
International Nuclear Information System (INIS)
Peng, Weiwen; Huang, Hong-Zhong; Li, Yanfeng; Zuo, Ming J.; Xie, Min
2013-01-01
The rapidly increasing pace and continuously evolving reliability requirements of new products have made life cycle reliability assessment of new products an imperative yet difficult task. While much work has been done to separately estimate reliability of new products in specific stages, a gap exists in carrying out life cycle reliability assessment throughout all life cycle stages. We present a Bayesian model updating approach (BMUA) for life cycle reliability assessment of new products. Novel features of this approach are the development of Bayesian information toolkits by separately including “reliability improvement factor” and “information fusion factor”, which allow the integration of subjective information in a specific life cycle stage and the transition of integrated information between adjacent life cycle stages. They lead to the unique characteristics of the BMUA, in which information generated throughout the life cycle stages is integrated coherently. To illustrate the approach, an application to the life cycle reliability assessment of a newly developed Gantry Machining Center is shown.
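A minimal sketch of the stage-to-stage updating idea using a Beta-Binomial model: a fusion-style factor discounts prior information carried across a stage boundary, and the test results of each stage then update the posterior. The factor and test counts are invented for illustration and this is not the paper's BMUA formulation:

```python
def stage_update(alpha, beta, successes, failures, fusion=1.0):
    """Beta-Binomial update; fusion in (0, 1] discounts prior information
    carried over from the previous life cycle stage."""
    return fusion * alpha + successes, fusion * beta + failures

a, b = 1.0, 1.0                               # vague prior at the design stage
a, b = stage_update(a, b, 18, 2)              # design-stage test results
a, b = stage_update(a, b, 45, 3, fusion=0.8)  # discounted transition to field stage
print(round(a / (a + b), 3))                  # posterior mean reliability
```

The discounting keeps early-stage evidence from dominating once field data arrive, which is the role the information fusion factor plays between adjacent life cycle stages.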
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2013
Energy Technology Data Exchange (ETDEWEB)
Nigg, David W. [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2013-09-01
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for effective application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF).
Updated comparison of groundwater flow model results and isotopic data in the Leon Valley, Mexico
Hernandez-Garcia, G. D.
2015-12-01
The study area, Leon Valley, is located northwest of Mexico City in the State of Guanajuato. The valley meets its water demand, estimated at 20.6 cubic meters per second, from groundwater. The constant growth of population and economic activity in the region, mainly in cities and automobile factories, drives a constant growth in water needs. The associated extraction rate has produced an average water-level decline of approximately 1.0 m per year over the past two decades, which suggests that the present management of the groundwater should be reviewed. Because extraction can produce environmental impacts, studying the hydrogeological functioning of this stressed resource is necessary to achieve scientifically based groundwater management in the valley. This research was based on the analysis and integration of existing information and of field data generated by the authors. Drawing on updated information such as the geological structure of the area, the hydraulic parameters, and the deuterium and oxygen-18 isotopic composition, this research produced new results. The information was analyzed by applying a groundwater flow model with particle tracking; the model results agree with the travel times and flow paths derived from the isotopic data.
An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.
Directory of Open Access Journals (Sweden)
P Martin Sander
Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were ovipary as well as no mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.
An Evolutionary Cascade Model for Sauropod Dinosaur Gigantism - Overview, Update and Tests
Sander, P. Martin
2013-01-01
Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades (“Reproduction”, “Feeding”, “Head and neck”, “Avian-style lung”, and “Metabolism”). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait “Very high body mass”. Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were ovipary as well as no mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size. PMID:24205267
An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.
Sander, P Martin
2013-01-01
Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were ovipary as well as no mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.
User Guide for GoldSim Model to Calculate PA/CA Doses and Limits
International Nuclear Information System (INIS)
Smith, F.
2016-01-01
A model to calculate doses for solid waste disposal at the Savannah River Site (SRS) and corresponding disposal limits has been developed using the GoldSim commercial software. The model implements the dose calculations documented in SRNL-STI-2015-00056, Rev. 0, "Dose Calculation Methodology and Data for Solid Waste Performance Assessment (PA) and Composite Analysis (CA) at the Savannah River Site."
User Guide for GoldSim Model to Calculate PA/CA Doses and Limits
Energy Technology Data Exchange (ETDEWEB)
Smith, F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-10-31
A model to calculate doses for solid waste disposal at the Savannah River Site (SRS) and corresponding disposal limits has been developed using the GoldSim commercial software. The model implements the dose calculations documented in SRNL-STI-2015-00056, Rev. 0 “Dose Calculation Methodology and Data for Solid Waste Performance Assessment (PA) and Composite Analysis (CA) at the Savannah River Site”.
A model for calculating expected performance of the Apollo unified S-band (USB) communication system
Schroeder, N. W.
1971-01-01
A model for calculating the expected performance of the Apollo unified S-band (USB) communication system is presented. The general organization of the Apollo USB is described. The mathematical model is reviewed and the computer program for implementation of the calculations is included.
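The kind of arithmetic such a communication-performance model automates can be sketched with a standard free-space link budget: received power from transmit power, antenna gains, and path loss. The S-band frequency, gains, and distance below are round hypothetical values, not the actual Apollo USB parameters.

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def received_power_dbm(pt_dbm, gt_db, gr_db, distance_m, freq_hz):
    """Link budget in dB terms: transmit power + gains - path loss."""
    return pt_dbm + gt_db + gr_db - free_space_path_loss_db(distance_m, freq_hz)

# Hypothetical S-band downlink at roughly Earth-Moon distance
p_rx = received_power_dbm(pt_dbm=43.0, gt_db=27.0, gr_db=53.0,
                          distance_m=3.84e8, freq_hz=2.2875e9)
print(round(p_rx, 1))  # received power in dBm
```

Comparing the received power against a receiver threshold (plus margins for pointing, polarization, and atmospheric losses) is the essence of such an expected-performance calculation.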
Improvements to the nuclear model code GNASH for cross section calculations at higher energies
International Nuclear Information System (INIS)
Young, P.G.; Chadwick, M.B.
1994-01-01
The nuclear model code GNASH, which in the past has been used predominantly for incident particle energies below 20 MeV, has been modified extensively for calculations at higher energies. The model extensions and improvements are described in this paper, and their significance is illustrated by comparing calculations with experimental data for incident energies up to 160 MeV
An updated fracture-flow model for total-system performance assessment of Yucca Mountain
International Nuclear Information System (INIS)
Gauthier, J.H.
1994-01-01
Improvements have been made to the fracture-flow model being used in the total-system performance assessment of a potential high-level radioactive waste repository at Yucca Mountain, Nevada. The "weeps model" now includes (1) weeps of varied sizes, (2) flow-pattern fluctuations caused by climate change, and (3) flow-pattern perturbations caused by repository heat generation. Comparison with the original weeps model indicates that allowing weeps of varied sizes substantially reduces the number of weeps and the number of containers contacted by weeps. However, flow-pattern perturbations caused by either climate change or repository heat generation greatly increase the number of containers contacted by weeps. In preliminary total-system calculations, using a phenomenological container-failure and radionuclide-release model, the weeps model predicts that radionuclide releases from a high-level radioactive waste repository at Yucca Mountain will be below the EPA standard specified in 40 CFR 191, but that the maximum radiation dose to an individual could be significant. Specific data from the site are required to determine the validity of the weep-flow mechanism and to better determine the parameters to which the dose calculation is sensitive.
Cost calculation model concerning small-scale production of chips and split firewood
International Nuclear Information System (INIS)
Ryynaenen, S.; Naett, H.; Valkonen, J.
1995-01-01
The TTS-Institute's Forestry Department has developed a computer-based cost calculation model for the production of wood chips and split firewood. This development work was carried out in conjunction with the nation-wide BIOENERGY research programme. The calculation model eases and speeds up the calculation of unit costs and resource needs in harvesting systems for wood chips and split firewood. The model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit costs of the system as a whole. The undertaking comprised the following parts: clarification and modification of productivity bases for application in the model as mathematical models, clarification of machine and device cost bases, design of the structure and functions of the calculation model, construction and testing of the model's 0-version, model calculations concerning typical chains, review of calculation bases, and charting of development needs focusing on the model. The calculation model was developed to serve research needs, but with further development it could be useful as a tool in forestry and agricultural extension work, in related schools and colleges, and in the hands of firewood producers. (author)
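The core of such a unit-cost calculation can be sketched in a few lines: fixed annual machine costs are spread over annual output and added to output-proportional variable costs. All figures below are invented placeholders, not values from the TTS model.

```python
def unit_cost(fixed_cost_per_year, variable_cost_per_m3, annual_output_m3):
    """Production cost per solid cubic metre of chips or split firewood."""
    return fixed_cost_per_year / annual_output_m3 + variable_cost_per_m3

# Hypothetical harvesting-chain cost bases
cost = unit_cost(fixed_cost_per_year=12000.0,   # e.g. chipper ownership costs
                 variable_cost_per_m3=8.5,      # e.g. fuel, labour, maintenance
                 annual_output_m3=1500.0)
print(f"unit cost: {cost:.2f} per m^3")  # 12000/1500 + 8.5 = 16.50
```

Re-running such a function over a range of productivity and cost assumptions is what lets the model show how changes in one chain component propagate to the unit cost of the whole system.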
Cholewa, Jason; Guimarães-Ferreira, Lucas; da Silva Teixeira, Tamiris; Naimo, Marshall Alan; Zhi, Xia; de Sá, Rafaele Bis Dal Ponte; Lodetti, Alice; Cardozo, Mayara Quadros; Zanchi, Nelo Eidy
2014-09-01
Human muscle hypertrophy brought about by voluntary exercise in laboratory conditions is the most common way to study resistance exercise training, especially because of its reliability, stimulus control and easy application to resistance training exercise sessions at fitness centers. However, because of the complexity of the blood factors and organs involved, invasive data are difficult to obtain in human exercise training studies due to the integration of several organs, including adipose tissue, liver, brain and skeletal muscle. In contrast, studying skeletal muscle remodeling in animal models is easier to perform, as the organs can be easily obtained after euthanasia; however, not all models of resistance training in animals display a robust capacity to hypertrophy the desired muscle. Moreover, some models of resistance training rely on voluntary effort, which complicates the interpretation of the results, since voluntary capacity is theoretically impossible to measure in rodents. With this information in mind, we review the modalities used to simulate resistance training in animals in order to present to investigators the benefits and risks of different animal models capable of provoking skeletal muscle hypertrophy. Our second objective is to help investigators analyze and select the experimental resistance training model that best fits the research question and desired endpoints. © 2013 Wiley Periodicals, Inc.
Development of the model for the stress calculation of fuel assembly under accident load
International Nuclear Information System (INIS)
Kim, Il Kon
1993-01-01
The finite element model for the stress calculation in guide thimbles of a fuel assembly (FA) under seismic and loss-of-coolant-accident (LOCA) load is developed. For the stress calculation of FA under accident load, at first the program MAIN is developed to select the worst bending mode shaped FA from core model. And then the model for the stress calculation of FA is developed by means of the finite element code. The calculated results of program MAIN are used as the kinematic constraints of the finite element model of a FA. Compared the calculated results of the stiffness of the finite element model of FA with the test results they have good agreements. (Author)
Parabolic Trough Collector Cost Update for the System Advisor Model (SAM)
Energy Technology Data Exchange (ETDEWEB)
Kurup, Parthiv [National Renewable Energy Lab. (NREL), Golden, CO (United States); Turchi, Craig S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2015-11-01
This report updates the baseline cost for parabolic trough solar fields in the United States within NREL's System Advisor Model (SAM). SAM, available at no cost at https://sam.nrel.gov/, is a performance and financial model designed to facilitate decision making for people involved in the renewable energy industry. SAM is the primary tool used by NREL and the U.S. Department of Energy (DOE) for estimating the performance and cost of concentrating solar power (CSP) technologies and projects. The study performed a bottom-up build and cost estimate for two state-of-the-art parabolic trough designs: the SkyTrough and the Ultimate Trough. The SkyTrough analysis estimated the potential installed cost for a solar field of 1500 SCAs as $170/m^2 ± $6/m^2. The investigation found that SkyTrough installed costs were sensitive to factors such as raw aluminum alloy cost and production volume. For example, in the case of the SkyTrough, the installed cost would rise to nearly $210/m^2 if the aluminum alloy cost was $1.70/lb instead of $1.03/lb. Accordingly, one must be aware of fluctuations in the relevant commodities markets to track system cost over time. The estimated installed cost for the Ultimate Trough was only slightly higher at $178/m^2, which includes an assembly facility of $11.6 million amortized over the required production volume. Considering the size and overall cost of a 700 SCA Ultimate Trough solar field, two parallel production lines in a fully covered assembly facility, each with the specific torque box, module and mirror jigs, would be justified for a full CSP plant.
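The quoted sensitivity to aluminum alloy price can be reproduced by linear interpolation between the two stated points ($170/m^2 at $1.03/lb and roughly $210/m^2 at $1.70/lb); the linearity itself is an assumption for illustration, not part of the report's bottom-up estimate.

```python
def skytrough_installed_cost(al_cost_per_lb):
    """Installed cost ($/m^2) vs. aluminum alloy price ($/lb), assuming a
    linear relation through the two points quoted in the report."""
    x0, y0 = 1.03, 170.0   # report's baseline aluminum price and cost
    x1, y1 = 1.70, 210.0   # report's high-aluminum-price case
    slope = (y1 - y0) / (x1 - x0)
    return y0 + slope * (al_cost_per_lb - x0)

print(round(skytrough_installed_cost(1.03)))  # 170
print(round(skytrough_installed_cost(1.70)))  # 210
```

The slope (about $60/m^2 per $/lb) shows why tracking the aluminum commodity market matters for this design.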
A. Goverde (Anne); M.C.W. Spaander (Manon); D. Nieboer (Daan); A.M.W. van den Ouweland (Ans); W.N.M. Dinjens (Winand); H.J. Dubbink (Erik Jan); C. Tops (Cmj); S.W. Ten Broeke (Sanne W.); M.J. Bruno (Marco); R.M.W. Hofstra (Robert); E.W. Steyerberg (Ewout); A. Wagner (Anja)
2017-01-01
textabstractUntil recently, no prediction models for Lynch syndrome (LS) had been validated for PMS2 mutation carriers. We aimed to evaluate MMRpredict and PREMM5 in a clinical cohort and for PMS2 mutation carriers specifically. In a retrospective, clinic-based cohort we calculated predictions for
An updated fracture-flow model for total-system performance assessment of Yucca Mountain
International Nuclear Information System (INIS)
Gauthier, J.H.
1994-01-01
Improvements have been made to the fracture-flow model being used in the total-system performance assessment of a potential high-level radioactive waste repository at Yucca Mountain, Nevada. The open-quotes weeps modelclose quotes now includes (1) weeps of varied sizes, (2) flow-pattern fluctuations caused by climate change, and (3) flow-pattern perturbations caused by repository heat generation. Comparison with the original weeps model indicates that allowing weeps of varied sizes substantially reduces the number of weeps and the number of containers contacted by weeps. However, flow-pattern perturbations caused by either climate change or repository heat generation greatly increases the number of containers contacted by weeps. In preliminary total-system calculations, using a phenomenological container-failure and radionuclide-release model, the weeps model predicts that radionuclide releases from a high-level radioactive waste repository at Yucca Mountain will be below the EPA standard specified in 40 CFR 191, but that the maximum radiation dose to an individual could be significant. Specific data from the site are required to determine the validity of the weep-flow mechanism and to better determine the parameters to which the dose calculation is sensitive
Lu, Xiaoman; Zheng, Guang; Miller, Colton; Alvarado, Ernesto
2017-09-08
Monitoring and understanding the spatio-temporal variations of forest aboveground biomass (AGB) is a key basis to quantitatively assess the carbon sequestration capacity of a forest ecosystem. To map and update forest AGB in the Greater Khingan Mountains (GKM) of China, this work proposes a physical-based approach. Based on the baseline forest AGB from Landsat Enhanced Thematic Mapper Plus (ETM+) images in 2008, we dynamically updated the annual forest AGB from 2009 to 2012 by adding the annual AGB increment (ABI) obtained from the simulated daily and annual net primary productivity (NPP) using the Boreal Ecosystem Productivity Simulator (BEPS) model. The 2012 result was validated by both field- and aerial laser scanning (ALS)-based AGBs. The predicted forest AGB for 2012 estimated from the process-based model can explain 31% ( n = 35, p forest AGBs, respectively. However, due to the saturation of optical remote sensing-based spectral signals and contribution of understory vegetation, the BEPS-based AGB tended to underestimate/overestimate the AGB for dense/sparse forests. Generally, our results showed that the remotely sensed forest AGB estimates could serve as the initial carbon pool to parameterize the process-based model for NPP simulation, and the combination of the baseline forest AGB and BEPS model could effectively update the spatiotemporal distribution of forest AGB.
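The updating scheme described above reduces to a simple recurrence: each year's AGB is the previous year's AGB plus an annual biomass increment (ABI) derived from simulated NPP. The sketch below assumes a placeholder NPP-to-increment conversion factor; it is not the BEPS parameterization.

```python
def update_agb(agb_baseline, annual_npp, npp_to_abi=0.5):
    """Roll AGB forward one year at a time: AGB_t = AGB_{t-1} + ABI_t,
    with ABI_t taken as a fixed (assumed) fraction of simulated NPP."""
    agb = [agb_baseline]
    for npp in annual_npp:
        agb.append(agb[-1] + npp_to_abi * npp)
    return agb

# Hypothetical values: 2008 baseline AGB and simulated NPP for 2009-2012
series = update_agb(agb_baseline=80.0, annual_npp=[4.0, 4.2, 3.8, 4.1])
print(series[-1])  # 80 + 0.5 * (4.0 + 4.2 + 3.8 + 4.1) = 88.05
```

In the paper's workflow the baseline comes from the Landsat ETM+ retrieval and the increments from BEPS-simulated NPP, so the remote-sensing estimate effectively initializes the carbon pool of the process model.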
Updates on Modeling the Water Cycle with the NASA Ames Mars Global Climate Model
Kahre, M. A.; Haberle, R. M.; Hollingsworth, J. L.; Montmessin, F.; Brecht, A. S.; Urata, R.; Klassen, D. R.; Wolff, M. J.
2017-01-01
Global Circulation Models (GCMs) have made steady progress in simulating the current Mars water cycle. It is now widely recognized that clouds are a critical component that can significantly affect the nature of the simulated water cycle. Two processes in particular are key to implementing clouds in a GCM: the microphysical processes of formation and dissipation, and their radiative effects on heating/cooling rates. Together, these processes alter the thermal structure, change the dynamics, and regulate inter-hemispheric transport. We have made considerable progress representing these processes in the NASA Ames GCM, particularly in the presence of radiatively active water ice clouds. We present the current state of our group's water cycle modeling efforts, show results from selected simulations, highlight some of the issues, and discuss avenues for further investigation.
Are Forecast Updates Progressive?
C-L. Chang (Chia-Lin); Ph.H.B.F. Franses (Philip Hans); M.J. McAleer (Michael)
2010-01-01
textabstractMacro-economic forecasts typically involve both a model component, which is replicable, as well as intuition, which is non-replicable. Intuition is expert knowledge possessed by a forecaster. If forecast updates are progressive, forecast updates should become more accurate, on average,
Directory of Open Access Journals (Sweden)
Darya Sergeevna Simonenkova
2013-09-01
The subject of the research is the analysis of various models of information systems constructed with the use of cloud computing technologies. The analysis of these models is required for constructing a new reference model, which will be used to develop a security threat model.
Butler, Doug; Bauman, David; Johnson-Throop, Kathy
2011-01-01
The Integrated Medical Model (IMM) Project has been developing a probabilistic risk assessment tool, the IMM, to help evaluate in-flight crew health needs and impacts to the mission due to medical events. This package is a follow-up to a data package provided in June 2009. The IMM currently represents 83 medical conditions and associated ISS resources required to mitigate medical events. IMM end state forecasts relevant to the ISS PRA model include evacuation (EVAC) and loss of crew life (LOCL). The current version of the IMM provides the basis for the operational version of IMM expected in the January 2011 timeframe. The objectives of this data package are: 1. To provide a preliminary understanding of medical risk data used to update the ISS PRA Model. The IMM has had limited validation and an initial characterization of maturity has been completed using NASA STD 7009 Standard for Models and Simulation. The IMM has been internally validated by IMM personnel but has not been validated by an independent body external to the IMM Project. 2. To support a continued dialogue between the ISS PRA and IMM teams. To ensure accurate data interpretation, and that IMM output format and content meets the needs of the ISS Risk Management Office and ISS PRA Model, periodic discussions are anticipated between the risk teams. 3. To help assess the differences between the current ISS PRA and IMM medical risk forecasts of EVAC and LOCL. Follow-on activities are anticipated based on the differences between the current ISS PRA medical risk data and the latest medical risk data produced by IMM.
Implementation of the neutronics model of HEXTRAN/HEXBU-3D into APROS for WWER calculations
International Nuclear Information System (INIS)
Rintala, J.
2008-01-01
A new three-dimensional nodal model for neutronics calculation is currently being implemented into APROS, the Advanced PROcess Simulation environment, to meet increasing accuracy requirements. The new model is based on the advanced nodal code HEXTRAN and its static version HEXBU-3D, developed by VTT, Technical Research Centre of Finland. The new APROS is currently under a testing programme; a systematic validation will be performed later. In the first phase, the goal is to obtain a fully validated model for VVER-440 calculations, so all current test calculations are performed using the Loviisa NPP's VVER-440 model of APROS. In the future, the model is planned to be applied to calculations of VVER-1000 type reactors as well as to rectangular fuel geometry. The paper first outlines the general aspects of the method and then the current status of the implementation. Because the neutronics model is identical to those of HEXTRAN and HEXBU-3D, the test calculation results are compared against the results of those codes. Results of two static test calculations are shown in the paper. The model already works well in static analyses; only minor problems with the control assemblies of the VVER-440 type reactor remain, but their causes are known and will be corrected in the near future. The dynamic characteristics of the model have so far been tested only empirically. (author)
International Nuclear Information System (INIS)
2004-01-01
The general objective of the CRP is to validate, verify and improve methodologies and computer codes used for the calculation of reactivity coefficients in fast reactors aiming at enhancing the utilization of plutonium and minor actinides. The objectives of the fifth RCM were: to review the progress achieved since the 4th RCM; to review and finalize the draft synthesis report on BN-600 MOX Fueled Core Benchmark Analysis (Phase 4); to compare the results of Phase 5 (BFS Benchmark Analysis); to agree on the work scope of Phase 6 (BN-Full MOX Minor Actinide Core Benchmark); to discuss the preparation of the final report. In this context, review and related discussions were made on the following items: summary review of Actions and results since the 4th RCM; finalization of the draft synthesis report on BN-600 full MOX-fueled core benchmark analysis (Phase 4); presentation of individual results for Phase 5 by Member States; preliminary inter-comparison analysis of the results for Phase 5; definition of the benchmark model and work scope to be performed for Phase 6; details of the work scope and future CRP timetable for preparing a final report
Energy Technology Data Exchange (ETDEWEB)
WHEELER, TIMOTHY A.; WYSS, GREGORY D.; HARPER, FREDERICK T.
2000-11-01
Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.
Directory of Open Access Journals (Sweden)
Michal Fusek
2016-11-01
Precipitation records from six stations of the Czech Hydrometeorological Institute were subjected to statistical analysis with the objectives of updating the intensity-duration-frequency (IDF) curves by applying extreme value distributions, and of comparing the updated curves against those produced by an empirical procedure in 1958. Another objective was to investigate differences between the two sets of curves that could be explained by factors such as different measuring instruments, measuring station altitudes, and data analysis methods. It has been shown that the differences between the two sets of IDF curves are significantly influenced by the chosen method of data analysis.
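A minimal sketch of the extreme-value step behind IDF curve updating: fit a Gumbel (EV-I) distribution to annual maximum rainfall intensities by the method of moments and read off a return level for a chosen return period. The data below are invented, and the paper's actual choice of distribution and fitting method may differ.

```python
import math

def gumbel_return_level(annual_maxima, return_period_years):
    """T-year return level from a method-of-moments Gumbel fit."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    scale = math.sqrt(6 * var) / math.pi        # Gumbel scale parameter
    loc = mean - 0.5772 * scale                 # Euler-Mascheroni constant
    p = 1 - 1 / return_period_years             # non-exceedance probability
    return loc - scale * math.log(-math.log(p))

# Hypothetical annual maximum intensities (mm/h) for one station and duration
maxima = [22.1, 30.4, 18.7, 25.3, 41.0, 27.8, 33.5, 20.9]
print(round(gumbel_return_level(maxima, 100), 1))  # 100-year intensity
```

Repeating this fit for each rainfall duration and plotting the return levels against duration yields one IDF curve per return period.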
Ganschow, Pamela S; Jacobs, Elizabeth A; Mackinnon, Jennifer; Charney, Pamela
2009-06-01
average 50-year-old woman, is provided in the guidelines. In addition, available risk prediction models, such as the NIH Web site calculator (http://www.cancer.gov/bcrisktool/), can be used to estimate quantitative breast cancer risk. This model was updated in 2008 with race-specific data for calculating risk in African-American women.18 The harms and benefits of mammography should be discussed and incorporated, along with a woman's preferences and breast cancer risk profile, into the decision on when to begin screening. If a woman decides to forgo mammography, the decision should be readdressed every 1 to 2 years. STD screening guidelines19 (USPSTF and CDC): Routine screening for this infection is now recommended for ALL sexually active women age 24 and under, based on the recent high prevalence estimates for chlamydia. It is not recommended for women (pregnant or nonpregnant) age 25 and older, unless they are at increased risk for infection. STD treatment guidelines20 (CDC): Fluoroquinolones are NO longer recommended for treatment of N. gonorrhoeae, due to increasing resistance (as high as 15% of isolates in 2006). For uncomplicated infections, treatment of gonorrhea should be initiated with ceftriaxone 125 mg IM or cefixime 400 mg PO, with co-treatment for chlamydia infection (unless ruled out with testing). Recent estimates demonstrate that almost 50% of persons with gonorrhea have a concomitant chlamydia infection.21 STD = sexually transmitted disease, NIH = National Institutes of Health, ACP = American College of Physicians, USPSTF = United States Preventive Services Task Force, CDC = Centers for Disease Control.
Calculation of DC Arc Plasma Torch Voltage-Current Characteristics Based on the Steenbeck Model
International Nuclear Information System (INIS)
Gnedenko, V.G.; Ivanov, A.A.; Pereslavtsev, A.V.; Tresviatsky, S.S.
2006-01-01
The work is devoted to the problem of determining plasma torch parameters and power source parameters (the working voltage and current of the plasma torch) at the pre-design stage. A sequence for calculating the voltage-current characteristic of a DC arc plasma torch is proposed. It is shown that the simple Steenbeck model of an arc discharge in a cylindrical channel makes it possible to carry out this calculation. The results of the calculation are confirmed by experiments.
International Nuclear Information System (INIS)
Gasco, C.; Anton, M. P.; Ampudia, J.
2003-01-01
The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a database. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology group of CIEMAT (MARG) will be involved in new European projects, thus new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet and c) the organization and structure of the database. (Author) 4 refs
Optimization of the neutron calculation model for the RA-6 reactor
International Nuclear Information System (INIS)
Coscia, G.A.
1981-01-01
A model for the neutronic calculation of the RA-6 reactor, which includes the codes ANISN and EQUIPOISE, is analyzed. Starting with a brief description of the reactor, the core and its parts, the general scheme of calculation applied is presented. The fuel elements used were those utilized in the RA-3 reactor; these are of the MTR type with 90% enriched uranium. With the approximations used, an analysis of this calculation model was made, trying to optimize it by reducing, if possible, the calculation time without losing accuracy. In order to improve the calculation model, a cross-section data library specific to the fuel enrichment considered (90%) is recommended, as well as the incorporation of a code more advanced than EQUIPOISE, namely DIXYBAR. (M.E.L.) [es
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhen [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Xia, Changliang [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Yan, Yan, E-mail: yanyan@tju.edu.cn [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Geng, Qiang [Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Shi, Tingna [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)
2017-08-01
Highlights: • A hybrid analytical model is developed for field calculation of multilayer IPM machines. • The rotor magnetic field is calculated by the magnetic equivalent circuit method. • The field in the stator and air-gap is calculated by the subdomain technique. • The magnetic scalar potential on the rotor surface is modeled as a trapezoidal distribution. - Abstract: Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff’s law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell’s equations. To solve the whole field distribution of multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it offers faster modeling, lower computational resource usage and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.
Energy Technology Data Exchange (ETDEWEB)
Druce, C.H.; Barrett, B.R. (Arizona Univ., Tucson (USA). Dept. of Physics); Pittel, S. (Delaware Univ., Newark (USA). Bartol Research Foundation); Duval, P.D. (BEERS Associates, Reston, VA (USA))
1985-07-11
The parameters of the Majorana interaction of the neutron-proton interacting boson model are calculated for the Hg isotopes. The calculations utilize the Otsuka-Arima-Iachello mapping procedure and also lead to predictions for the other boson parameters. The resulting spectra are compared with experimental spectra and those obtained from phenomenological fits.
Druce, C. H.; Pittel, S.; Barrett, B. R.; Duval, P. D.
1985-07-01
The parameters of the Majorana interaction of the neutron-proton interacting boson model are calculated for the Hg isotopes. The calculations utilize the Otsuka-Arima-Iachello mapping procedure and also lead to predictions for the other boson parameters. The resulting spectra are compared with experimental spectra and those obtained from phenomenological fits.
Cooney, Gregory; Jamieson, Matthew; Marriott, Joe; Bergerson, Joule; Brandt, Adam; Skone, Timothy J
2017-01-17
The National Energy Technology Laboratory produced a well-to-wheels (WTW) life cycle greenhouse gas analysis of petroleum-based fuels consumed in the U.S. in 2005, known as the NETL 2005 Petroleum Baseline. This study uses a set of engineering-based, open-source models combined with publicly available data to calculate baseline results for 2014. The increase between the 2005 baseline and the 2014 results presented here (e.g., 92.4 vs 96.2 g CO2e/MJ gasoline, +4.1%) is due to changes both in modeling platform and in the U.S. petroleum sector. An updated result for 2005 was calculated to minimize the effect of the change in modeling platform, and emissions for gasoline in 2014 were about 2% lower than in 2005 (98.1 vs 96.2 g CO2e/MJ gasoline). The same methods were utilized to forecast emissions from fuels out to 2040, indicating maximum changes from the 2014 gasoline result between +2.1% and -1.4%. The changing baseline values lead to potential compliance challenges with frameworks such as the Energy Independence and Security Act (EISA) Section 526, which states that Federal agencies should not purchase alternative fuels unless their life cycle GHG emissions are less than those of conventionally produced, petroleum-derived fuels.
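The percentage changes quoted in this record follow directly from the reported emission intensities; a minimal sketch using only the values stated in the abstract:

```python
def pct_change(old, new):
    """Relative change in percent between two emission intensities."""
    return (new - old) / old * 100.0

# Values quoted in the abstract (g CO2e/MJ gasoline)
baseline_2005 = 92.4   # NETL 2005 Petroleum Baseline
result_2014 = 96.2     # 2014 result on the new modeling platform
recalc_2005 = 98.1     # 2005 recalculated on the new platform

print(round(pct_change(baseline_2005, result_2014), 1))  # 4.1
print(round(pct_change(recalc_2005, result_2014), 1))    # -1.9, i.e. "about 2% lower"
```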
The effect of magnetic field models on cosmic ray cutoff calculations
International Nuclear Information System (INIS)
Pfitzer, K.A.
1979-01-01
The inaccuracies in the 1974 Olson-Pfitzer model appeared to be the probable cause of discrepancies between the observed and calculated cosmic ray cutoff values. An improved version of the Olson-Pfitzer model is now available which includes the effects of the tilt of the earth's dipole axis and which has removed most of the problems encountered in the earlier model. The paper demonstrates that when this new, accurate magnetic field model is used, the calculated and observed cutoff values agree within experimental error, without the need to invoke anomalous diffusion mechanisms. This tilt-dependent model also permits a study of cutoffs versus the tilt of the dipole axis
Díaz, Verónica; Poblete, Alvaro
2017-01-01
This paper describes part of a research and development project carried out in public elementary schools. Its objective was to update the mathematical and didactic knowledge of teachers in two consecutive levels in urban and rural public schools of Region de Los Lagos and Region de Los Rios of southern Chile. To that effect, and by means of an…
Developing a Model of Tuition Fee Calculation for Universities of Medical Sciences
Directory of Open Access Journals (Sweden)
Seyed Amir Mohsen Ziaee
2018-01-01
Full Text Available Background: The aim of our study was to introduce and evaluate a practicable model for tuition fee calculation for each medical field in universities of medical sciences in Iran. Methods: Fifty experts in 11 panels were interviewed to identify variables that affect tuition fee calculation. This led to key points including total budgets, expenses of the universities, different fields’ attractiveness, universities’ attractiveness, and education quality. Tuition fees were calculated for different levels of education, such as post-diploma, Bachelor, Master, and Doctor of Philosophy (Ph.D. degrees, Medical specialty, and Fellowship. After tuition fee calculation, the model was tested during 2013-2015. Then a questionnaire including 20 questions was prepared, and all universities’ financial and educational managers were asked to respond to the questions regarding the model’s reliability and effectiveness. Results: According to the results, fields’ attractiveness, universities’ attractiveness, zone distinction and education quality were selected as effective variables for tuition fee calculation. In this model, tuition fees per student were calculated for the year 2013, and, therefore, the inflation rate of the same year was used. Testing of the model showed a satisfaction rate of 92%. This model is used by medical science universities in Iran. Conclusion: Education quality, zone coefficient, fields’ attractiveness, universities’ attractiveness, inflation rate, and portion of each level of education were the most important variables affecting tuition fee calculation. Keywords: TUITION FEES, FIELD’S ATTRACTIVENESS, UNIVERSITIES’ ATTRACTIVENESS, ZONE DISTINCTION, EDUCATION QUALITY
Strip yielding model for calculation of COD in spheres with short cracks
International Nuclear Information System (INIS)
Miller, A.G.
1981-08-01
The crack opening displacement at the centre of a crack in a sphere with internal pressure has been calculated, using a strip yielding model. The results have been displayed for a range of geometrical parameters and loads. (author)
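The record does not reproduce the sphere solution, but strip-yielding (Dugdale) calculations of this kind start from the classic flat-plate result for a through crack of length 2a under remote stress: delta = (8*sigma_Y*a)/(pi*E) * ln sec(pi*sigma/(2*sigma_Y)). A minimal sketch; the material and load numbers are illustrative, not taken from the report:

```python
import math

def dugdale_cod(sigma, sigma_y, a, E):
    """Dugdale strip-yield crack-tip opening displacement for a through
    crack of half-length a in an infinite plate under remote stress sigma."""
    return (8.0 * sigma_y * a) / (math.pi * E) * \
        math.log(1.0 / math.cos(math.pi * sigma / (2.0 * sigma_y)))

# Illustrative: sigma = 150 MPa, yield 300 MPa, a = 10 mm, E = 200 GPa
delta = dugdale_cod(150e6, 300e6, 0.010, 200e9)
print(f"{delta * 1e6:.1f} um")  # opening displacement in micrometres
```

For the pressurized sphere of the report, geometry-dependent bulging factors modify this flat-plate expression.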
The contribution of Skyrme Hartree-Fock calculations to the understanding of the shell model
International Nuclear Information System (INIS)
Zamick, L.
1984-01-01
The authors present a detailed comparison of Skyrme Hartree-Fock and the shell model. The H-F calculations are sensitive to the parameters that are chosen. The H-F results justify the use of effective charges in restricted model space calculations by showing that the core contribution can be large. Further, the H-F results roughly justify the use of a constant E2 effective charge, but seem to yield nucleus dependent E4 effective charges. The H-F can yield results for E6 and higher multipoles, which would be zero in s-d model space calculations. On the other side of the coin in H-F the authors can easily consider only the lowest rotational band, whereas in the shell model one can calculate the energies and properties of many more states. In the comparison some apparent problems remain, in particular E4 transitions in the upper half of the s-d shell
Modeling for Dose Rate Calculation of the External Exposure to Gamma Emitters in Soil
International Nuclear Information System (INIS)
Allam, K. A.; El-Mongy, S. A.; El-Tahawy, M. S.; Mohsen, M. A.
2004-01-01
Based on the model proposed and developed in the Ph.D. thesis of the first author of this work, the dose rate conversion factors (absorbed dose rate in air per specific activity of soil, in nGy.h-1 per Bq.kg-1) are calculated 1 m above the ground for photon emitters of natural radionuclides uniformly distributed in the soil. This new and simple dose rate calculation software was used to calculate the dose rate in air 1 m above the ground, and the results were compared with those obtained by five different groups. Although the developed model is extremely simple, the results of calculations based on it show excellent agreement with those obtained by the above-mentioned models, especially the one adopted by UNSCEAR. (authors)
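Once the conversion factors are in hand, a model of this kind reduces to a sum of activity-weighted terms. A minimal sketch, using dose-rate conversion factors as commonly quoted in UNSCEAR-style compilations and soil activities that are purely illustrative:

```python
# Dose-rate conversion factors, nGy/h per Bq/kg, 1 m above ground
# (values as commonly quoted in UNSCEAR-style compilations; illustrative)
DCF = {"U-238 series": 0.462, "Th-232 series": 0.604, "K-40": 0.0417}

def air_dose_rate(activity):
    """Absorbed dose rate in air 1 m above ground (nGy/h) for given
    soil specific activities (Bq/kg) per radionuclide series."""
    return sum(DCF[k] * activity[k] for k in activity)

# Roughly world-average soil activities, Bq/kg (illustrative)
soil = {"U-238 series": 35.0, "Th-232 series": 30.0, "K-40": 400.0}
print(round(air_dose_rate(soil), 1))  # total outdoor dose rate, nGy/h
```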
Chiral Lagrangian calculation of nucleon branching ratios in the supersymmetric SU(5) model
International Nuclear Information System (INIS)
Chadha, S.; Daniel, M.
1983-12-01
The branching ratios are calculated for the two body nucleon decay modes involving pseudoscalars in the minimal SU(5) supersymmetric model with three generations using the techniques of chiral dynamics. (author)
The transition equation of the state intensities for exciton model and the calculation program
International Nuclear Information System (INIS)
Yu Xian; Zheng Jiwen; Liu Guoxing; Chen Keliang
1995-01-01
An equation set of the exciton model is given and a calculation program is developed. The process of approaching the equilibrium state has been investigated with the program for the 12C + 64Ni reaction at an energy of 72 MeV
Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation
Energy Technology Data Exchange (ETDEWEB)
Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp
2017-02-01
The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.
NUCORE - A system for nuclear structure calculations with cluster-core models
International Nuclear Information System (INIS)
Heras, C.A.; Abecasis, S.M.
1982-01-01
Calculation of nuclear energy levels and their electromagnetic properties, modelling the nucleus as a cluster of a few particles and/or holes interacting with a core which in turn is modelled as a quadrupole vibrator (cluster-phonon model). The members of the cluster interact via quadrupole-quadrupole and pairing forces. (orig.)
Thermodynamic modeling of the Sc-Zn system coupled with first-principles calculation
Directory of Open Access Journals (Sweden)
Tang C.
2012-01-01
Full Text Available The Sc-Zn system has been critically reviewed and assessed by means of the CALPHAD (CALculation of PHAse Diagrams approach. By means of first-principles calculation, the enthalpies of formation at 0 K for ScZn, ScZn2, Sc17Zn58, Sc3Zn17 and ScZn12 have been computed with the aim of assisting the thermodynamic modeling. A set of self-consistent thermodynamic parameters for the Sc-Zn system is then obtained. The calculated phase diagram and thermodynamic properties agree well with the experimental data and first-principles calculations, respectively.
On thermal vibration effects in diffusion model calculations of blocking dips
International Nuclear Information System (INIS)
Fuschini, E.; Ugozzoni, A.
1983-01-01
In the framework of the diffusion model, a method for calculating blocking dips is suggested that takes into account thermal vibrations of the crystal lattice. Results of calculations of the diffusion factor and the transverse energy distribution, taking into account scattering of the channeled particles by thermal vibrations of the lattice nuclei, are presented. Calculations are performed for α-particles with an energy of 2.12 MeV at 300 K scattered by an Al crystal. It is shown that calculations performed according to the above method prove the necessity of taking into account effects of multiple scattering under blocking conditions
Heterogeneous neutron-leakage model for PWR pin-by-pin calculation
International Nuclear Information System (INIS)
Li, Yunzhao; Zhang, Bin; Wu, Hongchun; Shen, Wei
2017-01-01
Highlights: •The derivation of the formula of the leakage model is introduced. This paper evaluates homogeneous and heterogeneous leakage models used in PWR pin-by-pin calculation. •The implementations of homogeneous and heterogeneous leakage models used in pin-cell homogenization of the lattice calculation are studied. A consistent method of cooperation between the heterogeneous leakage model and the pin-cell homogenization theory is proposed. •Considering the computational cost, a new buckling search scheme is proposed to reach convergence faster. The computational cost of the newly proposed neutron-balance scheme is much less than that of the power-method scheme and the linear-interpolation scheme. -- Abstract: When an assembly calculation is performed with the reflective boundary condition, a leakage model is usually required in the lattice code. Previous studies show that the homogeneous leakage model works effectively for assembly homogenization. However, it becomes different and unsettled for pin-cell homogenization. Thus, this paper evaluates homogeneous and heterogeneous leakage models used in pin-by-pin calculation. The implementations of homogeneous and heterogeneous leakage models used in pin-cell homogenization of the lattice calculation are studied. A consistent method of cooperation between the heterogeneous leakage model and the pin-cell homogenization theory is proposed. Considering the computational cost, a new buckling search scheme is proposed to reach convergence faster. For practical reactor-core applications, the diffusion coefficients determined by the transport cross-section or by the leakage model are compared with each other to determine which one is more accurate for Pressurized Water Reactor pin-by-pin calculation. Numerical results have demonstrated that the heterogeneous leakage model together with the diffusion coefficient determined by the heterogeneous leakage model gives higher accuracy. The new buckling search
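The general idea of a buckling search can be illustrated with the one-group relation k_eff = k_inf / (1 + M²B²): iterate on B² until k_eff = 1. The bisection below is a generic sketch of that idea only, not the neutron-balance scheme proposed in the paper, and the k_inf and migration-area values are illustrative:

```python
def k_eff(b2, k_inf, m2):
    """One-group relation between geometric buckling B^2 (cm^-2),
    infinite multiplication factor and migration area M^2 (cm^2)."""
    return k_inf / (1.0 + m2 * b2)

def critical_buckling(k_inf, m2, tol=1e-10):
    """Bisection on B^2 so that k_eff = 1 (assumes k_inf > 1)."""
    lo, hi = 0.0, 2.0 * (k_inf - 1.0) / m2  # bracket the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if k_eff(mid, k_inf, m2) > 1.0:  # still supercritical: raise B^2
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

b2 = critical_buckling(k_inf=1.25, m2=60.0)
print(b2)  # analytic answer is (k_inf - 1) / M^2 = 0.25 / 60
```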
DEFF Research Database (Denmark)
Yuan, Hao; You, Zhenjiang; Shapiro, Alexander
2013-01-01
Colloidal-suspension flow in porous media is modelled simultaneously by the large-scale population balance equations and by the microscale network model. The phenomenological parameter of the correlation length in the population balance model is determined from the network modelling. It is found that the correlation length in the population balance model depends on the particle size. This dependency, calculated by the two-dimensional network, has the same tendency as that obtained from the laboratory tests in engineered porous media.
International Nuclear Information System (INIS)
Abbate, P.
1990-01-01
The CONVEC program, developed for the thermohydraulic calculation of MTR-type reactors under a natural convection regime, is presented. The program is based on a stationary, one-dimensional finite-difference model that allows calculation of the temperatures of the coolant, cladding and fuel, as well as the flow, for a power level specified by the user. This model has been satisfactorily validated for water (liquid phase) and air cooling systems. (Author) [es
CASIM calculations and angular dependent parameter β in the Moyer model
International Nuclear Information System (INIS)
Yamaguchi, Chiri.
1988-04-01
The dose equivalent on the shield surface has been calculated using both the Moyer model and the Monte Carlo code CASIM. Calculations with various values of the angular distribution parameter β in the Moyer model show that β = 7.0 ± 0.5 best matches the CASIM results, especially at the locations where the maximum dose equivalent occurs. (author)
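The Moyer model behind such fits is a point-kernel expression of the form H = (H0 / r²) · exp(-βθ) · exp(-d/λ). A minimal sketch showing how strongly the fitted β = 7.0 suppresses dose with angle; all numerical inputs are illustrative:

```python
import math

def moyer_dose(H0, r, theta, d, lam, beta=7.0):
    """Moyer-model dose equivalent at a point on the shield surface.
    H0: source term; r: distance from source (m); theta: angle from the
    beam direction (rad); d: shield thickness along the ray (g/cm^2);
    lam: attenuation length (g/cm^2); beta: angular parameter (rad^-1)."""
    return H0 / r**2 * math.exp(-beta * theta) * math.exp(-d / lam)

# Ratio of doses at 90 deg vs 60 deg for the fitted beta = 7.0
ratio = moyer_dose(1.0, 1.0, math.radians(90), 0.0, 100.0) / \
        moyer_dose(1.0, 1.0, math.radians(60), 0.0, 100.0)
print(f"{ratio:.3f}")  # the 30-degree difference alone changes H by ~40x
```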
Energy Technology Data Exchange (ETDEWEB)
Carlen, Ida; Nikolopoulos, Anna; Isaeus, Martin (AquaBiota Water Research, Stockholm (SE))
2007-06-15
GIS grids (maps) of marine parameters were created using point data from previous site investigations in the Forsmark and Oskarshamn areas. The proportion of global radiation reaching the sea bottom in Forsmark and Oskarshamn was calculated in ArcView, using Secchi depth measurements and the digital elevation models for the respective area. The number of days per year when the incoming light exceeds 5 MJ/m2 at the bottom was then calculated using the result of the previous calculations together with measured global radiation. Existing modelled grid-point data on bottom and pelagic temperature for Forsmark were interpolated to create surface covering grids. Bottom and pelagic temperature grids for Oskarshamn were calculated using point measurements to achieve yearly averages for a few points and then using regressions with existing grids to create new maps. Phytoplankton primary production in Forsmark was calculated using point measurements of chlorophyll and irradiance, and a regression with a modelled grid of Secchi depth. Distribution of biomass of macrophyte communities in Forsmark and Oskarshamn was calculated using spatial modelling in GRASP, based on field data from previous surveys. Physical parameters such as those described above were used as predictor variables. Distribution of biomass of different functional groups of fish in Forsmark was calculated using spatial modelling based on previous surveys and with predictor variables such as physical parameters and results from macrophyte modelling. All results are presented as maps in the report. The quality of the modelled predictions varies as a consequence of the quality and amount of the input data, the ecology and knowledge of the predicted phenomena, and by the modelling technique used. A substantial part of the variation is not described by the models, which should be expected for biological modelling. Therefore, the resulting grids should be used with caution and with this uncertainty kept in mind. All
International Nuclear Information System (INIS)
Carlen, Ida; Nikolopoulos, Anna; Isaeus, Martin
2007-06-01
GIS grids (maps) of marine parameters were created using point data from previous site investigations in the Forsmark and Oskarshamn areas. The proportion of global radiation reaching the sea bottom in Forsmark and Oskarshamn was calculated in ArcView, using Secchi depth measurements and the digital elevation models for the respective area. The number of days per year when the incoming light exceeds 5 MJ/m2 at the bottom was then calculated using the result of the previous calculations together with measured global radiation. Existing modelled grid-point data on bottom and pelagic temperature for Forsmark were interpolated to create surface covering grids. Bottom and pelagic temperature grids for Oskarshamn were calculated using point measurements to achieve yearly averages for a few points and then using regressions with existing grids to create new maps. Phytoplankton primary production in Forsmark was calculated using point measurements of chlorophyll and irradiance, and a regression with a modelled grid of Secchi depth. Distribution of biomass of macrophyte communities in Forsmark and Oskarshamn was calculated using spatial modelling in GRASP, based on field data from previous surveys. Physical parameters such as those described above were used as predictor variables. Distribution of biomass of different functional groups of fish in Forsmark was calculated using spatial modelling based on previous surveys and with predictor variables such as physical parameters and results from macrophyte modelling. All results are presented as maps in the report. The quality of the modelled predictions varies as a consequence of the quality and amount of the input data, the ecology and knowledge of the predicted phenomena, and by the modelling technique used. A substantial part of the variation is not described by the models, which should be expected for biological modelling. Therefore, the resulting grids should be used with caution and with this uncertainty kept in mind. All
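The light-at-the-bottom step described in the two records above can be sketched with Beer-Lambert attenuation. The reports do not state which attenuation relation was used, so the common rule of thumb k ≈ 1.7 / Secchi depth is an assumption here, and all numbers are illustrative:

```python
import math

def bottom_fraction(secchi_m, depth_m):
    """Fraction of surface irradiance reaching the bottom (Beer-Lambert),
    assuming diffuse attenuation k ~ 1.7 / Secchi depth (rule of thumb)."""
    k = 1.7 / secchi_m
    return math.exp(-k * depth_m)

def bottom_energy(global_mj, secchi_m, depth_m):
    """Daily shortwave energy at the bottom, MJ/m2."""
    return global_mj * bottom_fraction(secchi_m, depth_m)

# A day with 18 MJ/m2 global radiation, Secchi depth 5 m, bottom at 3 m
e = bottom_energy(18.0, 5.0, 3.0)
print(e > 5.0)  # does this day exceed the 5 MJ/m2 threshold?
```

Counting such days over a year of measured global radiation gives the "number of days per year when the incoming light exceeds 5 MJ/m2" map described in the records.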
Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna
2017-08-01
Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it offers faster modeling, lower computational resource usage and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.
Comparison study on models for calculation of NPP’s levelized unit electricity cost
International Nuclear Information System (INIS)
Nuryanti; Mochamad Nasrullah; Suparman
2014-01-01
Economic analysis, generally done through the calculation of the Levelized Unit Electricity Cost (LUEC), is crucial prior to any investment decision on a nuclear power plant (NPP) project. There are several models that can be used to calculate LUEC: the R&D PT. PLN (Persero) model, the Mini G4ECONS model and the Levelized Cost model. This study aimed to perform a comparison between the three models. The comparison was done by tracking the similarities in each model and then applying them to a case of LUEC calculation for a 2 x 100 MW SMR NPP. The result showed that the R&D PT. PLN (Persero) model shares a common principle with the Mini G4ECONS model: both use the Capital Recovery Factor (CRF) to discount the investment cost, which eventually becomes an annuity over the life of the plant. LUEC in both models is calculated by dividing the sum of the annual investment cost and the cost of operating the NPP by the annual electricity production. The Levelized Cost model is based on the annual cash flow. Total annual costs and annual electricity production are discounted to the first year of construction in order to obtain the total discounted annual cost and the total discounted energy generation. LUEC is obtained by dividing the two discounted values. LUEC calculations with the three models produce the following values: 14.5942 cents US$/kWh for the R&D PT. PLN (Persero) model, 15.056 cents US$/kWh for the Mini G4ECONS model and 14.240 cents US$/kWh for the Levelized Cost model. (author)
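The CRF-based recipe shared by the PLN and Mini G4ECONS models can be sketched in a few lines. All input numbers below are illustrative placeholders, not the values of the study:

```python
def crf(i, n):
    """Capital recovery factor: annuity per unit of present value
    at discount rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def luec(overnight_cost, i, n, annual_om, annual_fuel, annual_mwh):
    """Levelized unit electricity cost, $/MWh, CRF-style: annualized
    capital plus annual O&M and fuel, divided by annual generation."""
    annual_capital = overnight_cost * crf(i, n)
    return (annual_capital + annual_om + annual_fuel) / annual_mwh

# Illustrative: 2 x 100 MW SMR, $5,000/kW overnight cost,
# 10% discount rate, 40-year life, 85% capacity factor
cap = 5000.0 * 200_000          # overnight cost, $
mwh = 200.0 * 8760 * 0.85       # annual generation, MWh
print(round(luec(cap, 0.10, 40, 40e6, 15e6, mwh), 1))  # $/MWh
```

The Levelized Cost model instead discounts each year's cash flow and generation separately; with a constant annual cost and output the two approaches coincide.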
Nuclear model calculations below 200 MeV and evaluation prospects
International Nuclear Information System (INIS)
Koning, A.J.; Bersillon, O.; Delaroche, J.P.
1994-08-01
A computational method is outlined for the quantum-mechanical prediction of the whole double-differential energy spectrum. Cross sections as calculated with the code system MINGUS are presented for (n,xn) and (p,xn) reactions on 208Pb and 209Bi. Our approach involves a dispersive optical model, comprehensive discrete state calculations, renormalized particle-hole state densities, a combined MSD/MSC model for pre-equilibrium reactions and compound nucleus calculations. The relation with the evaluation of nuclear data files is discussed. (orig.)
Thermal-hydraulic feedback model to calculate the neutronic cross-section in PWR reactions
International Nuclear Information System (INIS)
Santiago, Daniela Maiolino Norberto
2011-01-01
In neutronic codes, it is important to have a thermal-hydraulic feedback module. This module calculates the thermal-hydraulic feedback of the fuel, which feeds the neutronic cross sections. In the neutronic code developed at PEN/COPPE/UFRJ, the fuel temperature is obtained through an empirical model. This work presents a physical model to calculate this temperature. We used the finite volume technique to discretize the equation of the temperature distribution, while the calculation of the moderator heat transfer coefficient was carried out using the ASME table, incorporating some of its routines into our program. The model allows one to calculate an average radial temperature per node, since the thermal-hydraulic feedback must follow the conditions imposed by the neutronic code. The results were compared with the empirical model. Our results show that for the fuel elements near the periphery, the empirical model overestimates the temperature in the fuel compared to our model, which may indicate that the physical model is more appropriate for calculating the thermal-hydraulic feedback temperatures. The proposed model was validated by the neutronic simulator developed at PEN/COPPE/UFRJ for the analysis of PWR reactors. (author)
An investigation of fission models for high-energy radiation transport calculations
International Nuclear Information System (INIS)
Armstrong, T.W.; Cloth, P.; Filges, D.; Neef, R.D.
1983-07-01
An investigation of high-energy fission models for use in the HETC code has been made. The validation work has been directed at checking the accuracy of the high-energy radiation transport computer code HETC and at identifying the appropriate model for routine calculations, particularly for spallation neutron source applications. Model calculations are given in terms of neutron production, fission fragment energy release, and residual nuclei production for high-energy protons incident on thin uranium targets. The effect of the fission models on neutron production from thick uranium targets is also shown. (orig.)
International Nuclear Information System (INIS)
Thykier-Nielsen, S.
1980-07-01
A brief description is given of the model used at Risoe for calculating the consequences of releases of radioactive material to the atmosphere. The model is based on the Gaussian plume model, and it provides possibilities for calculation of: doses to individuals, collective doses, contamination of the ground, probability distribution of doses, and the consequences of doses for given dose-risk relationships. The model is implemented as a computer program PLUCON2, written in ALGOL for the Burroughs B6700 computer at Risoe. A short description of PLUCON2 is given. (author)
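The Gaussian plume model underlying codes like PLUCON2 has a standard closed form with ground reflection. A minimal sketch; the release rate, wind speed and dispersion parameters below are illustrative (sigma_y and sigma_z would normally come from a stability-class scheme at a given downwind distance):

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume concentration (e.g. Bq/m^3) with ground reflection.
    Q: release rate; u: wind speed (m/s); y: crosswind offset (m);
    z: receptor height (m); H: effective release height (m);
    sigma_y, sigma_z: dispersion parameters (m) at the downwind distance."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # image source
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline ground-level concentration for a 1 GBq/s release at H = 50 m
c = gaussian_plume(Q=1e9, u=5.0, y=0.0, z=0.0, H=50.0,
                   sigma_y=80.0, sigma_z=40.0)
print(f"{c:.3g}")
```

Doses then follow by multiplying concentration by exposure time and dose conversion factors, which is the chain of calculations the record lists.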
Energy Technology Data Exchange (ETDEWEB)
Yoon, Jihyung; Jung, Jae Won, E-mail: jungj@ecu.edu [Department of Physics, East Carolina University, Greenville, North Carolina 27858 (United States); Kim, Jong Oh [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, Pennsylvania 15232 (United States); Yeo, Inhwan [Department of Radiation Medicine, Loma Linda University Medical Center, Loma Linda, California 92354 (United States)
2016-05-15
Purpose: To develop and evaluate a fast Monte Carlo (MC) dose calculation model of electronic portal imaging device (EPID) based on its effective atomic number modeling in the XVMC code. Methods: A previously developed EPID model, based on the XVMC code by density scaling of EPID structures, was modified by additionally considering effective atomic number (Z{sub eff}) of each structure and adopting a phase space file from the EGSnrc code. The model was tested under various homogeneous and heterogeneous phantoms and field sizes by comparing the calculations in the model with measurements in EPID. In order to better evaluate the model, the performance of the XVMC code was separately tested by comparing calculated dose to water with ion chamber (IC) array measurement in the plane of EPID. Results: In the EPID plane, calculated dose to water by the code showed agreement with IC measurements within 1.8%. The difference was averaged across the in-field regions of the acquired profiles for all field sizes and phantoms. The maximum point difference was 2.8%, affected by proximity of the maximum points to penumbra and MC noise. The EPID model showed agreement with measured EPID images within 1.3%. The maximum point difference was 1.9%. The difference dropped from the higher value of the code by employing the calibration that is dependent on field sizes and thicknesses for the conversion of calculated images to measured images. Thanks to the Z{sub eff} correction, the EPID model showed a linear trend of the calibration factors unlike those of the density-only-scaled model. The phase space file from the EGSnrc code sharpened penumbra profiles significantly, improving agreement of calculated profiles with measured profiles. Conclusions: Demonstrating high accuracy, the EPID model with the associated calibration system may be used for in vivo dosimetry of radiation therapy. Through this study, a MC model of EPID has been developed, and their performance has been rigorously
National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...
Comparison of results of experimental research with numerical calculations of a model one-sided seal
Directory of Open Access Journals (Sweden)
Joachimiak Damian
2015-06-01
The paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different levels of wear. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculation, the size of the mesh, defined by the parameter y+, has been analyzed and the selection of the turbulence model is described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained from measurement and calculated numerically for a model seal segment at different levels of wear.
Formation of decontamination cost calculation model for severe accident consequence assessment
International Nuclear Information System (INIS)
Silva, Kampanart; Promping, Jiraporn; Okamoto, Koji; Ishiwatari, Yuki
2014-01-01
In previous studies, the authors developed the index "cost per severe accident" to perform severe accident consequence assessments covering various kinds of accident consequences, namely health effects and economic, social and environmental impacts. Although decontamination cost was identified as a major component, it was taken into account using simple and conservative assumptions, which made further discussion difficult. The decontamination cost calculation model was therefore reconsidered. 99 parameters were selected to take into account all decontamination-related issues, and a decontamination cost calculation model was formed. The distributions of all parameters were determined. A sensitivity analysis using the Morris method was performed to identify important parameters that have a large influence on the cost per severe accident and interact strongly with other parameters. We identified 25 important parameters and fixed the most negligible parameters at the medians of their distributions to form a simplified decontamination cost calculation model. Calculations of the cost per severe accident with the full model (all parameters distributed) and with the simplified model were performed and compared. The differences in the cost per severe accident and its components were not significant, which ensures the validity of the simplified model. The simplified model was used to perform a full-scope calculation of the cost per severe accident, and the results were compared with the previous study. The decontamination cost increased in importance significantly. (author)
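The Morris (elementary-effects) screening step described above can be sketched in a few lines of pure Python. The study's 99-parameter cost model is not reproduced in the abstract, so the three-parameter toy cost function below is entirely hypothetical; parameters are assumed scaled to the unit interval:

```python
import random

def morris_mu_star(f, n_params, n_samples=50, delta=0.1, seed=1):
    """Crude Morris screening: mean absolute elementary effect per parameter.

    For each random base point, each parameter is perturbed by +/- delta in
    turn and the normalized response change |f(x') - f(x)| / delta is averaged.
    """
    rng = random.Random(seed)
    mu_star = [0.0] * n_params
    for _ in range(n_samples):
        x = [rng.random() for _ in range(n_params)]
        for i in range(n_params):
            x_pert = list(x)
            # stay inside the unit interval when perturbing
            x_pert[i] = x[i] + delta if x[i] + delta <= 1.0 else x[i] - delta
            mu_star[i] += abs(f(x_pert) - f(x)) / delta / n_samples
    return mu_star

# Hypothetical toy "cost" model: parameter 0 dominates, parameter 2 is negligible.
cost = lambda x: 100.0 * x[0] + 10.0 * x[1] + 0.1 * x[2]
scores = morris_mu_star(cost, 3)
```

Ranking the parameters by `mu_star` and fixing the low-scoring ones at their medians is exactly the simplification route the abstract describes; for a linear model the elementary effect recovers each coefficient.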
Comparison of a semi-empirical method with some model codes for gamma-ray spectrum calculation
Energy Technology Data Exchange (ETDEWEB)
Sheng, Fan; Zhixiang, Zhao [Chinese Nuclear Data Center, Beijing, BJ (China)
1996-06-01
Gamma-ray spectra calculated by a semi-empirical method are compared with those calculated by the model codes such as GNASH, TNG, UNF and NDCP-1. The results of the calculations are discussed. (2 tabs., 3 figs.).
SHINE Virtual Machine Model for In-flight Updates of Critical Mission Software
Plesea, Lucian
2008-01-01
This software is a new target for the Spacecraft Health Inference Engine (SHINE) knowledge base that compiles a knowledge base to a language called Tiny C - an interpreted version of C that can be embedded on flight processors. This new target allows portions of a running SHINE knowledge base to be updated on a "live" system without needing to halt and restart the containing SHINE application. This enhancement will directly provide this capability without the risk of software validation problems and can also enable complete integration of BEAM and SHINE into a single application. This innovation enables SHINE deployment in domains where autonomy is used during flight-critical applications that require updates. This capability eliminates the need for halting the application and performing potentially serious total system uploads before resuming the application with the loss of system integrity. This software enables additional applications at JPL (microsensors, embedded mission hardware) and increases the marketability of these applications outside of JPL.
«Soft Power»: the Updated Theoretical Concept and Russian Assembly Model
Directory of Open Access Journals (Sweden)
Владимир Сергеевич Изотов
2011-12-01
The article addresses critically important informational and ideological aspects of Russia's foreign policy. The goal is to revise and specify the notion of "soft power" in the context of the rapidly changing space of global politics. In recent years the international isolation of Russia, including in the informational and ideological sphere, has been increasing. The way to overcome this negative trend is modernization of the foreign policy strategy on the basis of updated operational tools and ideological accents. It is becoming obvious that real foreign policy success in the global system is achieved through the use of soft power. The author seeks to specify and conceptualize the phenomenon of Russia's soft power as a purposeful external ideology facing an urgent need of updating.
Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.
2015-01-01
The Earths land surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean lakes-ice mask; (ii) incorporation of a 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30 arc second soil mineral and carbon data in conjunction with a highly-refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec from the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) to mkCatchParam that allow it to produce tile-space parameters efficiently for high resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition and mean annual 2m air temperature to be used with the future Catchment CN model and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and data file format of each updated data set.
Puelles, Luis
2017-01-01
This essay reviews step by step the conceptual changes of the updated tetrapartite pallium model from its tripartite and early tetrapartite antecedents. The crucial observations in mouse material are explained first in the context of assumptions, tentative interpretations, and literature data. Errors and the solutions offered to resolve them are made explicit. Next, attention is centered on the lateral pallium sector of the updated model, whose definition is novel in incorporating a claustro-insular complex distinct from both olfactory centers (ventral pallium) and the isocortex (dorsal pallium). The general validity of the model is postulated at least for tetrapods. Genoarchitectonic studies performed to check the presence of a claustro-insular field homolog in the avian brain are reviewed next. These studies have indeed revealed the existence of such a complex in the avian mesopallium (though stratified outside-in rather than inside-out as in mammals), and there are indications that the same pattern may be found in reptiles as well. Peculiar pallio-pallial tangential migratory phenomena are apparently shared as well between mice and chicks. The issue of whether the avian mesopallium has connections that are similar to the known connections of the mammalian claustro-insular complex is considered next. Accrued data are consistent with similar connections for the avian insula homolog, but they are judged to be insufficient to reach definitive conclusions about the avian claustrum. An aside discusses that conserved connections are not a necessary feature of field-homologous neural centers. Finally, the present scenario on the evolution of the pallium of sauropsids and mammals is briefly visited, as highlighted by the updated tetrapartite model and present results. © 2017 S. Karger AG, Basel.
International Nuclear Information System (INIS)
Kljenak, I.; Mavko, B.; Babic, M.
2005-01-01
Full text of publication follows: The modelling and simulation of atmosphere mixing and stratification in nuclear power plant containments is a topic, which is currently being intensely investigated. With the increase of computer power, it has now become possible to model these phenomena with a local instantaneous description, using so-called Computational Fluid Dynamics (CFD) codes. However, calculations with these codes still take relatively long times. An alternative faster approach, which is also being applied, is to model nonhomogeneous atmosphere with lumped-parameter codes by dividing larger control volumes into smaller volumes, in which conditions are modelled as homogeneous. The flow between smaller volumes is modelled using one-dimensional approaches, which includes the prescription of flow loss coefficients. However, some authors have questioned this approach, as it appears that atmosphere stratification may sometimes be well simulated only by adjusting flow loss coefficients to adequate 'artificial' values that are case-dependent. To start the resolution of this issue, a modelling of nonhomogeneous atmosphere with a lumped-parameter code is proposed, where the subdivision of a large volume into smaller volumes is based on results of CFD simulations. The basic idea is to use the results of a CFD simulation to define regions, in which the flow velocities have roughly the same direction. These regions are then modelled as control volumes in a lumped-parameter model. In the proposed work, this procedure was applied to a simulation of an experiment of atmosphere mixing and stratification, which was performed in the TOSQAN facility. The facility is located at the Institut de Radioprotection et de Surete Nucleaire (IRSN) in Saclay (France) and consists of a cylindrical vessel (volume: 7 m3), in which gases are injected. In the experiment, which was also proposed for the OECD/NEA International Standard Problem No.47, air was initially present in the vessel, and
An Updated Model for the Anomalous Resistivity of LNAPL Plumes in Sandy Environments
Sauck, W. A.; Atekwana, E. A.; Werkema, D. D.
2006-05-01
Anomalously low resistivities have been observed at some sites contaminated by light non-aqueous phase liquid (LNAPL). The model used to explain this phenomenon was published in 2000. This working hypothesis invokes both physical mixing and bacterial action to explain the low resistivities near the base of the vadose zone and in the upper part of the aquifer. The hydrocarbon-degrading bacteria (of which numerous species are found in soils) produce organic acids and carbonic acid. The acidic pore waters dissolve readily soluble ions from the native soil grains and grain coatings to produce a leachate high in total dissolved solids. The free-product LNAPL is initially a wetting phase, although generally not to more than 50% extent, and seasonal water table fluctuations mix the hydrocarbons vertically through the upper water-saturated zone and transition zone. This update introduces several new aspects of the conductive model. The first is that the oil-degrading bacteria produce not only acids but also surfactants. Surfactants act similarly to detergents in detaching the oil phase from the solid substrate and forming an emulsion of oil droplets within the water. This has helped to explain how continuous, high-TDS capillary paths can develop and pass vertically through what appears to be a substantial free-product layer, thus providing easy passage for electrical current during electrical resistivity measurements. Further, it has also been shown that the addition of organic acids and biosurfactants to pore fluids can directly contribute to the conductivity of the pore fluids. A second development is that large-diameter column experiments were conducted for nearly two years (8 columns for 4 experiments). The columns had a vertical row of electrodes for resistivity measurements, ports for extracting water samples with a syringe, and sample tubes for extracting soil samples. Water samples were used for chemical analysis
A musculoskeletal lumbar and thoracic model for calculation of joint kinetics in the spine
International Nuclear Information System (INIS)
Kim, Yong Cheol; Ta, Duc manh; Koo, Seung Bum; Jung Moon Ki
2016-01-01
The objective of this study was to develop a musculoskeletal spine model that allows relative movements in the thoracic spine for calculation of intra-discal forces in the lumbar and thoracic spine. The thoracic part of the spine model was composed of vertebrae and ribs connected with mechanical joints similar to anatomical joints. Three different muscle groups around the thoracic spine were inserted, along with eight muscle groups around the lumbar spine in the original model from AnyBody. The model was tested using joint kinematics data obtained from two normal subjects during spine flexion and extension, axial rotation and lateral bending motions beginning from a standing posture. Intra-discal forces between spine segments were calculated in a musculoskeletal simulation. The force at the L4-L5 joint was chosen to validate the model's prediction against the lumbar model in the original AnyBody model, which was previously validated against clinical data.
A musculoskeletal lumbar and thoracic model for calculation of joint kinetics in the spine
Energy Technology Data Exchange (ETDEWEB)
Kim, Yong Cheol; Ta, Duc manh; Koo, Seung Bum [Chung-Ang University, Seoul (Korea, Republic of); Jung Moon Ki [AnyBody Technology A/S, Aalborg (Denmark)
2016-06-15
The objective of this study was to develop a musculoskeletal spine model that allows relative movements in the thoracic spine for calculation of intra-discal forces in the lumbar and thoracic spine. The thoracic part of the spine model was composed of vertebrae and ribs connected with mechanical joints similar to anatomical joints. Three different muscle groups around the thoracic spine were inserted, along with eight muscle groups around the lumbar spine in the original model from AnyBody. The model was tested using joint kinematics data obtained from two normal subjects during spine flexion and extension, axial rotation and lateral bending motions beginning from a standing posture. Intra-discal forces between spine segments were calculated in a musculoskeletal simulation. The force at the L4-L5 joint was chosen to validate the model's prediction against the lumbar model in the original AnyBody model, which was previously validated against clinical data.
Directory of Open Access Journals (Sweden)
M. Ridolfi
2014-12-01
We review the main factors driving the calculation of the tangent height of spaceborne limb measurements: the ray-tracing method, the refractive index model and the assumed atmosphere. We find that commonly used ray-tracing and refraction models are very accurate, at least in the mid-infrared. The factor with the largest effect on the tangent height calculation is the assumed atmosphere. Using a climatological model in place of the real atmosphere may cause tangent height errors of up to ±200 m. Depending on the adopted retrieval scheme, these errors may have a significant impact on the derived profiles.
Power Loss Calculation and Thermal Modelling for a Three Phase Inverter Drive System
Directory of Open Access Journals (Sweden)
Z. Zhou
2005-12-01
Power loss calculation and thermal modelling for a three-phase inverter power system are presented in this paper. Aiming at long real-time thermal simulation, an accurate average power loss calculation based on a PWM reconstruction technique is proposed. To carry out the thermal simulation, a compact thermal model for a three-phase inverter power module is built. The thermal interference of adjacent heat sources is analysed using 3D thermal simulation. The proposed model can provide accurate power losses with a large simulation time-step and is suitable for long real-time thermal simulation of a three-phase inverter drive system for hybrid vehicle applications.
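The averaging idea behind such a loss model can be illustrated with the standard closed-form expression for the conduction loss of a single IGBT under sinusoidal PWM (the paper's PWM-reconstruction technique itself is not reproduced in the abstract; the device parameters below are illustrative, not from the paper):

```python
import math

def igbt_conduction_loss(v_ce0, r_ce, i_peak, mod_index, cos_phi):
    """Average conduction loss [W] of one IGBT over a fundamental period.

    Standard closed form for sinusoidal PWM: the device is modeled as a
    threshold voltage v_ce0 plus slope resistance r_ce, conducting with
    duty cycle d(theta) = (1 + mod_index*sin(theta)) / 2.
    """
    return (v_ce0 * i_peak * (1.0 / (2.0 * math.pi) + mod_index * cos_phi / 8.0)
            + r_ce * i_peak ** 2 * (1.0 / 8.0 + mod_index * cos_phi / (3.0 * math.pi)))

# Illustrative operating point: 1 V threshold, 10 mOhm slope, 100 A peak,
# modulation index 0.9, power factor 0.85
p_cond = igbt_conduction_loss(v_ce0=1.0, r_ce=0.01, i_peak=100.0,
                              mod_index=0.9, cos_phi=0.85)
```

Feeding such averaged losses into a Foster- or Cauer-type compact thermal network is what permits the large simulation time-steps the abstract mentions.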
International Nuclear Information System (INIS)
Oliveira, A.C.J.G. de; Andrade Lima, F.R. de
1989-01-01
The present work is an application of perturbation theory (matrix formalism) to a simplified two-channel model for sensitivity calculations in PWR cores. Expressions for some sensitivity coefficients of thermohydraulic interest were developed from the proposed model. The code CASNUR.FOR was written in FORTRAN to evaluate these sensitivity coefficients. The comparison between results obtained from the matrix formalism of perturbation theory and those obtained directly from the two-channel model makes evident the efficiency and potentiality of this perturbation method for sensitivity calculations in nuclear reactor cores. (author)
Model calculations of excitation functions of neutron-induced reactions on Rh
International Nuclear Information System (INIS)
Strohmaier, Brigitte
1995-01-01
Cross sections of neutron-induced reactions on ¹⁰³Rh have been calculated by means of the statistical model and the coupled-channels optical model for incident-neutron energies up to 30 MeV. The incentive for this study was a new measurement of the ¹⁰³Rh(n,n')¹⁰³ᵐRh cross section, which will, together with the present calculations, enter into a dosimetry-reaction evaluation. The validation of the model parameters relied on nuclear-structure data as far as possible. (author)
A conceptual and calculational model for gas formation from impure calcined plutonium oxides
International Nuclear Information System (INIS)
Lyman, John L.; Eller, P. Gary
2000-01-01
Safe transport and storage of pure and impure plutonium oxides requires an understanding of processes that may generate or consume gases in a confined storage vessel. We have formulated conceptual and calculational models for gas formation from calcined materials. The conceptual model for impure calcined plutonium oxides is based on the data collected to date
3D Printing of Molecular Models with Calculated Geometries and p Orbital Isosurfaces
Carroll, Felix A.; Blauch, David N.
2017-01-01
3D printing was used to prepare models of the calculated geometries of unsaturated organic structures. Incorporation of p orbital isosurfaces into the models enables students in introductory organic chemistry courses to have hands-on experience with the concept of orbital alignment in strained and unstrained π systems.
Model calculations on LIS. III. 2-, 3- and 7-substituted indanones
International Nuclear Information System (INIS)
Hofer, O.
1979-01-01
The space close to the coordination site of 1-indanone is modified systematically by placing alkyl groups of different bulkiness on C-2, C-3 and C-7, respectively. The ¹H-LIS values for the compounds are interpreted using the one-site and two-site models for carbonyl. Precautionary measures are discussed for both models to give reliable results in the calculation. (author)
MONNIE 2000: A description of a model to calculate environmental costs
International Nuclear Information System (INIS)
Hanemaaijer, A.H.; Kirkx, M.C.A.P.
2001-02-01
A new model (MONNIE 2000) was developed by the RIVM in the Netherlands in 2000 to calculate environmental costs on a macro level. The model, its theoretical background and its technical aspects are described, making the report useful to both the user and the designer of the model. A user manual on how to calculate with the model is included. The basic principle of the model is the use of a harmonised method for calculating environmental costs, which provides the user with output that can easily be compared with, and used in, other economic statistics and macro-economic models in the Netherlands. Inputs to the model are yearly figures on operational costs, investments and savings from environmental measures. With MONNIE 2000, calculated environmental costs per policy target group, economic sector and theme can be shown; the burden of environmental measures on the economic sectors and the environmental expenditures of the government can be presented as well. MONNIE 2000 is developed in Visual Basic, and by using Excel for input and output a user-friendly data exchange is realised. 12 refs
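Harmonised environmental-cost methods of this kind typically annuitize investments and net out savings. A minimal sketch, assuming a standard capital recovery factor (the actual MONNIE 2000 conventions, interest rate and lifetimes are not given in the abstract):

```python
def annualized_environmental_cost(investment, lifetime_years, interest_rate,
                                  operational_cost, annual_savings):
    """Yearly cost of an environmental measure: annuitized investment
    plus operational cost, minus savings (all in the same currency unit)."""
    # capital recovery factor: converts a one-off investment to equal yearly payments
    crf = interest_rate / (1.0 - (1.0 + interest_rate) ** -lifetime_years)
    return investment * crf + operational_cost - annual_savings

# Illustrative measure: 1,000,000 invested, 10-year lifetime, 5% interest,
# 50,000/yr operating cost, 20,000/yr savings
yearly_cost = annualized_environmental_cost(1_000_000, 10, 0.05, 50_000, 20_000)
```

Summing such per-measure yearly costs over policy target groups, sectors and themes gives exactly the kind of macro-level totals the report describes.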
Inclusion of temperature dependence of fission barriers in statistical model calculations
International Nuclear Information System (INIS)
Newton, J.O.; Popescu, D.G.; Leigh, J.R.
1990-08-01
The temperature dependence of fission barriers has been interpolated from the results of recent theoretical calculations and included in the statistical model code PACE2. It is shown that the inclusion of temperature dependence causes significant changes to the values of the statistical model parameters deduced from fits to experimental data. 21 refs., 2 figs
On the applicability of nearly free electron model for resistivity calculations in liquid metals
International Nuclear Information System (INIS)
Gorecki, J.; Popielawski, J.
1982-09-01
The calculations of resistivity based on the nearly free electron model are presented for many noble and transition liquid metals. The triple ion correlation is included in resistivity formula according to SCQCA approximation. Two different methods for describing the conduction band are used. The problem of applicability of the nearly free electron model for different metals is discussed. (author)
A simple model for calculating the bulk modulus of the mixed ionic ...
Indian Academy of Sciences (India)
thermophysical properties, viz., bulk modulus, molecular force constant, reststrahlen fre- quency and Debye temperature using the three-body potential model. The calculated bulk modulus, from the TBPM model, for the pure end members (NH4Cl and NH4Br) are in agreement with the experimental values, as shown in ...
Validation of a model for calculating environmental doses caused by gamma emitters in the soil
International Nuclear Information System (INIS)
Ortega, X.; Rosell, J.R.; Dies, X.
1991-01-01
A model has been developed to calculate the absorbed dose rates caused by gamma emitters of both natural and artificial origin distributed in the soil. The model divides the soil into five compartments corresponding to layers situated at different depths, and assumes that the concentration of radionuclides is constant in each one of them. The calculations, following the model developed, are undertaken through a program which, based on the concentrations of the radionuclides in the different compartments, gives as a result the dose rate at a height of one metre above the ground caused by each radionuclide and the percentage this represents with respect to the total absorbed dose rate originating from this soil. The validity of the model has been checked in the case of sandy soils by comparing the exposure rates calculated for five sites with the experimental values obtained with an ionisation chamber. (author)
Diameter structure modeling and the calculation of plantation volume of black poplar clones
Directory of Open Access Journals (Sweden)
Andrašev Siniša
2004-01-01
A method of diameter structure modeling was applied in the calculation of the plantation (stand) volume of two black poplar clones in the section Aigeiros (Duby): 618 (Lux) and S1-8. Diameter structure modeling by the Weibull function makes it possible to calculate the plantation volume by the volume line. Based on the comparison of the proposed method with the existing methods, the obtained error of plantation volume was less than 2%. Diameter structure modeling, and the calculation of plantation volume from the diameter structure model, enables, through the regularity of the diameter distribution, a better analysis of the production level and assortment structure, and it can be used in the construction of yield and increment tables.
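The volume-line approach can be sketched as follows: a two-parameter Weibull density gives the share of trees in each diameter class, and a volume line converts diameter at breast height to per-tree volume. The shape/scale values and the volume line below are hypothetical illustrations, not the clones' fitted parameters:

```python
import math

def weibull_pdf(d, shape, scale):
    """Two-parameter Weibull density evaluated at diameter d [cm]."""
    return (shape / scale) * (d / scale) ** (shape - 1) * math.exp(-(d / scale) ** shape)

def stand_volume(n_trees, shape, scale, volume_line, d_max=80.0, step=0.5):
    """Stand volume [m^3]: per-class tree counts from the Weibull model,
    multiplied by the volume line and summed over diameter classes (midpoint rule)."""
    total = 0.0
    d = step / 2.0
    while d < d_max:
        class_share = weibull_pdf(d, shape, scale) * step  # fraction of trees in class
        total += n_trees * class_share * volume_line(d)
        d += step
    return total

# Hypothetical volume line: per-tree volume [m^3] as a power function of dbh [cm]
volume_line = lambda d: 0.0004 * d ** 2.2
v_stand = stand_volume(n_trees=400, shape=3.0, scale=30.0, volume_line=volume_line)
```

Because the Weibull distribution integrates to one, the class shares sum to (nearly) unity, so the result scales directly with the stem count, which is what makes the method convenient for yield tables.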
A neural network model and an update correlation for estimation of dead crude oil viscosity
Energy Technology Data Exchange (ETDEWEB)
Naseri, A.; Gharesheikhlou, A.A. [Research Institute of Petroleum Industry (RIPI), Tehran (Iran, Islamic Republic of). PVT Dept.; Yousefi, S.H.; Sanaei, A. [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Faculty of Petroleum Engineering], E-mail: alirezasanaei.aut@gmail.com
2012-01-15
Viscosity is one of the most important physical properties in reservoir simulation, formation evaluation, in designing surface facilities and in the calculation of original hydrocarbon in-place. Mostly, oil viscosity is measured in PVT laboratories only at reservoir temperature. Hence, it is of great importance to use an accurate correlation for prediction of oil viscosity at different operating conditions and various temperatures. Although, different correlations have been proposed for various regions, the applicability of the existing correlations for Iranian oil reservoirs is limited due to the nature of the Iranian crude oil. In this study, based on Iranian oil reservoir data, a new correlation for the estimation of dead oil viscosity was provided using non-linear multivariable regression and non-linear optimization methods simultaneously with the optimization of the other existing correlations. This new correlation uses API Gravity and temperature as an input parameter. In addition, a neural-network-based model for prediction of dead oil viscosity is presented. Detailed comparisons show that validity and accuracy of the new correlation and the neural-network model are in good agreement with large data set of Iranian oil reservoir when compared with other correlations. (author)
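The coefficients of the new correlation are not reproduced in the abstract, so as an illustration of a dead-oil viscosity correlation with the same two inputs (API gravity and temperature), the sketch below uses the classic Beggs and Robinson (1975) form rather than the paper's own fit:

```python
def dead_oil_viscosity_beggs_robinson(api_gravity, temp_f):
    """Dead (gas-free) oil viscosity [cP] from API gravity and temperature [deg F],
    Beggs & Robinson (1975) correlation: mu = 10**x - 1, x = 10**z * T**-1.163."""
    z = 3.0324 - 0.02023 * api_gravity
    x = 10.0 ** z * temp_f ** (-1.163)
    return 10.0 ** x - 1.0

# Illustrative crude: 30 deg API at 150 deg F
mu_od = dead_oil_viscosity_beggs_robinson(30.0, 150.0)
```

The abstract's point is precisely that such generic fits lose accuracy on Iranian crudes, which motivates refitting the coefficients (and the neural-network alternative) against the regional data set.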
Recommendations on dose buildup factors used in models for calculating gamma doses for a plume
International Nuclear Information System (INIS)
Hedemann Jensen, P.; Thykier-Nielsen, S.
1980-09-01
Calculations of external γ-doses from radioactivity released to the atmosphere have been made using different dose buildup factor formulas. Some of the dose buildup factor formulas are used by the Nordic countries in their respective γ-dose models. A comparison of calculated γ-doses using these dose buildup factors shows that the γ-doses can be significantly dependent on the buildup factor formula used in the calculation. Increasing differences occur for increasing plume height, crosswind distance, and atmospheric stability and also for decreasing downwind distance. It is concluded that the most accurate γ-dose can be calculated by use of Capo's polynomial buildup factor formula. Capo-coefficients have been calculated and shown in this report for γ-energies below the original lower limit given by Capo. (author)
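Capo's polynomial coefficients are tabulated in the report itself; to illustrate how a buildup factor enters a point-kernel γ-dose calculation, the sketch below uses the simpler Berger form with hypothetical coefficients (not values from the report):

```python
import math

def berger_buildup(mu_r, a, b):
    """Berger-form dose buildup factor: B(mu*r) = 1 + a*(mu*r)*exp(b*mu*r)."""
    return 1.0 + a * mu_r * math.exp(b * mu_r)

def point_source_dose_rate(source_strength, mu, r, a, b):
    """Relative dose rate at distance r [m] from an isotropic point source:
    geometric attenuation, exponential attenuation, and scatter buildup."""
    mu_r = mu * r
    return (source_strength * berger_buildup(mu_r, a, b)
            * math.exp(-mu_r) / (4.0 * math.pi * r * r))

# Illustrative values: mu ~ 0.008 1/m (air, ~1 MeV photons), hypothetical a, b
dose = point_source_dose_rate(1.0, mu=0.008, r=100.0, a=1.0, b=0.05)
dose_no_buildup = point_source_dose_rate(1.0, mu=0.008, r=100.0, a=0.0, b=0.05)
```

Integrating such a point kernel over the plume volume is the usual route to the external γ-dose, and the abstract's finding is that the choice of B (Berger, Capo polynomial, etc.) changes the result noticeably at large plume heights and crosswind distances.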
Skolubovich, Yuriy; Skolubovich, Aleksandr; Voitov, Evgeniy; Soppa, Mikhail; Chirkunov, Yuriy
2017-10-01
The article considers current questions of technological modeling and calculation of a new facility for the treatment of natural waters, the clarifier reactor, for its optimal operating mode; the facility was developed at Novosibirsk State University of Architecture and Civil Engineering (SibSTRIN). A calculation technique based on well-known dependences of hydraulics is presented, and a calculation example of a structure based on experimental data is considered. The maximum possible rate of the ascending flow of purified water was determined based on a 24-hour clarification cycle. The fractional composition of the contact mass was determined with minimal expansion of the contact mass layer, which ensured the elimination of stagnant zones. The clarification cycle duration was refined from the parameters of technological modeling by recalculating the maximum possible upward flow rate of clarified water, and the thickness of the contact mass layer was determined. Likewise, clarifier reactors can be calculated for any other clarification conditions.
Liu, Long; Liu, Wei
2018-04-01
A forward modeling and inversion algorithm is adopted to determine the water injection plan for an oilfield water injection network. The main idea of the algorithm is as follows. First, the water injection network is inversely calculated and the pumping station demand flow is obtained. Then a forward modeling calculation is carried out to judge whether all water injection wells meet the requirements of injection allocation. If all wells meet the requirements, the calculation stops; otherwise the demanded injection allocation flow rate is reduced by a certain step size for the wells that do not meet the requirements, and the next iteration is started. The algorithm does not need to be embedded in the water injection network system solver, so it can be implemented easily. The iterative method used is well suited to computer programming. Experimental results show that the algorithm is fast and accurate.
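The iterate-and-trim loop described above can be sketched as follows. This is a strong simplification: the network hydraulics are collapsed to a single station capacity constraint, every well is trimmed equally per iteration (the paper trims only the unmet wells), and the well names and flow numbers are hypothetical:

```python
def allocate_injection(targets, station_capacity, step=5.0, max_iter=1000):
    """Reduce per-well injection targets stepwise until the pumping-station
    demand (inverse step) passes the feasibility check (forward step)."""
    plan = dict(targets)
    for _ in range(max_iter):
        demand = sum(plan.values())        # inverse calculation: station demand flow
        if demand <= station_capacity:     # forward check: all wells can be served
            return plan
        for well in plan:                  # trim targets by one step size and retry
            plan[well] = max(plan[well] - step, 0.0)
    return plan

# Hypothetical wells with demanded injection rates [m^3/h] and a 270 m^3/h station
wells = {"W1": 120.0, "W2": 100.0, "W3": 80.0}
plan = allocate_injection(wells, station_capacity=270.0)
```

As in the paper, the loop terminates as soon as the forward check succeeds, so it converges in a number of iterations proportional to the initial over-demand divided by the step size.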
Research of coincidence method for calculation model of the specific detector
Energy Technology Data Exchange (ETDEWEB)
Guangchun, Hu; Suping, Liu; Jian, Gong [China Academy of Engineering Physics, Mianyang (China). Inst. of Nuclear Physics and Chemistry
2003-07-01
The physical size of a specific detector is normally known, but some dimensions that determine the properties of the detector, such as the well diameter, well depth and dead region, are kept proprietary by the manufacturer. A surface source of even distribution and a sampling method for isotropic source-particle transport have been established with the Monte Carlo method, and the gamma-ray response spectrum for a ¹⁵²Eu surface source has been calculated. The experiment was performed under the same conditions. Calculation and experiment results are compared using the relative efficiency coincidence method and the spectral similarity coincidence method. Based on this comparison, the detector model is revised repeatedly to determine the calculation model of the detector and to calculate the detector efficiency and spectra. (authors)
International Nuclear Information System (INIS)
Mueller, P.
1995-01-01
This talk describes updates to the following FRMAC publications concerning radiation emergencies: Monitoring and Analysis Manual; Evaluation and Assessment Manual; Handshake Series (biannual), including exercises participated in; Environmental Data and Instrument Transmission System (EDITS); Plume in a Box, with all radiological data stored on a hand-held computer; and courses given.
Olexová, Lucia; Talarovičová, Alžbeta; Lewis-Evans, Ben; Borbélyová, Veronika; Kršková, Lucia
2012-12-01
Research on autism has been gaining more and more attention. However, its aetiology is not entirely known, and several factors are thought to contribute to the development of this neurodevelopmental disorder. These potential contributing factors range from genetic heritability to environmental effects. A significant number of reviews have already been published on different aspects of autism research, including the use of animal models to help expand current knowledge of its aetiology. However, the diverse range of symptoms and possible causes of autism has resulted in an equally wide variety of animal models of autism. In this update article we focus only on animal models with neurobehavioural characteristics of the social deficits related to autism, and present an overview of animal models with alterations in brain regions, neurotransmitters, or hormones that are involved in decreased sociability. Copyright © 2012 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Townsend, Molly T; Sarigul-Klijn, Nesrin
2016-01-01
Simplified material models are commonly used in computational simulation of biological soft tissue, both as an approximation of the complicated material response and to minimize computational resources. However, the simulation of complex loadings, such as long-duration tissue swelling, necessitates complex models that are not easy to formulate. This paper offers a comprehensive updated Lagrangian formulation procedure for various non-linear material models for the finite element analysis of biological soft tissues, including definitions of the Cauchy stress and the spatial tangential stiffness. The relationships between water content, osmotic pressure, ionic concentration, and the pore-pressure stress of the tissue are discussed, along with the merits of these models and their applications.
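The kind of Cauchy-stress definition the paper formulates can be illustrated in miniature by the closed-form uniaxial result for an incompressible neo-Hookean solid, a common baseline model for soft tissue. The shear modulus used in the example is an assumed, illustrative parameter, not one from the paper.

```python
def neo_hookean_uniaxial_cauchy(stretch, mu):
    """Cauchy stress for an incompressible neo-Hookean material under
    uniaxial stretch: sigma = mu * (lambda**2 - 1/lambda).

    stretch : uniaxial stretch ratio lambda (dimensionless)
    mu      : shear modulus (illustrative value, in Pa)
    """
    return mu * (stretch ** 2 - 1.0 / stretch)
```

At a stretch of 1 the stress vanishes, as required of an undeformed configuration; tension (stretch > 1) gives positive stress and compression gives negative stress.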
Calculations of higher twist distribution functions in the MIT bag model
International Nuclear Information System (INIS)
Signal, A.I.
1997-01-01
We calculate all twist-2, -3 and -4 parton distribution functions involving two-quark correlations using the wave function of the MIT bag model. The distributions are evolved up to experimental scales and combined to give the various nucleon structure functions. Comparisons with recent experimental data on higher twist structure functions at moderate values of Q² give good agreement with the calculated structure functions. (orig.)
Kanematsu, Yusuke; Tachikawa, Masanori
2015-05-21
Multicomponent quantum mechanical (MC_QM) calculations with the polarizable continuum model (PCM) have been tested against liquid-phase ¹H NMR chemical shifts for a test set of 80 molecules. Improvement over conventional quantum mechanical calculations was achieved with MC_QM. The advantage of the multicomponent scheme can be attributed to the geometrical change from the equilibrium geometry caused by incorporating the hydrogen nuclear quantum effect, while that of PCM can be attributed to the change of the electronic structure due to polarization by solvent effects.
QEDMOD: Fortran program for calculating the model Lamb-shift operator
Shabaev, V. M.; Tupitsyn, I. I.; Yerokhin, V. A.
2018-02-01
We present the Fortran package QEDMOD for computing the model QED operator hQED, which can be used to account for the Lamb shift in accurate atomic-structure calculations. The package routines calculate the matrix elements of hQED with user-specified one-electron wave functions. The operator can be used to calculate the Lamb shift in many-electron atomic systems with a typical accuracy of a few percent, either by evaluating the matrix element of hQED with the many-electron wave function, or by adding hQED to the Dirac-Coulomb-Breit Hamiltonian.
HgTe-CdTe phase diagrams calculation by RAS model
International Nuclear Information System (INIS)
Hady, A.A.A.
1986-11-01
The Regular Associated Solutions (RAS) model for binary solutions, extended to ternary solutions, was used for mercury-cadmium-tellurium phase diagram calculations. The dissociation parameters are treated as functions of temperature and are independent of composition. The ratio of mole fractions has a weak dependence on temperature and is not neglected. The calculated binary liquidus temperatures were fitted to the experimental ones to obtain the best values of the parameters used to calculate the HgTe-CdTe phase diagrams. (author)
FEM Updating of the Heritage Court Building Structure
DEFF Research Database (Denmark)
Ventura, C. E.; Brincker, Rune; Dascotte, E.
2001-01-01
This paper describes results of a model updating study conducted on a 15-storey reinforced concrete shear core building. The output-only modal identification results obtained from ambient vibration measurements of the building were used to update a finite element model of the structure. The starting model of the structure was developed from the information provided in the design documentation of the building. Different parameters of the model were then modified using an automated procedure to improve the correlation between measured and calculated modal parameters. Careful attention...
Weather Correlations to Calculate Infiltration Rates for U. S. Commercial Building Energy Models.
Ng, Lisa C; Quiles, Nelson Ojeda; Dols, W Stuart; Emmerich, Steven J
2018-01-01
As building envelope performance improves, a greater percentage of building energy loss will occur through envelope leakage. Although the energy impacts of infiltration on building energy use can be significant, current energy simulation software has limited ability to accurately account for envelope infiltration and the impacts of improved airtightness. This paper extends previous work by the National Institute of Standards and Technology that developed a set of EnergyPlus inputs for modeling infiltration in several commercial reference buildings using Chicago weather. The current work includes cities in seven additional climate zones and uses the updated versions of the prototype commercial building types developed by the Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy. Comparisons were made between the predicted infiltration rates using three representations of the commercial building types: PNNL EnergyPlus models, CONTAM models, and EnergyPlus models using the infiltration inputs developed in this paper. The newly developed infiltration inputs in EnergyPlus yielded average annual increases of 3 % and 8 % in HVAC electrical and gas use, respectively, over the original infiltration inputs in the PNNL EnergyPlus models. When analyzing the benefits of building envelope airtightening, greater HVAC energy savings were predicted using the newly developed infiltration inputs than using the original inputs. These results indicate that the effects of infiltration on HVAC energy use can be significant and that infiltration can and should be better accounted for in whole-building energy models.
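EnergyPlus expresses zone infiltration inputs of the kind developed here through a design flow rate modified by temperature and wind terms, Q = I_design · F_schedule · (A + B·|ΔT| + C·v + D·v²). The sketch below uses that documented functional form; the coefficient values shown are generic placeholders (a DOE-2-style wind-only set), not the fitted inputs from this paper.

```python
def infiltration_flow(i_design, delta_t, wind_speed,
                      A=0.0, B=0.0, C=0.224, D=0.0, schedule=1.0):
    """EnergyPlus-style ZoneInfiltration:DesignFlowRate relation (sketch).

    i_design   : design infiltration flow rate (m^3/s)
    delta_t    : zone-to-outdoor temperature difference (K)
    wind_speed : outdoor wind speed (m/s)
    A..D       : placeholder correlation coefficients, not this paper's inputs
    """
    return i_design * schedule * (A + B * abs(delta_t)
                                  + C * wind_speed + D * wind_speed ** 2)
```

With A = 1 and B = D = 0, the flow reduces to the design rate scaled linearly by wind speed, which makes the role of each coefficient easy to check.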
OSATE Overview & Community Updates
2015-02-15
Julien Delange. Covers OSATE's main language capabilities, modeling patterns and model samples for beginners, Error-Model (EMV2) examples and model constructs, and a demonstration of tools.
SITE-94. Adaptation of mechanistic sorption models for performance assessment calculations
International Nuclear Information System (INIS)
Arthur, R.C.
1996-10-01
Sorption is considered in most predictive models of radionuclide transport in geologic systems. Most models simulate the effects of sorption in terms of empirical parameters, which can be criticized because the data are only strictly valid under the experimental conditions at which they were measured. An alternative is to adopt a more mechanistic modeling framework based on recent advances in understanding the electrical properties of oxide mineral-water interfaces. It has recently been proposed that these 'surface-complexation' models may be directly applicable to natural systems. A possible approach for adapting mechanistic sorption models for use in performance assessments, using this 'surface-film' concept, is described in this report. Surface-acidity parameters in the Generalized Two-Layer surface-complexation model are combined with surface-complexation constants for Np(V) sorption on hydrous ferric oxide to derive an analytical model enabling direct calculation of the corresponding intrinsic distribution coefficients as a function of pH and of Ca²⁺, Cl⁻, and HCO₃⁻ concentrations. The surface-film concept is then used to calculate whole-rock distribution coefficients for Np(V) sorption by altered granitic rocks coexisting with a hypothetical, oxidized Aespoe groundwater. The calculated results suggest that the distribution coefficients for Np adsorption on these rocks could range from 10 to 100 ml/g. Independent estimates of Kd for Np sorption in similar systems, based on an extensive review of experimental data, are consistent with, though slightly conservative with respect to, the calculated values. 31 refs
International Nuclear Information System (INIS)
Guimaraes, A.C.F.; Goes, A.G.
1988-12-01
The ''RASO'', an activity and operational situation report for Angra 1, is transmitted daily by phone to CNEN by inspectors stationed at the Angra 1 site, and shows the parameters describing the operational status of the nuclear power plant. For the period 26.10.88 to 04.12.88, a discretized power series was determined at greater time intervals than the original series, and new concentrations were then calculated using the ORIGEN2 computer code. (author) [pt
A pencil beam dose calculation model for CyberKnife system
Energy Technology Data Exchange (ETDEWEB)
Liang, Bin; Li, Yongbao; Liu, Bo; Zhou, Fugen [Image Processing Center, Beihang University, Beijing 100191 (China); Xu, Shouping [Department of Radiation Oncology, PLA General Hospital, Beijing 100853 (China); Wu, Qiuwen, E-mail: Qiuwen.Wu@Duke.edu [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States)
2016-10-15
Purpose: The CyberKnife system was initially equipped with fixed circular cones for stereotactic radiosurgery. Two dose calculation algorithms, Ray-Tracing and Monte Carlo, are available in the supplied treatment planning system. A multileaf collimator system was recently introduced in the latest generation of the system, enabling arbitrarily shaped treatment fields. The purpose of this study is to develop a model-based dose calculation algorithm to better handle the lateral scatter in irregularly shaped small fields for the CyberKnife system. Methods: A pencil beam dose calculation algorithm widely used in linac-based treatment planning systems was modified. The kernel parameters and intensity profile were systematically determined by fitting to the commissioning data. The model was tuned using only a subset of measured data (4 out of 12 cones) and applied to all fixed circular cones for evaluation. The root mean square (RMS) of the difference between the measured and calculated tissue-phantom ratios (TPRs) and off-center ratios (OCRs) was compared. Three cone size correction techniques were developed to better fit the OCRs in the penumbra region, and were further evaluated against the output factors (OFs). The pencil beam model was further validated against measurement data on the variable dodecagon-shaped Iris collimators and a half-beam-blocked field. Comparison with the Ray-Tracing and Monte Carlo methods was also performed on a lung SBRT case. Results: The RMS between the measured and calculated TPRs is 0.7% averaged over all cones, with the descending region at 0.5%. The RMSs of the OCRs in the infield and outfield regions are both 0.5%. The distance to agreement (DTA) in the OCR penumbra region is 0.2 mm. All three cone size correction models achieve the same improvement in OCR agreement, with the effective source shift model (SSM) preferred due to its ability to predict more accurately the OF variations with the source-to-axis distance (SAD). In noncircular field validation
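The core of any pencil beam algorithm is a superposition of lateral kernels over the open field. The sketch below uses a single normalized 2D Gaussian kernel and a flat TPR factor as simplifying assumptions; the paper's model fits multi-parameter kernels and intensity profiles to commissioning data.

```python
import math

def pencil_beam_dose(point, beamlets, sigma, tpr=1.0):
    """Toy pencil-beam superposition: dose at `point` is the TPR-weighted
    sum of Gaussian lateral kernels centred on each open beamlet.

    point    : (x, y) calculation position in the beam's-eye-view plane (cm)
    beamlets : iterable of (bx, by, weight) for the open field elements
    sigma    : assumed Gaussian kernel width (cm); illustrative only
    """
    x, y = point
    norm = 1.0 / (2.0 * math.pi * sigma ** 2)   # unit-integral 2D Gaussian
    dose = 0.0
    for bx, by, w in beamlets:
        r2 = (x - bx) ** 2 + (y - by) ** 2
        dose += w * norm * math.exp(-r2 / (2.0 * sigma ** 2))
    return tpr * dose
```

Because the kernel is normalized, an irregular field is represented simply by which beamlets are open and their fluence weights, which is how such a model handles MLC-shaped fields.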
International Nuclear Information System (INIS)
Cliffe, K.A.; Morris, S.T.; Porter, J.D.
1998-05-01
NAMMU is a computer program for modelling groundwater flow and transport through porous media. This document provides an overview of the use of the program for geosphere modelling in performance assessment calculations and gives a detailed description of the program itself. The aim of the document is to give an indication of the grounds for having confidence in NAMMU as a performance assessment tool. In order to achieve this, the following topics are discussed. The basic premises of the assessment approach and the purpose and nature of the calculations that can be undertaken using NAMMU are outlined. The concepts of the validation of models and the considerations that can lead to increased confidence in models are described. The physical processes that can be modelled using NAMMU and the mathematical models and numerical techniques that are used to represent them are discussed in some detail. Finally, the grounds that would lead one to have confidence that NAMMU is fit for purpose are summarised.
International Nuclear Information System (INIS)
Webb, G.A.M.; Grimwood, P.D.
1976-12-01
This report describes an oceanographic model developed for use in calculating the capacity of the oceans to accept radioactive wastes. One component is a relatively short-term diffusion model based on that described in an earlier report (Webb et al., NRPB-R14 (1973)), but generalised to some extent. Another component is a compartment model used to calculate long-term widespread water concentrations; this addition overcomes some of the shortcomings of the earlier diffusion model. Incorporation of radioactivity into deep ocean sediments is included in this long-term model as a removal mechanism. The combined model is used to provide a conservative (safe) estimate of the maximum concentrations of radioactivity in water as a function of time after the start of a continuous disposal operation. These results can then be used to assess the limiting capacity of an ocean to accept radioactive waste. (author)
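A long-term compartment model of this kind reduces to coupled first-order balance equations. The sketch below uses two boxes (surface and deep ocean) with sediment uptake and radioactive decay as removal terms; all rate constants, volumes, and the two-box layout are illustrative assumptions, not the report's calibrated structure.

```python
def ocean_boxes(release_rate, k12, k21, k_sed, lam, v1, v2, years, dt=0.1):
    """Two-compartment ocean model (sketch): activity released into a
    surface box (1) exchanges with a deep box (2); sediment uptake (k_sed)
    and radioactive decay (lam) remove activity. Forward-Euler integration.

    release_rate : continuous release into box 1 (Bq/yr)
    k12, k21     : inter-box exchange rate constants (1/yr)
    v1, v2       : box volumes (m^3)
    Returns (c1, c2) concentrations in Bq/m^3 after `years`.
    """
    a1 = a2 = 0.0                       # box inventories (Bq)
    for _ in range(int(years / dt)):
        da1 = release_rate - (k12 + lam) * a1 + k21 * a2
        da2 = k12 * a1 - (k21 + k_sed + lam) * a2
        a1 += da1 * dt
        a2 += da2 * dt
    return a1 / v1, a2 / v2
```

Run for increasing times, the concentrations rise monotonically toward a steady state set by the removal terms, which is the "maximum concentration as a function of time" behaviour the report uses for capacity estimates.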
International Nuclear Information System (INIS)
Ekberg, Christian; Oedegaard Jensen, Arvid
2004-04-01
Uncertainty and sensitivity analysis is becoming more and more important for testing the reliability of computer predictions. Solubility estimations play an important role for, e.g., underground repositories for nuclear waste and other hazardous materials, as well as for simple dissolution problems in general or industrial chemistry applications. The calculated solubility of a solid phase depends on several input data, e.g. the stability constants of the complexes formed in solution, the enthalpies of reaction for the formation of these complexes, and the content of other elements in the water used for the dissolution. These input data are determined with more or less accuracy, and thus the results of the calculations are uncertain. To investigate the effects of these uncertainties, several computer programs were developed in the 1990s, e.g. SENVAR, MINVAR and UNCCON. Of these, SENVAR and UNCCON now exist as Windows programs based on a newer speciation code. In this report we explain how the codes work and give some test cases as handling instructions. The results are naturally similar to the previous ones, but the advantages are easier handling and more stable solubility calculations. With these improvements the programs presented here will be more publicly accessible.
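The basic propagation idea behind such codes can be shown with a deliberately tiny example: sample an uncertain equilibrium constant and collect statistics on the resulting solubility. The 1:1 solid MX(s) ⇌ M⁺ + X⁻ with S = √Ksp is an assumed toy chemistry; the real codes handle full speciation.

```python
import math
import random

def solubility_uncertainty(logk_mean, logk_sd, n=10000, seed=1):
    """Monte Carlo propagation (sketch): sample log Ksp from a normal
    distribution and return mean and standard deviation of the solubility
    S = sqrt(Ksp) of a hypothetical 1:1 solid MX(s). Toy chemistry only."""
    rng = random.Random(seed)
    samples = [math.sqrt(10.0 ** rng.gauss(logk_mean, logk_sd))
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var)
```

With zero input uncertainty the output spread vanishes, which is a useful sanity check before feeding in realistic stability-constant uncertainties.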
International Nuclear Information System (INIS)
Schick, W.C. Jr.; Milani, S.; Duncombe, E.
1980-03-01
A model has been devised to incorporate into the thermal feedback procedure of the PDQ few-group diffusion theory computer program the explicit calculation of depletion- and temperature-dependent fuel-rod shrinkage and swelling at each mesh point. The model determines the effect on reactivity of the change in hydrogen concentration caused by the variation in coolant channel area as the rods contract and expand. The calculation of fuel temperature, and hence of Doppler-broadened cross sections, is improved by correcting the heat transfer coefficient of the fuel-clad gap for the effects of clad creep, fuel densification and swelling, and the release of fission-product gases into the gap. An approximate calculation of clad stress is also included in the model
International Nuclear Information System (INIS)
Allam, Kh. A.
2017-01-01
In this work, a new methodology based on Monte Carlo simulation is developed for external dose calculation in tunnels and mines. The tunnel is modeled as a cylindrical shape of finite thickness with an entrance, and with or without an exit. A photon transport model was applied for the exposure dose calculations. New software based on the Monte Carlo solution was designed and programmed in the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3-19.9%. The variation of the specific external dose rate with position in the tunnel and with the building material density and composition was studied. The new model is more flexible for calculating the real external dose in any cylindrical tunnel structure. (authors)
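A stripped-down version of such a calculation estimates the uncollided photon fluence on the tunnel axis from an emitting cylindrical wall by Monte Carlo sampling of emission points. The wall source strength, air attenuation coefficient, and geometry below are illustrative assumptions; the actual model also treats wall thickness, material composition, and the exit.

```python
import math
import random

def tunnel_fluence(length, radius, source_per_area, mu, det_z,
                   n=20000, seed=7):
    """Monte Carlo sketch of uncollided photon fluence at a point on the
    axis of a cylindrical tunnel whose wall emits isotropically.

    length, radius  : tunnel dimensions (m); detector at (0, 0, det_z)
    source_per_area : wall emission rate (photons/m^2/s), illustrative
    mu              : linear attenuation coefficient of the air (1/m)
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.uniform(0.0, length)      # axial position of wall emitter
        # By symmetry, every wall point at height z is the same distance
        # from the on-axis detector, so sampling z alone is unbiased here.
        d = math.hypot(radius, z - det_z)
        total += math.exp(-mu * d) / (4.0 * math.pi * d * d)
    area = 2.0 * math.pi * radius * length
    return source_per_area * area * total / n
```

Increasing the attenuation coefficient lowers the estimate for the same sampled emission points, a quick physical consistency check on the tally.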
Development of a model for the primary system CAREM reactor's stationary thermohydraulic calculation
International Nuclear Information System (INIS)
Gaspar, C.; Abbate, P.
1990-01-01
The ESCAREM program, oriented to the stationary thermohydraulic calculation of CAREM reactors, is presented. Because CAREM differs from the models for BWR (Boiling Water Reactor)/PWR (Pressurized Water Reactor) reactors, it was decided to develop a suitable model which allows calculating: a) whether the steam generator design is adequate to transfer the required power; b) the circulation flow that occurs in the primary system; c) the temperature at the inlet (cold branch); and d) the contribution of each component to the pressure drop in the circulation loop. Results were verified against manual calculations and alternative numerical models. Experimental validation at the Thermohydraulic Essays Laboratory is suggested. A series of parametric analyses of the CAREM 25 reactor is presented, showing operating conditions at different power levels as well as the influence of different design aspects. (Author) [es
International Nuclear Information System (INIS)
Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.; Sanuki, T.
2007-01-01
Using the 'modified DPMJET-III' model explained in the previous paper [T. Sanuki et al., preceding Article, Phys. Rev. D 75, 043005 (2007)], we calculate the atmospheric neutrino flux. The calculation scheme is almost the same as in HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004)], but the use of the 'virtual detector' is improved to reduce the error due to it. We then study the uncertainty of the calculated atmospheric neutrino flux, summarizing the uncertainties of the individual components of the simulation. The uncertainty of kaon production in the interaction model is estimated using other interaction models, FLUKA'97 and FRITIOF 7.02, modified so that they also reproduce the atmospheric muon flux data correctly. The uncertainties of the flux ratio and the zenith-angle dependence of the atmospheric neutrino flux are also studied
Numerical calculation models of the elastoplastic response of a structure under seismic action
International Nuclear Information System (INIS)
Edjtemai, Nima.
1982-06-01
Two digital calculation models developed in this work have made it possible to analyze the exact dynamic behaviour of ductile structures with one or several degrees of freedom during earthquakes. With the first model, response spectra were built in the linear and non-linear domains for different damping and ductility values and two types of seismic accelerograms. The comparative study of these spectra made it possible to check the validity of certain hypotheses suggested for the construction of elastoplastic spectra from the corresponding linear spectra. A simplified method of non-linear seismic calculation, based on modal analysis and elastoplastic response spectra, was then applied to structures with varying numbers of degrees of freedom. The results obtained in this manner were compared with those provided by an exact calculation using the second digital model developed by us [fr
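The single-degree-of-freedom elastoplastic response that underlies such spectra can be sketched with an explicit time-stepping scheme and a bilinear (elastic-perfectly-plastic) spring. The semi-implicit Euler integrator and all parameter values below are illustrative choices, not the study's numerical models.

```python
def elastoplastic_sdof(ground_acc, dt, m, k, c, fy):
    """Elastic-perfectly-plastic SDOF oscillator under base acceleration,
    integrated with a simple semi-implicit Euler scheme (sketch).

    ground_acc : list of ground acceleration samples (m/s^2)
    dt         : time step (s); m, k, c : mass, stiffness, damping
    fy         : yield force of the spring (N)
    Returns the relative-displacement history.
    """
    u = v = up = 0.0               # displacement, velocity, plastic offset
    out = []
    for ag in ground_acc:
        fs = k * (u - up)          # elastic trial restoring force
        if fs > fy:                # yielding: cap force, update offset
            up, fs = u - fy / k, fy
        elif fs < -fy:
            up, fs = u + fy / k, -fy
        a = (-m * ag - c * v - fs) / m
        v += a * dt
        u += v * dt
        out.append(u)
    return out
```

Sweeping the yield force fy from very large (purely elastic) downward reproduces the transition from linear to elastoplastic spectra that the study compares.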
Fission product model for lattice calculation of high conversion boiling water reactor
International Nuclear Information System (INIS)
Iijima, S.; Yoshida, T.; Yamamoto, T.
1988-01-01
A high precision fission product model for boiling water reactor (BWR) lattice calculation was developed, which consists of 45 nuclides to be treated explicitly and one nonsaturating pseudo nuclide. This model is applied to a high conversion BWR lattice calculation code. From a study based on a three-energy-group calculation of fission product poisoning due to full fission products and explicitly treated nuclides, the multigroup capture cross sections and the effective fission yields of the pseudo nuclide are determined, which do not depend on fuel types or reactor operating conditions for a good approximation. Apart from nuclear data uncertainties, the model and the derived pseudo nuclide constants would predict the fission product reactivity within an error of 0.1% Δk at high burnup
CLEAR: a model for the calculation of evacuation-time estimates in Emergency Planning Zones
International Nuclear Information System (INIS)
McLean, M.A.; Moeller, M.P.; Desrosiers, A.E.
1983-01-01
This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response), which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. CLEAR can calculate realistic evacuation time estimates using site-specific data and can identify troublesome areas within an Emergency Planning Zone
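The velocity-versus-density rule applied on each road segment can be illustrated with a Greenshields-type linear relation, a common stand-in for such models; the abstract does not give CLEAR's actual functional form, and the free-flow speed and jam density below are assumed values.

```python
def segment_speed(density, v_free=90.0, jam_density=120.0):
    """Greenshields-style linear speed-density relation (illustrative).

    density     : vehicles per km on the road segment
    v_free      : assumed free-flow speed (km/h)
    jam_density : assumed density at which traffic stops (veh/km)
    """
    if density >= jam_density:
        return 0.0                 # queue at standstill
    return v_free * (1.0 - density / jam_density)
```

Speed falls linearly from free flow at zero density to zero at the jam density, so segment travel time, and hence the evacuation-time estimate, grows sharply as segments load up.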
Weck, Philippe F; Kim, Eunja; Wang, Yifeng; Kruichak, Jessica N; Mills, Melissa M; Matteo, Edward N; Pellenq, Roland J-M
2017-08-01
Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures, and these models have been widely used to predict gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we show here that a large gap may still remain between the existing model representations and actual kerogen structures, calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for the six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from the Mancos, Woodford, and Marcellus formations, representing a wide range of kerogen origins and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features of the structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR) fingerprints for tracing kerogen evolution.