WorldWideScience

Sample records for models updated calculations

  1. The Updated BaSTI Stellar Evolution Models and Isochrones. I. Solar-scaled Calculations

    Science.gov (United States)

    Hidalgo, Sebastian L.; Pietrinferni, Adriano; Cassisi, Santi; Salaris, Maurizio; Mucciarelli, Alessio; Savino, Alessandro; Aparicio, Antonio; Silva Aguirre, Victor; Verma, Kuldeep

    2018-04-01

    We present an updated release of the BaSTI (a Bag of Stellar Tracks and Isochrones) stellar model and isochrone library for a solar-scaled heavy element distribution. The main input physics that have been changed from the previous BaSTI release include the solar metal mixture, electron conduction opacities, a few nuclear reaction rates, bolometric corrections, and the treatment of the overshooting efficiency for shrinking convective cores. The new model calculations cover a mass range between 0.1 and 15 M⊙, 22 initial chemical compositions between [Fe/H] = -3.20 and +0.45, with helium to metal enrichment ratio dY/dZ = 1.31. The isochrones cover an age range between 20 Myr and 14.5 Gyr, consistently take into account the pre-main-sequence phase, and have been translated to a large number of popular photometric systems. Asteroseismic properties of the theoretical models have also been calculated. We compare our isochrones with results from independent databases and with several sets of observations to test the accuracy of the calculations. All stellar evolution tracks, asteroseismic properties, and isochrones are made available through a dedicated web site.

  2. The Updated BaSTI Stellar Evolution Models and Isochrones. I. Solar-scaled Calculations

    DEFF Research Database (Denmark)

    Hidalgo, Sebastian L.; Pietrinferni, Adriano; Cassisi, Santi

    2018-01-01

    We present an updated release of the BaSTI (a Bag of Stellar Tracks and Isochrones) stellar model and isochrone library for a solar-scaled heavy element distribution. The main input physics that have been changed from the previous BaSTI release include the solar metal mixture, electron conduction...

  3. New calculation of derived limits for the 1960 radiation protection guides reflecting updated models for dosimetry and biological transport

    International Nuclear Information System (INIS)

    Eckerman, K.F.; Watson, S.B.; Nelson, C.B.; Nelson, D.R.; Richardson, A.C.B.; Sullivan, R.E.

    1984-12-01

    This report presents revised values for the radioactivity concentration guides (RCGs), based on the 1960 primary radiation protection guides (RPGs) for occupational exposure (FRC 1960) and for underground uranium miners (EPA 1971a) using the updated dosimetric models developed to prepare ICRP Publication 30. Unlike the derived quantities presented in Publication 30, which are based on limitation of the weighted sum of doses to all irradiated tissues, these RCGs are based on the ''critical organ'' approach of the 1960 guidance, which was a single limit for the most critically irradiated organ or tissue. This report provides revised guides for the 1960 Federal guidance which are consistent with current dosimetric relationships. 2 figs., 4 tabs

  4. Model validation: Correlation for updating

    Indian Academy of Sciences (India)

    Department of Mechanical Engineering, Imperial College of Science, ... If we are unable to obtain a satisfactory degree of correlation between the initial theoretical model and the test data, then it is extremely unlikely that any form of model updating (correcting the model to match the test data) will succeed. Thus, a successful ...

  5. Updates to In-Line Calculation of Photolysis Rates

    Science.gov (United States)

    How photolysis rates are calculated affects ozone and aerosol concentrations predicted by the CMAQ model and the model's run-time. The standard configuration of CMAQ uses the inline option that calculates photolysis rates by solving the radiative transfer equation for the needed ...
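
    The quantity being computed can be sketched as the standard photolysis-rate integral J = ∫ F(λ) σ(λ) φ(λ) dλ. The spectra below are made up for illustration; the actual CMAQ inline option solves radiative transfer to obtain the actinic flux F(λ).

```python
import numpy as np

# Photolysis rate J = integral over wavelength of
#   actinic flux F(lambda) * absorption cross-section sigma(lambda)
#   * quantum yield phi(lambda).
# All spectral values below are illustrative, not real CMAQ data.
wavelength_nm = np.linspace(290, 420, 131)                    # 1 nm grid
flux = 1.0e14 * np.ones_like(wavelength_nm)                   # photons cm^-2 s^-1 nm^-1
sigma = 1.0e-19 * np.exp(-(wavelength_nm - 300.0) / 50.0)     # cm^2
phi = np.where(wavelength_nm < 400.0, 1.0, 0.0)               # dimensionless

# Trapezoidal integration over wavelength
integrand = flux * sigma * phi
dl = np.diff(wavelength_nm)
j_rate = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * dl)  # s^-1
print(f"J = {j_rate:.3e} s^-1")
```

With these toy spectra the rate comes out on the order of 1e-4 s^-1, a plausible magnitude for tropospheric photolysis.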

  6. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part II: Benchmark comparisons of PUMA core parameters with MCNP5 and improvements due to a simple cell heterogeneity correction

    International Nuclear Information System (INIS)

    Grant, C.; Mollerach, R.; Leszczynski, F.; Serra, O.; Marconi, J.; Fink, J.

    2006-01-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, whose construction had progressed slowly over the preceding ten years. Atucha-II is a 745 MWe nuclear station of German (Siemens) design located in Argentina, moderated and cooled with heavy water. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO₂ rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons between the PUMA reactor code and MCNP5 for core parameters of a slightly idealized model of the Atucha-I core. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II, and has some experimental data available. To validate the new models, benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which significantly improves the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)

  7. Model Updating Nonlinear System Identification Toolbox Project

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology (ZONA) proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology that utilizes flight data with...

  8. ON-LINE CALCULATOR: FORWARD CALCULATION JOHNSON ETTINGER MODEL

    Science.gov (United States)

    On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...

  9. SAM Photovoltaic Model Technical Reference 2016 Update

    Energy Technology Data Exchange (ETDEWEB)

    Gilman, Paul [National Renewable Energy Laboratory (NREL), Golden, CO (United States); DiOrio, Nicholas A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Freeman, Janine M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Janzou, Steven [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dobos, Aron [No longer NREL employee; Ryberg, David [No longer NREL employee

    2018-03-19

    This manual describes the photovoltaic performance model in the System Advisor Model (SAM) software, Version 2016.3.14 Revision 4 (SSC Version 160). It is an update to the 2015 edition of the manual, which describes the photovoltaic model in SAM 2015.1.30 (SSC 41). This new edition includes corrections of errors in the 2015 edition and descriptions of new features introduced in SAM 2016.3.14, including: a 3D shade calculator; a battery storage model; DC power optimizer loss inputs; a snow loss model; a plane-of-array irradiance input option from the weather file; support for sub-hourly simulations; self-shading that works with all four subarrays and uses the same algorithm for fixed arrays and one-axis tracking; a linear self-shading algorithm for thin-film modules; and loss percentages that replace derate factors. The photovoltaic performance model is one of the modules in the SAM Simulation Core (SSC), which is part of both SAM and the SAM SDK. SAM is a user-friendly desktop application for analysis of renewable energy projects. The SAM SDK (Software Development Kit) is for developers writing their own renewable energy analysis software based on SSC. This manual is written for users of both SAM and the SAM SDK who want to learn more about the details of SAM's photovoltaic model.

  10. MARMOT update for oxide fuel modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schwen, Daniel [Idaho National Lab. (INL), Idaho Falls, ID (United States); Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, Chao [Idaho National Lab. (INL), Idaho Falls, ID (United States); Aagesen, Larry [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ahmed, Karim [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, Wen [Idaho National Lab. (INL), Idaho Falls, ID (United States); Biner, Bulent [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bai, Xianming [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Tonks, Michael [Pennsylvania State Univ., University Park, PA (United States); Millett, Paul [Univ. of Arkansas, Fayetteville, AR (United States)

    2016-09-01

    This report summarizes the lower-length-scale research and development progress in FY16 at Idaho National Laboratory in developing mechanistic materials models for oxide fuels, in parallel with the development of the MARMOT code, which will be summarized in a separate report. This effort is a critical component of the microstructure-based fuel performance modeling approach supported by the Fuels Product Line in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. The progress can be classified into three categories: 1) development of materials models to be used in engineering-scale fuel performance modeling regarding the effect of lattice defects on thermal conductivity, 2) development of modeling capabilities for mesoscale fuel behaviors including stage-3 gas release, grain growth, high burn-up structure, fracture and creep, and 3) improved understanding of materials science by calculating the anisotropic grain boundary energies in UO$_2$ and obtaining thermodynamic data for solid fission products. Many of these topics are still under active development; they are updated in the report with an appropriate amount of detail. For some topics, separate reports are generated in parallel, as stated in the text. These accomplishments have led to a better understanding of fuel behaviors and an enhanced capability of the MOOSE-BISON-MARMOT toolkit.

  11. Model Updating Nonlinear System Identification Toolbox Project

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology by adopting the flight data with state-of-the-art...

  12. NODA for EPA's Updated Ozone Transport Modeling

    Science.gov (United States)

    Find EPA's NODA for the Updated Ozone Transport Modeling Data for the 2008 Ozone National Ambient Air Quality Standard (NAAQS) along with the Extension of the Public Comment Period on CSAPR for the 2008 NAAQS.

  13. Model parameter updating using Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Treml, C. A. (Christine A.); Ross, Timothy J.

    2004-01-01

    This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to that of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty, only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled cavity linac. The results are compared to experimental data.
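
    The core idea, a bounded parameter with no assumed prior statistics that is refined until the model output matches a reference feature, can be sketched in one dimension. Everything here is a hypothetical toy: a scalar parameter, a Gaussian feature-mismatch likelihood, and a quadratic black-box model; the paper itself builds a Bayesian network over many input/output pairs.

```python
import numpy as np

def update_parameter(model, theta_lo, theta_hi, y_ref, sigma=0.1, n_grid=201):
    """Grid-based Bayesian update of one bounded model parameter.

    Starts from a uniform prior on [theta_lo, theta_hi] (only bounds are
    assumed, as in the abstract) and reweights each candidate value by how
    well the black-box `model` reproduces the reference feature y_ref.
    """
    thetas = np.linspace(theta_lo, theta_hi, n_grid)
    prior = np.ones(n_grid) / n_grid
    outputs = np.array([model(t) for t in thetas])   # black-box evaluations
    likelihood = np.exp(-0.5 * ((outputs - y_ref) / sigma) ** 2)
    post = prior * likelihood
    post /= post.sum()                               # normalise the posterior
    return thetas, post

# Toy black-box model: the output feature is quadratic in the parameter.
model = lambda t: t ** 2
thetas, post = update_parameter(model, 0.0, 2.0, y_ref=1.0)
print(thetas[np.argmax(post)])  # posterior mode at t = 1
```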

  14. Development of a Software Tool for Calculating Transmission Line Parameters and Updating Related Databases

    Science.gov (United States)

    Xiu, Wanjing; Liao, Yuan

    2014-12-01

    Transmission lines are essential components of electric power grids. Diverse power system applications and simulation-based studies require transmission line parameters including series resistance, reactance, and shunt susceptance, and accurate parameters are pivotal in ensuring the accuracy of analyses and reliable system operation. Commercial software packages for performing power system studies usually have their own databases that store the power system model, including line parameters. When there is a physical system model change, the corresponding component in the database of the software packages needs to be modified. Manually updating line parameters is tedious and error-prone. This paper proposes a solution for streamlining the calculation of line parameters and the updating of their values in the respective software databases. The algorithms used for calculating the values of line parameters are described. The software developed for implementing the solution is described, and typical results are presented. The proposed solution was developed for a utility and has the potential to be put into use by other utilities.
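
    As an illustration of what such a tool computes, here is a sketch using the standard textbook formulas for a transposed three-phase overhead line. The geometry values are hypothetical, and real tools also handle conductor bundling, earth return, and skin effect.

```python
import math

def line_parameters(gmd_m, gmr_m, r_cond_m, r_ohm_per_km, f_hz=60.0):
    """Per-kilometre positive-sequence parameters of a transposed
    three-phase overhead line (illustrative textbook formulas only).

    gmd_m      -- geometric mean distance between phase conductors (m)
    gmr_m      -- conductor geometric mean radius (m), for inductance
    r_cond_m   -- physical conductor radius (m), for capacitance
    r_ohm_per_km -- AC series resistance, taken from conductor tables
    """
    mu0 = 4e-7 * math.pi
    eps0 = 8.854e-12
    # Series inductance (H/m) and reactance (ohm/km)
    L = (mu0 / (2 * math.pi)) * math.log(gmd_m / gmr_m)
    x_ohm_per_km = 2 * math.pi * f_hz * L * 1000.0
    # Shunt capacitance (F/m) and susceptance (S/km)
    C = 2 * math.pi * eps0 / math.log(gmd_m / r_cond_m)
    b_s_per_km = 2 * math.pi * f_hz * C * 1000.0
    return r_ohm_per_km, x_ohm_per_km, b_s_per_km

# Hypothetical 60 Hz line: 8 m phase spacing, 1.22 cm GMR, 1.5 cm radius
r, x, b = line_parameters(gmd_m=8.0, gmr_m=0.0122, r_cond_m=0.015,
                          r_ohm_per_km=0.05)
```

For this geometry the reactance comes out near 0.5 ohm/km and the susceptance a few microsiemens per km, typical orders of magnitude for overhead lines.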

  15. Updated Collisional Ionization Equilibrium Calculated for Optically Thin Plasmas

    Science.gov (United States)

    Savin, Daniel Wolf; Bryans, P.; Badnell, N. R.; Gorczyca, T. W.; Laming, J. M.; Mitthumsiri, W.

    2010-03-01

    Reliably interpreting spectra from electron-ionized cosmic plasmas requires accurate ionization balance calculations for the plasma in question. However, much of the atomic data needed for these calculations has not been generated using modern theoretical methods, and its reliability is often highly suspect. We have carried out state-of-the-art calculations of dielectronic recombination (DR) rate coefficients for the hydrogenic through Na-like ions of all elements from He to Zn as well as for Al-like to Ar-like ions of Fe. We have also carried out state-of-the-art radiative recombination (RR) rate coefficient calculations for the bare through Na-like ions of all elements from H to Zn. Using our data and the recommended electron impact ionization data of Dere (2007), we present improved collisional ionization equilibrium calculations (Bryans et al. 2006, 2009). We compare our calculated fractional ionic abundances using these data with those presented by Mazzotta et al. (1998) for all elements from H to Ni. This work is supported in part by the NASA APRA and SHP SR&T programs.
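
    The equilibrium itself reduces to a chain of rate-coefficient ratios: in steady state, ionization out of charge state i balances recombination from state i+1. A sketch under that assumption follows; the rate values are illustrative numbers, not the paper's data.

```python
import numpy as np

def ionization_balance(S, alpha):
    """Fractional ionic abundances in collisional ionization equilibrium.

    In equilibrium n_i * S_i = n_{i+1} * alpha_i, where S_i is the
    electron-impact ionization rate coefficient out of stage i and
    alpha_i the total (DR + RR) recombination rate coefficient into it.
    """
    ratios = np.asarray(S) / np.asarray(alpha)        # n_{i+1} / n_i
    n = np.concatenate(([1.0], np.cumprod(ratios)))   # relative populations
    return n / n.sum()                                # normalise to unity

# Toy two-stage ion: ionization twice as fast as recombination.
frac = ionization_balance(S=[2.0e-9], alpha=[1.0e-9])
print(frac)  # fractions 1/3 and 2/3
```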

  16. A Provenance Tracking Model for Data Updates

    Directory of Open Access Journals (Sweden)

    Gabriel Ciobanu

    2012-08-01

    Full Text Available For data-centric systems, provenance tracking is particularly important when the system is open and decentralised, such as the Web of Linked Data. In this paper, a concise but expressive calculus which models data updates is presented. The calculus is used to provide an operational semantics for a system where data and updates interact concurrently. The operational semantics of the calculus also tracks the provenance of data with respect to updates. This provides a new formal semantics extending provenance diagrams which takes into account the execution of processes in a concurrent setting. Moreover, a sound and complete model for the calculus based on ideals of series-parallel DAGs is provided. The notion of provenance introduced can be used as a subjective indicator of the quality of data in concurrent interacting systems.

  17. Updated Arkansas Global Rice Model

    OpenAIRE

    Wailes, Eric J.; Chavez, Eddie C.

    2010-01-01

    The Arkansas Global Rice Model is based on a multi-country statistical simulation and econometric framework. The model consists of six sub-regions: the U.S., South Asia, North Asia and the Middle East, the Americas, Africa, and Europe. Each region comprises several countries, and each country model has a supply sector, a demand sector, and trade, stocks, and price linkage equations. All equations used in this model were estimated using econometric procedures or identities. Est...

  18. Adjustment or updating of models

    Indian Academy of Sciences (India)

    While the model is defined in terms of these spatial parameters, .... (mode shapes defined at the n DOFs of a typical modal test in place of the complete N DOFs) .... In these expressions, N = the number of degrees of freedom in the model, while N1 and N2 are the numbers of mass and stiffness elements to be corrected ...

  19. An updated pH calculation tool for new challenges

    Energy Technology Data Exchange (ETDEWEB)

    Crolet, J.L. [Consultant, 36 Chemin Mirassou, 64140 Lons (France)

    2004-07-01

    The time evolution of the in-situ pH concept is summarised, as well as the past and present challenges of pH calculations. Since the beginning of such calculations on spreadsheets, tremendous progress in computer technology has progressively removed all of their past limitations. On the other hand, the development of artificial acetate buffering in standardized and non-standardized corrosion testing has raised quite a few new questions. In particular, a straightforward precautionary principle now requires limiting everything artificial to situations where it is really necessary and, consequently, seriously considering periodic pH readjustment as an alternative to useless or excessive artificial buffering, including in the case of over-acidification at ambient pressure through HCl addition only (e.g. SSC testing of martensitic stainless steels). These new challenges require a genuine 'pH engineering' for the design of corrosion testing protocols under CO₂ and H₂S partial pressures, at ambient pressure or in an autoclave. To this end, not only must a great many detailed pH data be delivered automatically to unskilled users, but this must be done in an experimental context which is most often new and much more complicated than before: e.g. pH adjustment of artificial buffers before saturation in the test gas and further pH evolution under acid gas pressure (pH shift before test beginning), and anticipation of the pH readjustment frequency from just a volume/surface ratio and an expected corrosion rate (pH drift during the test). Furthermore, in order to be really useful and reliable, such numerous pH data also have to be well understood. Therefore, their origin, significance and parametric sensitivity are backed up and explained through three self-explanatory graphical illustrations: 1. an 'anion - pH' nomogram shows the pH dependence of all the variable ions, H⁺, HCO₃⁻, HS⁻, Ac⁻ (and
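
    The kind of calculation involved can be sketched for the simplest case: pure water equilibrated with a CO₂ partial pressure, using typical 25 °C constants. The constants and the bisection approach are illustrative assumptions, not the paper's tool, which also handles H₂S, acetate buffers, and brine chemistry.

```python
import math

def ph_under_co2(p_co2_bar, kh=0.034, k1=4.45e-7, kw=1.0e-14):
    """pH of pure water equilibrated with a CO2 partial pressure.

    Henry's law gives [CO2(aq)] = kh * pCO2 (mol/L/bar); k1 is the first
    dissociation constant of carbonic acid and kw the water ion product,
    all typical 25 C values. The charge balance [H+] = [HCO3-] + [OH-]
    is solved by bisection in log space.
    """
    co2_aq = kh * p_co2_bar

    def charge_imbalance(h):
        hco3 = k1 * co2_aq / h
        oh = kw / h
        return h - hco3 - oh          # monotonically increasing in h

    lo, hi = 1e-12, 1.0
    for _ in range(100):
        mid = math.sqrt(lo * hi)      # geometric midpoint
        if charge_imbalance(mid) > 0:
            hi = mid
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))

print(round(ph_under_co2(1.0), 2))    # roughly 3.9 for 1 bar of CO2
```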

  20. OSPREY Model Development Status Update

    Energy Technology Data Exchange (ETDEWEB)

    Veronica J Rutledge

    2014-04-01

    During the processing of used nuclear fuel, volatile radionuclides will be discharged to the atmosphere if no recovery processes are in place to limit their release. The volatile radionuclides of concern are ³H, ¹⁴C, ⁸⁵Kr, and ¹²⁹I. Methods are being developed, via adsorption and absorption unit operations, to capture these radionuclides. It is necessary to model these unit operations to aid in the evaluation of technologies and in the future development of an advanced used nuclear fuel processing plant. A collaboration has been formed between Fuel Cycle Research and Development Offgas Sigma Team member INL and a NEUP grant including ORNL, Syracuse University, and Georgia Institute of Technology to develop off-gas models and support off-gas research. Georgia Institute of Technology is developing a fundamental-level model to describe the equilibrium and kinetics of the adsorption process, which is to be integrated with OSPREY. This report discusses the progress made on expanding OSPREY to handle multiple components and on the integration of macroscale and microscale models. Also included in this report is a brief OSPREY user guide.

  1. Model validation: Correlation for updating

    Indian Academy of Sciences (India)

    of refining the theoretical model which will be used for the design optimisation process. There are many different names given to the tasks involved in this refinement. .... slightly from the ideal line but in a systematic rather than a random fashion as this situation suggests that there is a specific characteristic responsible for the ...

  2. Adjustment or updating of models

    Indian Academy of Sciences (India)

    Department of Mechanical Engineering, Imperial College of Science, .... It is first necessary to decide upon the level of accuracy, or correctness which is sought from the adjustment of the initial model, and this will be heavily influenced by the eventual application of the ..... reviewing the degree of success attained.

  3. Robot Visual Tracking via Incremental Self-Updating of Appearance Model

    Directory of Open Access Journals (Sweden)

    Danpei Zhao

    2013-09-01

    Full Text Available This paper proposes a target tracking method called Incremental Self-Updating Visual Tracking for robot platforms. Our tracker treats the tracking problem as a binary classification: the target and the background. Greyscale, HOG and LBP features are used in this work to represent the target and are integrated into a particle filter framework. To track the target over long time sequences, the tracker has to update its model to follow the most recent target. To deal with the wasted computation and the lack of a model-updating strategy in traditional methods, an intelligent and effective online self-updating strategy is devised to choose the optimal update opportunity. The decision to update the appearance model is based on the change in discriminative capability between the current frame and the previously updated frame. By adjusting the update step adaptively, severe waste of calculation time on needless updates can be avoided while keeping the model stable. Moreover, the appearance model is kept away from serious drift problems when the target undergoes temporary occlusion. The experimental results show that the proposed tracker achieves robust and efficient performance on several challenging benchmark video sequences with various complex environmental changes in posture, scale, illumination and occlusion.

  4. 40 CFR 600.207-93 - Calculation of fuel economy values for a model type.

    Science.gov (United States)

    2010-07-01

    ... Values § 600.207-93 Calculation of fuel economy values for a model type. (a) Fuel economy values for a... update sales projections at the time any model type value is calculated for a label value. (iii) The... those intended for sale in other states, he will calculate fuel economy values for each model type for...
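
    The combination rule behind a model-type value is a sales-weighted harmonic mean over configurations; a minimal sketch follows. The regulation's full procedure adds base-level aggregation and rounding steps that are omitted here, and the sales and mpg figures below are hypothetical.

```python
def model_type_mpg(sales, mpg):
    """Sales-weighted harmonic-mean fuel economy for a model type,
    the averaging form used when combining configuration results
    under 40 CFR part 600 (sketch only; rounding rules omitted)."""
    total = sum(sales)
    return total / sum(s / m for s, m in zip(sales, mpg))

# Two hypothetical configurations: 7000 units at 30 mpg, 3000 at 20 mpg.
print(round(model_type_mpg([7000, 3000], [30, 20]), 1))  # 26.1
```

Note the harmonic mean (26.1 mpg) is lower than the arithmetic sales-weighted mean (27 mpg), because fuel consumption, not fuel economy, is what averages linearly over distance.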

  5. Risk calculators and updated tools to select and plan a repeat biopsy for prostate cancer detection

    Directory of Open Access Journals (Sweden)

    Igor Sorokin

    2015-01-01

    Full Text Available Millions of men each year are faced with a clinical suspicion of prostate cancer (PCa but the prostate biopsy fails to detect the disease. For the urologists, how to select the appropriate candidate for repeat biopsy is a significant clinical dilemma. Traditional risk-stratification tools in this setting such as prostate-specific antigen (PSA related markers and histopathology findings have met with limited correlation with cancer diagnosis or with significant disease. Thus, an individualized approach using predictive models such as an online risk calculator (RC or updated biomarkers is more suitable in counseling men about their risk of harboring clinically significant prostate cancer. This review will focus on the available risk-stratification tools in the population of men with prior negative biopsies and persistent suspicion of PCa. The underlying methodology and platforms of the available tools are reviewed to better understand the development and validation of these models. The index patient is then assessed with different RCs to determine the range of heterogeneity among various RCs. This should allow the urologists to better incorporate these various risk-stratification tools into their clinical practice and improve patient counseling.

  6. Risk calculators and updated tools to select and plan a repeat biopsy for prostate cancer detection.

    Science.gov (United States)

    Sorokin, Igor; Mian, Badar M

    2015-01-01

    Millions of men each year are faced with a clinical suspicion of prostate cancer (PCa) but the prostate biopsy fails to detect the disease. For the urologists, how to select the appropriate candidate for repeat biopsy is a significant clinical dilemma. Traditional risk-stratification tools in this setting such as prostate-specific antigen (PSA) related markers and histopathology findings have met with limited correlation with cancer diagnosis or with significant disease. Thus, an individualized approach using predictive models such as an online risk calculator (RC) or updated biomarkers is more suitable in counseling men about their risk of harboring clinically significant prostate cancer. This review will focus on the available risk-stratification tools in the population of men with prior negative biopsies and persistent suspicion of PCa. The underlying methodology and platforms of the available tools are reviewed to better understand the development and validation of these models. The index patient is then assessed with different RCs to determine the range of heterogeneity among various RCs. This should allow the urologists to better incorporate these various risk-stratification tools into their clinical practice and improve patient counseling.

  7. Updating of a dynamic finite element model from the Hualien scale model reactor building

    International Nuclear Information System (INIS)

    Billet, L.; Moine, P.; Lebailly, P.

    1996-08-01

    The forces occurring at the soil-structure interface of a building generally have a large influence on the way the building reacts to an earthquake. One can be tempted to characterise these forces more accurately by updating a model of the structure. However, this procedure requires an updating method suitable for dissipative models, since significant damping can be observed at the soil-structure interface of buildings. Such a method is presented here. It is based on the minimization of a mechanical energy built from the difference between eigen data calculated by the model and eigen data obtained from experimental tests on the real structure. An experimental validation of this method is then proposed on a model of the HUALIEN scale-model reactor building. This scale model, built on the HUALIEN site in TAIWAN, is devoted to the study of soil-structure interaction. The updating concerned the soil impedances, modelled by a layer of springs and viscous dampers attached to the building foundation. Good agreement was found between the eigen modes and dynamic responses calculated by the updated model and the corresponding experimental data. (authors). 12 refs., 3 figs., 4 tabs
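
    Reduced to a single degree of freedom, the updating idea can be sketched as adjusting a foundation spring stiffness until the model's eigenfrequency matches a measured one. The mass, frequency, and damped-Newton scheme below are a hypothetical toy; the paper minimises an energy norm over full eigen data, including damping.

```python
import math

def update_stiffness(mass_kg, f_measured_hz, k_initial, n_iter=50, step=0.5):
    """Update a foundation spring stiffness so the 1-DOF model's
    eigenfrequency f = sqrt(k/m)/(2*pi) matches a measured value
    (toy sketch of eigendata-based model updating)."""
    k = k_initial
    for _ in range(n_iter):
        f_model = math.sqrt(k / mass_kg) / (2 * math.pi)
        residual = f_measured_hz - f_model
        # df/dk = f/(2k); take a damped Newton step on the residual
        k += step * residual / (f_model / (2 * k))
    return k

# Hypothetical case: 1e6 kg block, measured eigenfrequency 2 Hz.
# The update converges to k = m * (2*pi*f)^2.
k = update_stiffness(1e6, 2.0, k_initial=1e8)
print(round(k / 1e8, 3))
```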

  8. Resource Tracking Model Updates and Trade Studies

    Science.gov (United States)

    Chambliss, Joe; Stambaugh, Imelda; Moore, Michael

    2016-01-01

    The Resource Tracking Model has been updated to capture system manager and project manager inputs. Both the Trick/General Use Nodal Network Solver Resource Tracking Model (RTM) simulator and the RTM mass balance spreadsheet have been revised to address inputs from system managers and to refine the way mass balance is illustrated. The revisions to the RTM included the addition of a Plasma Pyrolysis Assembly (PPA) to recover hydrogen from Sabatier Reactor methane, which was vented in the prior version of the RTM. The effect of the PPA on the overall balance of resources in an exploration vehicle is illustrated in the increased recycle of vehicle oxygen. Case studies have been run to show the relative effect of performance changes on vehicle resources.

  9. Update on GOCART Model Development and Applications

    Science.gov (United States)

    Kim, Dongchul

    2013-01-01

    Recent results from the GOCART and GMI models are reported. They include: updated emission inventories for anthropogenic and volcano sources, a satellite-derived vegetation index for seasonal variations of dust emission, MODIS-derived smoke AOT for assessing uncertainties of biomass-burning emissions, long-range transport of aerosol across the Pacific Ocean, and model studies of the multi-decadal trend of regional and global aerosol distributions from 1980 to 2010, volcanic aerosols, and nitrate aerosols. The document was presented at the 2013 AeroCenter Annual Meeting held at the GSFC Visitors Center, May 31, 2013. The organizers of the meeting are posting the talks to the public AeroCenter website after the meeting.

  10. Updating parameters of the chicken processing line model

    DEFF Research Database (Denmark)

    Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna

    2010-01-01

    A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. Recent availability of data allows updating parameters of the model to better describe processes observed in slaughterhouses. We propose Bayesian updating as a suitable technique to update expert judgment with microbiological data. Berrang and Dickens's data are used to demonstrate the performance of this method in updating parameters of the chicken processing line model.
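
    Bayesian updating of an expert-elicited parameter with count data can be sketched with a conjugate Beta-Binomial step. The prior and the counts below are hypothetical; the paper updates the parameters of the full processing-line model.

```python
def beta_update(alpha, beta, positives, negatives):
    """Conjugate Beta-Binomial update: a Beta(alpha, beta) prior on a
    contamination/transfer probability combined with binomial count data
    yields a Beta(alpha + positives, beta + negatives) posterior."""
    return alpha + positives, beta + negatives

# Hypothetical expert prior Beta(2, 8) (mean 0.2);
# hypothetical data: 30 positive and 70 negative carcasses.
a, b = beta_update(2, 8, 30, 70)
print(a, b, round(a / (a + b), 3))  # posterior mean shifts toward the data
```

The posterior mean, (2+30)/(2+8+30+70) ≈ 0.29, sits between the expert's 0.2 and the data's 0.3, weighted by the effective sample sizes.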

  11. Model Updating Nonlinear System Identification Toolbox, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology (ZONA) proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology that utilizes flight data with...

  12. Microbial Communities Model Parameter Calculation for TSPA/SR

    Energy Technology Data Exchange (ETDEWEB)

    D. Jolley

    2001-07-16

    This calculation has several purposes. First, the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to usable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c), with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs free energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic (second-order) regression relationships that are used in the energy-limiting calculations of the potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000, which is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in Section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding newly specified materials and updating old materials information that has changed.

  13. Microbial Communities Model Parameter Calculation for TSPA/SR

    International Nuclear Information System (INIS)

    D. Jolley

    2001-01-01

This calculation has several purposes. First, the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to usable parameters required as input to MING V1.0 (CRWMS M and O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M and O 2000c), with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs free energy) to be corrected for long-term changes in the temperature of the near-field environment. Calculated herein are the quadratic (second-order) regression relationships used in the energy-limiting calculations of potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M and O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in Section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M and O 2000c) based on the inputs reported in BSC (2001a). Changes include adding newly specified materials and updating old materials information that has changed.

  14. Radiation shielding calculation for digital breast tomosynthesis rooms with an updated workload survey.

    Science.gov (United States)

    Yang, Kai; Schultz, Thomas J; Li, Xinhua; Liu, Bob

    2017-03-20

To present shielding calculations for clinical digital breast tomosynthesis (DBT) rooms with updated workload data from a comprehensive survey, and to provide reference shielding data for DBT rooms. The workload survey covered eight clinical DBT (Hologic Selenia Dimensions) rooms at Massachusetts General Hospital (MGH) for the period between 10/1/2014 and 10/1/2015. Radiation-output-related information tags from the DICOM header, including mAs, kVp, beam filter material, and gantry angle, were extracted from a total of 310,421 clinical DBT acquisitions in the PACS database. DBT workload distributions were determined from the survey data. In combination with previously measured scatter fraction data, the unshielded scatter air kerma for each room was calculated. Experimental measurements with a linear-array detector were also performed at representative locations for verification. The necessary shielding material and thickness were determined for all barriers. For general-purpose DBT room shielding, a set of workload-distribution-specific transmission data and unshielded scatter air kerma values was calculated using the updated workload distribution. The workload distribution for Hologic DBT systems could be simplified to five kVp/filter combinations for shielding purposes. The survey data showed the predominance of the 45° gantry location for medial-lateral-oblique views at MGH. Taking into consideration the non-isotropic scatter fraction distribution together with the gantry angle distribution, accurate and conservative estimates of the unshielded scatter air kerma levels were determined for all eight DBT rooms. Additional shielding was shown to be necessary for two 4.5 cm wood doors. This study provided a detailed workload survey along with updated transmission data and unshielded scatter air kerma values for Hologic DBT rooms. Example shielding calculations were presented for clinical DBT rooms.
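
The unshielded scatter estimate described above follows the usual NCRP-147-style scaling: workload-weighted scatter contributions divided by the square of the distance to the barrier. A minimal sketch, with every numeric value a hypothetical placeholder rather than the surveyed MGH data:

```python
# Illustrative unshielded scatter air kerma estimate (NCRP 147-style scaling).
# Every numeric value below is a hypothetical placeholder, not MGH survey data.

workload = {            # weekly workload per kVp/filter combination (mGy at 1 m)
    "26 kVp / W-Al": 400.0,
    "32 kVp / W-Al": 250.0,
}
scatter_fraction = {    # scatter air kerma per unit primary air kerma at 1 m
    "26 kVp / W-Al": 1.2e-4,
    "32 kVp / W-Al": 1.8e-4,
}

def unshielded_scatter_air_kerma(distance_m):
    """Sum workload-weighted scatter contributions, scaled by 1/d**2."""
    total = sum(workload[k] * scatter_fraction[k] for k in workload)
    return total / distance_m**2      # weekly air kerma at the barrier

print(unshielded_scatter_air_kerma(2.0))   # mGy/week at 2 m
```

Barrier thickness is then chosen so the transmitted fraction brings this weekly value below the design goal for the occupied area.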

  15. 40 CFR 600.207-86 - Calculation of fuel economy values for a model type.

    Science.gov (United States)

    2010-07-01

    ... Values § 600.207-86 Calculation of fuel economy values for a model type. (a) Fuel economy values for a... update sales projections at the time any model type value is calculated for a label value. (iii) The... the projected sales and fuel economy values for each base level within the model type. (1) If the...

  16. 2017 Updates: Earth Gravitational Model 2020

    Science.gov (United States)

    Barnes, D. E.; Holmes, S. A.; Ingalls, S.; Beale, J.; Presicci, M. R.; Minter, C.

    2017-12-01

The National Geospatial-Intelligence Agency [NGA], in conjunction with its U.S. and international partners, has begun preliminary work on its next Earth Gravitational Model, to replace EGM2008. The new `Earth Gravitational Model 2020' [EGM2020] has an expected public release date of 2020 and will retain the same harmonic basis and resolution as EGM2008. As such, EGM2020 will be essentially an ellipsoidal harmonic model up to degree (n) and order (m) 2159, but will be released as a spherical harmonic model to degree 2190 and order 2159. EGM2020 will benefit from new data sources and procedures. Updated satellite gravity information from the GOCE and GRACE missions will better support the lower harmonics globally. Multiple new acquisitions (terrestrial, airborne and shipborne) of gravimetric data over specific geographical areas (Antarctica, Greenland …) will provide improved global coverage and resolution over land, as well as for coastal and some ocean areas. Ongoing accumulation of satellite altimetry data, as well as improvements in the treatment of these data, will better define the marine gravity field, most notably in polar and near-coastal regions. NGA and partners are evaluating different approaches for optimally combining the new GOCE/GRACE satellite gravity models with the terrestrial data. These include the latest methods employing a full covariance adjustment. NGA is also working to systematically assess the quality of its entire gravimetry database, with a view to correcting biases and other egregious errors. Public release number 15-564
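
A model of this kind is evaluated as a truncated spherical-harmonic series, V(r, φ, λ) = (GM/r) Σₙ (a/r)ⁿ Σₘ Pₙₘ(sin φ)(Cₙₘ cos mλ + Sₙₘ sin mλ). A minimal sketch using unnormalized associated Legendre functions and a toy coefficient set (the real EGM coefficient files are fully normalized and extend to degree 2190):

```python
import numpy as np
from scipy.special import lpmn

def potential(r, lat, lon, coeffs, GM=3.986004418e14, a=6378137.0):
    """Evaluate a truncated spherical-harmonic gravitational potential.

    coeffs maps (n, m) -> (Cnm, Snm), with UNNORMALIZED coefficients --
    a toy stand-in for the fully normalized EGM coefficient files.
    """
    nmax = max(n for n, _ in coeffs)
    P, _ = lpmn(nmax, nmax, np.sin(lat))       # associated Legendre, P[m][n]
    V = 0.0
    for (n, m), (C, S) in coeffs.items():
        V += (a / r) ** n * P[m][n] * (C * np.cos(m * lon) + S * np.sin(m * lon))
    return GM / r * V

# With only the degree-0 term (C00 = 1) the series reduces to the point mass GM/r.
toy = {(0, 0): (1.0, 0.0)}
print(potential(7.0e6, 0.3, 1.0, toy))  # equals GM / 7.0e6
```

At degree 2190 the same sum requires recursion schemes that stay numerically stable for high-degree Legendre functions, which is one of the practical challenges behind models like EGM2020.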

  17. Carbon cycle modeling calculations for the IPCC

    International Nuclear Information System (INIS)

    Wuebbles, D.J.; Jain, A.K.

    1993-01-01

We carried out essentially all the carbon cycle modeling calculations required by IPCC Working Group 1. Specifically, the IPCC required two types of calculations: ''inverse calculations'' (the input was CO2 concentrations and the output was CO2 emissions) and ''forward calculations'' (the input was CO2 emissions and the output was CO2 concentrations). In particular, we have derived carbon dioxide concentrations and/or emissions for several scenarios using our coupled climate-carbon cycle modelling system.
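
The distinction between the two calculation types can be illustrated with a toy one-box carbon model; the actual IPCC work used the authors' coupled climate-carbon cycle model, so the box model below only shows the forward and inverse directions.

```python
import numpy as np

# Toy one-box carbon model: dC/dt = beta*E - k*(C - C0).
# The IPCC calculations used a full coupled climate-carbon cycle model;
# this box model only illustrates the two calculation directions.
BETA = 0.471    # ppm per GtC airborne conversion (illustrative)
K = 0.01        # 1/yr relaxation toward the preindustrial level
C0 = 280.0      # ppm

def forward(emissions, dt=1.0):
    """Forward calculation: emissions (GtC/yr) -> concentrations (ppm)."""
    c, out = C0, []
    for e in emissions:
        c = c + dt * (BETA * e - K * (c - C0))
        out.append(c)
    return np.array(out)

def inverse(conc, dt=1.0):
    """Inverse calculation: concentrations (ppm) -> implied emissions (GtC/yr)."""
    prev, out = C0, []
    for c in conc:
        out.append(((c - prev) / dt + K * (prev - C0)) / BETA)
        prev = c
    return np.array(out)

E = np.array([6.0, 6.5, 7.0, 7.5])
print(inverse(forward(E)))   # recovers the emissions scenario E
```

Because the inverse step algebraically inverts the forward step, running one after the other recovers the input, which is a useful consistency check for any forward/inverse pair of this kind.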

  18. General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data

    Energy Technology Data Exchange (ETDEWEB)

    Bagwell, L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bennett, P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-02-21

    This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).

  19. Evaluation of two updating methods for dissipative models on a real structure

    International Nuclear Information System (INIS)

    Moine, P.; Billet, L.

    1996-01-01

Finite element models are widely used to predict the dynamic behaviour of structures. Frequently, the model does not represent the structure with all the expected accuracy, i.e. the measurements realised on the structure differ from the data predicted by the model. It is therefore necessary to update the model. Although many modeling errors come from an inadequate representation of the damping phenomena, most model updating techniques have so far been restricted to conservative models only. In this paper, we present two updating methods for dissipative models using eigenmode shapes and eigenvalues as behavioural information from the structure. The first method - the modal output error method - compares the experimental eigenvectors and eigenvalues directly to the model eigenvectors and eigenvalues, whereas the second method - the error in constitutive relation method - uses an energy error derived from the equilibrium relation. The error function, in both cases, is minimized by a conjugate gradient algorithm, and the gradient is calculated analytically. The two methods behave differently, as evidenced by updating a real structure consisting of a piece of pipe mounted on two viscoelastic suspensions. The updating of the model validates a strategy consisting of a preliminary update with the error in constitutive relation method (a fast-to-converge but difficult-to-control method) followed by further updating with the modal output error method (a slow-to-converge but reliable and easy-to-control method). The problems encountered during the updating process and their corresponding solutions are also given. (authors)
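
A minimal sketch of the modal output error idea on a 2-DOF spring-mass system, minimizing the eigenvalue discrepancy with a conjugate-gradient optimizer (here scipy's CG with numerical gradients stands in for the analytic gradient used in the paper, and the "measured" data are synthetic):

```python
import numpy as np
from scipy.optimize import minimize

def eigvals(k1, k2):
    """Eigenvalues of a 2-DOF spring-mass chain with unit masses."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    return np.sort(np.linalg.eigvalsh(K))

# "Measured" eigenvalues, synthesized here from hypothetical true stiffnesses.
lam_exp = eigvals(3.0, 1.0)

def modal_output_error(p):
    """Squared discrepancy between model and experimental eigenvalues."""
    return np.sum((eigvals(*p) - lam_exp) ** 2)

# Conjugate-gradient minimization of the modal output error.
res = minimize(modal_output_error, x0=[2.8, 1.1], method="CG")
print(res.x, res.fun)   # stiffnesses that reproduce the measured spectrum
```

Note that eigenvalue data alone can admit more than one stiffness set with the same spectrum (here (3, 1) and (2, 1.5) both match), which is one reason methods like those above also exploit eigenvector information.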

  20. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

Numerical model updating is a technique for updating numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical model to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process, a response parameter of the structure has to be chosen that correlates the numerical model with the experimental results. The variables for updating can be material properties, geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show a close match between the experimental and numerical models.
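
A minimal firefly-algorithm sketch of this idea, updating a single material parameter (Young's modulus) of a cantilever tip-deflection model against a synthetic "measured" value; the geometry, load, and algorithm settings are illustrative assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cantilever tip deflection under an end load: delta = P*L**3 / (3*E*I).
# Geometry, load, and the "measured" stiffness are illustrative assumptions.
P, L, I = 1000.0, 2.0, 8.0e-6      # load (N), length (m), second moment (m^4)
E_TRUE = 70e9                      # stiffness behind the synthetic measurement (Pa)
delta_measured = P * L**3 / (3 * E_TRUE * I)

def objective(E):
    """Squared error between modeled and measured tip deflection."""
    return (P * L**3 / (3 * E * I) - delta_measured) ** 2

def firefly(n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.1):
    """Firefly search over log10(E); brightness is the negative objective."""
    x = rng.uniform(10.0, 12.0, n)                       # log10(E) in [10, 12]
    light = np.array([-objective(10.0**xi) for xi in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] > light[i]:                  # move i toward brighter j
                    beta = beta0 * np.exp(-gamma * (x[i] - x[j]) ** 2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random() - 0.5)
                    light[i] = -objective(10.0**x[i])
        alpha *= 0.97                                    # damp the random walk
    return 10.0 ** x[np.argmax(light)]

E_updated = firefly()
print(E_updated)   # lands near the 70 GPa behind the synthetic data
```

The brightest firefly never moves (no neighbour outshines it), so the best solution found is retained while the rest of the swarm keeps exploring around it.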

  1. Reservoir management under geological uncertainty using fast model update

    NARCIS (Netherlands)

    Hanea, R.; Evensen, G.; Hustoft, L.; Ek, T.; Chitu, A.; Wilschut, F.

    2015-01-01

    Statoil is implementing "Fast Model Update (FMU)," an integrated and automated workflow for reservoir modeling and characterization. FMU connects all steps and disciplines from seismic depth conversion to prediction and reservoir management taking into account relevant reservoir uncertainty. FMU

  2. Self-shielding models of MICROX-2 code: Review and updates

    International Nuclear Information System (INIS)

    Hou, J.; Choi, H.; Ivanov, K.N.

    2014-01-01

Highlights: • The MICROX-2 code has been improved to expand its application to advanced reactors. • New fine-group cross section libraries based on ENDF/B-VII have been generated. • Resonance self-shielding and spatial self-shielding models have been improved. • The improvements were assessed by a series of benchmark calculations against MCNPX. - Abstract: MICROX-2 is a transport theory code that solves the neutron slowing-down and thermalization equations of a two-region lattice cell. The MICROX-2 code has been updated to expand its application to advanced reactor concepts and fuel cycle simulations, including the generation of new fine-group cross section libraries based on ENDF/B-VII. In continuation of previous work, the MICROX-2 methods are reviewed and updated in this study, focusing on its resonance self-shielding and spatial self-shielding models for neutron spectrum calculations. The improvement of the self-shielding method was assessed by a series of benchmark calculations against the Monte Carlo code MCNPX, using homogeneous and heterogeneous pin cell models. The results show that the implementation of the updated self-shielding models is correct and that the accuracy of the physics calculations is improved. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by ∼0.1% and ∼0.2% for the homogeneous and heterogeneous pin cell models, respectively, considered in this study.

  3. A comparison of updating algorithms for large $N$ reduced models

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto

    2015-01-01

We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix. We find the same critical exponent in both cases, and only a slight difference between the two.

  4. Updates to the Demographic and Spatial Allocation Models to ...

    Science.gov (United States)

EPA announced the availability of the draft report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS), for a 30-day public comment period. The ICLUS version 2 (v2) modeling tool furthered land change modeling by providing nationwide housing development scenarios up to 2100. ICLUS v2 includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide describes the development of ICLUS v2 and the updates made to the original data sets and to the demographic and spatial allocation models. [2017 UPDATE] Get the latest version of ICLUS and stay up to date by signing up to the ICLUS mailing list. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.

  5. Precipitates/Salts Model Sensitivity Calculation

    International Nuclear Information System (INIS)

    Mariner, P.

    2001-01-01

The objective and scope of this calculation are to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.

  6. Model Updating Nonlinear System Identification Toolbox, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology by adopting the flight data with state-of-the-art...

  7. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  8. Preconditioner Updates Applied to CFD Model Problems

    Czech Academy of Sciences Publication Activity Database

    Birken, P.; Duintjer Tebbens, Jurjen; Meister, A.; Tůma, Miroslav

    2008-01-01

    Roč. 58, č. 11 (2008), s. 1628-1641 ISSN 0168-9274 R&D Projects: GA AV ČR 1ET400300415; GA AV ČR KJB100300703 Institutional research plan: CEZ:AV0Z10300504 Keywords : finite volume methods * update preconditioning * Krylov subspace methods * Euler equations * conservation laws Subject RIV: BA - General Mathematics Impact factor: 0.952, year: 2008

  9. A revised calculational model for fission

    International Nuclear Information System (INIS)

    Atchison, F.

    1998-09-01

A semi-empirical parametrization has been developed to calculate the fission contribution to the evaporative de-excitation of nuclei over a very wide range of charge, mass and excitation energy, and also the nuclear states of the scission products. The calculational model reproduces measured values (cross-sections, mass distributions, etc.) for a wide range of fissioning systems: nuclei from Ta to Cf, and interactions involving nucleons up to medium energy as well as light ions. (author)

  10. Calculation of nitrous oxide emission from agriculture in the Netherlands : update of emission factors and leaching fraction

    NARCIS (Netherlands)

    Velthof, G.L.; Mosquera Losada, J.

    2011-01-01

A study was conducted to update the N2O emission factors for nitrogen (N) fertilizer and animal manures applied to soils, based on results of Dutch experiments, and to derive a country-specific methodology to calculate nitrate leaching using a leaching fraction (FracLEACH). It is recommended to use

  11. Update of a thermodynamic database for radionuclides to assist solubility limits calculation for performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Duro, L.; Grive, M.; Cera, E.; Domenech, C.; Bruno, J. (Enviros Spain S.L., Barcelona (ES))

    2006-12-15

This report presents and documents the thermodynamic database used in the assessment of the radionuclide solubility limits within the SR-Can Exercise. It is a supporting report to the solubility assessment. Thermodynamic data are reviewed for 20 radioelements from Groups A and B, lanthanides and actinides. The development of this database is partially based on the one prepared by PSI and NAGRA. Several changes, updates and checks for internal consistency and completeness of the reference NAGRA-PSI 01/01 database have been conducted where needed. These modifications are mainly related to information from the various experimental programmes and scientific literature available up to the end of 2003. Some of the discussions also refer to a previous database selection conducted by Enviros Spain on behalf of ANDRA, where the reader can find additional information. Where possible, in order to optimize the robustness of the database, the description of the solubility of the different radionuclides calculated using the reported thermodynamic database is tested against experimental data available in the open scientific literature. Where necessary, different procedures to estimate gaps in the database have been followed, especially accounting for temperature corrections. All the methodologies followed are discussed in the main text.

  12. Reservoir structural model updating using the Ensemble Kalman Filter

    Energy Technology Data Exchange (ETDEWEB)

    Seiler, Alexandra

    2010-09-15

In reservoir characterization, a large emphasis is placed on risk management and uncertainty assessment, and the dangers of basing decisions on a single base-case reservoir model are widely recognized. In recent years, statistical methods for assisted history matching have gained popularity for providing integrated models with quantified uncertainty, conditioned on all available data. Structural modeling is the first step in a reservoir modeling workflow and consists of defining the geometrical framework of the reservoir, based on the information from seismic surveys and well data. Large uncertainties are typically associated with the processing and interpretation of seismic data. However, the structural model is often fixed to a single interpretation in history-matching workflows due to the complexity of updating the structural model and the related reservoir grid. This thesis presents a method that accounts for the uncertainties in the structural model and continuously updates the model and related uncertainties by assimilation of production data using the Ensemble Kalman Filter (EnKF). We consider uncertainties in the depth of the reservoir horizons and in the fault geometry, and assimilate production data such as oil production rate, gas-oil ratio and water-cut. In the EnKF model-updating workflow, an ensemble of reservoir models, expressing the model uncertainty explicitly, is created. We present a parameterization that allows different realizations of the structural model to be generated, accounting for the uncertainties in faults and horizons, and that maintains consistency throughout the reservoir characterization project, from the structural model to the prediction of production profiles. The uncertainty in the depth of the horizons is parameterized as simulated depth surfaces, the fault position as a displacement vector, and the fault throw as a throw-scaling factor.
In the EnKF, the model parameters and state variables are updated sequentially in
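
The EnKF analysis step at the heart of such a workflow can be sketched generically. This is the textbook perturbed-observation update, not Seiler's implementation; the linear observation operator H below is an illustrative stand-in for the reservoir simulator.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(X, d_obs, H, r_std):
    """Perturbed-observation EnKF analysis step.

    X     : (n_params, n_ens) forecast ensemble (e.g. horizon depths, fault throw)
    d_obs : (n_obs,) observed production data (e.g. rates, GOR, water-cut)
    H     : (n_obs, n_params) observation operator -- a linear stand-in here;
            in practice a reservoir simulator maps parameters to data
    r_std : observation error standard deviation
    """
    n_obs, n_ens = len(d_obs), X.shape[1]
    D = d_obs[:, None] + r_std * rng.standard_normal((n_obs, n_ens))  # perturbed obs
    A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
    C = A @ A.T / (n_ens - 1)                          # sample covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + r_std**2 * np.eye(n_obs))
    return X + K @ (D - H @ X)                         # analysis ensemble

# Toy example: two uncertain parameters, the first one directly observed.
X = 5.0 + rng.standard_normal((2, 50))
H = np.array([[1.0, 0.0]])
Xa = enkf_update(X, np.array([4.0]), H, r_std=0.1)
print(X[0].mean(), "->", Xa[0].mean())   # ensemble mean pulled toward the datum 4.0
```

Because the gain is built from the ensemble covariance, parameters correlated with the observed quantity are updated too, which is what lets production data inform horizon depths and fault throws.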

  13. Numerical modelling of mine workings: annual update 1999/2000.

    CSIR Research Space (South Africa)

    Lightfoot, N

    1999-09-01

Full Text Available chapters of the guidebook. In order to download the guidebook a visitor needs to have a password, which will be issued upon receipt of a nominal charge. 7 2 Updated Edition of Numerical Modelling of Mine Workings Enabling Output 1: Updates to the current... of rock mass ratings. 4.3.3.2 Quadratic model The figure describing the quadratic backfill material model has been corrected. Chapter 5 Solution Methods 5.2 Analytical Methods and 5.3 Computational Methods Use of the words slot, crack and slit...

  14. Model calculations in correlated finite nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Guardiola, R.; Ros, J. (Granada Univ. (Spain). Dept. de Fisica Nuclear); Polls, A. (Tuebingen Univ. (Germany, F.R.). Inst. fuer Theoretische Physik)

    1980-10-21

In order to study the convergence condition of the FAHT cluster expansion, several model calculations are described and numerically tested. It is concluded that this cluster expansion deals properly with the central part of the two-body distribution function, but presents some difficulties for the exchange part.

  15. EARTHWORK VOLUME CALCULATION FROM DIGITAL TERRAIN MODELS

    Directory of Open Access Journals (Sweden)

    JANIĆ Milorad

    2015-06-01

Full Text Available Accurate calculation of cut and fill volumes is of essential importance in many fields. This article presents a new method, involving no approximation, based on digital terrain models. A relatively new mathematical model was developed for this purpose and implemented in a software solution. Both have been tested and verified in practice on several large opencast mines. The application is developed in the AutoLISP programming language and works in the AutoCAD environment.
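
For contrast with the article's exact method, the common grid-differencing approximation of cut and fill volumes between a terrain DTM and a design surface looks like this (cell size and elevations are illustrative):

```python
import numpy as np

def cut_and_fill(z_terrain, z_design, cell_area):
    """Grid-differencing cut/fill volumes between two DTM rasters.

    Positive difference (design above terrain) -> fill; negative -> cut.
    This per-cell sum is an approximation; the article's method avoids it.
    """
    dz = (z_design - z_terrain) * cell_area
    fill = dz[dz > 0].sum()
    cut = -dz[dz < 0].sum()
    return cut, fill

terrain = np.array([[10.0, 10.5], [11.0, 11.5]])
design = np.full((2, 2), 11.0)                             # level platform at 11 m
cut, fill = cut_and_fill(terrain, design, cell_area=25.0)  # 5 m x 5 m cells
print(cut, fill)   # 12.5 37.5
```

The approximation error comes from treating each cell's elevation difference as constant, which is exactly what exact prismatoidal methods like the one in the article are designed to eliminate.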

  16. Real Time Updating in Distributed Urban Rainfall Runoff Modelling

    DEFF Research Database (Denmark)

    Borup, Morten; Madsen, Henrik

    are equipped with basins and automated structures that allow for a large degree of control of the systems, but in order to do this optimally it is required to know what is happening throughout the system. For this task models are needed, due to the large scale and complex nature of the systems. The physically...... that are being updated from system measurements was studied. The results showed that the fact alone that it takes time for rainfall data to travel the distance between gauges and catchments has such a big negative effect on the forecast skill of updated models, that it can justify the choice of even very...... when it was used to update the water level in multiple upstream basins. This method is, however, not capable of utilising the spatial correlations in the errors to correct larger parts of the models. To accommodate this a method was developed for correcting the slow changing inflows to urban drainage...

  17. Circumplex model of marital and family systems: VI. Theoretical update.

    Science.gov (United States)

    Olson, D H; Russell, C S; Sprenkle, D H

    1983-03-01

    This paper updates the theoretical work on the Circumplex Model and provides revised and new hypotheses. Similarities and contrasts to the Beavers Systems Model are made along with comments regarding Beavers and Voeller's critique. FACES II, a newly revised assessment tool, provides both "perceived" and "ideal" family assessment that is useful empirically and clinically.

  18. ON-LINE CALCULATOR: JOHNSON ETTINGER VAPOR INTRUSION MODEL

    Science.gov (United States)

OnSite was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...

  19. 2011 Updated Arkansas Global Rice Model

    OpenAIRE

    Wailes, Eric J.; Chavez, Eddie C.

    2011-01-01

The Arkansas Global Rice Model is based on a multi-country statistical simulation and econometric framework. The model is disaggregated into five world regions: Africa, the Americas, Asia, Europe, and Oceania. Each region includes country models that have a supply sector, a demand sector, and trade, stocks, and price linkage equations. All equations used in this model are estimated using econometric procedures or identities. Estimates are based upon a set of explanatory variables including exogen...

  20. Crushed-salt constitutive model update

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, G.D.; Loken, M.C.; Mellegard, K.D. [RE/SPEC Inc., Rapid City, SD (United States); Hansen, F.D. [Sandia National Labs., Albuquerque, NM (United States)

    1998-01-01

    Modifications to the constitutive model used to describe the deformation of crushed salt are presented in this report. Two mechanisms--dislocation creep and grain boundary diffusional pressure solutioning--defined previously but used separately are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. New creep consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt to determine material parameters for the constitutive model. Nonlinear least-squares model fitting to data from the shear consolidation tests and a combination of the shear and hydrostatic consolidation tests produced two sets of material parameter values for the model. The change in material parameter values from test group to test group indicates the empirical nature of the model but demonstrates improvement over earlier work with the previous models. Key improvements are the ability to capture lateral strain reversal and better resolve parameter values. To demonstrate the predictive capability of the model, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the model to predict the test data, the model appears to capture the creep consolidation behavior of crushed salt quite well.

  1. Crushed-salt constitutive model update

    International Nuclear Information System (INIS)

    Callahan, G.D.; Loken, M.C.; Mellegard, K.D.; Hansen, F.D.

    1998-01-01

    Modifications to the constitutive model used to describe the deformation of crushed salt are presented in this report. Two mechanisms--dislocation creep and grain boundary diffusional pressure solutioning--defined previously but used separately are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. New creep consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt to determine material parameters for the constitutive model. Nonlinear least-squares model fitting to data from the shear consolidation tests and a combination of the shear and hydrostatic consolidation tests produced two sets of material parameter values for the model. The change in material parameter values from test group to test group indicates the empirical nature of the model but demonstrates improvement over earlier work with the previous models. Key improvements are the ability to capture lateral strain reversal and better resolve parameter values. To demonstrate the predictive capability of the model, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the model to predict the test data, the model appears to capture the creep consolidation behavior of crushed salt quite well

  2. Construction and Updating of Event Models in Auditory Event Processing

    Science.gov (United States)

    Huff, Markus; Maurer, Annika E.; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank

    2018-01-01

Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose that changes in the sensory information trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event…

  3. A Kriging Model Based Finite Element Model Updating Method for Damage Detection

    Directory of Open Access Journals (Sweden)

    Xiuming Yang

    2017-10-01

Full Text Available Model updating is an effective means of damage identification, and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE) model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF) is rarely used, as it usually changes dramatically with the updating parameters. This paper presents a new surrogate-model-based model updating method taking advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC) is used to build the objective function, whose nonlinear response surface is constructed by the Kriging model. Then, the efficient global optimization (EGO) algorithm is introduced to obtain the model updating results. The proposed method has good accuracy and robustness, which have been verified by a numerical simulation of a cantilever and experimental test data of a laboratory three-story structure.
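
One common definition of the FDAC correlates each measured FRF frequency line with each analytical one, analogous to the MAC but taken over frequency lines (a sketch of that definition only; the paper's Kriging surrogate and EGO steps are not reproduced here):

```python
import numpy as np

def fdac(H_exp, H_ana):
    """Frequency Domain Assurance Criterion matrix.

    H_exp, H_ana : (n_freq, n_dof) complex FRF matrices (rows = frequency lines).
    Entries lie in [0, 1]; identical, frequency-aligned FRFs give a unit diagonal.
    """
    n = H_exp.shape[0]
    F = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            num = abs(np.vdot(H_exp[i], H_ana[j])) ** 2
            den = (np.vdot(H_exp[i], H_exp[i]).real
                   * np.vdot(H_ana[j], H_ana[j]).real)
            F[i, j] = num / den
    return F

# A model whose FRFs match the measurement exactly gives FDAC = 1 on the diagonal.
rng = np.random.default_rng(2)
H = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
print(np.diag(fdac(H, H)))   # ~[1. 1. 1. 1.]
```

An objective function for updating can then penalize the deviation of the FDAC diagonal from one, which is the kind of quantity the Kriging model approximates as a function of the updating parameters.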

  4. Updating the debate on model complexity

    Science.gov (United States)

    Simmons, Craig T.; Hunt, Randall J.

    2012-01-01

    As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”

  5. A last updating evolution model for online social networks

    Science.gov (United States)

    Bu, Zhan; Xia, Zhengyou; Wang, Jiandong; Zhang, Chengcui

    2013-05-01

    As information technology has advanced, people are turning to electronic media more frequently for communication, and social relationships are increasingly found on online channels. However, there is very limited knowledge about the actual evolution of online social networks. In this paper, we propose and study a novel evolution network model with the new concept of "last updating time", which exists in many real-life online social networks. The last updating evolution network model can maintain the robustness of scale-free networks and can improve the network's resilience against intentional attacks. What is more, we also found that it has the "small-world effect", which is an inherent property of most social networks. Simulation experiments based on this model show that the results are consistent with real-life data, which supports the validity of our model.
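
    A growth rule in this spirit can be sketched as follows. The specific weighting (degree discounted by time since last update) is an illustrative assumption, not necessarily the paper's exact rule.

```python
import random

def grow_last_updating_network(n_nodes, m_edges=2, decay=0.9, seed=42):
    # Toy growth model with a "last updating time": each new node t
    # connects to m_edges existing nodes chosen with probability
    # proportional to degree * decay**(t - last_update), so recently
    # active nodes are favoured over high-degree but stale ones.
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}
    last_update = {0: 0, 1: 0}
    edges = [(0, 1)]
    for t in range(2, n_nodes):
        weights = {v: degree[v] * decay ** (t - last_update[v])
                   for v in degree}
        targets = set()
        while len(targets) < min(m_edges, len(degree)):
            # roulette-wheel selection over the recency-weighted degrees
            total = sum(weights.values())
            r = rng.uniform(0, total)
            acc = 0.0
            for v, w in weights.items():
                acc += w
                if acc >= r:
                    targets.add(v)
                    break
        for v in targets:
            edges.append((t, v))
            degree[v] += 1
            last_update[v] = t  # receiving a link counts as an update
        degree[t] = len(targets)
        last_update[t] = t
    return degree, edges

degree, edges = grow_last_updating_network(50)
```

    With `decay = 1.0` the rule reduces to plain preferential attachment; smaller values shift links toward recently updated nodes.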

  6. Matrix model calculations beyond the spherical limit

    International Nuclear Information System (INIS)

    Ambjoern, J.; Chekhov, L.; Kristjansen, C.F.; Makeenko, Yu.

    1993-01-01

    We propose an improved iterative scheme for calculating higher genus contributions to the multi-loop (or multi-point) correlators and the partition function of the hermitian one matrix model. We present explicit results up to genus two. We develop a version which gives directly the result in the double scaling limit and present explicit results up to genus four. Using the latter version we prove that the hermitian and the complex matrix model are equivalent in the double scaling limit and that in this limit they are both equivalent to the Kontsevich model. We discuss how our results away from the double scaling limit are related to the structure of moduli space. (orig.)
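
    For orientation, the objects involved are the partition function of the hermitian one-matrix model and its genus expansion (standard definitions, not the paper's improved scheme itself):

```latex
Z_N = \int dM \, e^{-N \operatorname{tr} V(M)} , \qquad
\log Z_N = \sum_{g \ge 0} N^{2-2g} F_g ,
```

    so that $F_0$ is the spherical (planar) contribution and the terms with $g \ge 1$ are the higher-genus corrections computed by the iterative scheme; the double scaling limit tunes the couplings in the potential $V$ so that all genera contribute.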

  7. Construction and updating of event models in auditory event processing.

    Science.gov (United States)

    Huff, Markus; Maurer, Annika E; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank

    2018-02-01

    Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose changes in the sensory information to trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event boundaries. Evidence from reading time studies (increased reading times with increasing amount of change) suggest that updating of event models is incremental. We present results from 5 experiments that studied event processing (including memory formation processes and reading times) using an audio drama as well as a transcript thereof as stimulus material. Experiments 1a and 1b replicated the event boundary advantage effect for memory. In contrast to recent evidence from studies using visual stimulus material, Experiments 2a and 2b found no support for incremental updating with normally sighted and blind participants for recognition memory. In Experiment 3, we replicated Experiment 2a using a written transcript of the audio drama as stimulus material, allowing us to disentangle encoding and retrieval processes. Our results indicate incremental updating processes at encoding (as measured with reading times). At the same time, we again found recognition performance to be unaffected by the amount of change. We discuss these findings in light of current event cognition theories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  8. Cost Calculation Model for Logistics Service Providers

    Directory of Open Access Journals (Sweden)

    Zoltán Bokor

    2012-11-01

    Full Text Available

    The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in the case of complex and heterogeneous logistics service structures. This paper therefore explores ways of improving the cost calculation regimes of logistics service providers and shows how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested using estimated input data. Based on the theoretical findings and the experiences of the pilot project, it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which enhances the effectiveness of logistics planning and controlling significantly.

  9. Updating river basin models with radar altimetry

    DEFF Research Database (Denmark)

    Michailovsky, Claire Irene B.

    Hydrological models are widely used by water managers as a decision support tool for both real-time and long-term applications. Some examples of real-time management issues are the optimal management of reservoir releases, flood forecasting or water allocation in drought conditions. Long term... Many types of remote sensing (RS) data are now routinely used to set up and drive river basin models. One of the key hydrological state variables is river discharge. It is typically the output of interest for water allocation applications and is also widely used as a source of calibration data, as it represents the integrated response of a catchment to meteorological forcing. While river discharge cannot be directly measured from space, radar altimetry (RA) can measure water level variations in rivers at the locations where the satellite ground track and river network intersect, called virtual stations (VS). In this PhD study...

  10. An updated digital model of plate boundaries

    Science.gov (United States)

    Bird, Peter

    2003-03-01

    A global set of present plate boundaries on the Earth is presented in digital form. Most come from sources in the literature. A few boundaries are newly interpreted from topography, volcanism, and/or seismicity, taking into account relative plate velocities from magnetic anomalies, moment tensor solutions, and/or geodesy. In addition to the 14 large plates whose motion was described by the NUVEL-1A poles (Africa, Antarctica, Arabia, Australia, Caribbean, Cocos, Eurasia, India, Juan de Fuca, Nazca, North America, Pacific, Philippine Sea, South America), model PB2002 includes 38 small plates (Okhotsk, Amur, Yangtze, Okinawa, Sunda, Burma, Molucca Sea, Banda Sea, Timor, Birds Head, Maoke, Caroline, Mariana, North Bismarck, Manus, South Bismarck, Solomon Sea, Woodlark, New Hebrides, Conway Reef, Balmoral Reef, Futuna, Niuafo'ou, Tonga, Kermadec, Rivera, Galapagos, Easter, Juan Fernandez, Panama, North Andes, Altiplano, Shetland, Scotia, Sandwich, Aegean Sea, Anatolia, Somalia), for a total of 52 plates. No attempt is made to divide the Alps-Persia-Tibet mountain belt, the Philippine Islands, the Peruvian Andes, the Sierras Pampeanas, or the California-Nevada zone of dextral transtension into plates; instead, they are designated as "orogens" in which this plate model is not expected to be accurate. The cumulative-number/area distribution for this model follows a power law for plates with areas between 0.002 and 1 steradian. Departure from this scaling at the small-plate end suggests that future work is very likely to define more very small plates within the orogens. The model is presented in four digital files: a set of plate boundary segments; a set of plate outlines; a set of outlines of the orogens; and a table of characteristics of each digitization step along plate boundaries, including estimated relative velocity vector and classification into one of 7 types (continental convergence zone, continental transform fault, continental rift, oceanic spreading ridge
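
    The cumulative-number/area statistic quoted above can be computed as sketched below; the plate areas here are synthetic, purely for illustration, and the real exponent comes from the PB2002 data.

```python
import numpy as np

def cumulative_area_distribution(areas_sr):
    # Cumulative-number/area distribution: for each plate area A
    # (in steradians), count how many plates have area >= A.
    a = np.sort(np.asarray(areas_sr, dtype=float))[::-1]
    return a, np.arange(1, len(a) + 1)

def powerlaw_slope(areas_sr, lo=0.002, hi=1.0):
    # Least-squares slope of log10 N(>=A) vs log10 A restricted to the
    # scaling window quoted in the abstract (0.002 to 1 steradian).
    a, n = cumulative_area_distribution(areas_sr)
    mask = (a >= lo) & (a <= hi)
    slope, _ = np.polyfit(np.log10(a[mask]), np.log10(n[mask]), 1)
    return slope

# Synthetic areas drawn from a Pareto law with exponent 1
# (hypothetical data, only to exercise the fit).
rng = np.random.default_rng(1)
areas = 0.002 / rng.uniform(size=2000)
slope = powerlaw_slope(areas)  # should recover roughly -1
```
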

  11. Shell model calculations for exotic nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Brown, B.A. (Michigan State Univ., East Lansing, MI (USA)); Warburton, E.K. (Brookhaven National Lab., Upton, NY (USA)); Wildenthal, B.H. (New Mexico Univ., Albuquerque, NM (USA). Dept. of Physics and Astronomy)

    1990-02-01

    In this paper we review the progress of the shell-model approach to understanding the properties of light exotic nuclei (A < 40). By shell-model'' we mean the consistent and large-scale application of the classic methods discussed, for example, in the book of de-Shalit and Talmi. Modern calculations incorporate as many of the important configurations as possible and make use of realistic effective interactions for the valence nucleons. Properties such as the nuclear densities depend on the mean-field potential, which is usually separately from the valence interaction. We will discuss results for radii which are based on a standard Hartree-Fock approach with Skyrme-type interactions.

  12. Effective hamiltonian calculations using incomplete model spaces

    International Nuclear Information System (INIS)

    Koch, S.; Mukherjee, D.

    1987-01-01

    It appears that the danger of encountering ''intruder states'' is substantially reduced if an effective hamiltonian formalism is developed for incomplete model spaces (IMS). In a Fock-space approach, the proof a ''connected diagram theorem'' is fairly straightforward with exponential-type of ansatze for the wave-operator W, provided the normalization chosen for W is separable. Operationally, one just needs a suitable categorization of the Fock-space operators into ''diagonal'' and ''non-diagonal'' parts that is generalization of the corresponding procedure for the complete model space. The formalism is applied to prototypical 2-electron systems. The calculations have been performed on the Cyber 205 super-computer. The authors paid special attention to an efficient vectorization for the construction and solution of the resulting coupled non-linear equations

  13. An updated geospatial liquefaction model for global application

    Science.gov (United States)

    Zhu, Jing; Baise, Laurie G.; Thompson, Eric M.

    2017-01-01

    We present an updated geospatial approach to estimation of earthquake-induced liquefaction from globally available geospatial proxies. Our previous iteration of the geospatial liquefaction model was based on mapped liquefaction surface effects from four earthquakes in Christchurch, New Zealand, and Kobe, Japan, paired with geospatial explanatory variables including slope-derived VS30, compound topographic index, and magnitude-adjusted peak ground acceleration from ShakeMap. The updated geospatial liquefaction model presented herein improves the performance and the generality of the model. The updates include (1) expanding the liquefaction database to 27 earthquake events across 6 countries, (2) addressing the sampling of nonliquefaction for incomplete liquefaction inventories, (3) testing interaction effects between explanatory variables, and (4) overall improving model performance. While we test 14 geospatial proxies for soil density and soil saturation, the most promising geospatial parameters are slope-derived VS30, modeled water table depth, distance to coast, distance to river, distance to closest water body, and precipitation. We found that peak ground velocity (PGV) performs better than peak ground acceleration (PGA) as the shaking intensity parameter. We present two models which offer improved performance over prior models. We evaluate model performance using the area under the Receiver Operating Characteristic (ROC) curve (AUC) and the Brier score. The best-performing model in a coastal setting uses distance to coast but is problematic for regions away from the coast. The second best model, using PGV, VS30, water table depth, distance to closest water body, and precipitation, performs better in noncoastal regions and thus is the model we recommend for global implementation.
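
    The functional form of such a model (logistic regression on geospatial proxies) can be sketched as follows. The coefficient values below are illustrative placeholders, not the published ones, and the exact set of transformed predictors is an assumption.

```python
import math

def liquefaction_probability(pgv, vs30, water_depth, dist_water, precip,
                             coeffs=None):
    # Sketch of a geospatial logistic liquefaction model: probability
    # of surface liquefaction effects from shaking intensity (PGV),
    # soil-density proxy (VS30), and saturation proxies (water table
    # depth, distance to water, precipitation).
    if coeffs is None:
        # Hypothetical coefficients, chosen only so the signs match
        # physical intuition (softer, wetter, harder-shaken = riskier).
        coeffs = {"b0": 8.8, "ln_pgv": 0.33, "ln_vs30": -1.9,
                  "wtd": -0.016, "dw": -0.01, "precip": 0.0005}
    x = (coeffs["b0"]
         + coeffs["ln_pgv"] * math.log(pgv)
         + coeffs["ln_vs30"] * math.log(vs30)
         + coeffs["wtd"] * water_depth
         + coeffs["dw"] * dist_water
         + coeffs["precip"] * precip)
    return 1.0 / (1.0 + math.exp(-x))

# Soft, wet site vs. stiff site under the same shaking.
p_soft = liquefaction_probability(30.0, 200.0, 2.0, 0.5, 1500.0)
p_stiff = liquefaction_probability(30.0, 600.0, 2.0, 0.5, 1500.0)
```
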

  14. Olkiluoto surface hydrological modelling: Update 2012 including salt transport modelling

    International Nuclear Information System (INIS)

    Karvonen, T.

    2013-11-01

    Posiva Oy is responsible for implementing a final disposal program for spent nuclear fuel of its owners Teollisuuden Voima Oyj and Fortum Power and Heat Oy. The spent nuclear fuel is planned to be disposed at a depth of about 400-450 meters in the crystalline bedrock at the Olkiluoto site. Leakages located at or close to the spent fuel repository may give rise to the upconing of deep highly saline groundwater, and this is a concern with regard to the performance of the tunnel backfill material after the closure of the tunnels. Therefore a salt transport sub-model was added to the Olkiluoto surface hydrological model (SHYD). The other improvements include an update of the particle tracking algorithm and the possibility to estimate the influence of open drillholes in a case where overpressure in inflatable packers decreases, causing a hydraulic short-circuit between hydrogeological zones HZ19 and HZ20 along the drillhole. Four new hydrogeological zones HZ056, HZ146, BFZ100 and HZ039 were added to the model. In addition, zones HZ20A and HZ20B intersect with each other in the new structure model, which influences salinity upconing caused by leakages in shafts. The aim of modelling the long-term influence of ONKALO, the shafts and the repository tunnels is to provide computational results that can be used to suggest limits for allowed leakages. The model input data included all the existing leakages into ONKALO (35-38 l/min) and the shafts in the present day conditions. The influence of shafts was computed using eight different values for total shaft leakage: 5, 11, 20, 30, 40, 50, 60 and 70 l/min. The selection of the leakage criteria for shafts was influenced by the fact that upconing of saline water increases TDS-values close to the repository areas although HZ20B does not intersect any deposition tunnels. The total limit for all leakages was suggested to be 120 l/min. The limit for HZ20 zones was proposed to be 40 l/min: about 5 l/min the present day leakages to access tunnel, 25 l/min from

  15. Acceleration methods and models in Sn calculations

    International Nuclear Information System (INIS)

    Sbaffoni, M.M.; Abbate, M.J.

    1984-01-01

    In some neutron transport problems solved by the discrete ordinates method, it is relatively common to observe certain peculiarities such as, for example, the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The models commonly used for neutron flux calculation and the acceleration methods included in the most widely used codes were analyzed with regard to their use in problems characterized by a strong upscattering effect. Some special conclusions derived from this analysis are presented, as well as a new method to perform the upscattering scaling for solving the aforementioned problems in this kind of case. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider application. (Author) [es

  16. Recent Updates to the System Advisor Model (SAM)

    Energy Technology Data Exchange (ETDEWEB)

    DiOrio, Nicholas A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-02-14

    The System Advisor Model (SAM) is a mature suite of techno-economic models for many renewable energy technologies that can be downloaded for free as a desktop application or software development kit. SAM is used for system-level modeling, including generating performance projections. Recent changes include the release of the code as an open source project on GitHub. Other additions that will be covered include the ability to download data directly into SAM from the National Solar Radiation Database (NSRDB) and updates to a user-interface macro that assists with PV system sizing. A brief update on SAM's battery model and its integration with the detailed photovoltaic model will also be discussed. Finally, an outline of planned work for the next year will be presented, including the addition of a bifacial model, support for multiple MPPT inputs for detailed inverter modeling, and the addition of a model for inverter thermal behavior.

  17. The Potosi Reservoir Model 2013c, Property Modeling Update

    Energy Technology Data Exchange (ETDEWEB)

    Adushita, Yasmin; Smith, Valerie; Leetaru, Hannes

    2014-09-30

    property modeling workflows and layering. This model was retained as the base case. In the preceding Task [1], the Potosi reservoir model was updated to take into account the new data from the Verification Well #2 (VW2) which was drilled in 2012. The porosity and permeability modeling was revised to take into account the log data from the new well. Revisions of the 2010 modeling assumptions were also done on relative permeability, capillary pressures, formation water salinity, and the maximum allowable well bottomhole pressure. Dynamic simulations were run using the injection target of 3.5 million tons per annum (3.2 MTPA) for 30 years. This dynamic model was named Potosi Dynamic Model 2013b. In this Task, a new property modeling workflow was applied, where seismic inversion data guided the porosity mapping and geobody extraction. The static reservoir model was fully guided by PorosityCube interpretations and derivations coupled with petrophysical logs from three wells. The two main assumptions are that porosity features in the PorosityCube that correlate with lost circulation zones represent vugular zones, and that these vugular zones are laterally continuous. Extrapolation was done carefully to populate the vugular facies and their corresponding properties outside the seismic footprint up to the boundary of the 30 by 30 mi (48 by 48 km) model. Dynamic simulations were also run using the injection target of 3.5 million tons per annum (3.2 MTPA) for 30 years. This new dynamic model was named Potosi Dynamic Model 2013c. Reservoir simulation with the latest model gives a cumulative injection of 43 million tons (39 MT) in 30 years with a single well, which corresponds to 40% of the injection target. The injection rate is approx. 3.2 MTPA in the first six months as the well is injecting into the surrounding vugs, and declines rapidly to 1.8 million tons per annum (1.6 MTPA) in year 3 once the surrounding vugs are full and the CO2 starts to reach the matrix.
After, the injection

  18. Decision-making in an era of cancer prevention via aspirin: New Zealand needs updated guidelines and risk calculators.

    Science.gov (United States)

    Wilson, Nick; Selak, Vanessa; Blakely, Tony; Leung, William; Clarke, Philip; Jackson, Rod; Knight, Josh; Nghiem, Nhung

    2016-03-11

    Based on new systematic reviews of the evidence, the US Preventive Services Task Force has drafted updated guidelines on the use of low-dose aspirin for the primary prevention of both cardiovascular disease (CVD) and cancer. The Task Force generally recommends consideration of aspirin in adults aged 50-69 years with a 10-year CVD risk of at least 10%, in whom the absolute health gain (reduction of CVD and cancer) is estimated to exceed the absolute health loss (increase in bleeds). With the ongoing decline in CVD, current risk calculators for New Zealand are probably outdated, so it is difficult to be precise about what proportion of the population is in this risk category (roughly equivalent to 5-year CVD risk ≥5%). Nevertheless, we suspect that most smokers aged 50-69 years, and some non-smokers, would probably meet the new threshold for taking low-dose aspirin. The country therefore needs updated guidelines and risk calculators that are ideally informed by estimates of absolute net health gain (in quality-adjusted life-years (QALYs) per person) and cost-effectiveness. Other improvements to risk calculators include: epidemiological rigour (eg, by addressing competing mortality); providing enhanced graphical display of risk to enhance risk communication; and possibly capturing the issues of medication disutility and comparison with lifestyle changes.
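
    The rough equivalence quoted above (10-year risk of 10% corresponding to a 5-year risk of about 5%) follows from a constant-hazard assumption, which can be sketched as:

```python
def five_year_risk(ten_year_risk):
    # Assuming a constant hazard over the decade (an illustrative
    # simplification, not part of the guideline itself), 5-year
    # survival is the square root of 10-year survival, so
    # r5 = 1 - (1 - r10) ** 0.5.
    return 1.0 - (1.0 - ten_year_risk) ** 0.5

r5 = five_year_risk(0.10)  # about 0.051, i.e. roughly 5%
```
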

  19. Examining the influence of working memory on updating mental models.

    Science.gov (United States)

    Valadao, Derick F; Anderson, Britt; Danckert, James

    2015-01-01

    The ability to accurately build and update mental representations of our environment depends on our ability to integrate information over a variety of time scales and detect changes in the regularity of events. As such, the cognitive mechanisms that support model building and updating are likely to interact with those involved in working memory (WM). To examine this, we performed three experiments that manipulated WM demands concurrently with the need to attend to regularities in other stimulus properties (i.e., location and shape). That is, participants completed a prediction task while simultaneously performing an n-back WM task with either no load or a moderate load. The distribution of target locations (Experiment 1) or shapes (Experiments 2 and 3) included some level of probabilistic regularity, which, unbeknown to participants, changed abruptly within each block. Moderate WM load hampered the ability to benefit from target regularities and to adapt to changes in those regularities (i.e., the prediction task). This was most pronounced when both prediction and WM requirements shared the same target feature. Our results show that representational updating depends on free WM resources in a domain-specific fashion.

  20. A Component Model for Cable System Calculations

    NARCIS (Netherlands)

    Nijs, J.M.M. de; Boschma, J.J.

    2012-01-01

    Unfortunately, no method yet exists for system calculations to support cable engineers with the technical challenge of increasing digital loads when confronted with ever-increasing capacity demands from commercial departments. This article introduces a reliable method of cable system calculations.

  1. CLPX-Model: Rapid Update Cycle 40km (RUC-40) Model Output Reduced Data, Version 1

    Data.gov (United States)

    National Aeronautics and Space Administration — The Rapid Update Cycle, version 2 at 40km (RUC-2, known to the Cold Land Processes community as RUC40) model is a Mesoscale Analysis and Prediction System (MAPS)...

  2. On-line Bayesian model updating for structural health monitoring

    Science.gov (United States)

    Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo

    2018-03-01

    Fatigue-induced cracking is a dangerous failure mechanism which affects mechanical components subject to alternating load cycles. System health monitoring should be adopted to identify cracks which can jeopardise the structure. Real-time damage detection may fail in the identification of the cracks due to different sources of uncertainty which have been poorly assessed or even fully neglected. In this paper, a novel efficient and robust procedure is used for the detection of crack locations and lengths in mechanical components. A Bayesian model updating framework is employed, which allows accounting for relevant sources of uncertainty. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach, an emulator is adopted to replace the computationally costly Finite Element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises and imprecisions in the values of model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.
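
    The "most probable crack consistent with the measurements" idea can be sketched with a one-parameter Metropolis-Hastings updater. The frequency-versus-crack-length "emulator" below is a toy stand-in for a costly finite element model, not the paper's surrogate.

```python
import numpy as np

def emulator_frequency(crack_len):
    # Cheap surrogate standing in for a costly finite element model:
    # the first natural frequency (Hz) drops as the crack grows
    # (a toy law, purely for illustration).
    return 100.0 * (1.0 - 0.5 * crack_len)

def update_crack_length(measured_freq, noise_std=0.5,
                        n_samples=20000, seed=0):
    # Metropolis-Hastings sketch of Bayesian model updating: sample
    # the crack length consistent with a measured natural frequency.
    # Uniform prior on [0, 1); Gaussian measurement likelihood.
    rng = np.random.default_rng(seed)

    def log_post(c):
        if not 0.0 <= c < 1.0:
            return -np.inf
        r = measured_freq - emulator_frequency(c)
        return -0.5 * (r / noise_std) ** 2

    samples, c = [], 0.5
    lp = log_post(c)
    for _ in range(n_samples):
        c_new = c + rng.normal(0.0, 0.05)   # random-walk proposal
        lp_new = log_post(c_new)
        if np.log(rng.uniform()) < lp_new - lp:
            c, lp = c_new, lp_new
        samples.append(c)
    return np.array(samples[n_samples // 2:])  # discard burn-in

true_crack = 0.3
post = update_crack_length(emulator_frequency(true_crack))
# the posterior samples in `post` concentrate near true_crack
```

    In practice the emulator would be trained on a handful of expensive FE runs, and the likelihood would combine several sensors rather than a single frequency.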

  3. Neutron transport model for standard calculation experiment

    International Nuclear Information System (INIS)

    Lukhminskij, B.E.; Lyutostanskij, Yu.S.; Lyashchuk, V.I.; Panov, I.V.

    1989-01-01

    The neutron transport calculation algorithms for media of complex composition with a predetermined geometry are realized through multigroup representations within the Monte Carlo method in the MAMONT code. The quality of the code was evaluated by comparison with benchmark experiments. Neutron leakage spectra calculations in spherically symmetric geometry were carried out for iron and polyethylene. Use of the MAMONT code for the metrological support of geophysical tasks is proposed. The code is oriented towards calculations of neutron transport and of secondary nuclide accumulation in blankets and geophysical media. 7 refs.; 2 figs

  4. Finite element model updating in structural dynamics using design sensitivity and optimisation

    OpenAIRE

    Calvi, Adriano

    1998-01-01

    Model updating is an important issue in engineering. In fact a well-correlated model provides for accurate evaluation of the structure loads and responses. The main objectives of the study were to exploit available optimisation programs to create an error localisation and updating procedure of finite element models that minimises the "error" between experimental and analytical modal data, addressing in particular the updating of large scale finite element models with se...

  5. Updated observational constraints on quintessence dark energy models

    Science.gov (United States)

    Durrive, Jean-Baptiste; Ooba, Junpei; Ichiki, Kiyotomo; Sugiyama, Naoshi

    2018-02-01

    The recent GW170817 measurement favors the simplest dark energy models, such as a single scalar field. Quintessence models can be classified in two classes, freezing and thawing, depending on whether the equation of state decreases towards -1 or departs from it. In this paper, we put observational constraints on the parameters governing the equations of state of tracking freezing, scaling freezing, and thawing models using updated data, from the Planck 2015 release, joint light-curve analysis, and baryonic acoustic oscillations. Because of the current tensions on the value of the Hubble parameter H0, unlike previous authors, we let this parameter vary, which modifies significantly the results. Finally, we also derive constraints on neutrino masses in each of these scenarios.

  6. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    International Nuclear Information System (INIS)

    Fu, Y; Xu, O; Yang, W; Zhou, L; Wang, J

    2017-01-01

    To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which can reduce the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To avoid an excessively high model updating frequency, a confidence value is introduced, which can be updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information and reflect the process characteristics accurately. (paper)
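
    The time-difference idea can be sketched as follows; for brevity the recursive update below is plain recursive least squares with a forgetting factor, used as a simpler stand-in for the moving-window recursive PLS step, and the drifting process is synthetic.

```python
import numpy as np

def time_difference(z):
    # Time-difference preprocessing: model delta-x -> delta-y so that
    # slow additive drift in the process largely cancels out.
    return np.diff(z, axis=0)

def rls_step(theta, P, x, y, lam=0.99):
    # One recursive least-squares update with forgetting factor lam
    # (a stand-in here for the recursive PLS update in the abstract).
    x = x.reshape(-1, 1)
    k = P @ x / (lam + (x.T @ P @ x).item())
    theta = theta + k[:, 0] * (y - (x.T @ theta).item())
    P = (P - k @ x.T @ P) / lam
    return theta, P

# Toy drifting process: true linear relation plus slow sensor drift.
rng = np.random.default_rng(3)
w_true = np.array([1.5, -0.8])
X = rng.normal(size=(400, 2))
y = X @ w_true + np.linspace(0.0, 5.0, 400) + 0.02 * rng.normal(size=400)

dX, dy = time_difference(X), time_difference(y)
theta, P = np.zeros(2), 1e3 * np.eye(2)
for xi, yi in zip(dX, dy):
    theta, P = rls_step(theta, P, xi, yi)
# theta now approximates w_true despite the drift
```

    Fitting `X` against `y` directly would be biased by the drift; differencing removes it before the recursive update sees the data.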

  7. Hydrogeological structure model of the Olkiluoto Site. Update in 2010

    International Nuclear Information System (INIS)

    Vaittinen, T.; Ahokas, H.; Nummela, J.; Paulamaeki, S.

    2011-09-01

    As part of the programme for the final disposal of spent nuclear fuel, a hydrogeological structure model containing the hydraulically significant zones on Olkiluoto Island has been compiled. The structure model describes the deterministic site scale zones that dominate the groundwater flow. The main objective of the study is to provide the geometry and the hydrogeological properties related to the groundwater flow for the zones and the sparsely fractured bedrock to be used in the numerical modelling of groundwater flow and geochemical transport and thereby in the safety assessment. Also, these zones should be taken into account in the repository layout and in the construction of the disposal facility, and they have a long-term impact on the evolution of the site and the safety of the disposal repository. The previous hydrogeological model was compiled in 2008 and this updated version is based on data available at the end of May 2010. The updating was based on new hydrogeological observations and a systematic approach covering all drillholes to assess measured fracture transmissivities typical of the site-scale hydrogeological zones. New data consisted of head observations and interpreted pressure and flow responses caused by field activities. Essential background data for the modelling included the ductile deformation model and the site scale brittle deformation zones modelled in the geological model version 2.0. The GSM combines both geological and geophysical investigation data on the site. As a result of the modelling campaign, hydrogeological zones HZ001, HZ008, HZ19A, HZ19B, HZ19C, HZ20A, HZ20B, HZ21, HZ21B, HZ039, HZ099, OL-BFZ100, and HZ146 were included in the structure model. Compared with the previous model, zone HZ004 was replaced with zone HZ146 and zone HZ039 was introduced for the first time. Alternative zone HZ21B was included in the basic model.
For the modelled zones, both the zone intersections, describing the fractures with dominating groundwater

  8. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Science.gov (United States)

    2010-07-01

    ...-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment... model type. (a) Fuel economy values for a base level are calculated from vehicle configuration fuel... update sales projections at the time any model type value is calculated for a label value. (iii) The...
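
    Under 40 CFR 600, model-type fuel economy values are built up from sales-weighted harmonic means of the underlying configuration values, because fuel consumption (gallons per mile), not mpg, is what adds across a fleet. A minimal sketch of that averaging (illustrative only; the regulation's exact weighting and rounding rules are more involved):

```python
def sales_weighted_fe(sales, mpg):
    """Sales-weighted harmonic mean of fuel economy values (mpg).

    Harmonic averaging is used because fuel *consumption* (gal/mi),
    not mpg, is additive across vehicles in a fleet.
    """
    total = sum(sales)
    return total / sum(s / m for s, m in zip(sales, mpg))
```

    For example, 100 vehicles at 20 mpg and 300 at 30 mpg give 400 / (100/20 + 300/30) = 26.67 mpg, noticeably below the 27.5 mpg arithmetic mean.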

  9. Operational model updating of spinning finite element models for HAWT blades

    Science.gov (United States)

    Velazquez, Antonio; Swartz, R. Andrew; Loh, Kenneth J.; Zhao, Yingjun; La Saponara, Valeria; Kamisky, Robert J.; van Dam, Cornelis P.

    2014-04-01

    Structural health monitoring (SHM) relies on the collection and interrogation of operational data from the monitored structure. To make these data meaningful, a means of understanding how damage-sensitive data features relate to the physical condition of the structure is required. Model-driven SHM applications achieve this goal through model updating. This study proposes a novel approach for updating aero-elastic vibration models of blades on operational horizontal-axis wind turbines (HAWTs). The proposed approach updates estimates of modal properties for spinning HAWT blades intended for use in SHM and load estimation of these structures. Spinning structures present additional challenges for model updating due to spinning effects, the dependence of modal properties on rotational velocity, and gyroscopic effects that lead to complex mode shapes. A cyclo-stationary stochastic-based eigensystem realization algorithm (ERA) is applied to operational turbine data to identify data-driven modal properties, including frequencies and mode shapes. Model-driven modal properties are derived through modal condensation of spinning finite element models with variable physical parameters. Complex modes are converted into equivalent real modes through a reduction transformation. Model updating is achieved through an adaptive simulated annealing search process, using the Modal Assurance Criterion (MAC) with complex-conjugate modes, to find the physical parameters that best match the experimentally derived data.
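
    The MAC used in the matching step above measures the correlation between two mode shapes and extends naturally to the complex modes of spinning structures via conjugate-transpose inner products. A minimal sketch (not the authors' implementation):

```python
import numpy as np

def mac(phi_1, phi_2):
    """Modal Assurance Criterion between two (possibly complex) mode shapes.

    MAC = |phi_1^H phi_2|^2 / ((phi_1^H phi_1)(phi_2^H phi_2));
    1 means perfectly correlated shapes, 0 means orthogonal ones.
    """
    num = abs(np.vdot(phi_1, phi_2)) ** 2
    den = np.vdot(phi_1, phi_1).real * np.vdot(phi_2, phi_2).real
    return num / den
```

    The value is invariant to complex scaling of either shape, which is what makes it usable for pairing analytical and experimentally identified modes.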

  10. Shell model calculations for exotic nuclei

    International Nuclear Information System (INIS)

    Brown, B.A.; Wildenthal, B.H.

    1991-01-01

    A review of the shell-model approach to understanding the properties of light exotic nuclei is given. The topics discussed include: binding energies in the p and p-sd model spaces and in the sd and sd-pf model spaces; cross-shell excitations around 32Mg, including weak-coupling aspects and mechanisms for lowering the nℏω excitations; beta-decay properties in the neutron-rich sd, p-sd, and sd-pf model spaces and in the proton-rich sd model space; and Coulomb break-up cross sections. (G.P.) 76 refs.; 12 figs

  11. An Updated Gas/grain Sulfur Network for Astrochemical Models

    Science.gov (United States)

    Laas, Jacob; Caselli, Paola

    2017-06-01

    Sulfur is a chemical element that enjoys one of the highest cosmic abundances. However, it has traditionally played a relatively minor role in the field of astrochemistry, being drowned out by other chemistries after it depletes from the gas phase during the transition from a diffuse cloud to a dense one. A wealth of laboratory studies have provided clues to its rich chemistry in the condensed phase, and most recently, a report by a team behind the Rosetta spacecraft has significantly helped to unveil its rich cometary chemistry. We have set forth to use this information to greatly update/extend the sulfur reactions within the OSU gas/grain astrochemical network in a systematic way, to provide more realistic chemical models of sulfur for a variety of interstellar environments. We present here some results and implications of these models.

  12. "Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared with the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and the simulations are evaluated against observations. This has resulted in improvements in the model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  13. Significance of predictive models/risk calculators for HBV-related hepatocellular carcinoma

    Directory of Open Access Journals (Sweden)

    DONG Jing

    2015-06-01

    Full Text Available Hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) is a major public health problem in Southeast Asia. In recent years, researchers from Hong Kong and Taiwan have reported predictive models or risk calculators for HBV-associated HCC by studying its natural history, which, to some extent, predict the possibility of HCC development. Generally, the risk factors in each model involve age, sex, HBV DNA level, and liver cirrhosis. This article discusses the evolution and clinical significance of currently used predictive models for HBV-associated HCC and assesses the advantages and limitations of the risk calculators. The updated REACH-B model and the LSM-HCC model show better negative predictive values and better performance in predicting the outcomes of patients with chronic hepatitis B (CHB). These models can be applied to stratified screening for HCC and can also serve as an assessment tool for the management of CHB patients.

  14. A Stress Update Algorithm for Constitutive Models of Glassy Polymers

    Science.gov (United States)

    Danielsson, Mats

    2013-06-01

    A semi-implicit stress update algorithm is developed for the elastic-viscoplastic behavior of glassy polymers. The case of near rate-insensitivity is addressed, and the stress update algorithm is designed to handle this case robustly. A consistent tangent stiffness matrix is derived based on a full linearization of the internal virtual work. The stress update algorithm and (a slightly modified) tangent stiffness matrix are implemented in a commercial finite element program. The stress update algorithm is tested on a large boundary value problem for illustrative purposes.

  15. Uncertainty calculation in transport models and forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Prato, Carlo Giacomo

    . Forthcoming: European Journal of Transport and Infrastructure Research, 15-3, 64-72. 4 The last paper4 examined uncertainty in the spatial composition of residence and workplace locations in the Danish National Transport Model. Despite the evidence that spatial structure influences travel behaviour...... to increase the quality of the decision process and to develop robust or adaptive plans. In fact, project evaluation processes that do not take into account model uncertainty produce not fully informative and potentially misleading results so increasing the risk inherent to the decision to be taken...

  16. Contact-based model for strategy updating and evolution of cooperation

    Science.gov (United States)

    Zhang, Jianlei; Chen, Zengqiang

    2016-06-01

    Establishing a workable model of players' strategy decision processes is not easy, and the related strategy-updating rules have sparked heated debate. Models for evolutionary games have traditionally assumed that players imitate their successful partners by comparing respective payoffs, raising the question of what happens if the game information is not easily available. Focusing on this yet-unsolved case, the motivation behind the work presented here is to establish a novel model for the updating of states in a spatial population that bypasses the payoff information required in previous models and instead considers players' contact patterns. It is handy and understandable to employ switching probabilities to determine the microscopic dynamics of strategy evolution. Our results illuminate the conditions under which the steady coexistence of competing strategies is possible. These findings reveal that the evolutionary fate of the coexisting strategies can be calculated analytically, and they provide novel hints for the resolution of cooperative dilemmas in a competitive context. We hope that our results disclose new explanations for the survival and coexistence of competing strategies in structured populations.
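
    The abstract does not spell out the switching rule, so the following is purely an illustrative sketch of a contact-based, payoff-free asynchronous update; the function name and rule are assumptions, not the authors' model:

```python
import random

def contact_update(strategies, contacts, switch_prob, rng=random):
    """One asynchronous update step: a randomly chosen player contacts one
    of its neighbours and adopts that neighbour's strategy with a fixed
    switching probability. No payoff comparison is involved."""
    i = rng.randrange(len(strategies))       # focal player
    j = rng.choice(contacts[i])              # a contacted neighbour
    if rng.random() < switch_prob:
        strategies[i] = strategies[j]        # imitate the contact
    return strategies
```

    Iterating such steps over a network of contacts yields the kind of state dynamics whose long-run coexistence conditions the paper analyses.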

  17. Temperature Calculations in the Coastal Modeling System

    Science.gov (United States)

    2017-04-01

    ...with the change of water turbidity in coastal and estuarine systems. Water quality and ecological models often require input of water temperature...

  18. Slab2 - Updated Subduction Zone Geometries and Modeling Tools

    Science.gov (United States)

    Moore, G.; Hayes, G. P.; Portner, D. E.; Furtney, M.; Flamme, H. E.; Hearne, M. G.

    2017-12-01

    The U.S. Geological Survey database of global subduction zone geometries (Slab1.0), is a highly utilized dataset that has been applied to a wide range of geophysical problems. In 2017, these models have been improved and expanded upon as part of the Slab2 modeling effort. With a new data driven approach that can be applied to a broader range of tectonic settings and geophysical data sets, we have generated a model set that will serve as a more comprehensive, reliable, and reproducible resource for three-dimensional slab geometries at all of the world's convergent margins. The newly developed framework of Slab2 is guided by: (1) a large integrated dataset, consisting of a variety of geophysical sources (e.g., earthquake hypocenters, moment tensors, active-source seismic survey images of the shallow slab, tomography models, receiver functions, bathymetry, trench ages, and sediment thickness information); (2) a dynamic filtering scheme aimed at constraining incorporated seismicity to only slab related events; (3) a 3-D data interpolation approach which captures both high resolution shallow geometries and instances of slab rollback and overlap at depth; and (4) an algorithm which incorporates uncertainties of contributing datasets to identify the most probable surface depth over the extent of each subduction zone. Further layers will also be added to the base geometry dataset, such as historic moment release, earthquake tectonic provenance, and interface coupling. Along with access to several queryable data formats, all components have been wrapped into an open source library in Python, such that suites of updated models can be released as further data becomes available. This presentation will discuss the extent of Slab2 development, as well as the current availability of the model and modeling tools.

  19. Venus Global Reference Atmospheric Model Status and Planned Updates

    Science.gov (United States)

    Justh, H. L.; Cianciolo, A. M. Dwyer

    2017-01-01

    The Venus Global Reference Atmospheric Model (Venus-GRAM) was originally developed in 2004 under funding from NASA's In Space Propulsion (ISP) Aerocapture Project to support mission studies at the planet. Many proposals, including NASA New Frontiers and Discovery, as well as other studies have used Venus-GRAM to design missions and assess system robustness. After Venus-GRAM's release in 2005, several missions to Venus have generated a wealth of additional atmospheric data, yet few model updates have been made to Venus-GRAM. This paper serves to address three areas: (1) to present the current status of Venus-GRAM, (2) to identify new sources of data and other upgrades that need to be incorporated to maintain Venus-GRAM credibility and (3) to identify additional Venus-GRAM options and features that could be included to increase its capability. This effort will depend on understanding the needs of the user community, obtaining new modeling data and establishing a dedicated funding source to support continual upgrades. This paper is intended to initiate discussion that can result in an upgraded and validated Venus-GRAM being available to future studies and NASA proposals.

  20. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  1. Updated Conceptual Model for the 300 Area Uranium Groundwater Plume

    Energy Technology Data Exchange (ETDEWEB)

    Zachara, John M.; Freshley, Mark D.; Last, George V.; Peterson, Robert E.; Bjornstad, Bruce N.

    2012-11-01

    The 300 Area uranium groundwater plume in the 300-FF-5 Operable Unit is residual from past discharge of nuclear fuel fabrication wastes to a number of liquid (and solid) disposal sites. The source zones in the disposal sites were remediated by excavation and backfilled to grade, but sorbed uranium remains in deeper, unexcavated vadose zone sediments. In spite of source term removal, the groundwater plume has shown remarkable persistence, with concentrations exceeding the drinking water standard over an area of approximately 1 km2. The plume resides within a coupled vadose zone, groundwater, river zone system of immense complexity and scale. Interactions between geologic structure, the hydrologic system driven by the Columbia River, groundwater-river exchange points, and the geochemistry of uranium contribute to persistence of the plume. The U.S. Department of Energy (DOE) recently completed a Remedial Investigation/Feasibility Study (RI/FS) to document characterization of the 300 Area uranium plume and plan for beginning to implement proposed remedial actions. As part of the RI/FS document, a conceptual model was developed that integrates knowledge of the hydrogeologic and geochemical properties of the 300 Area and controlling processes to yield an understanding of how the system behaves and the variables that control it. Recent results from the Hanford Integrated Field Research Challenge site and the Subsurface Biogeochemistry Scientific Focus Area Project funded by the DOE Office of Science were used to update the conceptual model and provide an assessment of key factors controlling plume persistence.

  2. Development of new model for high explosives detonation parameters calculation

    Directory of Open Access Journals (Sweden)

    Jeremić Radun

    2012-01-01

    Full Text Available A simple semi-empirical model for calculating the detonation pressure and velocity of CHNO explosives has been developed, based on experimental values of detonation parameters. The model uses Avakyan's method to determine the chemical composition of the detonation products and is applicable over a wide range of densities. Compared with the well-known Kamlet method and a numerical detonation model based on the BKW EOS, the values calculated from the proposed model are significantly more accurate.
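
    For context, the Kamlet method used above as a benchmark estimates detonation velocity and pressure from a few composition-level quantities. A sketch using the commonly quoted Kamlet-Jacobs constants (units and constants as usually published; the paper's own model differs, and these inputs are illustrative):

```python
import math

def kamlet_jacobs(N, M, Q, rho0):
    """Kamlet-Jacobs detonation estimates for CHNO explosives.

    N    : moles of gaseous detonation products per gram of explosive
    M    : mean molecular weight of those gases (g/mol)
    Q    : heat of detonation (cal/g)
    rho0 : initial density (g/cm^3)
    Returns (D in km/s, P in GPa).
    """
    phi = N * math.sqrt(M) * math.sqrt(Q)
    D = 1.01 * math.sqrt(phi) * (1.0 + 1.30 * rho0)   # detonation velocity, km/s
    P = 15.58 * rho0 ** 2 * phi / 10.0                # pressure: kbar -> GPa
    return D, P
```

    With TNT-like inputs (N ≈ 0.03 mol/g, M ≈ 25 g/mol, Q ≈ 1250 cal/g, rho0 = 1.6 g/cm^3) the correlation yields a detonation velocity of roughly 7 km/s, in the expected range.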

  3. Methodology report on the calculation of emissions to air from the sectors Energy, Industry and Waste (Update 2016), as used by the Dutch Pollutant Release and Transfer Register

    NARCIS (Netherlands)

    Peek CJ; Montfoort JA; Droge R; Guis B; Baas C; van Huet B; van Hunnik OR; van den Berghe ACWM; DMO; MIL

    2017-01-01

    In this technical report, RIVM describes the updated methods that the Netherlands Pollutant Release and Transfer Register uses to calculate the emissions of polluting substances into the air from the Industry, Energy Generating and Waste Processing sectors. Due to international treaties, such

  4. A review on model updating of joint structure for dynamic analysis purpose

    Directory of Open Access Journals (Sweden)

    Zahari S.N.

    2016-01-01

    Full Text Available Structural joints provide connections between structural elements (beams, plates, etc.) in order to construct a whole assembled structure. There are many types of structural joints, such as bolted joints, riveted joints, and welded joints. Joint structures contribute significantly to structural stiffness and to the dynamic behaviour of structures; hence, the main objectives of this paper are to review methods of model updating for joint structures and to discuss guidelines for performing model updating for dynamic analysis purposes. This review first outlines some of the existing finite element modelling work on joint structures. Experimental modal analysis is the next step, used to obtain modal parameters (natural frequencies and mode shapes) so that the discrepancy between experimental results and their simulation counterparts can be assessed and reduced. Model updating is then carried out to minimize the differences between the two sets of results. There are two methods of model updating: the direct method and the iterative method. Sensitivity analysis is performed using SOL200 in NASTRAN, selecting suitable updating parameters to avoid ill-conditioning problems. It is best to consider both geometrical and material properties in the updating procedure rather than only a number of geometrical properties alone. The iterative method is preferred because the physical meaning of the updated parameters is guaranteed, although it requires more computational effort than the direct method.
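
    The iterative method favoured in the review above can be sketched as a sensitivity-based correction loop: perturb each updating parameter, build the sensitivity matrix, and correct the parameters with its pseudo-inverse. A minimal generic illustration (not the SOL200/NASTRAN implementation; the model function and tolerances are assumptions):

```python
import numpy as np

def iterative_update(model, p0, f_meas, n_iter=20, eps=1e-6, tol=1e-10):
    """Iterative sensitivity-based model updating.

    At each step, build the sensitivity matrix S_ij = d f_i / d p_j by
    finite differences, then correct the parameters with its pseudo-inverse:
        p <- p + pinv(S) @ (f_meas - f(p))
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        f = np.asarray(model(p))
        r = f_meas - f                      # residual vs. measured responses
        if np.linalg.norm(r) < tol:
            break
        S = np.column_stack([
            (np.asarray(model(p + eps * np.eye(len(p))[j])) - f) / eps
            for j in range(len(p))
        ])
        p = p + np.linalg.pinv(S) @ r
    return p
```

    The pseudo-inverse handles the usual case of more measured responses than updating parameters; ill-conditioning of S is exactly why the review stresses careful parameter selection.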

  5. Summary of Expansions, Updates, and Results in GREET 2017 Suite of Models

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Michael [Argonne National Lab. (ANL), Argonne, IL (United States); Elgowainy, Amgad [Argonne National Lab. (ANL), Argonne, IL (United States); Han, Jeongwoo [Argonne National Lab. (ANL), Argonne, IL (United States); Benavides, Pahola Thathiana [Argonne National Lab. (ANL), Argonne, IL (United States); Burnham, Andrew [Argonne National Lab. (ANL), Argonne, IL (United States); Cai, Hao [Argonne National Lab. (ANL), Argonne, IL (United States); Canter, Christina [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Rui [Argonne National Lab. (ANL), Argonne, IL (United States); Dai, Qiang [Argonne National Lab. (ANL), Argonne, IL (United States); Kelly, Jarod [Argonne National Lab. (ANL), Argonne, IL (United States); Lee, Dong-Yeon [Argonne National Lab. (ANL), Argonne, IL (United States); Lee, Uisung [Argonne National Lab. (ANL), Argonne, IL (United States); Li, Qianfeng [Argonne National Lab. (ANL), Argonne, IL (United States); Lu, Zifeng [Argonne National Lab. (ANL), Argonne, IL (United States); Qin, Zhangcai [Argonne National Lab. (ANL), Argonne, IL (United States); Sun, Pingping [Argonne National Lab. (ANL), Argonne, IL (United States); Supekar, Sarang D. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-11-01

    This report provides a technical summary of the expansions and updates to the 2017 release of Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET®) model, including references and links to key technical documents related to these expansions and updates. The GREET 2017 release includes an updated version of the GREET1 (the fuel-cycle GREET model) and GREET2 (the vehicle-cycle GREET model), both in the Microsoft Excel platform and in the GREET.net modeling platform. Figure 1 shows the structure of the GREET Excel modeling platform. The .net platform integrates all GREET modules together seamlessly.

  6. A revised model of Jupiter's inner electron belts: Updating the Divine radiation model

    Science.gov (United States)

    Garrett, Henry B.; Levin, Steven M.; Bolton, Scott J.; Evans, Robin W.; Bhattacharya, Bidushi

    2005-02-01

    In 1983, Divine presented a comprehensive model of the Jovian charged particle environment that has long served as a reference for missions to Jupiter. However, in situ observations by Galileo and synchrotron observations from Earth indicate the need to update the model in the inner radiation zone. Specifically, a review of the model for 1 MeV data. Further modifications incorporating observations from the Galileo and Cassini spacecraft will be reported in the future.

  7. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared these changes to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated the simulations against observatio...

  8. Dynamic model updating based on strain mode shape and natural frequency using hybrid pattern search technique

    Science.gov (United States)

    Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping

    2018-05-01

    Aiming at providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode shape to enhance model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate between the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. The weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is then used as the objective function in the proposed dynamic FE model updating procedure. The hybrid genetic/pattern-search optimization algorithm is adopted to perform the updating procedure. Numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can update the uncertain parameters with good robustness, and that the updated dynamic FE model of the beam structure, which correctly predicts both the natural frequencies and the local dynamic strains, is reliable for subsequent dynamic analysis and dynamic strength evaluation.
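
    The objective function described above, a weighted sum of the natural-frequency residual and the strain-MAC residual, can be sketched as follows (real-valued strain shapes and unit weights assumed; the paper's exact residual definitions and weights may differ):

```python
import numpy as np

def strain_mac(psi_a, psi_e):
    """MAC between an analytical and an experimental (real) strain mode shape."""
    return (psi_a @ psi_e) ** 2 / ((psi_a @ psi_a) * (psi_e @ psi_e))

def updating_objective(f_a, f_e, Psi_a, Psi_e, w_f=1.0, w_m=1.0):
    """Weighted sum of the relative natural-frequency residual and the
    (1 - strain MAC) residual over the paired modes."""
    f_a, f_e = np.asarray(f_a), np.asarray(f_e)
    freq_res = np.sum(((f_a - f_e) / f_e) ** 2)
    mac_res = sum(1.0 - strain_mac(a, e) for a, e in zip(Psi_a, Psi_e))
    return w_f * freq_res + w_m * mac_res
```

    An optimizer such as the hybrid genetic/pattern-search algorithm then drives this value toward zero; it vanishes exactly when analytical frequencies and strain shapes match the experimental ones.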

  9. Machine learning in updating predictive models of planning and scheduling transportation projects

    Science.gov (United States)

    1997-01-01

    A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...

  10. Highly efficient model updating for structural condition assessment of large-scale bridges.

    Science.gov (United States)

    2015-02-01

    For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...
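
    A response surface built from radial basis functions, as proposed above, interpolates structural responses at sampled parameter points so that the expensive FE model need not be rerun inside the updating loop. A minimal Gaussian-RBF sketch (the kernel choice and width are assumptions, not the paper's settings):

```python
import numpy as np

def rbf_fit(X, y, c=1.0):
    """Fit Gaussian-RBF weights so the surface interpolates samples (X, y).

    X : (n, d) array of sampled parameter points; y : (n,) responses.
    """
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    Phi = np.exp(-r2 / c ** 2)              # kernel matrix
    return np.linalg.solve(Phi, y)          # interpolation weights

def rbf_predict(X, w, x_new, c=1.0):
    """Evaluate the fitted response surface at a new parameter point."""
    r2 = ((np.asarray(x_new) - X) ** 2).sum(axis=-1)
    return np.exp(-r2 / c ** 2) @ w
```

    Once fitted, each surrogate evaluation is a cheap kernel product, which is what makes RS-based updating tractable for large-scale bridge models.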

  11. Precipitates/Salts Model Calculations for Various Drift Temperature Environments

    International Nuclear Information System (INIS)

    Marnier, P.

    2001-01-01

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation within a repository drift. This work is developed and documented using procedure AP-3.12Q, Calculations, in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The primary objective of this calculation is to predict the effects of evaporation on the abstracted water compositions established in ''EBS Incoming Water and Gas Composition Abstraction Calculations for Different Drift Temperature Environments'' (BSC 2001c). A secondary objective is to predict evaporation effects on observed Yucca Mountain waters for subsequent cement interaction calculations (BSC 2001d). The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b)

  12. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction, a number of computational intelligence techniques to facilitate the updating process are proposed; they include:
    • multi-layer perceptron neural networks for real-time FEM updating;
    • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models;
    • simulated annealing to put the methodologies into a sound statistical basis; and
    • response surface methods and expectation m...

  13. Experimental Studies on Finite Element Model Updating for a Heated Beam-Like Structure

    Directory of Open Access Journals (Sweden)

    Kaipeng Sun

    2015-01-01

    Full Text Available An experimental study was made of the identification procedure for time-varying modal parameters and the finite element model updating technique for a beam-like thermal structure in both steady and unsteady high-temperature environments. An improved time-varying autoregressive method was first proposed to extract the instantaneous natural frequencies of the structure in the unsteady high-temperature environment. Then, based on the identified modal parameters, a finite element model for the structure was updated by using a Kriging meta-model and an optimization-based finite element model updating method. The temperature-dependent parameters to be updated were expressed as low-order polynomials of the temperature increase, and the finite element model updating problem was solved by updating several coefficients of the polynomials. The experimental results demonstrated the effectiveness of the time-varying modal parameter identification method and showed that the instantaneous natural frequencies of the updated model tracked the trends of the measured values with high accuracy.

  14. In-Drift Microbial Communities Model Validation Calculations

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Jolley

    2001-09-24

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is effected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000) As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  15. In-Drift Microbial Communities Model Validation Calculation

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Jolley

    2001-10-31

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is effected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000) As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  16. In-Drift Microbial Communities Model Validation Calculations

    International Nuclear Information System (INIS)

    Jolley, D.M.

    2001-01-01

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code against both scientific measurements of microbial populations at the site and in the laboratory and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in Sections 6.7.2 and 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, ''Calculations'', and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  17. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    D.M. Jolley

    2001-12-18

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code against both scientific measurements of microbial populations at the site and in the laboratory and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in Sections 6.7.2 and 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, ''Calculations'', and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  18. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    International Nuclear Information System (INIS)

    D.M. Jolley

    2001-01-01

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code against both scientific measurements of microbial populations at the site and in the laboratory and against natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that will eventually feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in Sections 6.7.2 and 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, ''Calculations'', and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  19. Synthetic Modifications In the Frequency Domain for Finite Element Model Update and Damage Detection

    Science.gov (United States)

    2017-09-01

    Thesis: ''Synthetic Modifications in the Frequency Domain for Finite Element Model Update and Damage Detection,'' by Ryun J. C. Konze, September 2017. Only title-page and reference fragments of this record survive, including a partial citation: Fritzen, C., and Kiefer, T., 1992, ''Localization and Correction of Errors in Finite Element Models Based on...''

  20. The accuracy of heavy ion optical model calculations

    International Nuclear Information System (INIS)

    Kozik, T.

    1980-01-01

    The sources and magnitude of numerical errors in heavy ion optical model calculations are investigated in detail, illustrated with the example of 20Ne + 24Mg scattering at E(LAB) = 100 MeV. (author)

  1. Modeling and Calculator Tools for State and Local Transportation Resources

    Science.gov (United States)

    Air quality models, calculators, guidance and strategies are offered for estimating and projecting vehicle air pollution, including ozone or smog-forming pollutants, particulate matter and other emissions that pose public health and air quality concerns.

  2. A finite element model updating technique for adjustment of parameters near boundaries

    Science.gov (United States)

    Gwinn, Allen Fort, Jr.

    Even though there have been many advances in research related to methods of updating finite element models based on measured normal mode vibration characteristics, there is yet to be a widely accepted method that works reliably with a wide range of problems. This dissertation focuses on the specific class of problems having to do with changes in stiffness near the clamped boundary of plate structures. This class of problems is especially important as it relates to the performance of turbine engine blades, where a change in stiffness at the base of the blade can be indicative of structural damage. The method that is presented herein is a new technique for resolving the differences between the physical structure and the finite element model. It is a semi-iterative technique that incorporates a "physical expansion" of the measured eigenvectors along with appropriate scaling of these expanded eigenvectors into an iterative loop that uses the Engel's model modification method to then calculate adjusted stiffness parameters for the finite element model. Three example problems are presented that use eigenvalues and mass normalized eigenvectors that have been calculated from experimentally obtained accelerometer readings. The test articles that were used were all thin plates with one edge fully clamped. They each had a cantilevered length of 8.5 inches and a width of 4 inches. The three plates differed from one another in thickness from 0.100 inches to 0.188 inches. These dimensions were selected in order to approximate a gas turbine engine blade. The semi-iterative modification technique is shown to do an excellent job of calculating the necessary adjustments to the finite element model so that the analytically determined eigenvalues and eigenvectors for the adjusted model match the corresponding values from the experimental data with good agreement. Furthermore, the semi-iterative method is quite robust. For the examples presented here, the method consistently converged

  3. A methodology for constructing the calculation model of scientific spreadsheets

    NARCIS (Netherlands)

    Vos, de M.; Wielemaker, J.; Schreiber, G.; Wielinga, B.; Top, J.L.

    2015-01-01

    Spreadsheet models are frequently used by scientists to analyze research data. These models are typically described in a paper or a report, which serves as the single source of information on the underlying research project. As the calculation workflow in these models is not made explicit, readers are

  4. Mathematical models for calculating radiation dose to the fetus

    International Nuclear Information System (INIS)

    Watson, E.E.

    1992-01-01

    Estimates of radiation dose from radionuclides inside the body are calculated on the basis of energy deposition in mathematical models representing the organs and tissues of the human body. Complex models may be used with radiation transport codes to calculate the fraction of emitted energy that is absorbed in a target tissue even at a distance from the source. Other models may be simple geometric shapes for which absorbed fractions of energy have already been calculated. Models of Reference Man, the 15-year-old (Reference Woman), the 10-year-old, the five-year-old, the one-year-old, and the newborn have been developed and used for calculating specific absorbed fractions (absorbed fractions of energy per unit mass) for several different photon energies and many different source-target combinations. The Reference Woman model is adequate for calculating energy deposition in the uterus during the first few weeks of pregnancy. During the course of pregnancy, the embryo/fetus increases rapidly in size and thus requires several models for calculating absorbed fractions. In addition, the increases in size and changes in shape of the uterus and fetus result in the repositioning of the maternal organs and in different geometric relationships among the organs and the fetus. This is especially true of the excretory organs such as the urinary bladder and the various sections of the gastrointestinal tract. Several models have been developed for calculating absorbed fractions of energy in the fetus, including models of the uterus and fetus for each month of pregnancy and complete models of the pregnant woman at the end of each trimester. In this paper, the available models and the appropriate use of each will be discussed. (Author) 19 refs., 7 figs
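    The dose arithmetic described above reduces to the MIRD schema: the mean dose to a target tissue is the cumulated activity in a source region times an S-value built from absorbed fractions per unit mass. A minimal sketch with illustrative numbers (the emission energy, absorbed fraction, target mass and cumulated activity below are placeholders, not tabulated phantom data):

```python
# MIRD-style absorbed dose estimate: D(target <- source) = A_tilde * S,
# where S sums, over emissions, energy per decay times the specific
# absorbed fraction Phi = phi / m_target (absorbed fraction per unit mass).
# All numeric values below are illustrative placeholders, not phantom data.

def s_value(energies_j, phis, m_target_kg):
    """S-value in Gy per decay from emission energies (J per decay),
    absorbed fractions phi, and target mass (kg)."""
    return sum(e * phi / m_target_kg for e, phi in zip(energies_j, phis))

def absorbed_dose(a_tilde_decays, s_gy_per_decay):
    """Mean absorbed dose D = A_tilde * S (Gy)."""
    return a_tilde_decays * s_gy_per_decay

# Example: one 0.1 MeV photon per decay (1.602e-14 J), absorbed
# fraction 0.3 in a 1 kg target, cumulated activity 1e9 decays.
S = s_value([1.602e-14], [0.3], 1.0)
D = absorbed_dose(1e9, S)
```

The month-by-month fetal models discussed in the paper would enter through the absorbed fractions, which change as the geometry changes.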

  5. Effective UV radiation from model calculations and measurements

    Science.gov (United States)

    Feister, Uwe; Grewe, Rolf

    1994-01-01

    Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.

  6. Finite element model updating of the UCF grid benchmark using measured frequency response functions

    Science.gov (United States)

    Sipple, Jesse D.; Sanayei, Masoud

    2014-05-01

    A frequency response function based finite element model updating method is presented and used to perform parameter estimation of the University of Central Florida Grid Benchmark Structure. The proposed method is used to calibrate the initial finite element model using measured frequency response functions from the undamaged, intact structure. Stiffness properties, mass properties, and boundary conditions of the initial model were estimated and updated. Model updating was then performed using measured frequency response functions from the damaged structure to detect physical structural change. Grouping and ungrouping were utilized to determine the exact location and magnitude of the damage. The fixity in rotation of two boundary condition nodes was accurately and successfully estimated. The usefulness of the proposed method for finite element model updating is shown by being able to detect, locate, and quantify change in structural properties.

  7. Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations

    Science.gov (United States)

    Ehlert, Kurt; Loewe, Laurence

    2014-11-01

    To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected "hubs" such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present "Lazy Updating," an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases—with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
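    The core idea can be sketched in a toy direct-method SSA: propensities that depend on the hub species are refreshed only once the hub count has drifted past a tolerance. This is an illustrative reading of Lazy Updating, not the published Sorting Direct Method implementation; the two reactions and their rate constants are invented for the sketch:

```python
import math
import random

# Toy direct-method SSA with "lazy" handling of a hub species: the
# hub-dependent propensity a1 is recomputed only when the hub count has
# drifted beyond a tolerance since its last refresh, so it is allowed to
# go stale between refreshes (the small accuracy cost noted above).
# Reactions and rates are invented: (1) hub + A -> A, (2) A -> 2A.

def lazy_ssa(x_hub, x_a, t_end, tol=0.05, seed=1):
    rng = random.Random(seed)
    k1, k2 = 1.0, 0.5              # illustrative rate constants
    hub_at_update = x_hub          # hub count at last refresh of a1
    a1 = k1 * x_hub * x_a          # hub-dependent propensity (lazily updated)
    t = 0.0
    while t < t_end and x_hub > 0:
        a2 = k2 * x_a              # hub-independent propensity (always cheap)
        a0 = a1 + a2
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if rng.random() * a0 < a1:
            x_hub -= 1             # reaction 1 fired: hub consumed
        else:
            x_a += 1               # reaction 2 fired: A replicates
        # lazy refresh: only when the hub drifted past the tolerance
        if abs(x_hub - hub_at_update) > tol * max(hub_at_update, 1):
            a1 = k1 * x_hub * x_a
            hub_at_update = x_hub
    return x_hub, x_a
```

In a real network the payoff comes when many reactions depend on the hub, so one threshold check replaces many propensity recomputations per event.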

  8. Model for calculating the boron concentration in PWR type reactors

    International Nuclear Information System (INIS)

    Reis Martins Junior, L.L. dos; Vanni, E.A.

    1986-01-01

    A PWR boron concentration model has been developed for use with RETRAN code. The concentration model calculates the boron mass balance in the primary circuit as the injected boron mixes and is transported through the same circuit. RETRAN control blocks are used to calculate the boron concentration in fluid volumes during steady-state and transient conditions. The boron reactivity worth is obtained from the core concentration and used in RETRAN point kinetics model. A FSAR type analysis of a Steam Line Break Accident in Angra I plant was selected to test the model and the results obtained indicate a sucessfull performance. (Author) [pt

  9. Damage severity assessment in wind turbine blade laboratory model through fuzzy finite element model updating

    Science.gov (United States)

    Turnbull, Heather; Omenzetter, Piotr

    2017-04-01

    The recent shift towards development of clean, sustainable energy sources has provided a new challenge in terms of structural safety and reliability: with aging, manufacturing defects, harsh environmental and operational conditions, and extreme events such as lightning strikes, wind turbines can become damaged, resulting in production losses and environmental degradation. To monitor the current structural state of the turbine, structural health monitoring (SHM) techniques would be beneficial. Physics-based SHM in the form of calibration of a finite element model (FEM) by inverse techniques is adopted in this research. Fuzzy finite element model updating (FFEMU) techniques for damage severity assessment of a small-scale wind turbine blade are discussed and implemented. The main advantage is the ability of FFEMU to account in a simple way for uncertainty within the problem of model updating. Uncertainty quantification techniques, such as fuzzy sets, enable a convenient mathematical representation of the various uncertainties. Experimental frequencies obtained from modal analysis on a small-scale wind turbine blade were described by fuzzy numbers to model measurement uncertainty. During this investigation, damage severity estimation was investigated through addition of small masses of varying magnitude to the trailing edge of the structure. This structural modification, intended to stand in lieu of damage, enabled non-destructive experimental simulation of structural change. A numerical model was constructed with multiple variable additional masses simulated on the blade's trailing edge and used as updating parameters. Objective functions for updating were constructed and minimized using both the particle swarm optimization algorithm and the firefly algorithm. FFEMU was able to obtain a prediction of the baseline material properties of the blade whilst also successfully predicting, with sufficient accuracy, a larger magnitude of structural alteration and its location.

  10. HOM study and parameter calculation of the TESLA cavity model

    CERN Document Server

    Zeng, Ri-Hua; Gerigk Frank; Wang Guang-Wei; Wegner Rolf; Liu Rong; Schuh Marcel

    2010-01-01

    The Superconducting Proton Linac (SPL) is the project for a superconducting, high-current H− accelerator at CERN. To find dangerous higher order modes (HOMs) in the SPL superconducting cavities, simulation and analysis of the cavity model using simulation tools are necessary. The existing TESLA 9-cell cavity geometry data have been used for the initial construction of the models in HFSS. Monopole, dipole and quadrupole modes have been obtained by applying different symmetry boundaries on various cavity models. In the calculations, the HFSS scripting language was used to create scripts that automatically calculate the parameters of the modes in these cavity models (these scripts are also applicable to other cavities with different cell numbers and geometric structures). The automatically calculated results are then compared with the values given in the TESLA paper. The optimized cavity model with the minimum error will be taken as the base for further simulation of the SPL cavities.

  11. Predicting Individual Physiological Responses During Marksmanship Field Training Using an Updated SCENARIO-J Model

    National Research Council Canada - National Science Library

    Yokota, Miyo

    2004-01-01

    ...)) for individual variation and a metabolic rate (M) correction during downhill movements. This study evaluated the updated version of the model incorporating these new features, using a dataset collected during U.S. Marine Corps (USMC...

  12. Predictability of locomotion: Effects on updating of spatial situation models during narrative comprehension

    NARCIS (Netherlands)

    Dutke, S.; Rinck, M.

    2006-01-01

    We investigated how the updating of spatial situation models during narrative comprehension depends on the interaction of cognitive abilities and text characteristics. Participants with low verbal and visuospatial abilities and participants with high abilities read narratives in which the

  13. Finite element model updating using bayesian framework and modal properties

    CSIR Research Space (South Africa)

    Marwala, T

    2005-01-01

    In this Note, Markov chain Monte Carlo (MCMC) simulation is used to sample the probability of the updating parameters in light of the measured modal properties. This probability is known as the posterior probability. The Metropolis algorithm (see Ref. 6...
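    The sampling step described in the Note can be sketched with a one-parameter example: Metropolis sampling of a stiffness scale for a single-degree-of-freedom oscillator given one "measured" natural frequency. The model, flat prior and noise level are assumptions for illustration, not those of the cited reference:

```python
import math
import random

# Metropolis sampling of the posterior of one updating parameter: a
# stiffness k for a 1-DOF oscillator with model frequency
# f(k) = sqrt(k/m) / (2*pi), given a "measured" frequency f_meas.
# Gaussian likelihood with std sigma, flat prior on k > 0. Illustrative
# stand-in for the FE-updating posterior, not the cited Note's model.

def metropolis(f_meas, m=1.0, sigma=0.05, n=5000, step=0.1, seed=0):
    rng = random.Random(seed)

    def log_post(k):
        if k <= 0.0:
            return -math.inf           # prior excludes non-physical stiffness
        f = math.sqrt(k / m) / (2.0 * math.pi)
        return -0.5 * ((f - f_meas) / sigma) ** 2

    k, lp = 1.0, log_post(1.0)         # chain start
    samples = []
    for _ in range(n):
        k_new = k + rng.gauss(0.0, step)      # symmetric random-walk proposal
        lp_new = log_post(k_new)
        # accept with probability min(1, post_new / post_old)
        if math.log(1.0 - rng.random()) < lp_new - lp:
            k, lp = k_new, lp_new
        samples.append(k)
    return samples
```

With several updating parameters the same accept/reject rule applies to a vector proposal; only the likelihood (modal residuals) grows.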

  14. batman: BAsic Transit Model cAlculatioN in Python

    Science.gov (United States)

    Kreidberg, Laura

    2015-11-01

    I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman .
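    For a uniform (non-limb-darkened) stellar disk, the transit light curve reduces to the overlap area of two circles. The sketch below shows that geometric core only; it is not batman's code, and for real work the package itself should be used:

```python
import math

# Flux of a star of unit radius occulted by a planet of radius rp whose
# center is at projected separation d (both in stellar radii), for a
# uniform stellar disk. Standard circle-overlap geometry; an illustrative
# sketch, not batman's implementation (which also handles limb darkening).

def uniform_disk_flux(d, rp):
    if d >= 1.0 + rp:                 # no overlap: out of transit
        return 1.0
    if d <= 1.0 - rp:                 # planet fully inside the stellar disk
        return 1.0 - rp * rp
    # partial overlap (ingress/egress): lens-shaped intersection area
    k0 = math.acos((d * d + rp * rp - 1.0) / (2.0 * d * rp))
    k1 = math.acos((d * d + 1.0 - rp * rp) / (2.0 * d))
    area = (rp * rp * k0 + k1
            - 0.5 * math.sqrt(max(0.0, 4.0 * d * d
                                  - (1.0 + d * d - rp * rp) ** 2)))
    return 1.0 - area / math.pi
```

A full transit depth for rp = 0.1 is 1%; at d = 1 (planet center on the limb) roughly half the planet is in front of the star.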

  15. Microscopic interacting boson model calculations for even–even ...

    Indian Academy of Sciences (India)

    One of the goals of the present study is to test interacting boson model calculations in the mass region of A ≈ 130 by comparing them with some previous experimental and theoretical results. The interacting boson model offers a simple Hamiltonian, capable of describing collective nuclear properties across a wide range of ...

  16. Calculating gait kinematics using MR-based kinematic models.

    Science.gov (United States)

    Scheys, Lennart; Desloovere, Kaat; Spaepen, Arthur; Suetens, Paul; Jonkers, Ilse

    2011-02-01

    Rescaling generic models is the most frequently applied approach in generating biomechanical models for inverse kinematics. Nevertheless it is well known that this procedure introduces errors in calculated gait kinematics due to: (1) errors associated with palpation of anatomical landmarks, (2) inaccuracies in the definition of joint coordinate systems. Based on magnetic resonance (MR) images, more accurate, subject-specific kinematic models can be built that are significantly less sensitive to both error types. We studied the difference between the two modelling techniques by quantifying differences in calculated hip and knee joint kinematics during gait. In a clinically relevant patient group of 7 pediatric cerebral palsy (CP) subjects with increased femoral anteversion, gait kinematics were calculated using (1) rescaled generic kinematic models and (2) subject-specific MR-based models. In addition, both sets of kinematics were compared to those obtained using the standard clinical data processing workflow. Inverse kinematics calculated using rescaled generic models or the standard clinical workflow differed largely from kinematics calculated using subject-specific MR-based kinematic models. The kinematic differences were most pronounced in the sagittal and transverse planes (hip and knee flexion, hip rotation). This study shows that MR-based kinematic models improve the reliability of gait kinematics, compared to generic models based on normal subjects. This is the case especially in CP subjects, where bony deformations may alter the relative configuration of joint coordinate systems. Whilst high cost impedes the implementation of this modeling technique, our results demonstrate that efforts should be made to improve the level of subject-specific detail in the joint axes determination. Copyright © 2010 Elsevier B.V. All rights reserved.

  17. Optimizing the calculation grid for atmospheric dispersion modelling.

    Science.gov (United States)

    Van Thielen, S; Turcanu, C; Camps, J; Keppens, R

    2015-04-01

    This paper presents three approaches to find optimized grids for atmospheric dispersion measurements and calculations in emergency planning. This can be useful for deriving optimal positions for mobile monitoring stations, or help to reduce discretization errors and improve recommendations. Indeed, threshold-based recommendations or conclusions may differ strongly depending on the shape and size of the grid on which atmospheric dispersion measurements or calculations of pollutants are based. Therefore, relatively sparse grids that retain as much information as possible are required. The grid optimization procedure proposed here is first demonstrated with a simple Gaussian plume model as adopted in atmospheric dispersion calculations, which provides fast calculations. The optimized grids are compared to the Noodplan grid, currently used for emergency planning in Belgium, and to the exact solution. We then demonstrate how it can be used in more realistic dispersion models. Copyright © 2015 Elsevier Ltd. All rights reserved.
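    The fast forward model referred to above, a steady-state Gaussian plume with ground reflection, can be written directly from its standard formula. In this sketch the dispersion coefficients are passed in as plain numbers rather than derived from atmospheric stability classes, which is a simplification:

```python
import math

# Concentration downwind of a continuous point source from the standard
# Gaussian plume model with reflection at the ground. Sketch of the kind
# of fast forward model used for grid optimization; sigma_y and sigma_z
# would normally come from stability-class correlations.

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """q: emission rate (g/s), u: wind speed (m/s), y: crosswind offset (m),
    z: receptor height (m), h: effective release height (m). Returns g/m^3."""
    norm = q / (2.0 * math.pi * u * sigma_y * sigma_z)
    cross = math.exp(-y * y / (2.0 * sigma_y ** 2))
    vert = (math.exp(-((z - h) ** 2) / (2.0 * sigma_z ** 2))
            + math.exp(-((z + h) ** 2) / (2.0 * sigma_z ** 2)))  # ground reflection
    return norm * cross * vert
```

Evaluating this on candidate grids and comparing threshold exceedance against a dense reference grid is the kind of test the optimization procedure needs to run many times.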

  18. Using radar altimetry to update a routing model of the Zambezi River Basin

    DEFF Research Database (Denmark)

    Michailovsky, Claire Irene B.; Bauer-Gottwein, Peter

    2012-01-01

    ... is needed for hydrological applications. To overcome these limitations, altimetry river levels can be combined with hydrological modeling in a data-assimilation framework. This study focuses on the updating of a river routing model of the Zambezi using river levels from radar altimetry. A hydrological model of the basin was built to simulate the land phase of the water cycle and produce inflows to a Muskingum routing model. River altimetry from the ENVISAT mission was then used to update the storages in the reaches of the Muskingum model using the Extended Kalman Filter. The method showed improvements in modeled ...
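    The routing component named in the abstract, the Muskingum scheme, can be sketched in a few lines: outflow O2 = C0·I2 + C1·I1 + C2·O1 with coefficients derived from the storage constant K, weighting factor x and time step dt. The parameter values in the example are illustrative, and the Kalman-filter update of reach storages is not reproduced:

```python
# Muskingum channel routing: O2 = C0*I2 + C1*I1 + C2*O1, with
# coefficients from the storage constant K, weighting factor x and
# time step dt (consistent time units). Illustrative sketch; the
# altimetry-based Kalman-filter update is not included.

def muskingum_coeffs(K, x, dt):
    denom = 2.0 * K * (1.0 - x) + dt
    c0 = (dt - 2.0 * K * x) / denom
    c1 = (dt + 2.0 * K * x) / denom
    c2 = (2.0 * K * (1.0 - x) - dt) / denom
    return c0, c1, c2                      # always sum to 1

def route(inflows, K, x, dt, o0):
    """Route an inflow series through one reach, starting from outflow o0."""
    c0, c1, c2 = muskingum_coeffs(K, x, dt)
    out = [o0]
    for i1, i2 in zip(inflows, inflows[1:]):
        out.append(c0 * i2 + c1 * i1 + c2 * out[-1])
    return out
```

Because the coefficients sum to one, a steady inflow equal to the initial outflow passes through unchanged, which is a convenient sanity check.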

  19. Effects of lateral boundary condition resolution and update frequency on regional climate model predictions

    Science.gov (United States)

    Pankatz, Klaus; Kerkweg, Astrid

    2015-04-01

    The work presented is part of the joint project "DecReg" ("Regional decadal predictability"), which is in turn part of the project "MiKlip" ("Decadal predictions"), an effort funded by the German Federal Ministry of Education and Research to improve decadal predictions on a global and regional scale. In MiKlip, one big question is whether regional climate modeling shows "added value", i.e. whether regional climate models (RCMs) produce better results than the driving models. However, the scope of this study is to look more closely at the setup-specific details of regional climate modeling. As regional models only simulate a small domain, they have to inherit information about the state of the atmosphere at their lateral boundaries from external data sets. There are many unresolved questions concerning the setup of lateral boundary conditions (LBC). External data sets come from global models or from global reanalysis data sets. A temporal resolution of six hours is common for this kind of data, mainly because storage space is a limiting factor, especially for climate simulations. However, theoretically, the coupling frequency could be as high as the time step of the driving model. Meanwhile, it is unclear whether a more frequent update of the LBCs has a significant effect on the climate in the domain of the RCM. The first study examines how the RCM reacts to a higher update frequency. The study is based on a 30 year time slice experiment for three update frequencies of the LBC, namely six hours, one hour and six minutes. The evaluation of means, standard deviations and statistics of the climate in the regional domain shows only small, though in some cases statistically significant, deviations of 2 m temperature, sea level pressure and precipitation. The second part of the first study assesses parameters linked to cyclone activity, which is affected by the LBC update frequency. Differences in track density and strength are found when comparing the simulations

  20. Approximate dynamic fault tree calculations for modelling water supply risks

    International Nuclear Information System (INIS)

    Lindhe, Andreas; Norberg, Tommy; Rosén, Lars

    2012-01-01

    Traditional fault tree analysis is not always sufficient when analysing complex systems. To overcome the limitations dynamic fault tree (DFT) analysis is suggested in the literature as well as different approaches for how to solve DFTs. For added value in fault tree analysis, approximate DFT calculations based on a Markovian approach are presented and evaluated here. The approximate DFT calculations are performed using standard Monte Carlo simulations and do not require simulations of the full Markov models, which simplifies model building and in particular calculations. It is shown how to extend the calculations of the traditional OR- and AND-gates, so that information is available on the failure probability, the failure rate and the mean downtime at all levels in the fault tree. Two additional logic gates are presented that make it possible to model a system's ability to compensate for failures. This work was initiated to enable correct analyses of water supply risks. Drinking water systems are typically complex with an inherent ability to compensate for failures that is not easily modelled using traditional logic gates. The approximate DFT calculations are compared to results from simulations of the corresponding Markov models for three water supply examples. For the traditional OR- and AND-gates, and one gate modelling compensation, the errors in the results are small. For the other gate modelling compensation, the error increases with the number of compensating components. The errors are, however, in most cases acceptable with respect to uncertainties in input data. The approximate DFT calculations improve the capabilities of fault tree analysis of drinking water systems since they provide additional and important information and are simple and practically applicable.
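    The standard-gate part of such calculations can be sketched as a plain Monte Carlo estimate of OR- and AND-gate failure probability from component unavailabilities; the paper's extended gates and compensation gates are not reproduced here:

```python
import random

# Monte Carlo estimate of the steady-state failure probability of OR/AND
# gates from component unavailabilities q_i (probability a component is
# failed at a random inspection time). Sketch of the standard-gate part
# only; the compensation gates described above are not reproduced.

def gate_prob(qs, gate, n=200000, seed=0):
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        states = [rng.random() < q for q in qs]   # True = component failed
        failed = any(states) if gate == "OR" else all(states)
        fails += failed
    return fails / n
```

For two components with q = 0.1 and 0.2, the exact answers are 1 − 0.9·0.8 = 0.28 for OR and 0.1·0.2 = 0.02 for AND; the Monte Carlo estimate converges to these, which is the kind of cross-check against exact (Markov) results the paper performs.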

  1. A new multi-objective approach to finite element model updating

    Science.gov (United States)

    Jin, Seung-Seop; Cho, Soojin; Jung, Hyung-Jo; Lee, Jong-Jae; Yun, Chung-Bang

    2014-05-01

    The single objective function (SOF) has been employed for the optimization process in conventional finite element (FE) model updating. The SOF balances the residuals of multiple properties (e.g., modal properties) using weighting factors, but the weighting factors are hard to determine before the run of model updating. Therefore, a trial-and-error strategy is taken to find the most preferred model among alternative updated models resulting from varying weighting factors. In this study, a new approach to FE model updating using a multi-objective function (MOF) is proposed to get the most preferred model in a single run of updating without trial and error. For the optimization using the MOF, the non-dominated sorting genetic algorithm-II (NSGA-II) is employed to find the Pareto optimal front. The bend angle related to the trade-off relationship of the objective functions is used to select the most preferred model among the solutions on the Pareto optimal front. To validate the proposed approach, a highway bridge is selected as a test-bed and the modal properties of the bridge are obtained from an ambient vibration test. The initial FE model of the bridge is built using SAP2000. The model is updated using the identified modal properties by the SOF approach with varying weighting factors and by the proposed MOF approach. The most preferred model is selected using the bend angle of the Pareto optimal front and compared with the results from the SOF approach with varying weighting factors. The comparison shows that the proposed MOF approach is superior to the SOF approach in obtaining smaller objective function values, estimating better updated parameters, and taking less computational time.
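    The bend-angle selection on the Pareto front can be sketched for two objectives: at each interior front point, compute the angle between the segments to its neighbours and pick the sharpest bend (the knee). This is an illustrative reading of the criterion on normalized objectives, not the authors' exact formulation:

```python
import math

# Pick the "knee" of a 2-objective Pareto front: the interior point whose
# neighbour segments meet at the sharpest bend (smallest angle).
# Illustrative reading of the bend-angle criterion; assumes the objectives
# are already normalized so the angle comparison is meaningful.

def knee_point(front):
    front = sorted(front)                      # order along the first objective
    best_i, best_angle = None, float("inf")
    for i in range(1, len(front) - 1):
        (x0, y0), (x1, y1), (x2, y2) = front[i - 1], front[i], front[i + 1]
        a = (x0 - x1, y0 - y1)                 # vector to previous point
        b = (x2 - x1, y2 - y1)                 # vector to next point
        dot = a[0] * b[0] + a[1] * b[1]
        na, nb = math.hypot(*a), math.hypot(*b)
        ang = math.acos(max(-1.0, min(1.0, dot / (na * nb))))
        if ang < best_angle:                   # sharper bend = smaller angle
            best_angle, best_i = ang, i
    return front[best_i]
```

On a front that drops steeply and then flattens, the knee sits where further improvement in one objective starts costing a lot in the other.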

  2. The updated geodetic mean dynamic topography model – DTU15MDT

    DEFF Research Database (Denmark)

    Knudsen, Per; Andersen, Ole Baltazar; Maximenko, Nikolai

    An update to the global mean dynamic topography model DTU13MDT is presented. For DTU15MDT the newer gravity model EIGEN-6C4 has been combined with the DTU15MSS mean sea surface model to construct this global mean dynamic topography model. The EIGEN-6C4 is derived using the full series of GOCE data...

  3. Summary of Calculation Performed with NPIC's New FGR Model

    International Nuclear Information System (INIS)

    Jiao Yongjun; Li Wenjie; Zhou Yi; Xing Shuo

    2013-01-01

    1. Introduction The NPIC modeling group has performed calculations on both real cases and idealized cases in the FUMEX II and III data packages. The performance code we used is COPERNIC 2.4, developed by AREVA, to which a new FGR model has been added. Therefore, a comparison study has been made between the Bernard model (V2.2) and the new model, in order to evaluate the performance of the new model. As mentioned before, the focus of our study lies in thermal fission gas release, or more specifically the grain boundary bubble behaviours. 2. Calculation method There are some differences between the calculated burnup and the measured burnup in many real cases. Since FGR is significantly dependent on rod average burnup, a multiplicative factor on fuel rod linear power, i.e. FQE, is applied and adjusted in the calculations to ensure that the calculated burnup generally equals the measured burnup. Also, a multiplicative factor on upper plenum volume, i.e. AOPL, is applied and adjusted in the calculations to ensure that the calculated free volume equals the pre-irradiation data for the total free volume in the rod. Cladding temperatures were entered if they were provided; otherwise the cladding temperatures are calculated from the inlet coolant temperature. The results are presented in Excel form as an attachment to this paper, including thirteen real cases and three idealized cases. Three real cases (BK353, BK370, US PWR TSQ022) are excluded from validation of the new model, because the predicted athermal release is even greater than the measured release, which implies a negative thermal release. Obviously this is not reasonable for validation, but the results are also listed in Excel (sheet 'Cases excluded from validation'). 3. Results The results of 10 real cases are listed in sheet 'Steady case summary', which summarizes measured and predicted values of Bu and FGR for each case, and plots the M/P ratio of the FGR calculation by different models in COPERNIC. A statistical comparison was also made with three indexes, i

  4. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2011

    International Nuclear Information System (INIS)

    Nigg, David W.; Steuhm, Devin A.

    2011-01-01

    . Furthermore, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system is being implemented, and initial computational results have been obtained. This capability will have many applications in 2011 and beyond as a tool for understanding the margins of uncertainty in the new models, as well as for validation experiment design and interpretation. Finally, we note that although full implementation of the new computational models and protocols will extend over a period of 3-4 years as noted above, interim applications in the much nearer term have already been demonstrated. In particular, these demonstrations included an analysis that was useful for understanding the cause of some issues in December 2009 that were triggered by a larger-than-acceptable discrepancy between the measured excess core reactivity and a calculated value based on the legacy computational methods. As the Modeling Update project proceeds, we anticipate further such interim, informal applications in parallel with formal qualification of the system under the applicable INL Quality Assurance procedures and standards.

  5. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2011

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg; Devin A. Steuhm

    2011-09-01

    , a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system is being implemented, and initial computational results have been obtained. This capability will have many applications in 2011 and beyond as a tool for understanding the margins of uncertainty in the new models, as well as for validation experiment design and interpretation. Finally, we note that although full implementation of the new computational models and protocols will extend over a period of 3-4 years as noted above, interim applications in the much nearer term have already been demonstrated. In particular, these demonstrations included an analysis that was useful for understanding the cause of some issues in December 2009 that were triggered by a larger-than-acceptable discrepancy between the measured excess core reactivity and a calculated value based on the legacy computational methods. As the Modeling Update project proceeds, we anticipate further such interim, informal applications in parallel with formal qualification of the system under the applicable INL Quality Assurance procedures and standards.

  6. Model calculations of groundwater conditions on Sternoe peninsula

    International Nuclear Information System (INIS)

    Axelsson, C.-L.; Carlsson, L.

    1979-09-01

    The groundwater conditions within the bedrock of Sternoe were calculated using a two-dimensional FEM model. Five sections were laid out over the area, each five km deep and between two and six km long. First, the piezometric head was calculated in two major tectonic zones where the hydraulic conductivity was set to 10⁻⁶ m/s. In the other sections, two of which cross the tectonic zones, the bedrock was assumed to have hydraulic conductivities of 10⁻⁸ m/s in the uppermost 300 m and 10⁻¹¹ m/s below. From the resulting maps of the piezometric head, the flow time was calculated for groundwater travelling from 500 m depth to a tectonic zone or to the 300 m level below the sea. This calculation was performed for two sections, both with and without tectonic zones. The influence of groundwater discharge from a well at one point in one of the tectonic zones was also calculated. The kinematic porosity was assumed to be 10⁻⁴. The results showed that the flow time varied between 1000 and 500 000 years within the area, except within roughly 100 m of any of the tectonic zones. For further calculations the use of three-dimensional models was proposed. (Auth.)
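
    An order-of-magnitude check of such flow times can be done with Darcy's law. A minimal sketch, using the conductivity and porosity quoted in the abstract (K = 10⁻¹¹ m/s below 300 m, kinematic porosity 10⁻⁴); the hydraulic gradient and travel distance are assumed, illustrative values, not from the report:

```python
SECONDS_PER_YEAR = 3.156e7

def travel_time_years(K, gradient, porosity, distance_m):
    """Advective travel time from Darcy's law for homogeneous 1-D flow:
    seepage (pore) velocity v = K * i / n_e, travel time t = L / v."""
    v = K * gradient / porosity          # m/s
    return distance_m / v / SECONDS_PER_YEAR

# K and n_e from the abstract; i = 0.01 and L = 200 m are assumed values.
t = travel_time_years(K=1e-11, gradient=0.01, porosity=1e-4, distance_m=200.0)
```

    With these inputs the travel time lands in the thousands of years, inside the 1000 to 500 000 year range reported above.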

  7. Improving prediction models with new markers: a comparison of updating strategies

    Directory of Open Access Journals (Sweden)

    D. Nieboer

    2016-09-01

    Full Text Available Abstract Background New markers hold the promise of improving risk prediction for individual patients. We aimed to compare the performance of different strategies to extend a previously developed prediction model with a new marker. Methods Our motivating example was the extension of a risk calculator for prostate cancer with a new marker that was available in a relatively small dataset. Performance of the strategies was also investigated in simulations. Development, marker and test sets with different sample sizes originating from the same underlying population were generated. A prediction model was fitted using logistic regression in the development set, extended using the marker set and validated in the test set. The extension strategies considered were re-estimating individual regression coefficients, updating of predictions using conditional likelihood ratios (LR), and imputation of marker values in the development set followed by fitting a model in the combined development and marker sets. Sample sizes considered for the development and marker sets were 500 and 100, 500 and 500, and 100 and 500 patients. Discriminative ability of the extended models was quantified using the concordance statistic (c-statistic), and calibration was quantified using the calibration slope. Results All strategies led to extended models with increased discrimination (c-statistic increase from 0.75 to 0.80 in test sets). Strategies estimating a large number of parameters (re-estimation of all coefficients and updating using conditional LR) led to overfitting (calibration slope below 1). Parsimonious methods, limiting the number of coefficients to be re-estimated, or applying shrinkage after model revision, limited the amount of overfitting. Combining the development and marker sets using imputation of missing marker values led to consistently well-performing models in all scenarios. Similar results were observed in the motivating example. Conclusion When the
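
    The concordance statistic used above to quantify discrimination has a simple pairwise definition that can be computed directly. A minimal sketch with invented predictions and outcomes (the c-statistic definition is standard; the data are hypothetical):

```python
def c_statistic(probs, outcomes):
    """Concordance (c-) statistic: the probability that a randomly chosen
    event receives a higher predicted risk than a randomly chosen
    non-event; ties count as 1/2."""
    events = [p for p, y in zip(probs, outcomes) if y == 1]
    nonevents = [p for p, y in zip(probs, outcomes) if y == 0]
    concordant = 0.0
    for pe in events:
        for pn in nonevents:
            if pe > pn:
                concordant += 1.0
            elif pe == pn:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))

# Hypothetical predicted risks and observed outcomes (1 = event).
probs    = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
outcomes = [1,   1,   0,   1,   0,   0]
auc = c_statistic(probs, outcomes)
```

    A value of 0.5 would mean no discrimination and 1.0 perfect ranking; the quadratic pair loop is fine for illustration, though rank-based formulas are used for large datasets.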

  8. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  9. Proposed reporting model update creates dialogue between FASB and not-for-profits.

    Science.gov (United States)

    Mosrie, Norman C

    2016-04-01

    Seeing a need to refresh the current guidelines, the Financial Accounting Standards Board (FASB) proposed an update to the financial accounting and reporting model for not-for-profit entities. In a response to solicited feedback, the board is now revisiting its proposed update and has set forth a plan to finalize its new guidelines. The FASB continues to solicit and respond to feedback as the process progresses.

  10. Optimizing the calculation grid for atmospheric dispersion modelling

    International Nuclear Information System (INIS)

    Van Thielen, S.; Turcanu, C.; Camps, J.; Keppens, R.

    2015-01-01

    This paper presents three approaches to find optimized grids for atmospheric dispersion measurements and calculations in emergency planning. This can be useful for deriving optimal positions for mobile monitoring stations, or help to reduce discretization errors and improve recommendations. Indeed, threshold-based recommendations or conclusions may depend strongly on the shape and size of the grid on which atmospheric dispersion measurements or calculations of pollutants are based. Therefore, relatively sparse grids that retain as much information as possible are required. The grid optimization procedure proposed here is first demonstrated with a simple Gaussian plume model, as adopted in atmospheric dispersion calculations, which allows fast calculations. The optimized grids are compared to the Noodplan grid, currently used for emergency planning in Belgium, and to the exact solution. We then demonstrate how the procedure can be used in more realistic dispersion models. - Highlights: • Grid points for atmospheric dispersion calculations are optimized. • Using heuristics, the optimization problem results in different grid shapes. • A comparison between optimized models and the Noodplan grid is performed
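
    The simple Gaussian plume model used for the demonstration has a standard closed form. A minimal sketch of the textbook ground-reflection version (the release rate, wind speed and dispersion coefficients below are assumed, illustrative values, not taken from the paper):

```python
import math

def gaussian_plume(Q, u, y, z, sigma_y, sigma_z, H=0.0):
    """Gaussian plume concentration with reflection at the ground:
    C = Q / (2 pi u sy sz) * exp(-y^2/2sy^2)
        * [exp(-(z-H)^2/2sz^2) + exp(-(z+H)^2/2sz^2)]
    Q in g/s, u in m/s, distances in m, result in g/m^3; sigma_y and
    sigma_z are the dispersion coefficients at the receptor distance."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative values: 1 g/s ground release, 5 m/s wind, centreline receptor.
c_centre = gaussian_plume(Q=1.0, u=5.0, y=0.0, z=0.0, sigma_y=50.0, sigma_z=25.0)
```

    Evaluating such a cheap analytic model on candidate grids is what makes it practical to iterate the grid optimization before moving to more realistic dispersion models.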

  11. Precision calculations in supersymmetric extensions of the Standard Model

    International Nuclear Information System (INIS)

    Slavich, P.

    2013-01-01

    This dissertation is organized as follows: in the next chapter I will summarize the structure of the supersymmetric extensions of the standard model (SM), namely the MSSM (Minimal Supersymmetric Standard Model) and the NMSSM (Next-to-Minimal Supersymmetric Standard Model), I will provide a brief overview of different patterns of SUSY (supersymmetry) breaking and discuss some issues on the renormalization of the input parameters that are common to all calculations of higher-order corrections in SUSY models. In chapter 3 I will review and describe computations on the production of MSSM Higgs bosons in gluon fusion. In chapter 4 I will review results on the radiative corrections to the Higgs boson masses in the NMSSM. In chapter 5 I will review the calculation of BR(B → X_s γ) in the MSSM with Minimal Flavor Violation (MFV). Finally, in chapter 6 I will briefly summarize the outlook of my future research. (author)

  12. Updating and prospective validation of a prognostic model for high sickness absence

    NARCIS (Netherlands)

    Roelen, C.A.M.; Heymans, M.W.; Twisk, J.W.R.; van Rhenen, W.; Pallesen, S.; Bjorvatn, B.; Moen, B.E.; Mageroy, N.

    2015-01-01

    Objectives To further develop and validate a Dutch prognostic model for high sickness absence (SA). Methods Three-wave longitudinal cohort study of 2,059 Norwegian nurses. The Dutch prognostic model was used to predict high SA among Norwegian nurses at wave 2. Subsequently, the model was updated by

  13. Ab initio calculations and modelling of atomic cluster structure

    DEFF Research Database (Denmark)

    Solov'yov, Ilia; Lyalin, Andrey G.; Solov'yov, Andrey V.

    2004-01-01

    framework for modelling the fusion process of noble gas clusters is presented. We report the striking correspondence of the peaks in the experimentally measured abundance mass spectra with the peaks in the size-dependence of the second derivative of the binding energy per atom calculated for the chain...... of the noble gas clusters up to 150 atoms....

  14. TTS-Polttopuu - cost calculation model for fuelwood

    International Nuclear Information System (INIS)

    Naett, H.; Ryynaenen, S.

    1999-01-01

    The TTS Institute's Forestry Department has developed a computer-based cost calculation model, 'TTS-Polttopuu', for calculating unit costs and resource needs of harvesting systems for wood chips and split firewood. The model makes it possible to determine the productivity and device cost per operating hour for each working stage of the harvesting system. The calculation model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit cost of the whole system. The harvesting chain includes the cutting of delimbed and non-delimbed fuelwood, forest haulage, road transportation, and chipping and chopping of longwood at the storage site. This stand-alone software was originally developed to serve research needs, but it also serves the needs of forestry and agricultural education, training and extension, as well as individual firewood producers. The system requirements for this cost calculation model are at least a 486-level processor with the Windows 95/98 operating system, 16 MB of memory (RAM) and 5 MB of available hard-disk space. This development work was carried out in conjunction with the nation-wide BIOENERGY research programme. (orig.)
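
    The core of such a model — device cost per operating hour divided by productivity — can be sketched in a few lines. This uses a common machine-costing convention (straight-line depreciation plus interest on average investment plus operating costs); the TTS model's exact scheme may differ, and all parameter values are invented:

```python
def hourly_cost(purchase_price, salvage, life_years, hours_per_year,
                interest_rate, operating_cost_per_h):
    """Device cost per operating hour: straight-line depreciation,
    interest on the average invested capital, and running costs."""
    depreciation = (purchase_price - salvage) / (life_years * hours_per_year)
    avg_investment = (purchase_price + salvage) / 2.0
    interest = interest_rate * avg_investment / hours_per_year
    return depreciation + interest + operating_cost_per_h

def unit_cost(cost_per_hour, productivity_m3_per_h):
    """Unit cost (currency per solid m^3) of one working stage."""
    return cost_per_hour / productivity_m3_per_h

# Hypothetical chipper: 60 000 bought, 10 000 salvage, 8 yr, 1000 h/yr.
chipper_h = hourly_cost(60_000.0, 10_000.0, 8, 1000, 0.05, 12.0)
eur_per_m3 = unit_cost(chipper_h, productivity_m3_per_h=15.0)
```

    Summing such per-stage unit costs over cutting, haulage, transport and chipping gives the unit cost of the whole harvesting chain, which is what the model lets the user vary stage by stage.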

  15. A modified calculation model for groundwater flowing to horizontal ...

    Indian Academy of Sciences (India)

    All these valleys are located in the Loess Plateau of northern Shaanxi, China. The existing calculation model for a single horizontal seepage well was built by Wang and Zhang (2007) based on the theory of coupled seepage-pipe flow and equivalent hydraulic conductivity (Chen 1995; Chen and Lin 1998a, 1998b; Chen and.

  16. A kinematic model for calculating the magnitude of angular ...

    African Journals Online (AJOL)

    Keplerian velocity laws imply the existence of velocity shear and shear viscosity within an accretion disk. Due to this viscosity, angular momentum is transferred from the faster moving inner regions to the slower-moving outer regions of the disk. Here we have formulated a model for calculating the magnitude of angular ...

  17. Black Hole Entropy Calculation in a Modified Thin Film Model

    Indian Academy of Sciences (India)

    2016-01-27

    The thin film model is modified to calculate the black hole entropy. The difference from the original method is that the Parikh–Wilczek tunnelling framework is introduced and the self-gravitation of the emission particles is taken into account. In terms of our improvement, if the entropy is still proportional to the ...

  18. The role of hand calculations in ground water flow modeling.

    Science.gov (United States)

    Haitjema, Henk

    2006-01-01

    Most ground water modeling courses focus on the use of computer models and pay little or no attention to traditional analytic solutions to ground water flow problems. This shift in education seems logical. Why waste time to learn about the method of images, or why study analytic solutions to one-dimensional or radial flow problems? Computer models solve much more realistic problems and offer sophisticated graphical output, such as contour plots of potentiometric levels and ground water path lines. However, analytic solutions to elementary ground water flow problems do have something to offer over computer models: insight. For instance, an analytic one-dimensional or radial flow solution, in terms of a mathematical expression, may reveal which parameters affect the success of calibrating a computer model and what to expect when changing parameter values. Similarly, solutions for periodic forcing of one-dimensional or radial flow systems have resulted in a simple decision criterion to assess whether or not transient flow modeling is needed. Basic water balance calculations may offer a useful check on computer-generated capture zones for wellhead protection or aquifer remediation. An easily calculated "characteristic leakage length" provides critical insight into surface water and ground water interactions and flow in multi-aquifer systems. The list goes on. Familiarity with elementary analytic solutions and the capability of performing some simple hand calculations can promote appropriate (computer) modeling techniques, avoids unnecessary complexity, improves reliability, and is likely to save time and money. Training in basic hand calculations should be an important part of the curriculum of ground water modeling courses.
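
    One of the hand calculations mentioned above, the characteristic leakage length, is a one-line formula in the standard semi-confined aquifer setting: λ = sqrt(T·c), with aquifer transmissivity T and aquitard resistance c = d'/k'. A minimal sketch with assumed, illustrative parameter values:

```python
import math

def leakage_length(T, resistance):
    """Characteristic leakage length lambda = sqrt(T * c) for a
    semi-confined aquifer: T is transmissivity [m^2/d], c = d'/k' is the
    hydraulic resistance of the covering aquitard [d]."""
    return math.sqrt(T * resistance)

# Illustrative values: T = 500 m^2/d; aquitard 2 m thick with k' = 0.001 m/d.
c_res = 2.0 / 0.001                    # 2000 d
lam = leakage_length(500.0, c_res)     # m
```

    The result (here 1 km) indicates the distance over which head differences between aquifer and surface water die out, which is exactly the kind of insight the article argues a hand calculation can provide before any computer modeling.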

  19. Modelling of groundwater flow and solute transport in Olkiluoto. Update 2008

    International Nuclear Information System (INIS)

    Loefman, J.; Pitkaenen, P.; Meszaros, F.; Keto, V.; Ahokas, H.

    2009-10-01

    Posiva Oy is preparing for the final disposal of spent nuclear fuel in the crystalline bedrock in Finland. Olkiluoto in Eurajoki has been selected as the primary site for the repository, subject to further detailed characterisation which is currently focused on the construction of an underground rock characterisation and research facility (the ONKALO). An essential part of the site investigation programme is analysis of the deep groundwater flow by means of numerical flow modelling. This study is the latest update concerning the site-scale flow modelling and is based on all the hydrogeological data gathered from field investigations by the end of 2007. The work is divided into two separate modelling tasks: 1) characterization of the baseline groundwater flow conditions before excavation of the ONKALO, and 2) a prediction/outcome (P/O) study of the potential hydrogeological disturbances due to the ONKALO. The flow model was calibrated using all the available data appropriate for the applied deterministic equivalent porous medium (EPM) / dual-porosity (DP) approach. In the baseline modelling, calibration of the flow model focused on improving the agreement between the calculated results and the undisturbed observations. The calibration resulted in a satisfactory agreement with the measured pumping test responses, a very good overall agreement with the observed pressures in the deep drill holes and a fairly good agreement with the observed salinity. Some discrepancies still remained in a few single drill hole sections, because the fresh water infiltration in the model tends to dilute the groundwater too much at shallow depths. In the P/O calculations the flow model was further calibrated using the monitoring data on the ONKALO disturbances. Having significantly more information on the inflows to the tunnel (compared with the previous study) enabled better calibration of the model, allowing it to capture very well the observed inflow, the

  20. Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)

    Science.gov (United States)

    Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2017-07-01

    The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect caused by the

  1. Dynamic finite element model updating of prestressed concrete continuous box-girder bridge

    Science.gov (United States)

    Lin, Xiankun; Zhang, Lingmi; Guo, Qintao; Zhang, Yufeng

    2009-09-01

    The dynamic finite element model (FEM) of a prestressed concrete continuous box-girder bridge, called the Tongyang Canal Bridge, is built and updated based on the results of ambient vibration testing (AVT) using a real-coded accelerating genetic algorithm (RAGA). The objective functions are defined based on natural frequency and modal assurance criterion (MAC) metrics to evaluate the updated FEM. Two objective functions are defined to fully account for the relative errors and standard deviations of the natural frequencies and MAC between the AVT results and the updated FEM predictions. The dynamically updated FEM of the bridge can better represent its structural dynamics and serve as a baseline in long-term health monitoring, condition assessment and damage identification over the service life of the bridge.
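
    The modal assurance criterion used in the objective functions above has a standard closed form, MAC = |φₐᵀφₑ|² / ((φₐᵀφₐ)(φₑᵀφₑ)). A minimal sketch with hypothetical mode-shape vectors (the formula is standard; the vectors are invented for illustration):

```python
def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical and an
    experimental mode shape: 1.0 means perfectly correlated shapes,
    0.0 means orthogonal shapes."""
    dot = sum(a * e for a, e in zip(phi_a, phi_e))
    norm_a = sum(a * a for a in phi_a)
    norm_e = sum(e * e for e in phi_e)
    return dot**2 / (norm_a * norm_e)

# Hypothetical first bending mode from the FE model and from ambient
# vibration testing, sampled at five sensor locations.
phi_fem  = [0.0, 0.50, 1.00, 0.50, 0.0]
phi_test = [0.0, 0.48, 1.02, 0.51, 0.0]
m = mac(phi_fem, phi_test)
```

    Because the MAC is insensitive to mode-shape scaling, it pairs naturally with natural-frequency residuals in updating objective functions, which is why both metrics appear together above.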

  2. ERWIN2: User's manual for a computer model to calculate the economic efficiency of wind energy systems

    International Nuclear Information System (INIS)

    Van Wees, F.G.H.

    1992-01-01

    During the last few years the Business Unit ESC-Energy Studies of the Netherlands Energy Research Foundation (ECN) developed calculation programs to determine the economic efficiency of energy technologies; these programs support several studies for the Dutch Ministry of Economic Affairs and together form the so-called BRET programs. One of these programs is ERWIN (Economische Rentabiliteit WINdenergiesystemen, or in English: Economic Efficiency of Wind Energy Systems), for which an updated manual (ERWIN2) is presented in this report. An outline is given of the possibilities and limitations of carrying out calculations with the model
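
    A typical economic-efficiency calculation for a wind energy system annualizes the investment with a capital recovery factor and divides by the annual energy yield. This is a generic, simplified scheme, not ERWIN's actual method, and all parameter values are invented:

```python
def annuity_factor(rate, years):
    """Capital recovery factor converting an investment into equal
    annual payments at the given interest rate."""
    return rate / (1.0 - (1.0 + rate) ** -years)

def cost_per_kwh(investment, om_per_year, rate, years, annual_kwh):
    """Levelized production cost: annualized investment plus yearly
    operation and maintenance, divided by annual energy production."""
    annual_cost = investment * annuity_factor(rate, years) + om_per_year
    return annual_cost / annual_kwh

# Hypothetical 500 kW turbine: ~25% capacity factor -> ~1.1e6 kWh/yr.
lcoe = cost_per_kwh(investment=450_000.0, om_per_year=9_000.0,
                    rate=0.05, years=15, annual_kwh=1_095_000.0)
```

    Varying the interest rate, lifetime or yield in such a scheme shows how sensitive the cost per kWh is to financing assumptions, which is the kind of sensitivity a tool like ERWIN is built to explore.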

  3. Nuclear reaction matrix calculations with a shell-model Q

    International Nuclear Information System (INIS)

    Barrett, B.R.; McCarthy, R.J.

    1976-01-01

    The Barrett-Hewitt-McCarthy (BHM) method for calculating the nuclear reaction matrix G is used to compute shell-model matrix elements for A = 18 nuclei. The energy denominators in intermediate states containing one unoccupied single-particle (s.p.) state and one valence s.p. state are treated correctly, in contrast to previous calculations. These corrections are not important for valence-shell matrix elements but are found to lead to relatively large changes in cross-shell matrix elements involved in core-polarization diagrams. (orig.) [de

  4. Reactor burning calculations for a model reversed field pinch

    International Nuclear Information System (INIS)

    Yeung, B.C.; Long, J.W.; Newton, A.A.

    1976-01-01

    An outline pinch reactor scheme and a study of electrical engineering problems for cyclic operation have been further developed, and a comparison of physics aspects and capital cost has been made with the Tokamak, which has many similar features. Since the properties of reversed field pinches (RFP) are now better understood, more detailed studies have been made and first results of burn calculations are given. The results of the burn calculations are summarised. These are based on a D-T burning model used for the Tokamak, with changes appropriate for the RFP. (U.K.)

  5. Modelling and parallel calculation of a kinetic boundary layer

    International Nuclear Information System (INIS)

    Perlat, Jean Philippe

    1998-01-01

    This research thesis aims at addressing reliability and cost issues in the calculation by numerical simulation of flows in the transition regime. The first step has been to reduce the calculation cost and memory space of the Monte Carlo method, which is known to provide performance and reliability for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instructions, multiple data) machine has been used which implements parallel calculation at different levels of parallelization. Parallelization procedures have been adapted, and results showed that parallelization by calculation-domain decomposition was far more efficient. Due to the reliability issue related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in the transition regime. New models and hyperbolic systems have therefore been studied. One is chosen which allows the thermodynamic values (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined, and the equations of evolution of these thermodynamic values are described for the mono-atomic case. Their numerical resolution is reported. A kinetic scheme is developed which complies with the structure of all such systems and which naturally expresses boundary conditions. The validation of the obtained 14-moment model is performed on shock problems and on Couette flows [fr

  6. Modelling of Control Bars in Calculations of Boiling Water Reactors

    International Nuclear Information System (INIS)

    Khlaifi, A.; Buiron, L.

    2004-01-01

    The core of a nuclear reactor is generally composed of an array of assemblies of fissile material in which neutrons are produced. In general, the energy of fission is extracted by a fluid that cools the assemblies. A reflector is arranged around the assemblies, outside the reactor core, to reduce the escape of neutrons. Different reactivity mechanisms are generally necessary to control the chain reaction. A Boiling Water Reactor is manoeuvred by controlling the insertion of absorbent rods at various places in the core. While calculations of unblocked assemblies are known and mastered, neutronic calculations of blocked assemblies are delicate and often treated case by case in present studies [1]. Answering the question of how to model control bars for the control of a boiling water reactor requires the choice of a representation level for every chain of variables, the physical model and its representing equations, etc. The aim of this study is to select the best applicable parameters for calculating blocked assemblies of a Boiling Water Reactor. This will be done through a range of configurations representative of these reactors and of the absorbing environments used, in order to illustrate modelling strategies in the case of an industrial calculation. (authors)

  7. Application of nuclear models to neutron nuclear cross section calculations

    International Nuclear Information System (INIS)

    Young, P.G.

    1983-01-01

    Nuclear theory is used increasingly to supplement and extend the nuclear data base that is available for applied studies. Areas where theoretical calculations are most important include the determination of neutron cross sections for unstable fission products and transactinide nuclei in fission reactor or nuclear waste calculations and for meeting the extensive dosimetry, activation, and neutronic data needs associated with fusion reactor development, especially for neutron energies above 14 MeV. Considerable progress has been made in the use of nuclear models for data evaluation and, particularly, in the methods used to derive physically meaningful parameters for model calculations. Theoretical studies frequently involve use of spherical and deformed optical models, Hauser-Feshbach statistical theory, preequilibrium theory, direct-reaction theory and often make use of gamma-ray strength function models and phenomenological (or microscopic) level density prescriptions. The development, application and limitations of nuclear models for data evaluation are discussed in this paper, with emphasis on the 0.1 to 50 MeV energy range. (Auth.)

  8. Basic Technology and Clinical Applications of the Updated Model of Laser Speckle Flowgraphy to Ocular Diseases

    Directory of Open Access Journals (Sweden)

    Tetsuya Sugiyama

    2014-08-01

    Full Text Available Laser speckle flowgraphy (LSFG) allows for quantitative estimation of blood flow in the optic nerve head (ONH), choroid and retina, utilizing the laser speckle phenomenon. The basic technology and clinical applications of LSFG-NAVI, the updated model of LSFG, are summarized in this review. For the commercial version of LSFG, the special area sensor was replaced by an ordinary charge-coupled device camera. In LSFG-NAVI, the mean blur rate (MBR) has been introduced as a new parameter. Compared to the original LSFG model, LSFG-NAVI demonstrates a better spatial resolution of the blood flow map of the human ocular fundus. The observation area is 24 times larger than in the original system. The analysis software can separately calculate MBRs in the blood vessels and tissues (capillaries) of an entire ONH, and the measurements have good reproducibility. The absolute values of MBR in the ONH have been shown to correlate linearly with the capillary blood flow. The analysis of the MBR pulse waveform provides parameters including skew, blowout score, blowout time, rising and falling rates, flow acceleration index, acceleration time index, and resistivity index for comparing different eyes. Recently, there has been an increasing number of reports on the clinical applications of LSFG-NAVI to ocular diseases, including glaucoma and retinal and choroidal diseases.
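
    Two of the waveform parameters listed above can be sketched from a sampled MBR pulse. The definitions used here are assumed simplifications (a Pourcelot-style resistivity index and a half-height reading of blowout time); the LSFG-NAVI software's exact formulas may differ, and the waveform samples are invented:

```python
def resistivity_index(waveform):
    """(max - min) / max of the pulse waveform -- an assumed
    Pourcelot-style definition of the resistivity index."""
    hi, lo = max(waveform), min(waveform)
    return (hi - lo) / hi

def blowout_time(waveform):
    """Fraction of the cardiac cycle with MBR above the half-height
    (min + max) / 2 -- an assumed, simplified reading of blowout time."""
    half = (max(waveform) + min(waveform)) / 2.0
    return sum(1 for v in waveform if v > half) / len(waveform)

# Hypothetical MBR samples over one cardiac cycle (arbitrary units).
mbr = [10, 14, 18, 20, 19, 16, 13, 11, 10, 10]
ri = resistivity_index(mbr)
bot = blowout_time(mbr)
```

    Indices of this kind normalize out the absolute MBR level, which is what makes pulse-waveform parameters comparable between different eyes.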

  9. Investigation of Transformer Model for TRV Calculation by EMTP

    Science.gov (United States)

    Thein, Myo Min; Ikeda, Hisatoshi; Harada, Katsuhiko; Ohtsuka, Shinya; Hikita, Masayuki; Haginomori, Eiichi; Koshiduka, Tadashi

    Analysis of the EMTP transformer model was performed with a 4 kVA two-winding low-voltage transformer and the current injection (CIJ) measurement method, to study the transient recovery voltage (TRV) under transformer limited fault (TLF) current interrupting conditions. The tested transformer's impedance was measured with a frequency response analyzer (FRA). The leakage inductance, stray capacitance and resistance were calculated from the FRA measurement graphs, and the EMTP transformer model was constructed with those values. An EMTP simulation of the current injection circuit was run using the transformer model. The experimental and simulation results show reasonable agreement.
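The parameter-extraction step can be sketched roughly as follows. The equivalent circuit (leakage R-L branch in parallel with a stray capacitance) and all component values are assumptions for illustration, not the tested transformer's data:

```python
import numpy as np

# Assumed simple equivalent circuit: (R + jwL) leakage branch in parallel with stray C
R, L, C = 2.0, 5e-3, 2e-9            # hypothetical values: ohm, H, F
f = np.logspace(2, 7, 2000)          # frequency sweep, 100 Hz .. 10 MHz
w = 2 * np.pi * f
z_series = R + 1j * w * L
z_cap = 1.0 / (1j * w * C)
z = z_series * z_cap / (z_series + z_cap)

# Leakage inductance from the inductive (low-frequency) region: L ~ Im(Z)/w
i_lo = 100
L_est = z[i_lo].imag / w[i_lo]

# Stray capacitance from the parallel-resonance peak: f_r = 1/(2*pi*sqrt(L*C))
f_r = f[np.argmax(np.abs(z))]
C_est = 1.0 / ((2 * np.pi * f_r) ** 2 * L_est)
```

With measured FRA data, `z` would come from the analyzer instead of the assumed circuit, but the same two readings (low-frequency slope and resonance peak) give L and C.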

  10. A note on vector flux models for radiation dose calculations

    International Nuclear Information System (INIS)

    Kern, J.W.

    1994-01-01

    This paper reviews and extends modelling of anisotropic fluxes for radiation belt protons to provide closed-form equations for vector proton fluxes and proton flux anisotropy in terms of standard omnidirectional flux models. These equations provide a flexible alternative to the data-based vector flux models currently available. At higher energies, the anisotropy of trapped proton flux in the upper atmosphere depends strongly on the variation of atmospheric density with altitude. Calculations of proton flux anisotropies using present models require specification of the average atmospheric density along trapped particle trajectories and its variation with mirror point altitude. For an isothermal atmosphere, calculations show that in a dipole magnetic field, the scale height of this trajectory-averaged density closely approximates the scale height of the atmosphere at the mirror point of the trapped particle. However, for the earth's magnetic field, the altitudes of mirror points vary for protons drifting in longitude. This results in a small increase in longitude-averaged scale heights compared to the atmospheric scale heights at minimum mirror point altitudes. The trajectory-averaged scale heights are increased by about 10-20% over scale heights from standard atmosphere models for protons mirroring at altitudes less than 500 km in the South Atlantic Anomaly. Atmospheric losses of protons in the geomagnetic field minimum in the South Atlantic Anomaly control the proton flux anisotropies of interest for radiation studies in low earth orbit. Standard atmosphere models provide corrections for diurnal, seasonal and solar activity-driven variations. Thus, determination of an ''equilibrium'' model of trapped proton fluxes of a given energy requires using a scale height that is time-averaged over the lifetime of the protons. The trajectory-averaged atmospheric densities calculated here lead to estimates for trapped proton lifetimes. These lifetimes provide appropriate time
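The scale-height notion used throughout the abstract can be illustrated minimally: for an isothermal exponential atmosphere, the scale height is recovered from densities at two altitudes (all numbers below are hypothetical, not the paper's values):

```python
import math

# Isothermal atmosphere: rho(h) = rho0 * exp(-h / H).  Given densities at two
# altitudes, the scale height follows as H = (h2 - h1) / ln(rho1 / rho2).
H_true = 60.0                           # km, hypothetical atmospheric scale height
rho = lambda h: 1e-12 * math.exp(-h / H_true)   # arbitrary density units

h1, h2 = 400.0, 500.0                   # km, e.g. two mirror-point altitudes
H_est = (h2 - h1) / math.log(rho(h1) / rho(h2))
```

The paper's point is that the *trajectory-averaged* density seen by a drifting proton has an effective scale height about 10-20% larger than this local value; the sketch only fixes the definition being averaged.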

  11. Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm

    OpenAIRE

    M. R. Ghasemi; R. Ghiasi; H. Varaee

    2017-01-01

    Model updating methods have received increasing attention in the damage detection of structures based on measured modal parameters. Therefore, a probability-based damage detection (PBDD) procedure based on a model updating procedure is presented in this paper, in which a one-stage model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element updating method with a Monte Carlo simulation that ...

  12. TTS-Polttopuu - cost calculation model for fuelwood

    International Nuclear Information System (INIS)

    Naett, H.; Ryynaenen, S.

    1998-01-01

    The TTS Institute's Forestry Department has developed a computer-based cost calculation model, 'TTS-Polttopuu', for the calculation of unit costs and resource needs in harvesting systems for wood chips and split firewood. The model makes it possible to determine the productivity and device cost per operating hour for each working stage of the harvesting system. The calculation model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit cost of the whole system. The harvesting chain includes the cutting of delimbed and non-delimbed fuelwood, forest haulage, road transportation, and chipping and chopping of longwood at storage. This stand-alone software was originally developed to serve research needs, but it also serves the needs of forestry and agricultural education, training and extension, as well as individual firewood producers. The system requirements for this cost calculation model are at least a 486-level processor with the Windows 95/98 operating system, 16 MB of memory (RAM) and 5 MB of available hard-disk space. This development work was carried out in conjunction with the nation-wide BIOENERGY Research Programme. (orig.)
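The per-stage unit-cost logic described above can be sketched in a few lines. The stages mirror the harvesting chain in the abstract, but every cost and productivity figure below is invented for illustration:

```python
# Hypothetical harvesting chain: the unit cost of each stage is the hourly cost
# of the machine/labour divided by its productivity; the chain unit cost is the sum.
stages = {                      # (cost EUR/h, productivity m3/h) - assumed figures
    "cutting":        (25.0,  2.0),
    "forest haulage": (45.0,  6.0),
    "road transport": (60.0, 12.0),
    "chipping":       (90.0, 15.0),
}

unit_costs = {name: cost / prod for name, (cost, prod) in stages.items()}
total_unit_cost = sum(unit_costs.values())   # EUR/m3 for the whole system
```

Changing one stage's productivity or hourly cost and re-summing reproduces the model's "what-if" use: seeing how one link in the chain moves the unit cost of the whole system.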

  13. The EDF/SEPTEN crisis team calculation tools and models

    International Nuclear Information System (INIS)

    De Magondeaux, B.; Grimaldi, X.

    1993-01-01

    Electricite de France (EDF) has developed a set of simplified tools and models called TOUTEC and CRISALIDE, intended for use by the French utility's National Crisis Team in order to perform the tasks of diagnosis and prognosis during an emergency situation. As a severe accident could have important radiological consequences, this method is focused on the diagnosis of the state of the safety barriers and on the prognosis of their behaviour. These tools allow the crisis team to provide public authorities with information on the radiological risk and to give advice on managing the accident on the damaged unit. At a first level, TOUTEC is intended to complement the handbook with simplified calculation models and predefined relationships. It can avoid tedious calculation under stress conditions. The main items are the calculation of the primary circuit breach size and the evaluation of hydrogen overpressurization. The set of models called CRISALIDE is devoted to evaluating the following critical parameters: the delay before core uncovery, which would signify more severe consequences if it occurs; containment pressure behaviour; and finally the source term. With these models, the crisis team becomes able to take into account combinations of boundary conditions according to safety and auxiliary system availability

  14. MODELING THE EFFECTS OF UPDATING THE INFLUENZA VACCINE ON THE EFFICACY OF REPEATED VACCINATION.

    Energy Technology Data Exchange (ETDEWEB)

    D. SMITH; A. LAPEDES; ET AL

    2000-11-01

    The accumulated wisdom is to update the vaccine strain to the expected epidemic strain only when there is at least a 4-fold difference [measured by the hemagglutination inhibition (HI) assay] between the current vaccine strain and the expected epidemic strain. In this study we investigate the effect, on repeat vaccinees, of updating the vaccine when there is a less than 4-fold difference. Methods: Using a computer model of the immune response to repeated vaccination, we simulated updating the vaccine on a 2-fold difference and compared this to not updating the vaccine, in each case predicting the vaccine efficacy in first-time and repeat vaccinees for a variety of possible epidemic strains. Results: Updating the vaccine strain on a 2-fold difference resulted in increased vaccine efficacy in repeat vaccinees compared to leaving the vaccine unchanged. Conclusions: These results suggest that updating the vaccine strain on a 2-fold difference between the existing vaccine strain and the expected epidemic strain will increase vaccine efficacy in repeat vaccinees compared to leaving the vaccine unchanged.
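The decision rule being compared can be written in a few lines. The titers are hypothetical, and the HI fold difference is taken here simply as the ratio of the two titers (an assumption for illustration):

```python
def fold_difference(titer_vaccine: float, titer_epidemic: float) -> float:
    """HI-assay fold difference between vaccine strain and expected epidemic strain."""
    return max(titer_vaccine, titer_epidemic) / min(titer_vaccine, titer_epidemic)

def should_update(fold: float, threshold: float) -> bool:
    """Update the vaccine strain when the fold difference reaches the threshold."""
    return fold >= threshold

fold = fold_difference(1280.0, 640.0)    # hypothetical HI titers -> 2-fold difference
conventional = should_update(fold, 4.0)  # accumulated-wisdom rule: do not update
studied = should_update(fold, 2.0)       # rule examined in this study: update
```

The paper's contribution is the immune-response simulation showing the 2-fold rule helps repeat vaccinees; the sketch only makes the two thresholds explicit.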

  15. Adapting to change: The role of the right hemisphere in mental model building and updating.

    Science.gov (United States)

    Filipowicz, Alex; Anderson, Britt; Danckert, James

    2016-09-01

    We recently proposed that the right hemisphere plays a crucial role in the processes underlying mental model building and updating. Here, we review the evidence we and others have garnered to support this novel account of right hemisphere function. We begin by presenting evidence from patient work that suggests a critical role for the right hemisphere in the ability to learn from the statistics in the environment (model building) and adapt to environmental change (model updating). We then provide a review of neuroimaging research that highlights a network of brain regions involved in mental model updating. Next, we outline specific roles for particular regions within the network such that the anterior insula is purported to maintain the current model of the environment, the medial prefrontal cortex determines when to explore new or alternative models, and the inferior parietal lobule represents salient and surprising information with respect to the current model. We conclude by proposing some future directions that address some of the outstanding questions in the field of mental model building and updating. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Finite element model updating of a small steel frame using neural networks

    International Nuclear Information System (INIS)

    Zapico, J L; González, M P; Alonso, R; González-Buelga, A

    2008-01-01

    This paper presents an experimental and analytical dynamic study of a small-scale steel frame. The experimental model was physically built and dynamically tested on a shaking table in a series of different configurations obtained from the original one by changing the mass and by causing structural damage. Finite element modelling and parameterization with physical meaning are iteratively refined for the original undamaged configuration. The finite element model is updated through a neural network, with the natural frequencies of the model as the network input. The updating process is made more accurate and robust by using a regressive procedure, which constitutes an original contribution of this work. A novel simplified analytical model has been developed to evaluate the reduction in bending stiffness of the elements due to damage. The experimental results of the rest of the configurations have been used to validate both the updated finite element model and the analytical one. The statistical properties of the identified modal data are evaluated. From these, the statistical properties and a confidence interval for the estimated model parameters are obtained by using the Latin Hypercube sampling technique. The results obtained are successful: the updated model accurately reproduces the low modes identified experimentally for all configurations, and the statistical study of the transmission of errors yields a narrow confidence interval for all the identified parameters
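The Latin Hypercube step can be sketched generically (this is a standard implementation with a toy linear response standing in for the frame model, not the authors' code):

```python
import numpy as np

def latin_hypercube(n_samples: int, n_dims: int, rng: np.random.Generator) -> np.ndarray:
    """One stratified sample per equal-probability interval in each dimension,
    with the interval order shuffled independently per dimension."""
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])     # shuffle strata within column j, in place
    return u

rng = np.random.default_rng(0)
samples = latin_hypercube(100, 3, rng)   # 100 samples of 3 model parameters in [0, 1)

# Propagate through a toy response model and form a 95% interval on the output
response = samples @ np.array([1.0, 2.0, 3.0])
lo, hi = np.percentile(response, [2.5, 97.5])
```

In the paper the propagated quantity would be the identified modal data and the interval would be on the updated model parameters; the stratification guarantees every parameter range is covered with few samples.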

  17. Update of the ITER MELCOR model for the validation of the Cryostat design

    Energy Technology Data Exchange (ETDEWEB)

    Martínez, M.; Labarta, C.; Terrón, S.; Izquierdo, J.; Perlado, J.M.

    2015-07-01

    Some transients can compromise the vacuum in the Cryostat of ITER and cause significant loads. A MELCOR model has been updated in order to assess these loads. Transients have been run with this model and their results will be used in the mechanical assessment of the cryostat. (Author)

  18. FE Model Updating on an In-Service Self-Anchored Suspension Bridge with Extra-Width Using Hybrid Method

    Directory of Open Access Journals (Sweden)

    Zhiyuan Xia

    2017-02-01

    Full Text Available Nowadays, many more extra-wide bridges are needed for vehicle throughput. In order to obtain a precise finite element (FE) model of those complex bridge structures, a practical hybrid updating method integrating Gaussian mutation particle swarm optimization (GMPSO), a Kriging meta-model and Latin hypercube sampling (LHS) is proposed. After demonstrating the efficiency and accuracy of the hybrid method through the model updating of a damaged simply supported beam, the proposed method was applied to the model updating of an extra-wide self-anchored suspension bridge, which showed great necessity considering the results of the ambient vibration test. The results of the bridge model updating showed that both the mode frequencies and shapes had relatively high agreement between the updated model and the experimental structure. The successful model updating of this bridge fills a gap in the model updating of complex self-anchored suspension bridges, and the updating process offers guidance for other model updating problems in complex bridge structures
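A minimal sketch of the GMPSO ingredient (particle swarm optimization with a Gaussian mutation step) is given below on a toy objective standing in for the frequency-residual function; swarm size, coefficients and mutation settings are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def objective(x):
    # Toy surrogate for a frequency-residual objective (true parameters = 1, 2, 3)
    return np.sum((x - np.array([1.0, 2.0, 3.0])) ** 2, axis=-1)

n, dim, iters = 30, 3, 200
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    # Gaussian mutation: perturb a random subset of components to escape local optima
    mask = rng.random((n, dim)) < 0.1
    pos = pos + mask * rng.normal(0.0, 0.5, (n, dim))
    f = objective(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

best_error = objective(gbest)
```

In the hybrid method the expensive FE evaluation inside `objective` is replaced by a Kriging meta-model trained on LHS samples, which is what makes the swarm search affordable.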

  19. Finite element modelling and updating of friction stir welding (FSW joint for vibration analysis

    Directory of Open Access Journals (Sweden)

    Zahari Siti Norazila

    2017-01-01

    Full Text Available Friction stir welding (FSW) of aluminium alloys is widely used in automotive and aerospace applications due to its advanced and lightweight properties. The behaviour of FSW joints plays a significant role in the dynamic characteristics of a structure; because of their complexities and uncertainties, the representation of an accurate finite element model of these joints has become a research issue. In this paper, various finite element (FE) modelling techniques for prediction of the dynamic properties of sheet metal joined by friction stir welding are presented. Nine sets of flat plates of different aluminium alloy series, AA7075 and AA6061, joined by FSW were used, fabricated using various welding parameters. In order to find the most optimal set of FSW plates, a finite element model using an equivalence technique was developed and validated against experimental modal analysis (EMA) on the nine sets of specimens and finite element analysis (FEA). Three types of modelling were engaged in this study: rigid body element Type 2 (RBE2), bar element (CBAR) and spot weld element connector (CWELD). The CBAR element was chosen to represent the weld model for FSW joints due to its accurate prediction of mode shapes, and it contains an updating parameter for weld modelling, in contrast to the other weld models. Model updating was performed to improve the correlation between EMA and FEA; before proceeding to updating, a sensitivity analysis was done to select the most sensitive updating parameters. After performing model updating, the total error of the natural frequencies for the CBAR model improved significantly. Therefore, the CBAR element was selected as the most reliable element in FE to represent the FSW weld joint.

  20. Use of the Strong Collision Model to Calculate Spin Relaxation

    Science.gov (United States)

    Wang, D.; Chow, K. H.; Smadella, M.; Hossain, M. D.; MacFarlane, W. A.; Morris, G. D.; Ofer, O.; Morenzoni, E.; Salman, Z.; Saadaoui, H.; Song, Q.; Kiefl, R. F.

    The strong collision model is used to calculate the spin relaxation of a muon or polarized radioactive nucleus in contact with a fluctuating environment. We show that on a time scale much longer than the mean time between collisions (fluctuations), the longitudinal polarization decays exponentially with a relaxation rate equal to a sum of Lorentzians, one for each frequency component in the static polarization function ps(t).
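The quoted result can be made explicit with a standard strong-collision sketch (a generic derivation under stated assumptions, not reproduced from the paper). Writing the static polarization as a sum of frequency components with weights a_i, and letting nu denote the mean collision (fluctuation) rate, the dynamic polarization follows from the Laplace-transform relation of the strong collision model, and for t >> 1/nu it decays as p(t) ~ exp(-lambda t):

```latex
p_s(t) = \sum_i a_i \cos(\omega_i t), \qquad \sum_i a_i = 1
% Laplace-transform relation of the strong collision model:
\tilde{p}(s) = \frac{\tilde{p}_s(s+\nu)}{1 - \nu\,\tilde{p}_s(s+\nu)}
% Long-time (t >> 1/nu) exponential decay rate:
\lambda \approx \sum_i a_i\,\frac{\nu\,\omega_i^{2}}{\nu^{2} + \omega_i^{2}}
```

Each term is a Lorentzian in the fluctuation rate nu, one per frequency component of ps(t), as stated in the abstract.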

  1. Model and calculation of in situ stresses in anisotropic formations

    Energy Technology Data Exchange (ETDEWEB)

    Yuezhi, W.; Zijun, L.; Lixin, H. [Jianghan Petroleum Institute, (China)

    1997-08-01

    In situ stresses in transversely isotropic material in relation to wellbore stability have been investigated. Equations for three horizontal in-situ stresses and a new formation fracture pressure model were described, and the methodology for determining the elastic parameters of anisotropic rocks in the laboratory was outlined. Results indicate significantly smaller differences between theoretically calculated pressures and actual formation pressures than results obtained by using the isotropic method. Implications for improvements in drilling efficiency were reviewed. 13 refs., 6 figs.

  2. Calculation of relativistic model stars using Regge calculus

    International Nuclear Information System (INIS)

    Porter, J.

    1987-01-01

    A new approach to the Regge calculus, developed in a previous paper, is used in conjunction with the velocity potential version of relativistic fluid dynamics due to Schutz [1970, Phys. Rev., D, 2, 2762] to calculate relativistic model stars. The results are compared with those obtained when the Tolman-Oppenheimer-Volkov equations are solved by other numerical methods. The agreement is found to be excellent. (author)

  3. Structure-dynamic model verification calculation of PWR 5 tests

    International Nuclear Information System (INIS)

    Engel, R.

    1980-02-01

    Within reactor safety research project RS 16 B of the German Federal Ministry of Research and Technology (BMFT), blowdown experiments are conducted at Battelle Institut e.V. Frankfurt/Main using a model reactor pressure vessel with a height of 11.2 m and internals corresponding to those in a PWR. In the present report the dynamic loading on the pressure vessel internals (upper perforated plate and barrel suspension) during the DWR 5 experiment is calculated by means of vertical and horizontal dynamic models using the CESHOCK code. The equations of motion are solved by direct integration. (orig./RW) [de

  4. Mathematical model of kinetostatic calculation of flat lever mechanisms

    Directory of Open Access Journals (Sweden)

    A. S. Sidorenko

    2016-01-01

    Full Text Available Currently, the widely used graphical-analytical methods of analysis are largely obsolete, replaced by various analytical methods using computer technology. Of particular interest, therefore, is the development of a mathematical model of the kinetostatic calculation of mechanisms in the form of a library of calculation procedures for all Assur groups (GA) of the second class and the primary link. Before resorting to the appropriate procedure, which computes all the forces in the kinematic pairs, one needs to compute the inertial forces, the moments of the forces of inertia, and all external forces and moments acting on the given GA. To this end, the design diagram of the force analysis is shown for each type of GA of the second class, as well as for the initial link. The reactions in the internal and external kinematic pairs are found from the equilibrium conditions, taking into account the forces of inertia and the moments of the inertia forces (d'Alembert's principle). The kinetostatic equations thus obtained have, for generality, been solved by Cramer's rule. Thus, for each GA of the second class, all 6 unknowns were found: the forces in the kinematic pairs, the directions of these forces, and their moment arms. If a mechanism with parallel attachment of two GA to the initial link is studied kinetostatically, the force acting on the primary link is the geometric sum of the forces from the discarded GA. Thus, a mathematical model of the kinetostatic calculation of mechanisms was obtained in the form of a library of mathematical procedures for determining the reactions of all GA of the second class. This mathematical model makes its software implementation relatively simple.
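The Cramer-rule solution step can be illustrated with a small sketch; the 3x3 system below is a hypothetical equilibrium system for illustration, not the paper's 6-unknown formulation for an Assur group:

```python
import numpy as np

def cramer_solve(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    det_a = np.linalg.det(A)
    if abs(det_a) < 1e-12:
        raise ValueError("singular system")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b
        x[i] = np.linalg.det(Ai) / det_a
    return x

# Hypothetical small equilibrium system (coefficients are made up)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([3.0, 5.0, 3.0])
x = cramer_solve(A, b)
```

Cramer's rule is convenient here because the 6x6 systems for each Assur group have a fixed symbolic structure, so the determinants can be coded once per group type.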

  5. Modelling African aerosol using updated fossil fuel and biofuel emission inventories for 2005 and 2030

    Science.gov (United States)

    Liousse, C.; Penner, J. E.; Assamoi, E.; Xu, L.; Criqui, P.; Mima, S.; Guillaume, B.; Rosset, R.

    2010-12-01

    A regional fossil fuel and biofuel emission inventory for particulates has been developed for Africa at a resolution of 0.25° x 0.25° for the year 2005. The original database of Junker and Liousse (2008) was used after modification for updated regional fuel consumption and emission factors. Consumption data were corrected after direct inquiries conducted in Africa, including a new emitter category (i.e. two-wheel vehicles including “zemidjans”) and a new activity sector (i.e. power plants) since both were not considered in the previous emission inventory. Emission factors were measured during the 2005 AMMA campaign (Assamoi and Liousse, 2010) and combustion chamber experiments. Two prospective inventories for 2030 are derived based on this new regional inventory and two energy consumption forecasts by the Prospective Outlook on Long-term Energy Systems (POLES) model (Criqui, 2001). The first is a reference scenario, where no emission controls beyond those achieved in 2003 are taken into account, and the second is for a "clean" scenario where possible and planned policies for emission control are assumed to be effective. BC and OCp emission budgets for these new inventories will be discussed and compared to the previous global dataset. These new inventories along with the most recent open biomass burning inventory (Liousse et al., 2010) have been tested in the ORISAM-TM5 global chemistry-climate model with a focus over Africa at a 1° x 1° resolution. Global simulations for BC and primary OC for the years 2005 and 2030 are carried out and the modelled particulate concentrations for 2005 are compared to available measurements in Africa. Finally, BC and OC radiative properties (aerosol optical depths and single scattering albedo) are calculated and the direct radiative forcing is estimated using an off line model (Wang and Penner, 2009). Results of sensitivity tests driven with different emission scenarios will be presented.

  6. DEEP update with new water transport cost model

    International Nuclear Information System (INIS)

    Khamis, I.; Ibrahim, A.H.A.D.; Suleiman, S.

    2007-01-01

    DEEP 3.11 is a new version of DEEP capable of calculating the water transport cost for any location with acceptable accuracy. The user needs only to specify the water flow or capacity, the pipeline length, and the elevation of the sites relative to sea level or the difference in elevation between the beginning and end of the pipeline routes
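The transport-cost idea can be sketched with a simple pumping-energy estimate. The formula (static lift plus friction head, divided by pump efficiency) and all prices are illustrative assumptions; DEEP's internal cost model is more detailed:

```python
# Rough specific-energy estimate for pumping water through a pipeline
RHO = 1000.0      # kg/m3, water density
G = 9.81          # m/s2, gravitational acceleration

def pumping_energy_kwh_per_m3(static_lift_m: float, friction_head_m: float,
                              pump_efficiency: float = 0.75) -> float:
    head = static_lift_m + friction_head_m
    joules_per_m3 = RHO * G * head / pump_efficiency
    return joules_per_m3 / 3.6e6          # J -> kWh

def transport_cost_per_m3(energy_kwh: float, electricity_price: float,
                          capital_charge: float) -> float:
    return energy_kwh * electricity_price + capital_charge

e = pumping_energy_kwh_per_m3(50.0, 10.0)      # hypothetical 50 m lift, 10 m friction
cost = transport_cost_per_m3(e, 0.08, 0.05)    # $/m3, assumed prices
```

The elevation inputs mentioned in the abstract enter through the static lift; pipeline length enters mainly through the friction head and the capital charge.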

  7. Freight Calculation Model: A Case Study of Coal Distribution

    Science.gov (United States)

    Yunianto, I. T.; Lazuardi, S. D.; Hadi, F.

    2018-03-01

    Coal has been known as one of the energy alternatives used as an energy source for several power plants in Indonesia. Transporting coal from coal sites to power plant locations requires eligible shipping line services that are able to provide the best freight rate. Therefore, this study aims to obtain standardized formulations for determining the ocean freight, especially for coal distribution, based on theoretical concepts. The freight calculation model considers three alternative transport modes commonly used in coal distribution: tug-barge, vessel and self-propelled barge. The result shows that two cost components are very dominant in determining the value of the freight, with their proportion reaching 90% or even more, namely time charter hire and fuel cost. Moreover, three main factors have significant impacts on the freight calculation: waiting time at ports, the time charter rate and the fuel oil price.
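The freight build-up can be sketched as follows; all figures are hypothetical, chosen only so that time charter hire plus fuel dominate in the way the abstract reports:

```python
# Hypothetical one-voyage freight build-up for a coal shipment (assumed figures)
cargo_tonnes = 8000.0
voyage_days = 6.0            # includes waiting time at ports
charter_hire = 6000.0        # time charter rate, $/day
fuel_cost = 20000.0          # bunker consumption x fuel oil price, $
port_charges = 4000.0        # $

total = charter_hire * voyage_days + fuel_cost + port_charges
freight_per_tonne = total / cargo_tonnes
dominant_share = (charter_hire * voyage_days + fuel_cost) / total
```

Longer waiting at ports raises `voyage_days` and thus the charter component, which is exactly why the abstract lists waiting time, charter rate and fuel price as the three sensitive factors.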

  8. Improved SVR Model for Multi-Layer Buildup Factor Calculation

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.

    2006-01-01

    The accuracy of the point kernel method applied in gamma ray dose rate calculations in shielding design and radiation safety analysis is limited by the accuracy of the buildup factors used in the calculations. Although buildup factors for single-layer shields are well defined and understood, buildup factors for stratified shields represent a complex physical problem that is hard to express in mathematical terms. The traditional approach for expressing buildup factors of multi-layer shields is through semi-empirical formulas obtained by fitting the results of transport theory or Monte Carlo calculations. Such an approach requires an ad-hoc definition of the fitting function and often results in numerous, and usually inadequately explained and defined, correction factors added to the final empirical formula. Moreover, the formulas finally obtained are generally limited to a small number of predefined combinations of materials within a relatively small range of gamma ray energies and shield thicknesses. Recently, a new approach has been suggested by the authors involving one of the machine learning techniques called Support Vector Machines, i.e., Support Vector Regression (SVR). Preliminary investigations performed for double-layer shields revealed the great potential of the method, but also pointed out some drawbacks of the developed model, mostly related to the selection of one of the parameters describing the problem (material atomic number), and the method in which the model was designed to evolve during the learning process. It is the aim of this paper to introduce a new parameter (the single material buildup factor) that is to replace the existing material atomic number as an input parameter. A comparison of the two models generated by different input parameters has been performed. The second goal is to improve the evolution process of learning, i.e., the experimental computational procedure that provides a framework for automated construction of complex regression models of predefined
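As a rough illustration of the regression task (not the authors' code), the sketch below fits an RBF-kernel ridge regression, used here as a lightweight stand-in for SVR, mapping assumed inputs (gamma energy, shield thickness, single-material buildup factor) to a purely synthetic multi-layer buildup factor:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: features = (gamma energy [MeV], total thickness [mfp],
# single-material buildup factor); target = a made-up multi-layer buildup factor.
X = rng.uniform([0.5, 1.0, 1.0], [10.0, 10.0, 50.0], size=(200, 3))
y = 1.0 + 0.8 * X[:, 2] * np.exp(-0.05 * X[:, 0]) + 0.3 * X[:, 1]

def rbf(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Standardize the features, then fit kernel ridge regression in its dual form
mu, sd = X.mean(0), X.std(0)
Xn = (X - mu) / sd
K = rbf(Xn, Xn)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(y)), y)

def predict(X_new):
    return rbf((X_new - mu) / sd, Xn) @ alpha

y_fit = predict(X)
```

An actual SVR (e.g. with an epsilon-insensitive loss) would be trained the same way conceptually; the point mirrored here is the paper's proposal to feed the single-material buildup factor in as an input feature instead of the atomic number.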

  9. Finite element model updating of natural fibre reinforced composite structure in structural dynamics

    Directory of Open Access Journals (Sweden)

    Sani M.S.M.

    2016-01-01

    Full Text Available Model updating is a process of adjusting certain parameters of a finite element model in order to reduce the discrepancy between the analytical predictions of the finite element (FE) model and experimental results. Finite element model updating is considered an important field of study, as practical applications of the finite element method often show discrepancies with test results. The aim of this research is to perform a model updating procedure on a composite structure, as well as to improve the presumed geometrical and material properties of the tested composite structure in the finite element prediction. The composite structure concerned in this study is a plate of kenaf fibre reinforced epoxy. The modal properties (natural frequencies, mode shapes, and damping ratios) of the kenaf fibre structure will be determined using both experimental modal analysis (EMA) and finite element analysis (FEA). In EMA, modal testing will be carried out using an impact hammer test, while normal mode analysis in FEA will be carried out using MSC Nastran/Patran software. Correlation of the data will be carried out before optimizing the data from FEA. Several parameters will be considered and selected for the model updating procedure.

  10. 2HDMC — two-Higgs-doublet model calculator

    Science.gov (United States)

    Eriksson, David; Rathsman, Johan; Stål, Oscar

    2010-04-01

    We describe version 1.0.6 of the public C++ code 2HDMC, which can be used to perform calculations in a general, CP-conserving, two-Higgs-doublet model (2HDM). The program features simple conversion between different parametrizations of the 2HDM potential, a flexible Yukawa sector specification with choices of different Z-symmetries or more general couplings, a decay library including all two-body — and some three-body — decay modes for the Higgs bosons, and the possibility to calculate observables of interest for constraining the 2HDM parameter space, as well as theoretical constraints from positivity and unitarity. The latest version of the 2HDMC code and full documentation is available from: http://www.isv.uu.se/thep/MC/2HDMC. New version program summaryProgram title: 2HDMC Catalogue identifier: AEFI_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFI_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL No. of lines in distributed program, including test data, etc.: 12 110 No. of bytes in distributed program, including test data, etc.: 92 731 Distribution format: tar.gz Programming language: C++ Computer: Any computer running Linux Operating system: Linux RAM: 5 Mb Catalogue identifier of previous version: AEFI_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2010) 189 Classification: 11.1 External routines: GNU Scientific Library ( http://www.gnu.org/software/gsl/) Does the new version supersede the previous version?: Yes Nature of problem: Determining properties of the potential, calculation of mass spectrum, couplings, decay widths, oblique parameters, muon g-2, and collider constraints in a general two-Higgs-doublet model. Solution method: From arbitrary potential and Yukawa sector, tree-level relations are used to determine Higgs masses and couplings. Decay widths are calculated at leading order, including FCNC decays when applicable. Decays to off

  11. The 2014 update to the National Seismic Hazard Model in California

    Science.gov (United States)

    Powers, Peter; Field, Edward H.

    2015-01-01

    The 2014 update to the U. S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.

  12. EMPIRICAL MODEL FOR HYDROCYCLONES CORRECTED CUT SIZE CALCULATION

    Directory of Open Access Journals (Sweden)

    André Carlos Silva

    2012-12-01

    Full Text Available Hydrocyclones are devices used worldwide in mineral processing for desliming, classification, selective classification, thickening and pre-concentration. A hydrocyclone is composed of a cylindrical and a conical section joined together, without any moving parts, and it is capable of performing granular material separation in pulp. The mineral particle separation mechanism acting in a hydrocyclone is complex and its mathematical modelling is usually empirical. The most widely used model for the hydrocyclone corrected cut size was proposed by Plitt. Over the years many revisions and corrections to Plitt's model were proposed. The present paper shows a modification of the Plitt model constant, obtained by exponential regression of simulated data for three different hydrocyclone geometries: Rietema, Bradley and Krebs. To validate the proposed model, literature data obtained from phosphate ore using fifteen different hydrocyclone geometries are used. The proposed model shows a correlation equal to 88.2% between experimental and calculated corrected cut sizes, while the correlation obtained using Plitt's model is 11.5%.
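The exponential-regression step can be sketched as follows. The data are synthetic, and the 0.063 exponent merely echoes the solids-concentration term commonly quoted for Plitt-type models; it is not the paper's fitted constant:

```python
import numpy as np

# Exponential-regression sketch: fit K and b in d50c = K * exp(b * x)
# from (simulated) data by linear least squares on log(d50c).
rng = np.random.default_rng(7)
x = np.linspace(0.0, 30.0, 40)          # e.g. feed solids concentration, %
K_true, b_true = 12.0, 0.063            # assumed "true" constants for the demo
d50c = K_true * np.exp(b_true * x) * np.exp(rng.normal(0.0, 0.01, x.size))

# np.polyfit on log(d50c) returns (slope, intercept) = (b, ln K)
b_fit, logK_fit = np.polyfit(x, np.log(d50c), 1)
K_fit = np.exp(logK_fit)
```

Replacing the synthetic `d50c` with simulated cut sizes for the Rietema, Bradley and Krebs geometries gives the kind of per-geometry constant adjustment the paper describes.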

  13. High accuracy modeling for advanced nuclear reactor core designs using Monte Carlo based coupled calculations

    Science.gov (United States)

    Espel, Federico Puente

    with detailed and accurate thermal-hydraulic models. The development of such a reference high-fidelity coupled multi-physics scheme is described in this dissertation on the basis of the MCNP5, NEM, NJOY and COBRA-TF (CTF) computer codes. This work presents results from studies performed and implemented at the Pennsylvania State University (PSU) on both accelerating Monte Carlo criticality calculations by using hybrid nodal diffusion Monte Carlo schemes and on thermal-hydraulic feedback modeling in Monte Carlo core calculations. The hybrid MCNP5/CTF/NEM/NJOY coupled code system is proposed and developed in this dissertation work. The hybrid coupled code system contains a special interface developed to update the required MCNP5 input changes to account for the dimension and density changes provided by the thermal-hydraulics feedback module. The interface has also been developed to extract the flux and reaction rates calculated by MCNP5 and to transform the data into the power feedback needed by CTF (axial and radial peaking factors). The interface is contained in a master program that controls the flow of the calculations. Both feedback modules (the thermal-hydraulic and power subroutines) use a common internal interface to further accelerate the data exchange. One of the most important steps in correctly including the thermal-hydraulic feedback in MCNP5 calculations begins with temperature-dependent cross section libraries. If the cross sections used for the calculations are not at the correct temperature, the temperature feedback cannot be included in MCNP5 (referring to the effects of temperature on cross sections: Doppler broadening of resolved and unresolved resonances, thermal scattering and elastic scattering). The only method of considering the temperature effects on cross sections is through the generation (or, as introduced in this dissertation, through a novel interpolation mechanism) of continuous-energy temperature-dependent cross section libraries.
An automated methodology for
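The interpolation idea this abstract alludes to can be sketched as follows. This is a hypothetical illustration, not the dissertation's actual mechanism: it assumes pointwise cross sections tabulated at two bracketing library temperatures and uses the common square-root-of-temperature weighting; the function name and argument layout are invented for the sketch.

```python
def interp_xs_sqrt_T(sigma_low, sigma_high, T_low, T_high, T):
    """Interpolate cross sections tabulated at T_low and T_high to temperature T.

    sigma_low, sigma_high: lists of cross-section values on a common energy grid.
    Uses sqrt(T) weighting, a common choice for Doppler-broadened data.
    """
    if not (T_low <= T <= T_high):
        raise ValueError("target temperature outside bracketing library temperatures")
    # Weight varies from 0 at T_low to 1 at T_high along sqrt(T).
    w = (T**0.5 - T_low**0.5) / (T_high**0.5 - T_low**0.5)
    return [(1.0 - w) * lo + w * hi for lo, hi in zip(sigma_low, sigma_high)]
```

At the library temperatures themselves the interpolation reproduces the tabulated values exactly; between them it avoids regenerating full Doppler-broadened libraries for every coupled-iteration temperature.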

  14. Towards an integrated workflow for structural reservoir model updating and history matching

    NARCIS (Netherlands)

    Leeuwenburgh, O.; Peters, E.; Wilschut, F.

    2011-01-01

    A history matching workflow, as typically used for updating of petrophysical reservoir model properties, is modified to include structural parameters including the top reservoir and several fault properties: position, slope, throw and transmissibility. A simple 2D synthetic oil reservoir produced by

  15. Evaluation of Lower East Fork Poplar Creek Mercury Sources - Model Update

    Energy Technology Data Exchange (ETDEWEB)

    Ketelle, Richard [East Tennessee Technology Park (ETTP), Oak Ridge, TN (United States); Brandt, Craig C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peterson, Mark J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevelhimer, Mark S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Watson, David B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Brooks, Scott C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mayes, Melanie [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); DeRolph, Christopher R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dickson, Johnbull O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Olsen, Todd A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-08-01

    The purpose of this report is to assess new data that has become available and provide an update to the evaluations and modeling presented in the Oak Ridge National Laboratory (ORNL) Technical Manuscript Evaluation of lower East Fork Poplar Creek (LEFPC) Mercury Sources (Watson et al., 2016). Primary sources of field and laboratory data for this update include multiple US Department of Energy (DOE) programs including Environmental Management (EM; e.g., Biological Monitoring and Abatement Program, Mercury Remediation Technology Development [TD], and Applied Field Research Initiative), Office of Science (Mercury Science Focus Areas [SFA] project), and the Y-12 National Security Complex (Y-12) Compliance Department.

  16. Absorbed Dose and Dose Equivalent Calculations for Modeling Effective Dose

    Science.gov (United States)

    Welton, Andrew; Lee, Kerry

    2010-01-01

    While in orbit, astronauts are exposed to a much higher dose of ionizing radiation than on the ground. It is important to model, pre-flight, how spacecraft shielding designs reduce the radiation effective dose, and to determine whether a danger to humans is presented. Calculating effective dose, however, requires dose equivalent calculations. Dose equivalent takes into account both the absorbed dose of radiation and the biological effectiveness of the ionizing radiation, which is important in preventing long-term, stochastic radiation effects in humans spending time in space. Monte Carlo simulations run with the particle transport code FLUKA give absorbed and equivalent dose data for relevant shielding. The shielding geometry used in the dose calculations is a layered slab design consisting of aluminum, polyethylene, and water; water is used to simulate the soft tissues that compose the human body. The results obtained will provide information on how the shielding performs with many thicknesses of each material in the slab, making them directly applicable to modern spacecraft shielding geometries.
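The absorbed-dose-to-dose-equivalent step described above can be sketched in a few lines. This is an illustration, not the study's FLUKA scoring: it weights each absorbed-dose contribution by a quality factor Q(LET), here using the piecewise ICRP 60 form of Q(L); treat the exact coefficients and the data layout as assumptions of the sketch.

```python
def quality_factor(let_keV_um):
    """ICRP 60 quality factor Q as a function of unrestricted LET in keV/um."""
    L = let_keV_um
    if L < 10.0:
        return 1.0
    if L <= 100.0:
        return 0.32 * L - 2.2
    return 300.0 / L**0.5

def dose_equivalent(contributions):
    """Dose equivalent (Sv) from (absorbed_dose_Gy, LET_keV_um) pairs."""
    return sum(d * quality_factor(L) for d, L in contributions)
```

For low-LET components Q is 1 and dose equivalent equals absorbed dose; high-LET components are weighted up, which is why heavy-ion-rich space radiation yields a larger dose equivalent than its absorbed dose alone suggests.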

  17. Updating known distribution models for forecasting climate change impact on endangered species.

    Science.gov (United States)

    Muñoz, Antonio-Román; Márquez, Ana Luz; Real, Raimundo

    2013-01-01

    To plan endangered species conservation and to design adequate management programmes, it is necessary to predict their distributional response to climate change, especially under the current situation of rapid change. However, these predictions are customarily made by relating the distribution of the species to climatic conditions de novo, without regard to previously available knowledge about the factors affecting the species' distribution. We propose to take advantage of known species distribution models, updating them with the variables yielded by climatic models before projecting them into the future. To exemplify our proposal, the availability of suitable habitat across Spain for the endangered Bonelli's Eagle (Aquila fasciata) was modelled by updating a pre-existing model based on current climate and topography with a combination of different general circulation models and Special Report on Emissions Scenarios. Our results suggested that the main threat to this endangered species would not be climate change, since all forecasting models show that its distribution will be maintained and will increase in mainland Spain throughout the 21st century. We stress the importance of linking conservation biology with distribution modelling by updating existing models, frequently available for endangered species and incorporating all the known factors conditioning the species' distribution, instead of building new models based on climate change variables only.

  18. A modified microdosimetric kinetic model for relative biological effectiveness calculation

    Science.gov (United States)

    Chen, Yizheng; Li, Junli; Li, Chunyan; Qiu, Rui; Wu, Zhen

    2018-01-01

    In heavy ion therapy, not only the distribution of physical absorbed dose but also the relative biological effectiveness (RBE)-weighted dose needs to be taken into account. The microdosimetric kinetic model (MKM) can predict the RBE of heavy ions from the saturation-corrected dose-mean specific energy, and it has been used in clinical treatment planning at the National Institute of Radiological Sciences. The MKM assumes that the yield of the primary lesion is independent of radiation quality, while experimental data show that the DNA double strand break (DSB) yield, considered the main primary lesion, depends on the LET of the particle. Moreover, as a result of this assumption the β parameter of the MKM is constant with LET, which also differs from the experimental findings. In this study, a modified MKM, named the MMKM, was developed. Based on the experimental DSB yields of mammalian cells irradiated by ions of different LETs, an RBEDSB (RBE for the induction of DSB) versus LET curve was fitted as the correction factor to modify the primary lesion yield in the MKM, so that the variation of the primary lesion yield with LET is considered in the MMKM. Compared with the present MKM, not only does the α parameter of the MMKM for mono-energetic ions agree with the experimental data, but the β parameter also varies with LET and reproduces the overall trend of the experimental results. A spread-out Bragg peak (SOBP) distribution of physical dose under carbon ion irradiation was then simulated with the Geant4 Monte Carlo code, and the biological and clinical dose distributions were calculated. The results show that the clinical dose distribution calculated with the MMKM is close to that of the MKM within the SOBP, while the discrepancies before and after the SOBP are both within 10%. Moreover, the MKM might overestimate the clinical dose at the distal end of the SOBP more than 5% because of its
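The linear-quadratic machinery underlying MKM-style RBE calculations can be sketched as follows. This is a generic illustration, not the MMKM itself: it assumes the MKM form alpha = alpha0 + beta * z1D* (with z1D* the saturation-corrected dose-mean specific energy) and computes RBE as the ratio of reference-radiation dose to ion dose at equal survival; all parameter values in the test are made up.

```python
import math

def mkm_alpha(alpha0, beta, z1d_star):
    """MKM-form alpha parameter from the saturation-corrected
    dose-mean specific energy z1D* of a domain."""
    return alpha0 + beta * z1d_star

def dose_for_survival(alpha, beta, S):
    """Solve S = exp(-(alpha*D + beta*D^2)) for the positive dose D."""
    lnS = -math.log(S)
    return (-alpha + math.sqrt(alpha**2 + 4.0 * beta * lnS)) / (2.0 * beta)

def rbe(alpha_ref, beta_ref, alpha_ion, beta_ion, S=0.1):
    """RBE at survival level S: reference dose over ion dose."""
    return dose_for_survival(alpha_ref, beta_ref, S) / dose_for_survival(alpha_ion, beta_ion, S)
```

Because high-LET ions raise alpha, the same survival level is reached at a lower ion dose, giving RBE above 1; the MMKM's contribution, per the abstract, is letting the primary-lesion yield (and hence beta) vary with LET rather than holding it fixed.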

  19. Basic theory and model calculations of the Venus ionosphere

    Science.gov (United States)

    Nagy, A. F.; Cravens, T. E.; Gombosi, T. I.

    1983-01-01

    An assessment is undertaken of current understanding of the physical and chemical processes that control Venus's ionospheric behavior, in view of the data that has been made available by the Venera and Pioneer Venus missions. Attention is given to the theoretical framework used in general planetary ionosphere studies, especially to the equations describing the controlling physical and chemical processes, and to the current status of the ion composition, density and thermal structure models developed to reproduce observed ionospheric behavior. No truly comprehensive and successful model of the nightside ionosphere has been published. Furthermore, although dayside energy balance calculations yield electron and ion temperature values that are in close agreement with measured values, the energetics of the night side eludes understanding.

  20. Determination of appropriate models and parameters for premixing calculations

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik-Kyu; Kim, Jong-Hwan; Min, Beong-Tae; Hong, Seong-Wan

    2008-03-15

    The purpose of the present work is to use experiments that have been performed at Forschungszentrum Karlsruhe during about the last ten years for determining the most appropriate models and parameters for premixing calculations. The results of a QUEOS experiment are used to fix the parameters concerning heat transfer. The QUEOS experiments are especially suited for this purpose as they have been performed with small hot solid spheres. Therefore the area of heat exchange is known. With the heat transfer parameters fixed in this way, a PREMIX experiment is recalculated. These experiments have been performed with molten alumina (Al₂O₃) as a simulant of corium. Its initial temperature is 2600 K. With these experiments the models and parameters for jet and drop break-up are tested.

  1. Recent Developments in No-Core Shell-Model Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Navratil, P; Quaglioni, S; Stetcu, I; Barrett, B R

    2009-03-20

    We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this approach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary; if that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review, we highlight in particular results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states, is outlined in the concluding part of the review.

  2. Modeling and calculation of open carbon dioxide refrigeration system

    International Nuclear Information System (INIS)

    Cai, Yufei; Zhu, Chunling; Jiang, Yanlong; Shi, Hong

    2015-01-01

    Highlights: • A model of an open refrigeration system is developed. • The state of CO₂ has a great effect on the refrigeration capacity loss by heat transfer. • The refrigeration capacity loss by remaining CO₂ has little relation to the state of the CO₂. • The calculation results are in agreement with the test results. - Abstract: Based on an analysis of the properties of carbon dioxide, an open carbon dioxide refrigeration system is proposed, intended for situations without an external electricity supply. A model of the open refrigeration system is developed, and the relationship between the storage conditions of the carbon dioxide and the refrigeration capacity is derived. Meanwhile, a test platform is developed to simulate the performance of the open carbon dioxide refrigeration system. By comparing the theoretical calculations and the experimental results, several conclusions are obtained: the refrigeration capacity loss by heat transfer in the supercritical state is much greater than that in the two-phase region, and the refrigeration capacity loss by remaining carbon dioxide has little relation to the state of the carbon dioxide. The results will be helpful to the use of open carbon dioxide refrigeration

  3. Near-Source Modeling Updates: Building Downwash & Near-Road

    Science.gov (United States)

    The presentation describes recent research efforts in near-source model development focusing on building downwash and near-road barriers. The building downwash section summarizes a recent wind tunnel study, ongoing computational fluid dynamics simulations and efforts to improve ...

  4. Model Updating and Uncertainty Management for Aircraft Prognostic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal addresses the integration of physics-based damage propagation models with diagnostic measures of current state of health in a mathematically rigorous...

  5. Disentangling density-dependent dynamics using full annual cycle models and Bayesian model weight updating

    Science.gov (United States)

    Robinson, Orin J.; McGowan, Conor P.; Devers, Patrick K.

    2017-01-01

    Density dependence regulates populations of many species across all taxonomic groups. Understanding density dependence is vital for predicting the effects of climate, habitat loss and/or management actions on wild populations. Migratory species likely experience seasonal changes in the relative influence of density dependence on population processes such as survival and recruitment throughout the annual cycle. These effects must be accounted for when characterizing migratory populations via population models. To evaluate effects of density on seasonal survival and recruitment of a migratory species, we used an existing full annual cycle model framework for American black ducks Anas rubripes and tested different density effects (including no effects) on survival and recruitment. We then used a Bayesian model weight updating routine to determine which population model best fit observed breeding population survey data between 1990 and 2014. The models that best fit the survey data suggested that survival and recruitment were affected by density dependence and that density effects were stronger on adult survival during the breeding season than during the non-breeding season. The analysis also suggests that regulation of survival and recruitment by density varied over time: our results showed that different characterizations of density regulation changed every 8–12 years (three times in the 25-year period) for our population. Synthesis and applications. Using a full annual cycle modelling framework and model weighting routine will be helpful in evaluating density dependence for migratory species in both the short and long term. We used this method to disentangle the seasonal effects of density on the continental American black duck population, which will allow managers to better evaluate the effects of habitat loss and potential habitat management actions throughout the annual cycle. The method here may allow researchers to home in on the proper form and/or strength of
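The Bayesian model-weight updating routine the abstract describes amounts to multiplying each candidate model's weight by the likelihood of the new survey observation under that model, then renormalizing. A minimal sketch (the likelihood values here are hypothetical, not the study's):

```python
def update_weights(weights, likelihoods):
    """One Bayesian update of model weights.

    weights: current (prior) model weights summing to 1.
    likelihoods: likelihood of the new observation under each model.
    Returns posterior weights summing to 1.
    """
    posterior = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]
```

Repeating this each survey year lets the best-supported density-dependence structure dominate, and shifts in which model carries the weight over time are what the authors interpret as changing density regulation.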

  6. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. II. Optimization model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Recent methodological improvements in replacement models, comprising multi-level hierarchical Markov processes and Bayesian updating, have hardly been implemented in any replacement model, and the aim of this study is to present a sow replacement model that really uses these methodological improvements. The biological model of the replacement model is described in a previous paper, and in this paper the optimization model is described. The model is developed as a prototype for use under practical conditions. The application of the model is demonstrated using data from two commercial Danish sow herds. It is concluded that the Bayesian updating technique and the hierarchical structure decrease the size of the state space dramatically. Since parameter estimates vary considerably among herds, it is concluded that decision support concerning sow replacement only makes sense with parameters

  7. Status Update: Modeling Energy Balance in NIF Hohlraums

    Energy Technology Data Exchange (ETDEWEB)

    Jones, O. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-22

    We have developed a standardized methodology to model hohlraum drive in NIF experiments. We compare simulation results to experiments by 1) comparing hohlraum x-ray fluxes and 2) comparing capsule metrics, such as bang times. Long-pulse, high gas-fill hohlraums require a 20-28% reduction in simulated drive and inclusion of ~15% backscatter to match experiment through (1) and (2). Short-pulse, low-fill or near-vacuum hohlraums require a 10% reduction in simulated drive to match experiment through (2), and no reduction through (1). Ongoing work focuses on physical model modifications to improve these matches.

  8. Cancer Survivorship, Models, and Care Plans: A Status Update.

    Science.gov (United States)

    Powel, Lorrie L; Seibert, Stephen M

    2017-03-01

    This article provides a synopsis of the status of cancer survivorship in the United States. It highlights the challenges of survivorship care as the number of cancer survivors has steadily grown over the 40 years since the signing of the National Cancer Act in 1971. Also included is an overview of various models of survivorship care and of survivorship care plans (SCPs), facilitators and barriers to SCP use, their impact on patient outcomes, and implications for clinical practice and research. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Improved perturbative calculations in field theory; Calculation of the mass spectrum and constraints on the supersymmetric standard model; Calculs perturbatifs variationnellement ameliores en theorie des champs; Calcul du spectre et contraintes sur le modele supersymetrique standard

    Energy Technology Data Exchange (ETDEWEB)

    Kneur, J.L

    2006-06-15

    This document is divided into 2 parts. The first part describes a particular re-summation technique for perturbative series that can give non-perturbative results in some cases. We detail some applications in field theory and in condensed matter physics, such as the calculation of the effective temperature of Bose-Einstein condensates. The second part deals with the minimal supersymmetric standard model. We present an accurate calculation of the mass spectrum of supersymmetric particles, a calculation of the relic density of supersymmetric dark matter, and the constraints that we can infer on such models.

  10. Development of nuclear models for higher energy calculations

    International Nuclear Information System (INIS)

    Bozoian, M.; Siciliano, E.R.; Smith, R.D.

    1988-01-01

    Two nuclear models for higher energy calculations have been developed, for the regions of high and low energy transfer, respectively. In the former, a relativistic hybrid-type preequilibrium model is compared with data ranging from 60 to 800 MeV. Also, the GNASH exciton preequilibrium-model code with higher energy improvements is compared with data at 200 and 318 MeV. In the region of low energy transfer, nucleon-nucleus scattering is predominantly a direct reaction involving quasi-elastic collisions with one or more target nucleons. We discuss various aspects of quasi-elastic scattering which are important in understanding features of cross sections and spin observables. These include (1) contributions from multi-step processes; (2) damping of the continuum response from 2p-2h excitations; (3) the ''optimal'' choice of frame in which to evaluate the nucleon-nucleon amplitudes; and (4) the effect of optical and spin-orbit distortions, which are included in a model based on the RPA, the DWIA and the eikonal approximation. 33 refs., 15 figs

  11. Uncertainty quantification of voice signal production mechanical model and experimental updating

    OpenAIRE

    Cataldo, Edson; Soize, Christian; Sampaio, Rubens

    2013-01-01

    The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and to update the probability density function of the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the c...

  12. iTree-Hydro: Snow hydrology update for the urban forest hydrology model

    Science.gov (United States)

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2011-01-01

    This article presents snow hydrology updates made to iTree-Hydro, previously called the Urban Forest Effects—Hydrology model. iTree-Hydro Version 1 was a warm climate model developed by the USDA Forest Service to provide a process-based planning tool with robust water quantity and quality predictions given data limitations common to most urban areas. Cold climate...

  13. Update on Parametric Cost Models for Space Telescopes

    Science.gov (United States)

    Stahl. H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2011-01-01

    Since the June 2010 Astronomy Conference, an independent review of our cost database discovered some inaccuracies and inconsistencies which can modify our previously reported results. This paper will review the changes to the database, our confidence in those changes, and their effect on various parametric cost models.

  14. Updates to Blast Injury Criteria Models for Nuclear Casualty Estimation

    Science.gov (United States)

    2015-12-01

    based Casualty Assessment (ORCA) software package contains models which track penetrating fragments and determine the likelihood of injury caused by the...pedestrian and bicycle accidents," The Institute of Traffic Accident Investigators. Proceedings of the 5th International Conference: 17th and 18th

  15. An updated summary of MATHEW/ADPIC model evaluation studies

    Energy Technology Data Exchange (ETDEWEB)

    Foster, K.T.; Dickerson, M.H.

    1990-05-01

    This paper summarizes the major model evaluation studies conducted for the MATHEW/ADPIC atmospheric transport and diffusion models used by the US Department of Energy's Atmospheric Release Advisory Capability. These studies have taken place over the last 15 years and involve field tracer releases influenced by a variety of meteorological and topographical conditions. Neutrally buoyant tracers released both as surface and elevated point sources, as well as material dispersed by explosive, thermally buoyant release mechanisms have been studied. Results from these studies show that the MATHEW/ADPIC models estimate the tracer air concentrations to within a factor of two of the measured values 20% to 50% of the time, and within a factor of five of the measurements 35% to 85% of the time depending on the complexity of the meteorology and terrain, and the release height of the tracer. Comparisons of model estimates to peak downwind deposition and air concentration measurements from explosive releases are shown to be generally within a factor of two to three. 24 refs., 14 figs., 3 tabs.

  16. An updated summary of MATHEW/ADPIC model evaluation studies

    International Nuclear Information System (INIS)

    Foster, K.T.; Dickerson, M.H.

    1990-05-01

    This paper summarizes the major model evaluation studies conducted for the MATHEW/ADPIC atmospheric transport and diffusion models used by the US Department of Energy's Atmospheric Release Advisory Capability. These studies have taken place over the last 15 years and involve field tracer releases influenced by a variety of meteorological and topographical conditions. Neutrally buoyant tracers released both as surface and elevated point sources, as well as material dispersed by explosive, thermally buoyant release mechanisms have been studied. Results from these studies show that the MATHEW/ADPIC models estimate the tracer air concentrations to within a factor of two of the measured values 20% to 50% of the time, and within a factor of five of the measurements 35% to 85% of the time depending on the complexity of the meteorology and terrain, and the release height of the tracer. Comparisons of model estimates to peak downwind deposition and air concentration measurements from explosive releases are shown to be generally within a factor of two to three. 24 refs., 14 figs., 3 tabs

  17. General equilibrium basic needs policy model, (updating part).

    OpenAIRE

    Kouwenaar A

    1985-01-01

    ILO pub-WEP pub-PREALC pub. Working paper, econometric model for the assessment of structural change affecting development planning for basic needs satisfaction in Ecuador - considers population growth, family size (households), labour force participation, labour supply, wages, income distribution, profit rates, capital ownership, etc.; examines nutrition, education and health as factors influencing productivity. Diagram, graph, references, statistical tables.

  18. Bacteriophages: update on application as models for viruses in water

    African Journals Online (AJOL)

    In view of these features, phages are particularly useful as models to assess the behaviour and survival of enteric viruses in the environment, and as surrogates to assess the resistance of human viruses to water treatment and disinfection processes. Since there is no direct correlation between numbers of phages and ...

  19. Recent Updates to the GEOS-5 Linear Model

    Science.gov (United States)

    Holdaway, Dan; Kim, Jong G.; Errico, Ron; Gelaro, Ronald; Mahajan, Rahul

    2014-01-01

    The Global Modeling and Assimilation Office (GMAO) is close to having a working 4DVAR system and has developed a linearized version of GEOS-5. This talk outlines a series of improvements made to the linearized dynamics, physics and trajectory. Of particular interest is the development of linearized cloud microphysics, which provides the framework for 'all-sky' data assimilation.

  20. Dental caries: an updated medical model of risk assessment.

    Science.gov (United States)

    Kutsch, V Kim

    2014-04-01

    Dental caries is a transmissible, complex biofilm disease that creates prolonged periods of low pH in the mouth, resulting in a net mineral loss from the teeth. Historically, the disease model for dental caries consisted of mutans streptococci and Lactobacillus species, and the dental profession focused on restoring the lesions/damage from the disease by using a surgical model. The current recommendation is to implement a risk-assessment-based medical model called CAMBRA (caries management by risk assessment) to diagnose and treat dental caries. Unfortunately, many of the suggestions of CAMBRA have been overly complicated and confusing for clinicians. The risk of caries, however, is usually related to just a few common factors, and these factors result in common patterns of disease. This article examines the biofilm model of dental caries, identifies the common disease patterns, and discusses their targeted therapeutic strategies to make CAMBRA more easily adaptable for the privately practicing professional. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  1. Development of Prototype Driver Models for Highway Design: Research Update

    Science.gov (United States)

    1999-06-01

    One of the high-priority research areas of the Federal Highway Administration (FHWA) is the development of the Interactive Highway Safety Design Model (IHSDM). The goal of the IHSDM research program is to develop a systematic approach that will allow...

  2. Finite element model updating of a prestressed concrete box girder bridge using subproblem approximation

    Science.gov (United States)

    Chen, G. W.; Omenzetter, P.

    2016-04-01

    This paper presents the implementation of an updating procedure for the finite element model (FEM) of a prestressed concrete continuous box-girder highway off-ramp bridge. Ambient vibration testing was conducted to excite the bridge, assisted by linear chirp sweepings induced by two small electrodynamic shakers deployed to enhance the excitation levels, since the bridge was closed to traffic. The data-driven stochastic subspace identification method was executed to recover the modal properties from measurement data. An initial FEM was developed and the correlation between the experimental modal results and their analytical counterparts was studied. Modelling of the pier and abutment bearings was carefully adjusted to reflect the real operational conditions of the bridge. The subproblem approximation method was subsequently utilized to automatically update the FEM. For this purpose, the influences of bearing stiffness, and of mass density and Young's modulus of materials, were examined as uncertain parameters using sensitivity analysis. The updating objective function was defined as a sum of squared relative errors of natural frequencies between the FEM and experimentation. All the identified modes were used as the target responses with the purpose of imposing more constraints on the optimization process and decreasing the number of potentially feasible combinations of parameter changes. The updated FEM of the bridge was able to produce sufficient improvements in natural frequencies in most modes of interest, and can serve for more precise dynamic response prediction or future investigation of the bridge's health.
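The updating objective described in this abstract can be written down directly. A minimal sketch, with made-up frequency values: in a real run each candidate parameter set (bearing stiffness, mass density, Young's modulus) would require re-solving the FE eigenproblem to obtain the model frequencies.

```python
def frequency_objective(f_model, f_measured):
    """Sum of squared relative natural-frequency errors, model vs. identified.

    f_model, f_measured: lists of natural frequencies (Hz), mode by mode.
    The relative error is taken with respect to the measured frequency.
    """
    return sum(((fm - fx) / fx) ** 2 for fm, fx in zip(f_model, f_measured))
```

Using all identified modes rather than a few enlarges the set of constraints, which is exactly the rationale the abstract gives for including every mode as a target response.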

  3. Real-time reservoir geological model updating using the hybrid EnKF and geostatistical technique

    Energy Technology Data Exchange (ETDEWEB)

    Li, H.; Chen, S.; Yang, D. [Regina Univ., SK (Canada). Petroleum Technology Research Centre

    2008-07-01

    Reservoir simulation plays an important role in modern reservoir management. Multiple geological models are needed in order to analyze the uncertainty of a given reservoir development scenario. Ideally, dynamic data should be incorporated into a reservoir geological model. This can be done by using history matching, tuning the model to match the past performance of the reservoir. This study proposed an assisted history matching technique to accelerate and improve the matching process. The Ensemble Kalman Filter (EnKF) technique, an efficient assisted history matching method, was integrated with a conditional geostatistical simulation technique to dynamically update reservoir geological models. The updated models were constrained to dynamic data, such as reservoir pressure and fluid saturations, and remain geologically realistic at each time step thanks to the EnKF technique. The new technique was successfully applied to a heterogeneous synthetic reservoir. The uncertainty of the reservoir characterization was significantly reduced, and more accurate forecasts were obtained from the updated models. 3 refs., 2 figs.
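The EnKF analysis step at the heart of such assisted history matching can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' implementation: it assumes a linear observation operator H and uses perturbed observations, the classic stochastic EnKF variant; dimensions and values are invented.

```python
import numpy as np

def enkf_update(ensemble, H, obs, obs_cov, rng):
    """Stochastic EnKF analysis step.

    ensemble: (n_members, n_state) array of model states/parameters.
    H: (n_obs, n_state) linear observation operator.
    obs: (n_obs,) observed dynamic data (e.g. pressures, saturations).
    obs_cov: (n_obs, n_obs) observation error covariance.
    """
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)              # ensemble anomalies
    P = X.T @ X / (n - 1)                             # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_cov)  # Kalman gain
    # Each member is nudged toward its own perturbed copy of the observations.
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, size=n)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T
```

Applied sequentially as production data arrive, this shifts every geological realization toward the observed reservoir behavior while the ensemble spread tracks the remaining uncertainty; the paper's geostatistical step then restores geological realism after each update.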

  4. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements improved model performance considerably. The Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet.
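The Nash-Sutcliffe efficiency quoted in this record (0.77 rising to 0.83) is one minus the ratio of the model's squared error to the variance of the observations. A minimal sketch with made-up discharge series:

```python
def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe model efficiency: 1 for a perfect model,
    0 for a model no better than the observed mean, negative if worse."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot
```

An increase from 0.77 to 0.83 thus corresponds to removing roughly a quarter of the remaining error variance relative to the observed flows.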

  5. Improved Approximation of Interactive Dynamic Influence Diagrams Using Discriminative Model Updates

    DEFF Research Database (Denmark)

    Doshi, Prashant; Zeng, Yifeng

    2009-01-01

    Interactive dynamic influence diagrams (I-DIDs) are graphical models for sequential decision making in uncertain settings shared by other agents. Algorithms for solving I-DIDs face the challenge of an exponentially growing space of candidate models ascribed to other agents, over time. We formalize...... the concept of a minimal model set, which facilitates qualitative comparisons between different approximation techniques. We then present a new approximation technique that minimizes the space of candidate models by discriminating between model updates. We empirically demonstrate that our approach improves...

  6. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Data.gov (United States)

    U.S. Environmental Protection Agency — The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size...

  7. Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Final Report, Version 2)

    Science.gov (United States)

    EPA announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land change modeling by providing nationwide housing developmen...

  8. Quantum plasmonics: from jellium models to ab initio calculations

    Directory of Open Access Journals (Sweden)

    Varas Alejandro

    2016-08-01

    Full Text Available Light-matter interaction in plasmonic nanostructures is often treated within the realm of classical optics. However, recent experimental findings show the need to go beyond the classical models to explain and predict the plasmonic response at the nanoscale. A prototypical system is a nanoparticle dimer, extensively studied using both classical and quantum prescriptions. However, only very recently have fully ab initio time-dependent density functional theory (TDDFT) calculations of the optical response of these dimers been carried out. Here, we review the recent work on the impact of the atomic structure on the optical properties of such systems. We show that TDDFT can be an invaluable tool to simulate the time evolution of plasmonic modes, providing fundamental insight into the underlying microscopic mechanisms.

  9. Renewable Energy Monitoring Protocol. Update 2010. Methodology for the calculation and recording of the amounts of energy produced from renewable sources in the Netherlands

    Energy Technology Data Exchange (ETDEWEB)

    Te Buck, S.; Van Keulen, B.; Bosselaar, L.; Gerlagh, T.; Skelton, T.

    2010-07-15

    This is the fifth, updated edition of the Dutch Renewable Energy Monitoring Protocol. The protocol, compiled on behalf of the Ministry of Economic Affairs, can be considered as a policy document that provides a uniform calculation method for determining the amount of energy produced in the Netherlands in a renewable manner. Because all governments and organisations use the calculation methods described in this protocol, this makes it possible to monitor developments in this field well and consistently. The introduction of this protocol outlines the history and describes its set-up, validity and relationship with other similar documents and agreements. The Dutch Renewable Energy Monitoring Protocol is compiled by NL Agency, and all relevant parties were given the chance to provide input. This has been incorporated as far as is possible. Statistics Netherlands (CBS) uses this protocol to calculate the amount of renewable energy produced in the Netherlands. These data are then used by the Ministry of Economic Affairs to gauge the realisation of policy objectives. In June 2009 the European Directive for energy from renewable sources was published with renewable energy targets for the Netherlands. This directive used a different calculation method - the gross energy end-use method - whilst the Dutch definition is based on the so-called substitution method. NL Agency was asked to add the calculation according to the gross end use method, although this is not clearly defined on a number of points. In describing the method, the unanswered questions become clear, as do, for example, the points the Netherlands should bring up in international discussions.
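The difference between the two accounting methods named in this record can be made concrete with a toy calculation (all figures below are invented; the protocol itself defines the actual reference efficiencies and system boundaries):

```python
# Substitution method: renewable output is converted into the primary fossil
# energy it avoids, via an assumed reference plant efficiency.
# Gross end-use method (EU directive): renewable output counts directly
# toward gross final energy consumption.  All numbers are hypothetical.
renewable_electricity_PJ = 40.0
reference_plant_efficiency = 0.42     # assumed fossil conversion efficiency
total_primary_energy_PJ = 3200.0
gross_final_consumption_PJ = 2100.0

avoided_primary_PJ = renewable_electricity_PJ / reference_plant_efficiency
share_substitution = avoided_primary_PJ / total_primary_energy_PJ
share_gross_end_use = renewable_electricity_PJ / gross_final_consumption_PJ
```

With these (invented) inputs the two methods report noticeably different renewable shares, which is why the protocol documents both.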

  10. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab
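As a hedged illustration of the kind of simplified release mechanism such source term models combine, the sketch below treats wasteform release as first-order leaching coupled with radioactive decay (all rates are invented, and the models reviewed in this document are considerably more detailed):

```python
import math

# Simplified source term: inventory obeys dI/dt = -(k_leach + k_decay) * I,
# and the release rate to the environment is k_leach * I(t).
# All parameter values are hypothetical.
I0 = 1.0e3          # initial inventory, Bq
k_leach = 0.02      # fractional leach rate, 1/yr (hypothetical)
half_life = 30.0    # yr (a Cs-137-like nuclide)
k_decay = math.log(2) / half_life

def release_rate(t):
    """Release rate (Bq/yr) at time t (yr) after facility closure."""
    return k_leach * I0 * math.exp(-(k_leach + k_decay) * t)

r10 = release_rate(10.0)   # release rate 10 years after closure
```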

  11. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. II. Optimization model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    improvements. The biological model of the replacement model is described in a previous paper and in this paper the optimization model is described. The model is developed as a prototype for use under practical conditions. The application of the model is demonstrated using data from two commercial Danish sow......Recent methodological improvements in replacement models comprising multi-level hierarchical Markov processes and Bayesian updating have hardly been implemented in any replacement model and the aim of this study is to present a sow replacement model that really uses these methodological...

  12. Cycle life versus depth of discharge update on modeling studies

    Science.gov (United States)

    Thaller, Lawrence H.

    1994-02-01

    The topics are presented in viewgraph form and cycle life vs. depth of discharge data for the following are presented: data as of three years ago; Air Force/Crane-Fuhr-Smithrick; Ken Fuhr's Data; Air Force/Crane Data; Eagle-Picher Data; Steve Schiffer's Data; John Smithrick's Data; temperature effects; and E-P, Yardney, and Hughes 26% Data. Other topics covered include the following: LeRC cycling tests of Yardney Space Station Cells; general statements; general observations; two different models of cycle life vs. depth of discharge; and other degradation modes.

  13. Calculational models of close-spaced thermionic converters

    International Nuclear Information System (INIS)

    McVey, J.B.

    1983-01-01

    Two new calculational models have been developed in conjunction with the SAVTEC experimental program. These models have been used to analyze data from experimental close-spaced converters, providing values for spacing, electrode work functions, and converter efficiency. They have also been used to make performance predictions for such converters over a wide range of conditions. Both models are intended for use in the collisionless (Knudsen) regime. They differ from each other in that the simpler one uses a Langmuir-type formulation which only considers electrons emitted from the emitter. This approach is implemented in the LVD (Langmuir Vacuum Diode) computer program, which has the virtue of being both simple and fast. The more complex model also includes both Saha-Langmuir emission of positive cesium ions from the emitter and collector back emission. Computer implementation is by the KMD1 (Knudsen Mode Diode) program. The KMD1 model derives the particle distribution functions from the Vlasov equation. From these the particle densities are found for various interelectrode motive shapes. Substituting the particle densities into Poisson's equation gives a second order differential equation for potential. This equation can be integrated once analytically. The second integration, which gives the interelectrode motive, is performed numerically by the KMD1 program. This is complicated by the fact that the integrand is often singular at one end point of the integration interval. The program performs a transformation on the integrand to make it finite over the entire interval. Once the motive has been computed, the output voltage, current density, power density, and efficiency are found. The program is presently unable to operate when the ion richness ratio β is between about .8 and 1.0, due to the occurrence of oscillatory motives

  14. Calculation of extreme wind atlases using mesoscale modeling. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, X.G.; Badger, J.

    2012-06-15

    The objective of this project is to develop new methodologies for extreme wind atlases using mesoscale modeling. Three independent methodologies have been developed. All three methodologies are targeted at confronting and solving the problems and drawbacks in existing methods for extreme wind estimation regarding the use of modeled data (coarse resolution, limited representation of storms) and measurements (short period and technical issues). The first methodology is called the selective dynamical downscaling method. For a chosen area, we identify the yearly strongest storms through global reanalysis data at each model grid point and run a mesoscale model, here the Weather Research and Forecasting (WRF) model, for all storms identified. Annual maximum winds and corresponding directions from each mesoscale grid point are then collected, post-processed and fitted to a Gumbel distribution to obtain the 50-year wind. The second methodology is called the statistical-dynamical downscaling method. For a chosen area, the geostrophic winds at a representative grid point from the global reanalysis data are used to obtain the annual maximum winds in 12 sectors for a period of 30 years. This results in 360 extreme geostrophic winds. Each of the 360 winds is used as a stationary forcing in a mesoscale model, here KAMM. For each mesoscale grid point the annual maximum winds are post-processed and used in a Gumbel fit to obtain the 50-year wind. For the above two methods, the post-processing is an essential part. It calculates the speedup effects using a linear computation model (LINCOM) and corrects the winds from the mesoscale modeling to a standard condition, i.e. 10 m above a homogeneous surface with a roughness length of 5 cm. Winds of the standard condition can then be put into a microscale model to resolve the local terrain and roughness effects around particular turbine sites. By converting both the measured and modeled winds to the same surface conditions through the post
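The Gumbel step shared by both downscaling methods can be sketched as follows. This uses a simple method-of-moments fit on synthetic annual maxima; the report's actual estimator and data differ:

```python
import numpy as np

def gumbel_50yr_wind(annual_maxima, T=50.0):
    """Estimate the T-year return wind from annual maximum wind speeds
    using a method-of-moments Gumbel fit (a simple stand-in for the
    fitting step described in the record)."""
    x = np.asarray(annual_maxima, dtype=float)
    beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi   # Gumbel scale
    mu = x.mean() - 0.5772 * beta                 # Gumbel location
    # Return level exceeded on average once every T years
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

rng = np.random.default_rng(1)
# 30 synthetic annual maxima (m/s), roughly storm-like
maxima = rng.gumbel(loc=25.0, scale=3.0, size=30)
u50 = gumbel_50yr_wind(maxima)
```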

  15. Continuous updating of a coupled reservoir-seismic model using an ensemble Kalman filter technique

    Energy Technology Data Exchange (ETDEWEB)

    Skjervheim, Jan-Arild

    2007-07-01

    This work presents the development of a method based on the ensemble Kalman filter (EnKF) for continuous reservoir model updating with respect to the combination of production data, 3D seismic data and time-lapse seismic data. The reservoir-seismic model system consists of a commercial reservoir simulator coupled to existing rock physics and seismic modelling software. The EnKF provides an ideal setting for real-time updating and prediction in reservoir simulation models, and has been applied to synthetic models and real field cases from the North Sea. In the EnKF method, static parameters such as porosity and permeability, and dynamic variables, such as fluid saturations and pressure, are updated in the reservoir model at each step as data become available. In addition, we have updated a lithology parameter (clay ratio) which is linked to the rock physics model, and the fracture density in a synthetic fractured reservoir. In the EnKF experiments we have assimilated various types of production and seismic data. Gas oil ratio (GOR), water cut (WCT) and bottom-hole pressure (BHP) are used in the data assimilation. Furthermore, inverted seismic data, such as Poisson's ratio and acoustic impedance, and seismic waveform data have been assimilated. In reservoir applications seismic data may introduce a large amount of data into the assimilation schemes, and the computational time becomes expensive. In this project efficient EnKF schemes are used to handle such large datasets, where challenging aspects such as the inversion of a large covariance matrix and potential loss of rank are considered. Time-lapse seismic data may be difficult to assimilate since they are time difference data, i.e. data which are related to the model variable at two or more time instances. Here we have presented a general sequential Bayesian formulation which incorporates time difference data, and we show that the posterior distribution includes both a filter and a smoother solution. Further, we show

  16. Box models for the evolution of atmospheric oxygen: an update.

    Science.gov (United States)

    Kasting, J F

    1991-01-01

    A simple 3-box model of the atmosphere/ocean system is used to describe the various stages in the evolution of atmospheric oxygen. In Stage I, which probably lasted until redbeds began to form about 2.0 Ga ago, the Earth's surface environment was generally devoid of free O2, except possibly in localized regions of high productivity in the surface ocean. In Stage II, which may have lasted for less than 150 Ma, the atmosphere and surface ocean were oxidizing, while the deep ocean remained anoxic. In Stage III, which commenced with the disappearance of banded iron formations around 1.85 Ga ago and has lasted until the present, all three surface reservoirs contained appreciable amounts of free O2. Recent and not-so-recent controversies regarding the abundance of oxygen in the Archean atmosphere are identified and discussed. The rate of O2 increase during the Middle and Late Proterozoic is identified as another outstanding question.

  17. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager

    2012-09-01

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core

  18. Calculating ε'/ε in the standard model

    International Nuclear Information System (INIS)

    Sharpe, S.R.

    1988-01-01

    The ingredients needed in order to calculate ε' and ε are described. Particular emphasis is given to the non-perturbative calculations of matrix elements by lattice methods. The status of the electromagnetic contribution to ε' is reviewed. 15 refs

  19. Off-Highway Gasoline Consumption Estimation Models Used in the Federal Highway Administration Attribution Process: 2008 Updates

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Ho-Ling [ORNL; Davis, Stacy Cagle [ORNL

    2009-12-01

    This report is designed to document the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate off-highway gasoline consumption and public sector fuel consumption. An overview of the entire FHWA attribution process is provided along with specifics related to the latest update (2008) of the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first model update was conducted during 2002-2003) after they were originally developed in the mid-1990s. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to the removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information, published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components in the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating the estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitations, several components of the models (both off-highway and public-use models) that utilized regression modeling approaches were not recalibrated under the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update. Results generated from the NONROAD2005 model were analyzed, examined, and compared, to the extent that

  20. Future-year ozone prediction for the United States using updated models and inputs.

    Science.gov (United States)

    Collet, Susan; Kidokoro, Toru; Karamchandani, Prakash; Shah, Tejas; Jung, Jaegun

    2017-08-01

    The relationship between emission reductions and changes in ozone can be studied using photochemical grid models. These models are updated with new information as it becomes available. The primary objective of this study was to update the previous Collet et al. studies by using the most up-to-date (at the time the study was done) modeling emission tools, inventories, and meteorology available to conduct ozone source attribution and sensitivity studies. Results show future-year (2030) design values for 8-hr ozone concentrations were lower than base-year (2011) values. The ozone source attribution results for selected cities showed that boundary conditions were the dominant contributors to ozone concentrations at the western U.S. locations, and were important for many of the eastern U.S. locations as well. Point sources were generally more important in the eastern United States than in the western United States. The contributions of on-road mobile emissions were less than 5 ppb at a majority of the cities selected for analysis. The higher-order decoupled direct method (HDDM) results showed that in most of the locations selected for analysis, NOx emission reductions were more effective than VOC emission reductions in reducing ozone levels. The source attribution results from this study provide useful information on the important source categories and provide some initial guidance on future emission reduction strategies. The relationship between emission reductions and changes in ozone can be studied using photochemical grid models, which are updated as new information becomes available. This study updated the previous Collet et al. studies by using the most current models and inventory available at the time to conduct ozone source attribution and sensitivity studies. The source attribution results provide useful information on the important source categories and some initial guidance on future emission reduction strategies.
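As background, HDDM sensitivity coefficients are typically combined in a second-order Taylor expansion to estimate the ozone response to an emission perturbation. A sketch with invented coefficients (not values from this study):

```python
# Ozone response to scaling NOx emissions by a factor (1 + dE), approximated
# with a 2nd-order Taylor series: O(dE) ~ O0 + S1*dE + 0.5*S2*dE**2.
# All coefficient values below are hypothetical.
O0 = 72.0       # base-case 8-hr ozone design value, ppb
S1_nox = 18.0   # first-order sensitivity to NOx emissions, ppb
S2_nox = -30.0  # second-order sensitivity, ppb

def ozone_response(dE):
    """Estimated ozone (ppb) for a fractional NOx emission change dE."""
    return O0 + S1_nox * dE + 0.5 * S2_nox * dE ** 2

# Estimated ozone after a 30% NOx emission cut (dE = -0.3)
o3_cut = ozone_response(-0.3)
```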

  1. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    Science.gov (United States)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.
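A Hamiltonian frequency-energy backbone of the kind matched by this method can be computed directly from simulated time series. The sketch below does so for a Duffing oscillator with an invented hardening coefficient (the benchmark system itself is different):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Unforced, undamped Duffing oscillator x'' + x + alpha*x**3 = 0;
# the backbone relates conserved energy to oscillation frequency.
alpha = 0.5   # hypothetical cubic stiffness

def rhs(t, y):
    x, v = y
    return [v, -x - alpha * x ** 3]

def backbone_point(x0):
    """Simulate a ring-down from amplitude x0 and return (energy, frequency),
    with the frequency taken from the mean spacing of upward zero crossings."""
    sol = solve_ivp(rhs, (0.0, 50.0), [x0, 0.0], max_step=0.01, rtol=1e-9)
    x = sol.y[0]
    crossings = sol.t[1:][(x[:-1] < 0) & (x[1:] >= 0)]
    freq = 1.0 / np.diff(crossings).mean()
    energy = 0.5 * x0 ** 2 + 0.25 * alpha * x0 ** 4
    return energy, freq

# Backbone sampled at four energy levels
points = [backbone_point(a) for a in (0.1, 0.5, 1.0, 2.0)]
```

For this hardening nonlinearity the frequency rises with energy, which is exactly the kind of backbone branch the FEP-matching procedure exploits.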

  2. Comparative analysis of calculation models of railway subgrade

    Directory of Open Access Journals (Sweden)

    I.O. Sviatko

    2013-08-01

    Full Text Available Purpose. In the design of transport engineering structures, the primary task is to determine the parameters of the foundation soil and the nuances of its behaviour under load. When calculating the interaction between the soil subgrade and the upper track structure, it is very important to determine the shear resistance parameters and the parameters governing the development of deep deformations in foundation soils. The purpose is to find generalized numerical modeling methods for embankment foundation soil that include analysis not only of the foundation stress state but also of its deformed state. Methodology. Existing modern and classical methods for the numerical simulation of soil samples under static load were analyzed. Findings. With traditional methods of analyzing the behaviour of soil masses, limiting and qualitatively estimating subgrade deformations is possible only indirectly, through estimating stresses and comparing the obtained values with boundary values. Originality. A new computational model is proposed that applies not only the classical analysis of the soil subgrade stress state but also takes its deformed state into account. Practical value. The analysis showed that an accurate description of soil mass behaviour requires a generalized methodology for analyzing the rolling stock - railway subgrade interaction, one that uses not only the classical approach of analyzing the soil subgrade stress state but also takes its deformed state into account.

  3. Updating Linear Schedules with Lowest Cost: a Linear Programming Model

    Science.gov (United States)

    Biruk, Sławomir; Jaśkowski, Piotr; Czarnigowska, Agata

    2017-10-01

    Many civil engineering projects involve sets of tasks repeated in a predefined sequence in a number of work areas along a particular route. A useful graphical representation of schedules of such projects is time-distance diagrams that clearly show what process is conducted at a particular point of time and in particular location. With repetitive tasks, the quality of project performance is conditioned by the ability of the planner to optimize workflow by synchronizing the works and resources, which usually means that resources are planned to be continuously utilized. However, construction processes are prone to risks, and a fully synchronized schedule may expire if a disturbance (bad weather, machine failure etc.) affects even one task. In such cases, works need to be rescheduled, and another optimal schedule should be built for the changed circumstances. This typically means that, to meet the fixed completion date, durations of operations have to be reduced. A number of measures are possible to achieve such reduction: working overtime, employing more resources or relocating resources from less to more critical tasks, but they all come at a considerable cost and affect the whole project. The paper investigates the problem of selecting the measures that reduce durations of tasks of a linear project so that the cost of these measures is kept to the minimum and proposes an algorithm that could be applied to find optimal solutions as the need to reschedule arises. Considering that civil engineering projects, such as road building, usually involve less process types than construction projects, the complexity of scheduling problems is lower, and precise optimization algorithms can be applied. Therefore, the authors put forward a linear programming model of the problem and illustrate its principle of operation with an example.
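The crashing decision described above maps naturally onto a small linear program. A sketch with an invented three-task sequential project (SciPy's HiGHS-based `linprog` stands in for whatever solver the authors use):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linear project: three sequential processes with normal
# durations d (days), unit crash costs c, and crash limits u.
d = np.array([10.0, 8.0, 12.0])
c = np.array([300.0, 500.0, 200.0])   # cost per day saved on each task
u = np.array([3.0, 2.0, 4.0])         # max days each task can be shortened
deadline = 25.0                        # fixed completion date

# Decision variables y_i = days cut from task i.  Minimize total crash cost
# subject to sum(d_i - y_i) <= deadline, i.e. -sum(y_i) <= deadline - sum(d_i).
res = linprog(c,
              A_ub=[-np.ones(3)],
              b_ub=[deadline - d.sum()],
              bounds=list(zip(np.zeros(3), u)),
              method="highs")
crash_days = res.x   # optimal days to cut from each task
```

With these numbers the optimum crashes the cheapest task to its limit (4 days at 200/day) and takes the remaining day from the next cheapest one, for a total cost of 1100.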

  4. Obtaining manufactured geometries of deep-drawn components through a model updating procedure using geometric shape parameters

    Science.gov (United States)

    Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Pluymers, Bert; Desmet, Wim; Marudachalam, Kannan

    2018-01-01

    The vibration response of a component or system can be predicted using the finite element method after ensuring numerical models represent realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method of deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occurred in the manufacturing process. A conventional model updating, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed, where geometry shape variables are incorporated, by carrying out morphing of the finite element model. This morphing process imitates the changes that occurred during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for carrying out analysis, as it represents the manufactured part more accurately. Hence, simulations performed using this updated model with an accurate geometry, will therefore yield more reliable results.
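The MAC objective driving the optimization can be sketched compactly. Below, identical orthonormal mode-shape sets are compared, so the MAC matrix reduces to the identity (the mode shapes are synthetic):

```python
import numpy as np

def mac_matrix(Phi_test, Phi_fe):
    """Modal Assurance Criterion between measured mode shapes (columns of
    Phi_test) and finite element mode shapes (columns of Phi_fe).
    Diagonal values near 1 indicate well-correlated mode pairs."""
    num = np.abs(Phi_test.T @ Phi_fe) ** 2
    den = np.outer(np.einsum('ij,ij->j', Phi_test, Phi_test),
                   np.einsum('ij,ij->j', Phi_fe, Phi_fe))
    return num / den

# Toy check: identical orthonormal mode shapes give an identity MAC matrix
Phi = np.linalg.qr(np.random.default_rng(2).normal(size=(10, 3)))[0]
M = mac_matrix(Phi, Phi)
```

In the updating loop, the GRSM optimizer adjusts the shape parameters so the diagonal of `M` (test vs. updated FE model) approaches 1.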

  5. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
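The probabilistic rating of candidate solutions via Bayes' theorem can be illustrated with a toy discrete example (the configurations, predicted counts, and measurement below are all invented; the project applies this idea with full radiation transport simulations):

```python
import math

# Rate candidate holdup configurations by how well each predicts a measured
# gamma count, assuming Poisson counting statistics.
candidates = {            # predicted detector counts per configuration
    "pipe_elbow_50g": 120.0,
    "duct_film_20g": 55.0,
    "valve_body_80g": 180.0,
}
prior = {k: 1.0 / len(candidates) for k in candidates}   # uninformative prior
measured = 118                                           # observed counts

def poisson_loglik(k, lam):
    """Log-likelihood of observing k counts given expected rate lam."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

# Bayes' theorem: posterior proportional to prior * likelihood
post_unnorm = {k: prior[k] * math.exp(poisson_loglik(measured, lam))
               for k, lam in candidates.items()}
Z = sum(post_unnorm.values())
posterior = {k: v / Z for k, v in post_unnorm.items()}
```

The posterior concentrates on the configuration whose predicted counts best match the measurement, which is how the approach resolves the multiple solutions of the under-determined inverse problem.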

  6. Atmospheric release model for the E-area low-level waste facility: Updates and modifications

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2017-11-16

    The atmospheric release model (ARM) utilizes GoldSim® Monte Carlo simulation software (GTG, 2017) to evaluate the flux of gaseous radionuclides as they volatilize from E-Area disposal facility waste zones, diffuse into the air-filled soil pores surrounding the waste, and emanate at the land surface. This report documents the updates and modifications to the ARM for the next planned E-Area PA considering recommendations from the 2015 PA strategic planning team outlined by Butcher and Phifer.

  7. Measuring online learning systems success: applying the updated DeLone and McLean model.

    Science.gov (United States)

    Lin, Hsiu-Fen

    2007-12-01

    Based on a survey of 232 undergraduate students, this study used the updated DeLone and McLean information systems success model to examine the determinants for successful use of online learning systems (OLS). The results provided an expanded understanding of the factors that measure OLS success. The results also showed that system quality, information quality, and service quality had a significant effect on actual OLS use through user satisfaction and behavioral intention to use OLS.

  8. Atmospheric release model for the E-area low-level waste facility: Updates and modifications

    International Nuclear Information System (INIS)

    None, None

    2017-01-01

    The atmospheric release model (ARM) utilizes GoldSim® Monte Carlo simulation software (GTG, 2017) to evaluate the flux of gaseous radionuclides as they volatilize from E-Area disposal facility waste zones, diffuse into the air-filled soil pores surrounding the waste, and emanate at the land surface. This report documents the updates and modifications to the ARM for the next planned E-Area PA considering recommendations from the 2015 PA strategic planning team outlined by Butcher and Phifer.

  9. User's guide to the MESOI diffusion model and to the utility programs UPDATE and LOGRVU

    Energy Technology Data Exchange (ETDEWEB)

    Athey, G.F.; Allwine, K.J.; Ramsdell, J.V.

    1981-11-01

    MESOI is an interactive, Lagrangian puff trajectory diffusion model. The model is documented separately (Ramsdell and Athey, 1981); this report is intended to provide MESOI users with the information needed to successfully conduct model simulations. The user is also provided with guidance in the use of the data file maintenance and review programs, UPDATE and LOGRVU. Complete examples are given for the operation of all three programs, and an appendix documents UPDATE and LOGRVU.

  10. Updated logistic regression equations for the calculation of post-fire debris-flow likelihood in the western United States

    Science.gov (United States)

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2016-06-30

    Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
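The logistic regression approach described above can be sketched as follows. The coefficients and predictors here are hypothetical stand-ins; the published equations use specific terms for basin morphology, burn severity, soil properties, and rainfall intensity fitted to the empirical database.

```python
import math

# Hypothetical coefficients for illustration only.
def debris_flow_likelihood(intercept, coeffs, predictors):
    """Logistic regression: P = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = intercept + sum(b * x for b, x in zip(coeffs, predictors))
    return 1.0 / (1.0 + math.exp(-z))

# Example basin (made-up values): [proportion burned at high/moderate
# severity, peak rainfall intensity (mm/h), soil erodibility index]
p = debris_flow_likelihood(
    intercept=-4.0,
    coeffs=[2.1, 0.08, 1.5],
    predictors=[0.6, 24.0, 0.3],
)
```

The output is a probability between 0 and 1 that a debris flow occurs in response to the given storm, which is the quantity the hazard assessments report.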

  11. Research of Cadastral Data Modelling and Database Updating Based on Spatio-temporal Process

    Directory of Open Access Journals (Sweden)

    ZHANG Feng

    2016-02-01

    Full Text Available The core of modern cadastre management is to keep the cadastral database current and to maintain its topological consistency and integrity. This paper analyzes the changes to various cadastral objects, and the linkages among those changes, during the update process. Combining object-oriented modeling techniques with the expression of spatio-temporal object evolution, the paper proposes a cadastral data updating model based on the spatio-temporal process. Change rules based on the spatio-temporal topological relations of evolving cadastral spatio-temporal objects are drafted, and cascade updating and history trace-back of cadastral features, land use, and buildings are realized. The model is implemented in the cadastral management system ReGIS. Cascade changes are triggered either by direct driving forces or by perceived external events. The system records the evolution process of spatio-temporal objects to facilitate the reconstruction of history, change tracking, analysis, and the forecasting of future changes.

  12. Dynamic updating atlas for heart segmentation with a nonlinear field-based model.

    Science.gov (United States)

    Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng

    2017-09-01

    Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon atlas images, and the selection of those reference images is a key step. The optimal goal in this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic update algorithm using a scheme of nonlinear deformation field. The proposed method is based on the features among double-source CT (DSCT) slices. The extraction of these features will form a base to construct an average model and the created reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement a 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method that combines a nonlinear field-based model and dynamic updating atlas strategies can provide an effective and accurate way for whole heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge of the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Modal testing and finite element model updating of laser spot welds

    Energy Technology Data Exchange (ETDEWEB)

    Husain, N Abu; Khodaparast, H Haddad; Snaylam, A; James, S; Sharp, M; Dearden, G; Ouyang, H, E-mail: h.ouyang@liverpool.ac.u [Department of Engineering, Harrison Hughes Building, University of Liverpool, L69 3GH (United Kingdom)

    2009-08-01

    Spot welds are used extensively in automotive engineering. One of the latest manufacturing techniques for producing spot welds is laser welding. Finite element (FE) modelling of laser welds for dynamic analysis is a research issue because of the complexity and uncertainty of the welds and of the structures thus formed. In this work, an FE model of the welds is developed by employing the CWELD element in NASTRAN, and its feasibility for representing laser spot welds is investigated. The FE model is updated based on the measured modal data of hat-plate structures, with the updating cast as a structural minimisation problem through the application of NASTRAN codes.

  14. Automatically updating predictive modeling workflows support decision-making in drug design.

    Science.gov (United States)

    Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O

    2016-09-01

    Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.

  15. Update on Small Modular Reactors Dynamic System Modeling Tool: Web Application

    Energy Technology Data Exchange (ETDEWEB)

    Hale, Richard Edward [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Batteh, John J [Modelon Corporation (Sweden); Tiller, Michael M. [Xogeny Corporation (United States)

    2015-01-01

    Previous reports focused on the development of component and system models as well as end-to-end system models using Modelica and Dymola for two advanced reactor architectures: (1) Advanced Liquid Metal Reactor and (2) fluoride high-temperature reactor (FHR). The focus of this report is the release of the first beta version of the web-based application for model use and collaboration, as well as an update on the FHR model. The web-based application allows novice users to configure end-to-end system models from preconfigured choices to investigate the instrumentation and controls implications of these designs and allows for the collaborative development of individual component models that can be benchmarked against test systems for potential inclusion in the model library. A description of this application is provided along with examples of its use and a listing and discussion of all the models that currently exist in the library.

  16. Full waveform modelling and misfit calculation using the VERCE platform

    Science.gov (United States)

    Garth, Thomas; Spinuso, Alessandro; Casarotti, Emanuele; Magnoni, Federica; Krischner, Lion; Igel, Heiner; Schwichtenberg, Horst; Frank, Anton; Vilotte, Jean-Pierre; Rietbrock, Andreas

    2016-04-01

    simulated and recorded waveforms, enabling seismologists to specify and steer their misfit analyses using existing python tools and libraries such as Pyflex and the dispel4py data-intensive processing library. All these processes, including simulation, data access, pre-processing and misfit calculation, are presented to the users of the gateway as dedicated and interactive workspaces. The VERCE platform can also be used to produce animations of seismic wave propagation through the velocity model, and synthetic shake maps. We demonstrate the functionality of the VERCE platform with two case studies, using the pre-loaded velocity model and mesh for Chile and Northern Italy. It is envisioned that this tool will allow a much greater range of seismologists to access these full waveform inversion tools, and aid full waveform tomographic and source inversion, synthetic shake map production and other full waveform applications, in a wide range of tectonic settings.

  17. Updating and prospective validation of a prognostic model for high sickness absence.

    Science.gov (United States)

    Roelen, C A M; Heymans, M W; Twisk, J W R; van Rhenen, W; Pallesen, S; Bjorvatn, B; Moen, B E; Magerøy, N

    2015-01-01

    To further develop and validate a Dutch prognostic model for high sickness absence (SA). Three-wave longitudinal cohort study of 2,059 Norwegian nurses. The Dutch prognostic model was used to predict high SA among Norwegian nurses at wave 2. Subsequently, the model was updated by adding person-related (age, gender, marital status, children at home, and coping strategies), health-related (BMI, physical activity, smoking, and caffeine and alcohol intake), and work-related (job satisfaction, job demands, decision latitude, social support at work, and both work-to-family and family-to-work spillover) variables. The updated model was then prospectively validated for predictions at wave 3. 1,557 (77%) nurses had complete data at wave 2 and 1,342 (65%) at wave 3. The risk of high SA was underestimated by the Dutch model, but discrimination between high-risk and low-risk nurses was fair after re-calibration to the Norwegian data. Gender, marital status, BMI, physical activity, smoking, alcohol intake, job satisfaction, job demands, decision latitude, support at the workplace, and work-to-family spillover were identified as potential predictors of high SA. However, these predictors did not improve the model's discriminative ability, which remained fair at wave 3. The prognostic model correctly identifies 73% of Norwegian nurses at risk of high SA, although additional predictors are needed before the model can be used to screen working populations for risk of high SA.
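The re-calibration step mentioned in the abstract can be sketched as recalibration-in-the-large: the original linear predictor is kept and only the model intercept is shifted so that the mean predicted risk matches the observed rate in the new cohort. All numbers below are hypothetical.

```python
import math

def predicted_risk(lp, intercept_shift=0.0):
    """Logistic risk from a linear predictor plus a calibration shift."""
    return 1.0 / (1.0 + math.exp(-(lp + intercept_shift)))

# Hypothetical linear predictors from the original (Dutch) model,
# and the observed high-SA rate in the new (Norwegian) cohort.
linear_predictors = [-2.0, -1.2, -0.5, 0.3, 1.1]
observed_rate = 0.35

# Find the intercept shift by bisection: mean predicted risk is a
# monotonically increasing function of the shift.
lo, hi = -5.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    mean_risk = sum(predicted_risk(lp, mid) for lp in linear_predictors) / 5
    if mean_risk < observed_rate:
        lo = mid
    else:
        hi = mid
shift = 0.5 * (lo + hi)
```

Recalibration of this kind fixes systematic under- or over-estimation of risk without altering the ranking of individuals, which is why discrimination can remain only "fair" even after calibration improves.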

  18. Updating representation of land surface-atmosphere feedbacks in airborne campaign modeling analysis

    Science.gov (United States)

    Huang, M.; Carmichael, G. R.; Crawford, J. H.; Chan, S.; Xu, X.; Fisher, J. A.

    2017-12-01

    An updated modeling system to support airborne field campaigns is being built at NASA Ames on Pleiades, with a focus on adjusting the representation of land surface-atmosphere feedbacks. The main updates, drawing on previous experience with ARCTAS-CARB and CalNex in the western US studying air pollution inflows, include: 1) migrating the land surface model coupled to WRF (Weather Research and Forecasting) from Noah to improved, more complex models, especially Noah-MP and Rapid Update Cycle; 2) enabling WRF land initialization with suitably spun-up land model output; and 3) incorporating satellite land cover, vegetation dynamics, and soil moisture data (i.e., assimilating Soil Moisture Active Passive data using the ensemble Kalman filter approach) into WRF. Examples are given comparing the model fields with aircraft observations available from the spring-summer 2016 field campaigns that took place on the eastern sides of continents (KORUS-AQ in South Korea and ACT-America in the eastern US), in air pollution export regions. Under fair-weather and stormy conditions, air pollution vertical distributions and column amounts, as well as the impact of the land surface, are compared. These comparisons help identify challenges and opportunities for LEO/GEO satellite remote sensing and modeling of air quality in the northern hemisphere. Finally, we briefly show applications of this system to simulating Australian conditions, which explore the needs for further development of the observing system in the southern hemisphere and inform the Clean Air and Urban Landscapes (https://www.nespurban.edu.au) modelers.

  19. Updated and integrated modelling of the 1995 - 2008 Mise-a-la-masse survey data in Olkiluoto

    International Nuclear Information System (INIS)

    Ahokas, T.; Paananen, M.

    2010-01-01

    Posiva Oy is preparing for the disposal of spent nuclear fuel into bedrock, focusing on Olkiluoto, Eurajoki. This is in accordance with the Decision-in-Principle of the State Council in 2000 and its ratification by the Parliament in 2001. The ONKALO underground characterization premises have been under construction since 2004. Posiva Oy aims to submit the construction licence application in 2012. To support the compilation of the safety case and the design and construction of the repository and ONKALO, an integrated Olkiluoto site description including geological, rock mechanics, hydrogeological, and hydrogeochemical models will be produced. Mise-a-la-masse (MAM) surveys have been carried out in the Olkiluoto area since 1995 to follow electric conductors from drillhole to drillhole, from drillhole to the ground surface, and also between the ONKALO access tunnel and drillholes or the ground surface. The data, and some visualisations of the data, have been presented as part of the reporting of the 1995 and 2008 surveys. The work presented in this paper includes modelling of all the measured data and combining single conductors modelled from different surveys into conductive zones. The results of this work will be used in updating the geological and hydrogeological models of the Olkiluoto site area. Several electrically conductive zones were modelled from the examined data; many of them coincide with known brittle deformation zones, but indications of many so-far-unknown zones were also detected. During the modelling, the Comsol Multiphysics software was tested for calculating the theoretical potential-field anomalies of different models. The test calculations showed that this software is useful in confirming the modelling results, especially in complicated cases. (orig.)

  20. Red bone marrow dose calculations in radiotherapy of prostate cancer based on the updated VCH adult male phantom

    Science.gov (United States)

    Ai, Jinqin; Xie, Tianwu; Sun, Wenjuan; Liu, Qian

    2014-04-01

    Red bone marrow (RBM) is an important dose-limiting tissue that has high radiosensitivity but is difficult to identify on clinical medical images. In this study, we investigated dose distribution in RBM for prostate cancer radiotherapy. Four suborgans were identified in the skeleton of the visible Chinese human phantom: cortical bone (CB), trabecular bone (TB), RBM, and yellow bone marrow (YBM). Dose distributions in the phantom were evaluated by the Monte Carlo method. When the left os coxae was taken as the organ-at-risk (OAR), the difference in absorbed dose between RBM and each CB and TB was up to 20%, but was much less (≤3.1%) between RBM and YBM. When the left os coxae and entire bone were both taken as OARs, RBM dose also increased with increasing planning target volume size. The results indicate the validity of using dose to homogeneous bone marrow mixture for estimating dose to RBM when RBM is not available in computational phantoms. In addition, the human skeletal system developed in this study provides a model for considering RBM dose in radiotherapy planning.

  1. Receiver Operating Characteristic Curve-Based Prediction Model for Periodontal Disease Updated With the Calibrated Community Periodontal Index.

    Science.gov (United States)

    Su, Chiu-Wen; Yen, Amy Ming-Fang; Lai, Hongmin; Chen, Hsiu-Hsi; Chen, Sam Li-Sheng

    2017-12-01

    The accuracy of a prediction model for periodontal disease using the community periodontal index (CPI) has been assessed using the area under a receiver operating characteristic (AUROC) curve. How the uncalibrated CPI, as measured by general dentists trained by periodontists in a large epidemiologic study, affects the performance of a prediction model has not yet been researched. A two-stage design was conducted, first proposing a validation study to calibrate the CPI between a senior periodontal specialist and the trained general dentists who measured CPIs in the main study of a nationwide survey. A Bayesian hierarchical logistic regression model was applied to estimate the non-updated and updated clinical weights used for building up risk scores. How the calibrated CPI affected the performance of the updated prediction model was quantified by comparing AUROC curves between the original and updated models. Estimates regarding calibration of the CPI obtained from the validation study were 66% and 85% for sensitivity and specificity, respectively. After updating, the clinical weights of each predictor were inflated, and the risk score for the highest risk category was elevated from 434 to 630. This update improved the AUROC performance of the two corresponding prediction models from 62.6% (95% confidence interval [CI]: 61.7% to 63.6%) for the non-updated model to 68.9% (95% CI: 68.0% to 69.6%) for the updated one, a statistically significant difference. An improved prediction model for periodontal disease, as measured by the calibrated CPI derived from a large epidemiologic survey, was thus demonstrated.
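The AUROC comparison used above can be sketched with the rank-based form of the AUROC (the probability that a randomly chosen case outranks a randomly chosen non-case, with ties counted as one half). The risk scores and outcomes below are made up for illustration; only the 434/630 top-category scores echo the abstract.

```python
def auroc(scores, labels):
    """Rank-based AUROC: P(score of random positive > score of random
    negative), counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores before and after updating, same outcomes.
baseline = auroc([434, 300, 380, 250, 410], [1, 0, 1, 0, 0])
updated = auroc([630, 290, 520, 240, 380], [1, 0, 1, 0, 0])
```

Because the AUROC depends only on the ranking of scores, inflating the clinical weights improves it only insofar as the recalibrated scores reorder cases relative to non-cases.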

  2. Improvements in the model of neutron calculations for research reactors

    International Nuclear Information System (INIS)

    Calzetta, O.; Leszczynski, F.

    1987-01-01

    Within the research program in the field of neutron physics calculations being carried out in the Nuclear Engineering Division at the Centro Atomico Bariloche, the errors that typical approximations introduce into the final results are being investigated. For research reactors of the MTR type, two approximations are investigated, for both high and low enrichment: the treatment of the geometry, and the method of calculating few-group cell cross sections, particularly in the resonance energy region. Commonly, the cell constants used for the whole-reactor calculation are obtained by homogenizing the full fuel elements by means of one-dimensional calculations. An improvement is made that explicitly includes the fuel element frames in the core calculation geometry. In addition, a detailed treatment in energy and space is used to find the resonance few-group cross sections, and the results of the detailed and approximated calculations are compared. The minimum number and the best mesh of energy groups needed for cell calculations are also determined. (Author)

  3. Improvements in the model of neutron calculations for research reactors

    International Nuclear Information System (INIS)

    Calzetta, Osvaldo; Leszczynski, Francisco

    1987-01-01

    Within the research program in the field of neutron physics calculations being carried out in the Nuclear Engineering Division at the Centro Atomico Bariloche, the errors that typical approximations introduce into the final results are investigated. For research reactors of the MTR type, two approximations are investigated, for both high and low enrichment: the treatment of the geometry, and the method of calculating few-group cell cross sections, particularly in the resonance energy region. Commonly, the cell constants used for the whole-reactor calculation are obtained by homogenizing the full fuel elements by one-dimensional calculations. An improvement is made that explicitly includes the fuel element frames in the core calculation geometry. In addition, a detailed treatment in energy and space is used to find the resonance few-group cross sections, and the results of the detailed and approximated calculations are compared. The minimum number and the best mesh of energy groups needed for cell calculations are also determined. (Author) [es

  4. EMPIRICAL RATE EQUATION MODEL and RATE CALCULATIONS OF HYDROGEN GENERATION FOR HANFORD TANK WASTE

    International Nuclear Information System (INIS)

    HU, T.A.

    2004-01-01

    Empirical rate equations are derived to estimate hydrogen generation based on chemical reactions, radiolysis of water and organic compounds, and corrosion processes. A comparison of the generation rates observed in the field with the rates calculated for twenty-eight tanks shows agreement within a factor of three. Revision 1 incorporates the available new data to update the equations. It also includes the contribution of total alpha activity to radiolysis.
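The structure of such an estimate can be sketched as follows: the total rate is a sum of mechanism-specific terms, and the field comparison checks agreement within a factor of three. All numbers are hypothetical, not the Hanford empirical-equation coefficients.

```python
def total_h2_rate(chemical, radiolysis_water, radiolysis_organic, corrosion):
    """Total H2 generation rate as a sum of mechanism-specific terms."""
    return chemical + radiolysis_water + radiolysis_organic + corrosion

def within_factor(observed, calculated, factor=3.0):
    """True if the calculated rate agrees with the observed rate
    within the given factor."""
    ratio = calculated / observed
    return 1.0 / factor <= ratio <= factor

# Illustrative magnitudes only (e.g., mol H2 per day for one tank).
calc = total_h2_rate(0.12, 0.30, 0.08, 0.05)
ok = within_factor(observed=0.40, calculated=calc)
```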

  5. From Risk Models to Loan Contracts: Austerity as the Continuation of Calculation by Other Means

    Directory of Open Access Journals (Sweden)

    Pierre Pénet

    2014-06-01

    Full Text Available This article analyses how financial actors sought to minimise financial uncertainties during the European sovereign debt crisis by employing simulations as legal instruments of market regulation. We first contrast two roles that simulations can play in sovereign debt markets: ‘simulation-hypotheses’, which work as bundles of constantly updated hypotheses with the goal of better predicting financial risks; and ‘simulation-fictions’, which provide fixed narratives about the present with the purpose of postponing the revision of market risks. Using ratings reports published by Moody’s on Greece and European Central Bank (ECB) regulations, we show that Moody’s stuck to a simulation-fiction and displayed rating inertia on Greece’s trustworthiness to prevent the destabilising effects that further downgrades would have on Greek borrowing costs. We also show that the multi-notch downgrade issued by Moody’s in June 2010 followed the ECB’s decision to remove ratings from its collateral eligibility requirements. Then, as regulators moved from ‘regulation through model’ to ‘regulation through contract’, ratings stopped functioning as simulation-fictions. Indeed, the conditions of the Greek bailout implemented in May 2010 replaced the CRAs’ models as the main simulation-fiction, which market actors employed to postpone the prospect of a Greek default. We conclude by presenting austerity measures as instruments of calculative governance rather than ideological compacts.

  6. An Updated Site Scale Saturated Zone Ground Water Transport Model for Yucca Mountain

    Science.gov (United States)

    Kelkar, S.; Ding, M.; Chu, S.; Robinson, B.; Arnold, B.; Meijer, A.

    2007-12-01

    The Yucca Mountain site-scale saturated zone transport model has been revised to incorporate the updated flow model based on a hydrogeologic framework model using the latest lithology data, increased grid resolution that better resolves the geology within the model domain, updated sorption coefficient (Kd) distributions for radionuclides of interest, and updated retardation factor distributions. The resulting numerical transport model is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. The transport model results are validated by comparing the model transport pathways with those derived from geochemical data, and by comparing the transit times from the repository footprint to the compliance boundary at the accessible environment with those derived from 14C-based age estimates. The transport model includes the processes of advection, dispersion, fracture flow, matrix diffusion in fractured volcanic formations, sorption, and colloid-facilitated transport. The transport of sorbing radionuclides in the aqueous phase is modeled as a linear, equilibrium process using the Kd model. The colloid-facilitated transport of radionuclides is modeled using two approaches: colloids with irreversibly embedded radionuclides undergo reversible filtration only, while the migration of radionuclides that sorb reversibly to colloids is modeled with modified values for sorption coefficients and matrix diffusion coefficients. The base case results predict a transport time of 810 years for the breakthrough of half of the mass of a nonreactive radionuclide originating at a point within the footprint of the repository to the compliance boundary of the accessible environment at a distance of ~18 km downstream. The transport time is quite sensitive to the specific discharge through the model, varying between 31 and 52,840 years for a range of specific-discharge multiplier values between 0.1 and 8.9.
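The linear-equilibrium Kd treatment mentioned above can be sketched with the standard retardation-factor relation, R = 1 + (rho_b / theta) * Kd, under which a sorbing species travels R times slower than a nonreactive tracer. The parameter values here are hypothetical, not the site-specific distributions used in the Yucca Mountain model.

```python
def retardation_factor(kd, bulk_density, porosity):
    """R = 1 + (rho_b / theta) * Kd, with kd in mL/g, bulk_density in
    g/cm^3, and porosity dimensionless."""
    return 1.0 + (bulk_density / porosity) * kd

# Hypothetical parameters for a sorbing radionuclide.
r = retardation_factor(kd=5.0, bulk_density=1.6, porosity=0.2)

# Scaling the nonreactive base-case breakthrough time from the abstract
# by R gives the corresponding sorbing-species transit time.
nonreactive_transit_years = 810.0
sorbing_transit_years = nonreactive_transit_years * r
```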

  7. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. I. Biological model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Several replacement models have been presented in the literature. In other application areas, such as dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological... improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level, and standard software, which have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model...

  8. A Review of the Updated Pharmacophore for the Alpha 5 GABA(A) Benzodiazepine Receptor Model

    Directory of Open Access Journals (Sweden)

    Terry Clayton

    2015-01-01

    Full Text Available An updated model of the GABA(A) benzodiazepine receptor pharmacophore of the α5-BzR/GABA(A) subtype has been constructed, prompted by the synthesis of subtype-selective ligands in light of the recent developments in ligand synthesis, behavioral studies, and molecular modeling studies of the binding site itself. A number of BzR/GABA(A) α5-subtype-selective compounds were synthesized, notably the α5-subtype-selective inverse agonist PWZ-029 (1), which is active in enhancing cognition in both rodents and primates. In addition, a chiral positive allosteric modulator (PAM), SH-053-2′F-R-CH3 (2), has been shown to reverse the deleterious effects in the MAM model of schizophrenia as well as alleviate constriction in airway smooth muscle. Presented here is an updated model of the pharmacophore for α5β2γ2 Bz/GABA(A) receptors, including a rendering of PWZ-029 docked within the α5 binding pocket showing specific interactions of the molecule with the receptor. Differences in the included volume as compared to α1β2γ2, α2β2γ2, and α3β2γ2 are illustrated for clarity. These new models enhance the ability to understand structural characteristics of ligands that act as agonists, antagonists, or inverse agonists at the Bz BS of GABA(A) receptors.

  9. Updating Finite Element Model of a Wind Turbine Blade Section Using Experimental Modal Analysis Results

    Directory of Open Access Journals (Sweden)

    Marcin Luczak

    2014-01-01

    Full Text Available This paper presents selected results and aspects of multidisciplinary and interdisciplinary research oriented toward the experimental and numerical study of the structural dynamics of a bend-twist-coupled full-scale section of a wind turbine blade structure. The main goal of the research is to validate a finite element model of the modified wind turbine blade section, mounted in a flexible support structure, against the experimental results. Bend-twist coupling was implemented by adding angled unidirectional layers on the suction and pressure sides of the blade. Dynamic tests and simulations were performed on a section of a full-scale wind turbine blade provided by Vestas Wind Systems A/S. The numerical results are compared to the experimental measurements, and the discrepancies are assessed by the natural frequency difference and the modal assurance criterion. Based on a sensitivity analysis, a set of model parameters was selected for the model updating process. Design-of-experiments and response surface methods were implemented to find the values of the model parameters yielding results closest to the experimental ones. The updated finite element model produces results more consistent with the measurement outcomes.
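The modal assurance criterion (MAC) used above to compare numerical and experimental mode shapes can be sketched for real-valued shapes as MAC = |phi_a . phi_e|^2 / ((phi_a . phi_a)(phi_e . phi_e)), with values near 1 indicating well-correlated modes. The mode-shape vectors below are hypothetical.

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal assurance criterion between two real mode-shape vectors."""
    num = abs(phi_a @ phi_e) ** 2
    den = (phi_a @ phi_a) * (phi_e @ phi_e)
    return num / den

phi_fem = np.array([1.0, 0.8, 0.3, -0.2])      # FE-predicted mode shape
phi_exp = np.array([0.98, 0.75, 0.33, -0.18])  # measured mode shape
m = mac(phi_fem, phi_exp)
```

Because the MAC is invariant to the scaling of either shape, it isolates shape correlation from amplitude, which is why it is paired with the natural frequency difference in the updating process.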

  10. Experimental Update of the Overtopping Model Used for the Wave Dragon Wave Energy Converter

    Directory of Open Access Journals (Sweden)

    Erik Friis-Madsen

    2013-04-01

    Full Text Available An overtopping model specifically suited to the Wave Dragon is needed in order to improve the reliability of its performance estimates. The model must comprehend all relevant physical processes that affect overtopping and be flexible enough to adapt to local conditions and device configurations. An experimental investigation was carried out to update an existing formulation suited for 2D draft-limited, low-crested structures, in order to include the effects on the overtopping flow of the wave steepness, the 3D geometry of the Wave Dragon, the wing reflectors, the device motions, and the non-rigid connection between platform and reflectors. The study was carried out in four phases, each specifically targeted at quantifying one of these effects through a sensitivity analysis and at modeling it through custom-made parameters. These parameters depend on features of the wave or the device configuration, all of which can be measured in real time. Instead of using new fitting coefficients, this approach allows a broader applicability of the model beyond the Wave Dragon case, to any overtopping WEC or structure within the range of tested conditions. The reliability of overtopping predictions for the Wave Dragon has increased, as the updated model offers improved accuracy and precision with respect to the former version.

  11. Resolving structural errors in a spatially distributed hydrologic model using ensemble Kalman filter state updates

    Directory of Open Access Journals (Sweden)

    J. H. Spaaks

    2013-09-01

    Full Text Available In hydrological modeling, model structures are developed in an iterative cycle as more and different types of measurements become available and our understanding of the hillslope or watershed improves. However, with increasing complexity of the model, it becomes more and more difficult to detect which parts of the model are deficient, or which processes should also be incorporated into the model during the next development step. In this study, we first compare two methods, the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA) and the Simultaneous parameter Optimization and Data Assimilation algorithm (SODA), to calibrate a purposely deficient 3-D hillslope-scale model to error-free, artificially generated measurements. We use a multi-objective approach based on distributed pressure head at the soil–bedrock interface and hillslope-scale discharge and water balance. For these idealized circumstances, SODA's usefulness as a diagnostic methodology is demonstrated by its ability to identify the timing and location of processes that are missing in the model. We show that SODA's state updates provide information that could readily be incorporated into an improved model structure, and that this type of information cannot be gained from parameter estimation methods such as SCEM-UA. We then expand on the SODA result by performing another calibration, in which we investigate whether SODA's state updating patterns are still capable of providing insight into model structure deficiencies when there are fewer measurements, which are moreover subject to measurement noise. We conclude that SODA can help guide the discussion between experimentalists and modelers by providing accurate and detailed information on how to improve spatially distributed hydrologic models.

  12. Image processing of full-field strain data and its use in model updating

    Energy Technology Data Exchange (ETDEWEB)

    Wang, W; Mottershead, J E [Centre for Engineering Dynamics, School of Engineering, University of Liverpool, UK, L69 3GH (United Kingdom); Sebastian, C M; Patterson, E A, E-mail: wangweizhuo@gmail.com [Composite Vehicle Research Center, Michigan State University, Lansing, MI (United States)

    2011-07-19

    Finite element model updating is an inverse problem based on measured structural outputs, typically natural frequencies. Full-field responses such as static stress/strain patterns and vibration mode shapes contain valuable information for model updating, but within large volumes of highly redundant data. Pattern recognition and image processing provide feasible techniques to extract effective and efficient information, often known as shape features, from this data. For instance, the Zernike polynomials, having the properties of orthogonality and rotational invariance, are powerful decomposition kernels for a shape defined within a unit circle. In this paper, full-field strain patterns for a specimen, in the form of a square plate with a circular hole, under a tensile load are considered. Effective shape features can be constructed by a set of modified Zernike polynomials. The modification includes the application of a weighting function to the Zernike polynomials so that high strain magnitudes around the hole are well represented. The Gram-Schmidt process is then used to ensure orthogonality of the obtained decomposition kernels over the domain of the specimen. The difference between full-field strain patterns measured by digital image correlation (DIC) and reconstructed using 15 shape features (Zernike moment descriptors, ZMDs) at different steps in the elasto-plastic deformation of the specimen is found to be very small. It is significant that only a very small number of shape features are necessary and sufficient to represent the full-field data. Model updating of nonlinear elasto-plastic material properties is carried out by adjusting the parameters of an FE model until the FE strain pattern converges upon the measured strains as determined using ZMDs.
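A minimal sketch of Zernike-moment shape features is given below, using the plain radial polynomials over the unit disc; the paper's weighting function and Gram-Schmidt orthogonalization over the specimen domain are not included, and the grid size and test field are illustrative.

```python
import math
import numpy as np

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^m on [0, 1] (|m| <= n, n - |m| even)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * math.factorial(n - k)
             / (math.factorial(k)
                * math.factorial((n + m) // 2 - k)
                * math.factorial((n - m) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    return R

def zernike_moment(field, n, m):
    """Project a square field onto the (n, m) cosine Zernike kernel
    over the unit disc, by simple Riemann summation."""
    N = field.shape[0]
    y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    kernel = zernike_radial(n, m, rho) * np.cos(m * theta)
    return (field * kernel * mask).sum() * (2.0 / N) ** 2

# A radially symmetric field projects onto (n=0, m=0) but not (n=1, m=1)
N = 128
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
field = np.exp(-(x ** 2 + y ** 2))
a00 = zernike_moment(field, 0, 0)
a11 = zernike_moment(field, 1, 1)
```

A measured strain pattern is then represented by its first few moments, and model updating compares ZMDs of the FE prediction with ZMDs of the DIC measurement instead of comparing millions of pixels directly.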

  13. Structural updates of alignment of protein domains and consequences on evolutionary models of domain superfamilies.

    Science.gov (United States)

    Mutt, Eshita; Rani, Sudha Sane; Sowdhamini, Ramanathan

    2013-11-15

    Influx of newly determined crystal structures into primary structural databases is increasing at a rapid pace. This leads to updates of primary databases and their dependent secondary databases, which makes large-scale analysis of structures even more challenging. Hence, it becomes essential to compare and appreciate the replacement of data and the inclusion of new data between two updates. PASS2 is a database that retains structure-based sequence alignments of protein domain superfamilies and relies on the SCOP database for its hierarchy and definition of superfamily members. Since accurate alignments of distantly related proteins are useful evolutionary models for depicting variations within protein superfamilies, this study aims to trace the changes in data between PASS2 updates. In this study, differences in superfamily compositions, family constituents and length variations between different versions of PASS2 have been tracked. Studying length variations in protein domains, which are introduced by indels (insertions/deletions), is important because these indels act as evolutionary signatures, introducing variations in substrate specificity and domain interactions and sometimes even regulating protein stability. With the objective of classifying the nature and source of variations in the superfamilies during transitions (between the different versions of PASS2), increasing length-rigidity of the superfamilies in the recent version is observed. In order to study such length-variant superfamilies in detail, an improved classification approach is also presented, which divides the superfamilies into distinct groups based on their extent of length variation. An objective study in terms of transition between the database updates, detailed investigation of the new/old members and examination of their structural alignments is non-trivial and will help researchers in designing experiments on specific superfamilies, in various modelling studies, in linking

  14. Determination of equilibrium electron temperature and times using an electron swarm model with BOLSIG+ calculated collision frequencies and rate coefficients

    International Nuclear Information System (INIS)

    Pusateri, Elise N.; Morris, Heidi E.; Nelson, Eric M.; Ji, Wei

    2015-01-01

    Electromagnetic pulse (EMP) events produce low-energy conduction electrons from Compton electron or photoelectron ionizations with air. It is important to understand how conduction electrons interact with air in order to accurately predict EMP evolution and propagation. An electron swarm model can be used to monitor the time evolution of conduction electrons in an environment characterized by electric field and pressure. Here a swarm model is developed that is based on the coupled ordinary differential equations (ODEs) described by Higgins et al. (1973), hereinafter HLO. The ODEs characterize the swarm electric field, electron temperature, electron number density, and drift velocity. Important swarm parameters, the momentum transfer collision frequency, energy transfer collision frequency, and ionization rate, are calculated and compared to the previously reported fitted functions given in HLO. These swarm parameters are found using BOLSIG+, a two term Boltzmann solver developed by Hagelaar and Pitchford (2005), which utilizes updated cross sections from the LXcat website created by Pancheshnyi et al. (2012). We validate the swarm model by comparing to experimental effective ionization coefficient data in Dutton (1975) and drift velocity data in Ruiz-Vargas et al. (2010). In addition, we report on electron equilibrium temperatures and times for a uniform electric field of 1 StatV/cm for atmospheric heights from 0 to 40 km. We show that the equilibrium temperature and time are sensitive to the modifications in the collision frequencies and ionization rate based on the updated electron interaction cross sections
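The relaxation of electron temperature toward its equilibrium value can be sketched with a toy ODE integration. This is a stand-in for the full HLO coupled system: the single relaxation equation, the constant collision frequency and all numerical values below are illustrative assumptions, not BOLSIG+ outputs.

```python
def integrate_swarm(T0, T_eq, nu, dt, steps):
    """Forward-Euler sketch of a relaxation-type swarm equation,
    dT/dt = -nu * (T - T_eq), where nu plays the role of an (assumed
    constant) energy-transfer collision frequency."""
    T = T0
    history = [T]
    for _ in range(steps):
        T += -nu * (T - T_eq) * dt
        history.append(T)
    return history

# The electron temperature relaxes toward equilibrium on a ~1/nu timescale
hist = integrate_swarm(T0=10.0, T_eq=1.0, nu=2.0, dt=0.01, steps=500)
```

In the actual swarm model, nu and the ionization rate depend on the electron energy and field, which is why the BOLSIG+-derived coefficients shift the computed equilibrium temperatures and times.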

  15. Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.

    Science.gov (United States)

    Knudsen, Thomas; Aasbjerg Nielsen, Allan

    2013-04-01

    The Danish national elevation model, DK-DEM, was introduced in 2009 and is based on LiDAR data collected in the time frame 2005-2007. Hence, DK-DEM is aging, and it is time to consider how to integrate new data with the current model in a way that improves the representation of new landscape features, while still preserving the overall (very high) quality of the model. In LiDAR terms, 2005 is equivalent to some time between the palaeolithic and the neolithic. So evidently, when (and if) an update project is launched, we may expect some notable improvements due to the technical and scientific developments of the last half decade. To estimate the magnitude of these potential improvements, and to devise efficient and effective ways of integrating the new and old data, we are currently carrying out a number of case studies based on comparisons between the current terrain model (with a ground sample distance, GSD, of 1.6 m) and a number of new high resolution point clouds (10-70 points/m2). Not knowing anything about the terms of a potential update project, we consider multiple scenarios, ranging from business as usual (a new model with the same GSD, but improved precision) to aggressive upscaling (a new model with 4 times better GSD, i.e. a 16-fold increase in the amount of data). Especially in the latter case, speeding up the gridding process is important. Luckily, recent results from one of our case studies reveal that for very high resolution data in smooth terrain (which is the common case in Denmark), using the local mean (LM) as grid value estimator is only negligibly worse than using the theoretically "best" estimator, i.e. ordinary kriging (OK) with rigorous modelling of the semivariogram. The bias in a leave-one-out cross validation differs on the micrometer level, while the RMSE differs on the 0.1 mm level. This is fortunate, since an LM estimator can be implemented in plain stream mode, letting the points from the unstructured point cloud (i.e. no TIN generation) stream
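The stream-mode local mean estimator described above can be sketched as a single pass over the point cloud, accumulating a running sum and count per grid cell. The cell size and the sample points below are illustrative.

```python
from collections import defaultdict

def stream_local_mean(points, cell):
    """Grid an unstructured (x, y, z) point stream as per-cell local means.

    One pass, no TIN generation: each point updates the running (sum, count)
    of its cell, so memory scales with occupied cells, not with points.
    """
    acc = defaultdict(lambda: [0.0, 0])
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        acc[key][0] += z
        acc[key][1] += 1
    return {k: s / n for k, (s, n) in acc.items()}

# Four points on a 1.6 m grid: the first two fall in one cell and are averaged
pts = [(0.2, 0.3, 10.0), (1.0, 1.2, 12.0), (2.0, 0.5, 20.0), (5.0, 5.0, 7.0)]
dem = stream_local_mean(pts, cell=1.6)
```

Because each point is touched exactly once and discarded, this scales to the 16-fold data increase of the aggressive-upscaling scenario without holding the cloud in memory, which kriging cannot easily do.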

  16. Calculational advance in the modeling of fuel-coolant interactions

    International Nuclear Information System (INIS)

    Bohl, W.R.

    1982-01-01

    A new technique is applied to numerically simulate a fuel-coolant interaction. The technique is based on the ability to calculate separate space- and time-dependent velocities for each of the participating components. In the limiting case of a vapor explosion, this framework allows calculation of the pre-mixing phase of film boiling and interpenetration of the working fluid by hot liquid, which is required for extrapolating from experiments to a reactor hypothetical accident. Qualitative results compare favorably with published experimental data in which an iron-alumina mixture was poured into water. Differing results are predicted with LMFBR materials

  17. Updating a B. anthracis Risk Model with Field Data from a Bioterrorism Incident.

    Science.gov (United States)

    Hong, Tao; Gurian, Patrick L

    2015-06-02

    In this study, a Bayesian framework was applied to update a model of pathogen fate and transport in the indoor environment. Distributions for model parameters (e.g., release quantity of B. anthracis spores, risk of illness, spore settling velocity, resuspension rate, sample recovery efficiency) were updated by comparing model predictions with measurements of B. anthracis spores made after one of the 2001 anthrax letter attacks. The updating process, which was implemented by using Markov chain Monte Carlo (MCMC) methods, significantly reduced the uncertainties of inputs with uninformed prior estimates: the total quantity of spores released, the amount of spores exiting the room, and the risk to occupants. In contrast, uncertainties were not greatly reduced for inputs for which informed prior data were available: deposition rates, resuspension rates, and sample recovery efficiencies. This suggests that prior estimates of these quantities that were obtained from a review of the technical literature are consistent with the observed behavior of spores in an actual attack. Posterior estimates of mortality risk for people in the room when the spores were released are on the order of 0.01 to 0.1, which supports the decision to administer prophylactic antibiotics. Multivariate sensitivity analyses were conducted to assess how effective different measurements were at reducing uncertainty in the estimated risk for the prior scenario. This analysis revealed that if the size distribution of the released particulates is known, then environmental sampling can be limited to accurately characterizing floor concentrations; otherwise, samples from multiple locations, as well as particulate and building air circulation parameters, need to be measured.
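The MCMC updating step can be sketched with a toy random-walk Metropolis sampler. The study's fate-and-transport model is replaced here by an assumed Gaussian toy posterior over a single parameter (a notional release quantity), so all numbers are illustrative.

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=1):
    """Minimal random-walk Metropolis sampler for a 1-D posterior."""
    random.seed(seed)
    x, samples = x0, []
    lp = log_post(x)
    for _ in range(n):
        xp = x + random.gauss(0.0, step)   # symmetric proposal
        lpp = log_post(xp)
        if math.log(random.random()) < lpp - lp:
            x, lp = xp, lpp                # accept
        samples.append(x)
    return samples

# Toy posterior: uniform prior on the release quantity times a Gaussian
# likelihood whose "observations" imply a value near 5.0 (all assumed)
def log_post(x):
    if not 0.0 < x < 100.0:                # assumed prior bounds
        return -math.inf
    return -0.5 * ((x - 5.0) / 1.0) ** 2

samples = metropolis(log_post, x0=50.0, step=2.0, n=5000)
post_mean = sum(samples[1000:]) / len(samples[1000:])
```

As in the study, a diffuse (uninformed) prior is pulled sharply toward the data, whereas a parameter whose prior already matches the likelihood would barely move.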

  18. Comparison of Calculation Models for Bucket Foundation in Sand

    DEFF Research Database (Denmark)

    Vaitkunaite, Evelina; Molina, Salvador Devant; Ibsen, Lars Bo

    The possibility of fast and rather precise preliminary offshore foundation design is desirable. The ultimate limit state of bucket foundation is investigated using three different geotechnical calculation tools: [Ibsen 2001] an analytical method, LimitState:GEO and Plaxis 3D. The study has focuse...

  19. National Stormwater Calculator - Version 1.1 (Model)

    Science.gov (United States)

    EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico). The SWC estimates runoff at a site based on available information ...

  20. Finite element model updating of concrete structures based on imprecise probability

    Science.gov (United States)

    Biswal, S.; Ramaswamy, A.

    2017-09-01

    Imprecise probability based methods are developed in this study for parameter estimation, in finite element model updating for concrete structures, when the measurements are imprecisely defined. Bayesian analysis using the Metropolis-Hastings algorithm for parameter estimation is generalized to incorporate the imprecision present in the prior distribution, in the likelihood function, and in the measured responses. Three different cases are considered: (i) imprecision is present in the prior distribution and in the measurements only, (ii) imprecision is present in the parameters of the finite element model and in the measurements only, and (iii) imprecision is present in the prior distribution, in the parameters of the finite element model, and in the measurements. Procedures are also developed for integrating the imprecision in the parameters of the finite element model in the finite element software Abaqus. The proposed methods are then verified against reinforced concrete beams and prestressed concrete beams tested in our laboratory as part of this study.

  1. State updating of a distributed hydrological model with Ensemble Kalman Filtering: Effects of updating frequency and observation network density on forecast accuracy

    Science.gov (United States)

    Rakovec, O.; Weerts, A.; Hazenberg, P.; Torfs, P.; Uijlenhoet, R.

    2012-12-01

    This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model (Rakovec et al., 2012a). The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of such an hourly spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property). Synthetic and real world experiments are carried out for the Upper Ourthe (1600 km2), a relatively quickly responding catchment in the Belgian Ardennes. The uncertain precipitation model forcings were obtained using a time-dependent multivariate spatial conditional simulation method (Rakovec et al., 2012b), which is further made conditional on preceding simulations. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty. Rakovec, O., Weerts, A. H., Hazenberg, P., Torfs, P. J. J. F., and Uijlenhoet, R.: State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy, Hydrol. Earth Syst. Sci. Discuss., 9, 3961-3999, doi:10.5194/hessd-9-3961-2012, 2012a. Rakovec, O., Hazenberg, P., Torfs, P. J. J. F., Weerts, A. H., and Uijlenhoet, R.: Generating spatial precipitation ensembles: impact of

  2. State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy

    Directory of Open Access Journals (Sweden)

    O. Rakovec

    2012-09-01

    Full Text Available This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model. The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of an hourly spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property).

    Synthetic and real world experiments are carried out for the Upper Ourthe (1600 km2), a relatively quickly responding catchment in the Belgian Ardennes. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty.
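The EnKF analysis step underlying this kind of state updating can be sketched as below (the stochastic, perturbed-observations variant). The state dimension, ensemble size and observation operator are illustrative, not the actual HBV-96 setup.

```python
import numpy as np

def enkf_update(X, y, H, r, rng):
    """One EnKF analysis step with a single (scalar) observation.

    X : (n_state, n_ens) ensemble of model states
    y : observed value (e.g. discharge at a gauge)
    H : (1, n_state) linear observation operator
    r : observation error variance
    """
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)     # predicted-obs anomalies
    Pyy = (HA @ HA.T) / (n - 1) + r              # innovation covariance
    Pxy = (A @ HA.T) / (n - 1)                   # state-obs cross covariance
    K = Pxy / Pyy                                # Kalman gain (scalar obs)
    Y = y + rng.normal(0.0, np.sqrt(r), size=(1, n))  # perturbed observations
    return X + K @ (Y - HX)                      # updated ensemble

rng = np.random.default_rng(0)
X = rng.normal(10.0, 2.0, size=(3, 200))   # 3 storage states, 200 members
H = np.array([[1.0, 0.0, 0.0]])            # observe the first state only
Xa = enkf_update(X, y=14.0, H=H, r=0.25, rng=rng)
```

Adding interior gauges corresponds to stacking more rows into H and y, which is the "augmentation of the observation vector" the abstract finds more beneficial than updating more often.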

  3. Finite element model updating using the shadow hybrid Monte Carlo technique

    Science.gov (United States)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form, as is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the system (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers an important MCMC approach for dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm, a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
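The plain HMC baseline that SHMC modifies can be sketched as follows: leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject on the energy error. The shadow-Hamiltonian correction itself is not shown, and the 1-D Gaussian target is purely illustrative.

```python
import math
import random

def hmc_sample(grad_U, U, q0, eps, L, n, seed=0):
    """Basic Hybrid (Hamiltonian) Monte Carlo for a 1-D target exp(-U(q))."""
    random.seed(seed)
    q, samples = q0, []
    for _ in range(n):
        p = random.gauss(0.0, 1.0)            # resample momentum
        q_new, p_new = q, p
        p_new -= 0.5 * eps * grad_U(q_new)    # leapfrog: initial half step
        for i in range(L):
            q_new += eps * p_new              # full position step
            if i != L - 1:
                p_new -= eps * grad_U(q_new)  # full momentum step
        p_new -= 0.5 * eps * grad_U(q_new)    # final half step
        # Metropolis accept/reject on the total energy error
        dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p ** 2)
        if math.log(random.random()) < -dH:
            q = q_new
        samples.append(q)
    return samples

# Standard normal target: U(q) = q^2 / 2, grad_U(q) = q
samples = hmc_sample(lambda q: q, lambda q: 0.5 * q * q,
                     q0=3.0, eps=0.2, L=10, n=4000)
mean = sum(samples[500:]) / len(samples[500:])
```

The rejection step above is exactly where HMC degrades for large systems and time steps, since the leapfrog energy error dH grows; SHMC replaces the Hamiltonian in the accept/reject test with a shadow Hamiltonian that the integrator conserves more accurately.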

  4. Calculating salt loads to Great Salt Lake and the associated uncertainties for water year 2013; updating a 48 year old standard

    Science.gov (United States)

    Shope, Christopher L.; Angeroth, Cory E.

    2015-01-01

    Effective management of surface waters requires a robust understanding of spatiotemporal constituent loadings from upstream sources and the uncertainty associated with these estimates. We compared the total dissolved solids (TDS) loading into the Great Salt Lake (GSL) for water year 2013 with estimates from previously sampled periods in the early 1960s. We also provide updated results on GSL loading, quantitatively bounded by sampling uncertainties, which are useful for current and future management efforts. Our statistical loading results were more accurate than those from simple regression models. Our results indicate that TDS loading to the GSL in water year 2013 was 14.6 million metric tons, with uncertainty ranging from 2.8 to 46.3 million metric tons, which varies greatly from previous regression estimates for water year 1964 of 2.7 million metric tons. Results also indicate that locations with increased sampling frequency are correlated with decreasing confidence intervals. Because time is incorporated into the LOADEST models, discrepancies are largely expected to be a function of temporally lagged salt storage delivery to the GSL associated with terrestrial and in-stream processes. By incorporating temporally variable estimates and statistically derived uncertainty of these estimates, we have provided quantifiable variability in the annual estimates of dissolved solids loading into the GSL. Further, our results support the need for increased monitoring of dissolved solids loading into saline lakes like the GSL by demonstrating the uncertainty associated with different levels of sampling frequency.

  5. Perturbation theory calculations of model pair potential systems

    Energy Technology Data Exchange (ETDEWEB)

    Gong, Jianwu [Iowa State Univ., Ames, IA (United States)

    2016-01-01

    Helmholtz free energy is one of the most important thermodynamic properties for condensed matter systems. It is closely related to other thermodynamic properties such as chemical potential and compressibility. It is also the starting point for studies of interfacial properties and phase coexistence if free energies of different phases can be obtained. In this thesis, we will use an approach based on the Weeks-Chandler-Andersen (WCA) perturbation theory to calculate the free energy of both solid and liquid phases of Lennard-Jones pair potential systems and the free energy of liquid states of Yukawa pair potentials. Our results indicate that the perturbation theory provides an accurate approach to the free energy calculations of liquid and solid phases, based upon comparisons with results from molecular dynamics (MD) and Monte Carlo (MC) simulations.

  6. Performance Calculations - and Appendix I - Model XC-120 (M-107)

    Science.gov (United States)

    1950-09-25

    ... cargo and ... point. Drop pack and return to base. Take-off with cargo and return to halfway point. Gross weight defined at base without pack. Take-off ... Special Conditions or Standard Aircraft Characteristics ... Performance presented herein is that required by reference ( ) for Standard Aircraft ... horsepower available as used in the performance calculations of this report is defined as: THP = ..., where BHP = engine brake horsepower from engine ...

  7. Calculation of single chain cellulose elasticity using fully atomistic modeling

    Science.gov (United States)

    Xiawa Wu; Robert J. Moon; Ashlie Martini

    2011-01-01

    Cellulose nanocrystals, a potential base material for green nanocomposites, are ordered bundles of cellulose chains. The properties of these chains have been studied for many years using atomic-scale modeling. However, model predictions are difficult to interpret because of the significant dependence of predicted properties on model details. The goal of this study is...

  8. A modified calculation model for groundwater flowing to horizontal ...

    Indian Academy of Sciences (India)

    The simulation models for groundwater flowing to horizontal seepage wells proposed by Wang and Zhang (2007) are based on the theory of coupled seepage-pipe flow model which treats the well pipe as a highly permeable medium. However, the limitations of the existing model were found during applications. Specifically ...

  9. Comparison of the performance of net radiation calculation models

    DEFF Research Database (Denmark)

    Kjærsgaard, Jeppe Hvelplund; Cuenca, R.H.; Martinez-Cob, A.

    2009-01-01

    Daily values of net radiation are used in many applications of crop-growth modeling and agricultural water management. Measurements of net radiation are not part of the routine measurement program at many weather stations and are commonly estimated based on other meteorological parameters. Daily....... The performance of the empirical models was nearly identical at all sites. Since the empirical models were easier to use and simpler to calibrate than the physically based models, the results indicate that the empirical models can be used as a good substitute for the physically based ones when available...

  10. Inhibition, Updating Working Memory, and Shifting Predict Reading Disability Symptoms in a Hybrid Model: Project KIDS.

    Science.gov (United States)

    Daucourt, Mia C; Schatschneider, Christopher; Connor, Carol M; Al Otaiba, Stephanie; Hart, Sara A

    2018-01-01

    Recent achievement research suggests that executive function (EF), a set of regulatory processes that control both thought and action necessary for goal-directed behavior, is related to typical and atypical reading performance. This project examines the relation of EF, as measured by its components, Inhibition, Updating Working Memory, and Shifting, with a hybrid model of reading disability (RD). Our sample included 420 children who participated in a broader intervention project when they were in KG-third grade (age M = 6.63 years, SD = 1.04 years, range = 4.79-10.40 years). At the time their EF was assessed, using a parent-report Behavior Rating Inventory of Executive Function (BRIEF), they had a mean age of 13.21 years (SD = 1.54 years; range = 10.47-16.63 years). The hybrid model of RD was operationalized as a composite consisting of four symptoms, and set so that any child could have any one, any two, any three, any four, or none of the symptoms included in the hybrid model. The four symptoms include low word reading achievement, unexpected low word reading achievement, poorer reading comprehension compared to listening comprehension, and dual-discrepancy response-to-intervention, requiring both low achievement and low growth in word reading. The results of our multilevel ordinal logistic regression analyses showed a significant relation between all three components of EF (Inhibition, Updating Working Memory, and Shifting) and the hybrid model of RD, and that the strength of EF's predictive power for RD classification was the highest when RD was modeled as having at least one or more symptoms. Importantly, the chances of being classified as having RD increased as EF performance worsened and decreased as EF performance improved. 
The question of whether any one EF component would emerge as a superior predictor was also examined and results showed that Inhibition, Updating Working Memory, and Shifting were equally valuable as predictors of the hybrid model of RD

  11. Inhibition, Updating Working Memory, and Shifting Predict Reading Disability Symptoms in a Hybrid Model: Project KIDS

    Directory of Open Access Journals (Sweden)

    Mia C. Daucourt

    2018-03-01

    Full Text Available Recent achievement research suggests that executive function (EF), a set of regulatory processes that control both thought and action necessary for goal-directed behavior, is related to typical and atypical reading performance. This project examines the relation of EF, as measured by its components, Inhibition, Updating Working Memory, and Shifting, with a hybrid model of reading disability (RD). Our sample included 420 children who participated in a broader intervention project when they were in KG-third grade (age M = 6.63 years, SD = 1.04 years, range = 4.79–10.40 years). At the time their EF was assessed, using a parent-report Behavior Rating Inventory of Executive Function (BRIEF), they had a mean age of 13.21 years (SD = 1.54 years; range = 10.47–16.63 years). The hybrid model of RD was operationalized as a composite consisting of four symptoms, and set so that any child could have any one, any two, any three, any four, or none of the symptoms included in the hybrid model. The four symptoms include low word reading achievement, unexpected low word reading achievement, poorer reading comprehension compared to listening comprehension, and dual-discrepancy response-to-intervention, requiring both low achievement and low growth in word reading. The results of our multilevel ordinal logistic regression analyses showed a significant relation between all three components of EF (Inhibition, Updating Working Memory, and Shifting) and the hybrid model of RD, and that the strength of EF’s predictive power for RD classification was the highest when RD was modeled as having at least one or more symptoms. Importantly, the chances of being classified as having RD increased as EF performance worsened and decreased as EF performance improved. The question of whether any one EF component would emerge as a superior predictor was also examined and results showed that Inhibition, Updating Working Memory, and Shifting were equally valuable as predictors of the

  12. Application of a Bayesian algorithm for the Statistical Energy model updating of a railway coach

    DEFF Research Database (Denmark)

    Sadri, Mehran; Brunskog, Jonas; Younesian, Davood

    2016-01-01

    The classical statistical energy analysis (SEA) theory is a common approach for vibroacoustic analysis of coupled complex structures, being efficient to predict high-frequency noise and vibration of engineering systems. There are, however, some limitations in applying the conventional SEA. ... the performance of the proposed strategy, the SEA model updating of a railway passenger coach is carried out. First, a sensitivity analysis is carried out to select the most sensitive parameters of the SEA model. For the selected parameters of the model, prior probability density functions are then taken...

  13. Experimental Update of the Overtopping Model Used for the Wave Dragon Wave Energy Converter

    DEFF Research Database (Denmark)

    Parmeggiani, Stefano; Kofoed, Jens Peter; Friis-Madsen, Erik

    2013-01-01

    An overtopping model specifically suited for Wave Dragon is needed in order to improve the reliability of its performance estimates. The model shall be comprehensive of all relevant physical processes that affect overtopping and flexible to adapt to any local conditions and device configuration. An experimental investigation is carried out to update an existing formulation suited for 2D draft-limited, low-crested structures, in order to include the effects on the overtopping flow of the wave steepness, the 3D geometry of Wave Dragon, the wing reflectors, the device motions and the non-rigid connection between platform and reflectors. The study is carried out in four phases, each of them specifically targeted at quantifying one of these effects through a sensitivity analysis and at modeling it through custom-made parameters, which depend on features of the wave or the device configuration.

  14. CLEAR (Calculates Logical Evacuation And Response): A generic transportation network model for the calculation of evacuation time estimates

    International Nuclear Information System (INIS)

    Moeller, M.P.; Desrosiers, A.E.; Urbanik, T. II

    1982-03-01

    This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response) which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies. (author)
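The speed-density relation at the heart of such traffic-flow simulations can be sketched in a few lines. This is a minimal illustration assuming a linear Greenshields law; the report does not document CLEAR's exact functional form, so the function and its constants are placeholders.

```python
def segment_speed(density, v_free=88.0, k_jam=200.0):
    """Travel speed (km/h) on a road segment as a function of vehicle
    density (veh/km). Assumed linear Greenshields law: speed falls from
    free-flow v_free at zero density to zero at jam density k_jam."""
    k = min(max(density, 0.0), k_jam)
    return v_free * (1.0 - k / k_jam)

def traversal_time(length_km, density):
    """Hours needed to traverse a segment; infinite once traffic jams."""
    v = segment_speed(density)
    return float("inf") if v == 0.0 else length_km / v

# a 2 km segment carrying 150 veh/km moves at a quarter of free-flow speed
t = traversal_time(2.0, 150.0)
```

A full evacuation-time estimate would layer queue handling at intersections and a distribution of departure (preparation) times on top of this per-segment relation.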

  15. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    Science.gov (United States)

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

    A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, the Intel Math Kernel Library (MKL), the GOTO numerical library, and the AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled with updated Fortran compilers such as the Intel Fortran compiler (ifc/efc) 7.1 and the PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about a 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when the original, unmodified optimization options enclosed in the software are used. Nevertheless, with extensive compiler tuning options, the speed can be further improved by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction set (SSE2) is also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler performs better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and efficiency in the CL2 mode is a further 2.6% better than in the CL2.5 mode. FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resulting performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than IA32, which is consistent with the SpecFPrate2000 benchmarks.

  16. Summary of Expansions, Updates, and Results in GREET® 2016 Suite of Models

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2016-10-01

    This report documents the technical content of the expansions and updates in Argonne National Laboratory’s GREET® 2016 release and provides references and links to key documents related to these expansions and updates.

  17. Feature Classification for Robust Shape-Based Collaborative Tracking and Model Updating

    Directory of Open Access Journals (Sweden)

    C. S. Regazzoni

    2008-09-01

    Full Text Available A new collaborative tracking approach is introduced which takes advantage of classified features. The core of this tracker is a single tracker that is able to detect occlusions and classify features contributing to localizing the object. Features are classified into four classes: good, suspicious, malicious, and neutral. Good features are estimated to be parts of the object with a high degree of confidence. Suspicious ones have a lower, yet significantly high, degree of confidence to be a part of the object. Malicious features are estimated to be generated by clutter, while neutral features carry insufficient confidence to be assigned to the tracked object. When there is no occlusion, the single tracker acts alone, and the feature classification module helps it to overcome distracters such as still objects or light clutter in the scene. When the bounding boxes of several tracked moving objects come close enough, the collaborative tracker is activated; it exploits the classified features to localize each object precisely and to update the objects' shape models more accurately by reassigning the classified features to the objects. The experimental results show successful tracking compared with a collaborative tracker that does not use classified features, and more precisely updated object shape models are obtained.
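The four-way labelling can be sketched as a pair of confidence thresholds applied to two per-feature scores. The score names and threshold values below are hypothetical, since the abstract does not publish the actual decision rule.

```python
def classify_feature(p_object, p_clutter, hi=0.8, lo=0.5):
    """Assign one of the four feature classes from two confidence scores:
    p_object  - estimated probability the feature belongs to the object,
    p_clutter - estimated probability it was generated by clutter.
    Thresholds hi/lo are illustrative placeholders. Clutter is checked
    first, so a feature that looks like clutter is flagged malicious
    even if its object score is also high."""
    if p_clutter > hi:
        return "malicious"
    if p_object > hi:
        return "good"
    if p_object > lo:
        return "suspicious"
    return "neutral"

labels = [classify_feature(0.9, 0.1), classify_feature(0.6, 0.2),
          classify_feature(0.2, 0.9), classify_feature(0.3, 0.3)]
```

Only "good" and "suspicious" features would then contribute to localization and shape-model updates.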

  18. Updated Life-Cycle Assessment of Aluminum Production and Semi-fabrication for the GREET Model

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Qiang [Argonne National Lab. (ANL), Argonne, IL (United States); Kelly, Jarod C. [Argonne National Lab. (ANL), Argonne, IL (United States); Burnham, Andrew [Argonne National Lab. (ANL), Argonne, IL (United States); Elgowainy, Amgad [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-09-01

    This report serves as an update for the life-cycle analysis (LCA) of aluminum production based on the most recent data representing the state-of-the-art of the industry in North America. The 2013 Aluminum Association (AA) LCA report on the environmental footprint of semifinished aluminum products in North America provides the basis for the update (The Aluminum Association, 2013). The scope of this study covers primary aluminum production, secondary aluminum production, as well as aluminum semi-fabrication processes including hot rolling, cold rolling, extrusion and shape casting. This report focuses on energy consumptions, material inputs and criteria air pollutant emissions for each process from the cradle-to-gate of aluminum, which starts from bauxite extraction, and ends with manufacturing of semi-fabricated aluminum products. The life-cycle inventory (LCI) tables compiled are to be incorporated into the vehicle cycle model of Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) Model for the release of its 2015 version.

  19. Extraproximal approach to calculating equilibriums in pure exchange models

    Science.gov (United States)

    Antipin, A. S.

    2006-10-01

    Models of economic equilibrium are a powerful tool of mathematical modeling of various markets. However, according to many publications, there are as yet no universal techniques for finding equilibrium prices that are solutions to such models. A technique of this kind that is a natural implementation of the Walras idea of tatonnements (i.e., groping for equilibrium prices) is proposed, and its convergence is proved.
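The Walrasian tatonnement idea — raise the price of goods in excess demand, lower the price of goods in excess supply — can be sketched for a small Cobb-Douglas pure exchange economy. The data and step size are illustrative (the paper's extraproximal method is a more sophisticated scheme with a convergence proof); for this particular economy the equilibrium price ratio p2/p1 is 4/3.

```python
import numpy as np

# Illustrative pure exchange economy: rows = agents, columns = goods.
endow = np.array([[1.0, 0.0], [0.0, 1.0]])   # initial endowments
alpha = np.array([[0.6, 0.4], [0.3, 0.7]])   # Cobb-Douglas expenditure shares

def excess_demand(p):
    income = endow @ p                        # wealth of each agent at prices p
    demand = alpha * income[:, None] / p      # Cobb-Douglas demand: share * income / price
    return demand.sum(axis=0) - endow.sum(axis=0)

def tatonnement(p, step=0.2, iters=2000):
    """Grope for equilibrium: nudge each price along its excess demand,
    then renormalize (prices are homogeneous of degree zero)."""
    for _ in range(iters):
        p = p + step * excess_demand(p)
        p = np.maximum(p, 1e-9)
        p = p / p.sum()
    return p

p_star = tatonnement(np.array([0.5, 0.5]))   # p2/p1 approaches 4/3
```

Cobb-Douglas economies satisfy gross substitutability, which is why this naive iteration converges here; in general it need not, which motivates methods like the extraproximal approach.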

  20. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. I. Biological model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Several replacement models have been presented in the literature. In other application areas, like dairy cow replacement, methodological improvements such as hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements, like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level, and standard software, that have hardly been implemented in any replacement model. The aim of this study is to present a sow replacement model that uses all of these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, the estimation of herd-specific parameters is emphasized. The optimization model is described in a subsequent paper.
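The Bayesian updating ingredient can be illustrated with a conjugate normal-normal update of a sow's underlying litter-size potential from her observed litters. The numbers are invented for illustration, and the paper's model is far richer (a three-level hierarchic Markov process with herd-level parameters); this shows only the update step itself.

```python
def update_normal(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal Bayesian update: posterior for an animal's
    latent performance level after one noisy observation."""
    w = prior_var / (prior_var + obs_var)          # shrinkage weight
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

mean, var = 12.0, 4.0        # herd-level prior for litter size (assumed)
for litter in [14, 15, 13]:  # observed litter sizes of one sow
    mean, var = update_normal(mean, var, litter, obs_var=9.0)
```

Each observation pulls the estimate toward the sow's own record while the posterior variance shrinks, which is exactly what lets the replacement policy distinguish a genuinely good sow from a lucky litter.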

  1. Expanding of reactor power calculation model of RELAP5 code

    International Nuclear Information System (INIS)

    Lin Meng; Yang Yanhua; Chen Yuqing; Zhang Hong; Liu Dingming

    2007-01-01

    For better analyzing of the nuclear power transient in rod-controlled reactor core by RELAP5 code, a nuclear reactor thermal-hydraulic best-estimate system code, it is expected to get the nuclear power using not only the point neutron kinetics model but also one-dimension neutron kinetics model. Thus an existing one-dimension nuclear reactor physics code was modified, to couple its neutron kinetics model with the RELAP5 thermal-hydraulic model. The detailed example test proves that the coupling is valid and correct. (authors)

  2. A Monte Carlo model of complex spectra of opacity calculations

    International Nuclear Information System (INIS)

    Klapisch, M.; Duffy, P.; Goldstein, W.H.

    1991-01-01

    We are developing a Monte Carlo method for calculating opacities of complex spectra. It should be faster than atomic structure codes and more accurate than the UTA method. We use the idea that wavelength-averaged opacities depend on the overall properties, but not the details, of the spectrum; our spectra have the same statistical properties as real ones, but the strength and energy of each line are random. In preliminary tests we obtain Rosseland mean opacities within 20% of actual values. (orig.)
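The idea — draw a spectrum whose individual lines are random but whose statistics (line density, mean strength) match a real one, then take a wavelength-averaged opacity — can be sketched as follows. The line widths, distributions, and the unweighted harmonic mean are all illustrative choices, not the paper's; a true Rosseland mean additionally weights the harmonic average by the temperature derivative of the Planck function.

```python
import math
import random

def sample_spectrum(n_lines=500, e_min=0.0, e_max=100.0,
                    mean_strength=1.0, width=0.05, seed=1):
    """Build a random spectrum: line positions uniform in the band,
    strengths exponentially distributed with the given mean. Returns a
    callable opacity(e) summing Lorentzian line profiles."""
    rng = random.Random(seed)
    lines = [(rng.uniform(e_min, e_max), rng.expovariate(1.0 / mean_strength))
             for _ in range(n_lines)]

    def opacity(e):
        return sum(s * (width / math.pi) / ((e - e0) ** 2 + width ** 2)
                   for e0, s in lines)
    return opacity

def band_harmonic_mean(opacity, e_min, e_max, n=2000):
    """Unweighted harmonic mean over the band (Rosseland-like, with the
    dB/dT weighting omitted for brevity)."""
    es = [e_min + (i + 0.5) * (e_max - e_min) / n for i in range(n)]
    return n / sum(1.0 / opacity(e) for e in es)

kappa = sample_spectrum()
k_mean = band_harmonic_mean(kappa, 0.0, 100.0)
```

Because the harmonic mean is dominated by spectral windows between lines, it is sensitive to the statistics of line spacings rather than to any individual line — which is precisely why a statistically equivalent random spectrum can reproduce it.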

  3. Carbon dioxide fluid-flow modeling and injectivity calculations

    Science.gov (United States)

    Burke, Lauri

    2011-01-01

    At present, the literature lacks a geologic-based assessment methodology for numerically estimating injectivity, lateral migration, and subsequent long-term containment of supercritical carbon dioxide that has undergone geologic sequestration into subsurface formations. This study provides a method for and quantification of first-order approximations for the time scale of supercritical carbon dioxide lateral migration over a one-kilometer distance through a representative volume of rock. These calculations provide a quantified foundation for estimating injectivity and geologic storage of carbon dioxide.

  4. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998

  5. Numerical calculation of path integrals : The small-polaron model

    NARCIS (Netherlands)

    Raedt, Hans De; Lagendijk, Ad

    1983-01-01

    The thermodynamic properties of the small-polaron model are studied by means of a discrete version of the Feynman path-integral representation of the partition function. This lattice model describes a fermion interacting with a boson field. The bosons are treated analytically; the fermion degrees of freedom are evaluated numerically.

  6. A review of Higgs mass calculations in supersymmetric models

    DEFF Research Database (Denmark)

    Draper, P.; Rzehak, H.

    2016-01-01

    The discovery of the Higgs boson is both a milestone achievement for the Standard Model and an exciting probe of new physics beyond the SM. One of the most important properties of the Higgs is its mass, a number that has proven to be highly constraining for models of new physics, particularly those related to the electroweak hierarchy problem. Perhaps the most extensively studied examples are supersymmetric models, which, while capable of producing a 125 GeV Higgs boson with SM-like properties, do so in non-generic parts of their parameter spaces. We review the computation of the Higgs mass in supersymmetric models.

  7. Update of the hydrogeologic model of the Cerro Prieto field based on recent well data

    Energy Technology Data Exchange (ETDEWEB)

    Halfman, S.E.; Manon, A.; Lippmann, M.J.

    1986-01-01

    The hydrogeologic model of the Cerro Prieto geothermal field in Baja California, Mexico has been updated and modified on the basis of geologic and reservoir engineering data from 21 newly completed wells. Previously, only two reservoirs had been discovered: the shallow α reservoir and the deeper β reservoir. Recently, three deep wells drilled east of the main wellfield penetrated a third geothermal reservoir (called the γ reservoir) below the sandstones corresponding to the β reservoir in the main part of the field. The new well data delimit the β reservoir, confirm the important role of Fault H in controlling the flow of geothermal fluids, and enable us to refine the hydrogeologic model of the field.

  8. Updating the CHAOS series of field models using Swarm data and resulting candidate models for IGRF-12

    DEFF Research Database (Denmark)

    Finlay, Chris; Olsen, Nils; Tøffner-Clausen, Lars

    Ten months of data from ESA's Swarm mission, together with recent ground observatory monthly means, are used to update the CHAOS series of geomagnetic field models with a focus on time changes of the core field. As for previous CHAOS field models, quiet-time, night-side data selection criteria are applied. The core field time dependence is parameterized by a spline representation with knot points spaced at 0.5 year intervals. The resulting field model is able to consistently fit data from six independent low Earth orbit satellites: Oersted, CHAMP, SAC-C and the three Swarm satellites. As an example, we present comparisons of the excellent model fit obtained to both the Swarm data and the CHAMP data. The new model also provides a good description of observatory secular variation, capturing rapid field evolution events during the past decade. Maps of the core surface field and its secular variation can already be extracted in the Swarm era.

  9. What do business models do? Narratives, calculation and market exploration

    OpenAIRE

    Liliana Doganova; Marie Renault

    2008-01-01

    http://www.csi.ensmp.fr/Items/WorkingPapers/Download/DLWP.php?wp=WP_CSI_012.pdf; CSI Working Papers Series 012; Building on a case study of an entrepreneurial venture, we investigate the role played by business models in the innovation process. Rather than debating their accuracy and efficiency, we adopt a pragmatic approach to business models: we examine them as market devices, focusing on their materiality, use and dynamics.

  10. A simple model for calculating air pollution within street canyons

    Science.gov (United States)

    Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.

    2014-04-01

    This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons, employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters, whose functional forms have been obtained from full-scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows good agreement between estimated and observed hourly concentrations (e.g. fractional biases of -10.3% for NOx and +7.8% for NO2). The agreement between estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model performs better for wind speeds above 2 m s-1 than for lower wind speeds, and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
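The scaling that SEUS is built on can be sketched directly: concentration is the emission rate per unit street length divided by the canyon width times a dispersive velocity scale, plus background. The quadrature combination of wind- and traffic-induced turbulence and all parameter values below are assumptions for illustration; the paper fits its two dimensionless parameters to full-scale data from four European cities.

```python
import math

def dispersive_velocity(wind_speed, sigma_traffic, alpha=0.1):
    """Dispersive velocity scale u_d (m/s): wind-induced turbulence
    (alpha * U) and traffic-induced turbulence combined in quadrature.
    alpha stands in for one of the model's empirical parameters."""
    return math.sqrt((alpha * wind_speed) ** 2 + sigma_traffic ** 2)

def street_concentration(emission_rate, canyon_width, u_d, background=0.0):
    """SEUS-type scaling: concentration (ug/m3) from emission rate per
    unit street length (ug/(m s)), canyon width (m), dispersive velocity
    scale (m/s) and background concentration (ug/m3)."""
    return emission_rate / (canyon_width * u_d) + background

u_d = dispersive_velocity(wind_speed=3.0, sigma_traffic=0.2)
c_nox = street_concentration(100.0, 20.0, u_d, background=5.0)
```

At low wind speeds the traffic term dominates u_d, which is consistent with the model's weaker dependence on wind there.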

  11. PACIAE 2.0: An updated parton and hadron cascade model (program) for the relativistic nuclear collisions

    Science.gov (United States)

    Sa, Ben-Hao; Zhou, Dai-Mei; Yan, Yu-Liang; Li, Xiao-Mei; Feng, Sheng-Qin; Dong, Bao-Guo; Cai, Xu

    2012-02-01

    We have updated the parton and hadron cascade model PACIAE for relativistic nuclear collisions from a basis of JETSET 7.4 and PYTHIA 5.7 to PYTHIA 6.4, and renamed it PACIAE 2.0. The main physics concerning the stages of parton initiation, parton rescattering, hadronization, and hadron rescattering is discussed. The structure of the program is briefly explained. In addition, some calculated examples are compared with experimental data, showing that the model (program) works well. Program summary — Program title: PACIAE version 2.0. Catalogue identifier: AEKI_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKI_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 297 523. No. of bytes in distributed program, including test data, etc.: 2 051 274. Distribution format: tar.gz. Programming language: FORTRAN 77. Computer: DELL Studio XPS and others with a FORTRAN 77 or GFORTRAN compiler. Operating system: Unix/Linux. RAM: 1 G words. Word size: 64 bits. Classification: 11.2. Nature of problem: Monte Carlo simulation with a hadron transport (cascade) model is successful in studying final-state observables in relativistic nuclear collisions. However, the high p_T suppression, jet quenching (energy loss), and eccentricity scaling of v_2, etc., observed in high-energy nuclear collisions, indicate the important effect of the initial partonic state on the final hadronic state. Therefore better parton and hadron transport (cascade) models for relativistic nuclear collisions are highly desirable. Solution method: The parton and hadron cascade model PACIAE is originally based on JETSET 7.4 and PYTHIA 5.7. The PYTHIA basis has been updated to PYTHIA 6.4, with additions of new physics and improvements to existing physics.

  12. The Cornell Net Carbohydrate and Protein System: Updates to the model and evaluation of version 6.5.

    Science.gov (United States)

    Van Amburgh, M E; Collao-Saenz, E A; Higgs, R J; Ross, D A; Recktenwald, E B; Raffrenato, E; Chase, L E; Overton, T R; Mills, J K; Foskolos, A

    2015-09-01

    New laboratory and animal sampling methods and data have been generated over the last 10 yr that had the potential to improve the predictions of energy, protein, and AA supply and requirements in the Cornell Net Carbohydrate and Protein System (CNCPS). The objectives of this study were to describe updates to the CNCPS and evaluate model performance against both literature and on-farm data. The changes to the feed library were significant and are reported in a separate manuscript. Degradation rates of protein and carbohydrate fractions were adjusted according to new fractionation schemes, and corresponding changes to equations used to calculate rumen outflows and postrumen digestion were presented. In response to the feed-library changes and an increased supply of essential AA because of updated AA contents, a combined efficiency of use was adopted in place of separate calculations for maintenance and lactation to better represent the biology of the cow. Four different data sets were developed to evaluate Lys and Met requirements, rumen N balance, and milk yield predictions. In total, 99 peer-reviewed studies with 389 treatments and 15 regional farms with 50 different diets were included. The broken-line model with plateau was used to identify the concentrations of Lys and Met that maximize milk protein yield and content. Results suggested concentrations of 7.00 and 2.60% of metabolizable protein (MP) for Lys and Met, respectively, for maximal protein yield, and 6.77 and 2.85% of MP for Lys and Met, respectively, for maximal protein content. Updated AA concentrations were numerically higher for Lys and 11 to 18% higher for Met compared with CNCPS v6.0; this is attributed to the increased content of Met and Lys in feeds that were previously incorrectly analyzed and described. The predictions of postruminal flows of N and of milk yield were evaluated using the correlation coefficient from the BLUP procedure (R²BLUP) or from model predictions (R²MDP).
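A broken-line (linear-plateau) fit of the kind used to locate the optimal Lys and Met concentrations can be sketched with a grid search over the breakpoint and closed-form least squares for the other two parameters. The data and grid resolution below are synthetic, for illustration only.

```python
def _fit_line(u, y):
    """Closed-form least squares of y on a single regressor u."""
    n = len(u)
    mu, my = sum(u) / n, sum(y) / n
    suu = sum((ui - mu) ** 2 for ui in u)
    suy = sum((ui - mu) * (yi - my) for ui, yi in zip(u, y))
    slope = suy / suu if suu > 0 else 0.0
    return my - slope * mu, slope              # intercept (= plateau), slope

def broken_line_fit(x, y, n_grid=400):
    """Fit y = plateau + slope * min(x - x0, 0): a linear rise up to the
    breakpoint x0, flat beyond it. Grid search on x0, LS for the rest."""
    best = None
    lo, hi = min(x), max(x)
    for i in range(n_grid + 1):
        x0 = lo + (hi - lo) * i / n_grid
        u = [min(xi - x0, 0.0) for xi in x]
        plateau, slope = _fit_line(u, y)
        sse = sum((plateau + slope * ui - yi) ** 2 for ui, yi in zip(u, y))
        if best is None or sse < best[0]:
            best = (sse, x0, plateau, slope)
    return best[1], best[2], best[3]

# synthetic response: rises with slope 2 until x = 6, then plateaus at 20
xs = [float(i) for i in range(1, 11)]
ys = [20.0 + 2.0 * min(xi - 6.0, 0.0) for xi in xs]
x0, plateau, slope = broken_line_fit(xs, ys)
```

The recovered breakpoint plays the role of the AA concentration beyond which milk protein yield no longer responds.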

  13. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    Science.gov (United States)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), caused by the algorithm adopted for the VC model; 2) discrete error (DE), usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a method to quantify the uncertainty of VCs with a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the theories of uncertainty modelling and analysis in geographic information science.
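The two volume-calculation rules compared in the paper can be sketched for a regular grid. The test surface z = x² + y² is synthetic, chosen because Simpson's double rule integrates quadratics exactly while the trapezoidal double rule carries a truncation error — the very ME/TE distinction the paper quantifies.

```python
def volume_trapezoid(z, dx, dy):
    """Trapezoidal double rule: each grid cell contributes the average of
    its four corner heights times the cell area."""
    ny, nx = len(z), len(z[0])
    v = 0.0
    for i in range(ny - 1):
        for j in range(nx - 1):
            v += (z[i][j] + z[i][j + 1] + z[i + 1][j] + z[i + 1][j + 1]) / 4.0
    return v * dx * dy

def volume_simpson(z, dx, dy):
    """Simpson's double rule (1-4-2-4-1 weights in both directions);
    requires an odd number of nodes in each direction."""
    ny, nx = len(z), len(z[0])
    assert ny % 2 == 1 and nx % 2 == 1

    def w(k, n):
        if k == 0 or k == n - 1:
            return 1.0
        return 4.0 if k % 2 == 1 else 2.0

    v = sum(w(i, ny) * w(j, nx) * z[i][j]
            for i in range(ny) for j in range(nx))
    return v * dx * dy / 9.0

# 5x5 grid over [0,2]x[0,2], dx = dy = 0.5, z = x^2 + y^2 (true volume 32/3)
dx = dy = 0.5
z = [[(j * dx) ** 2 + (i * dy) ** 2 for j in range(5)] for i in range(5)]
v_trap = volume_trapezoid(z, dx, dy)   # overestimates the convex surface
v_simp = volume_simpson(z, dx, dy)     # exact for a quadratic surface
```

On real terrain neither rule is exact, and the gap between them at successive resolutions is one practical handle on the discretization error.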

  14. Role of sensory information in updating internal models of the effector during arm tracking.

    Science.gov (United States)

    Vercher, Jean-Louis; Sarès, Frédéric; Blouin, Jean; Bourdin, Christophe; Gauthier, Gabriel

    2003-01-01

    This chapter is divided into three main parts. First, on the basis of the literature, we briefly discuss how the recent introduction of the concept of internal models by Daniel Wolpert and Mitsuo Kawato contributes to a better understanding of motor learning and motor adaptation. We then present a model of eye-hand co-ordination during self-moved target tracking, which we used as a way to specifically address these topics. Finally, we show evidence about the use of proprioceptive information for updating the internal models in the context of eye-hand co-ordination. Motor and afferent information appears to contribute to the parametric adjustment (adaptation) between the arm motor command and visual information about arm motion. The study reported here was aimed at assessing the contribution of arm proprioception to building (learning) and updating (adaptation) these representations. The subjects (including a deafferented subject) had to make back and forth movements with their forearm in the horizontal plane, over a learned amplitude and at a constant frequency, and to track an arm-driven target with their eyes. The dynamical conditions of the arm movement were altered (unexpectedly or systematically) during the movement by changing the mechanical properties of the manipulandum. The results showed a significant change in the latency and the gain of the smooth pursuit system before and after the perturbation for the control subjects, but not for the deafferented subject. Moreover, in control subjects, vibration of the arm muscles prevented adaptation to the mechanical perturbation. These results suggest that in a self-moved target tracking task, the arm motor system shares with the smooth pursuit system an internal representation of the arm's dynamical properties, and that arm proprioception is necessary to build this internal model, as suggested by Ghez et al. (1990) (Cold Spring Harbor Symp. Quant. Biol., 55: 837-847).

  15. An hydrodynamic model for the calculation of oil spills trajectories

    Energy Technology Data Exchange (ETDEWEB)

    Paladino, Emilio Ernesto; Maliska, Clovis Raimundo [Santa Catarina Univ., Florianopolis, SC (Brazil). Dept. de Engenharia Mecanica. Lab. de Dinamica dos Fluidos Computacionais]. E-mails: emilio@sinmec.ufsc.br; maliska@sinmec.ufsc.br

    2000-07-01

    The aim of this paper is to present a mathematical model, and its numerical treatment, to forecast oil spill trajectories in the sea. Knowledge of the trajectory followed by an oil slick spilled on the sea is of fundamental importance in estimating potential risks for pipeline and tanker route selection, and in combating pollution using floating barriers, detergents, etc. In order to estimate these trajectories, a new model based on the mass and momentum conservation equations is presented. The model considers spreading in the regimes where inertial and viscous forces counterbalance gravity, and takes into account the effects of winds and water currents. The inertial forces are considered both for the spreading and for the displacement of the oil slick, i.e., their effect on the movement of the slick's center of mass is taken into account. The mass loss caused by oil evaporation is also taken into account. The numerical model is developed in generalized coordinates, making it easily applicable to complex coastal geographies. (author)

  16. Uncertain hybrid model for the response calculation of an alternator

    International Nuclear Information System (INIS)

    Kuczkowiak, Antoine

    2014-01-01

    The complex structural dynamic behavior of an alternator must be well understood in order to ensure its reliable and safe operation. The numerical model is, however, difficult to construct, mainly due to the presence of a high level of uncertainty. The objective of this work is to provide decision-support tools to assess the vibratory levels in operation before restarting the alternator. Based on info-gap theory, a first decision-support tool is proposed: the objective here is to assess the robustness of the dynamical response to the uncertain modal model. Based on real data, the calibration of an info-gap model of uncertainty is also proposed in order to enhance its fidelity to reality. Then, the extended constitutive relation error is used to expand identified mode shapes, which are used to assess the vibratory levels. A robust expansion process is proposed in order to obtain expanded mode shapes that are robust to parametric uncertainties. In the presence of lack of knowledge, the trade-off between fidelity-to-data and robustness-to-uncertainties, which expresses that robustness improves as fidelity deteriorates, is demonstrated on an industrial structure using both reduced-order model and surrogate model techniques. (author)

  17. Model for calculation of concentration and load on behalf of accidents with radioactive materials

    International Nuclear Information System (INIS)

    Janssen, L.A.M.; Heugten, W.H.H. van

    1987-04-01

    In the project 'Information and Calculation System for Disaster Response', commissioned by the Dutch government, a demonstration model of a diagnosis system for accidents has been developed. In this demonstration, a model is used to calculate the concentration and dose distributions caused by incidental emissions of limited duration. That model is described in this report. 4 refs.; 2 figs.; 3 tabs

  18. HEMCO v1.0: A Versatile, ESMF-Compliant Component for Calculating Emissions in Atmospheric Models

    Science.gov (United States)

    Keller, C. A.; Long, M. S.; Yantosca, R. M.; Da Silva, A. M.; Pawson, S.; Jacob, D. J.

    2014-01-01

    We describe the Harvard-NASA Emission Component version 1.0 (HEMCO), a stand-alone software component for computing emissions in global atmospheric models. HEMCO determines emissions from different sources, regions, and species on a user-defined grid and can combine, overlay, and update a set of data inventories and scale factors, as specified by the user through the HEMCO configuration file. New emission inventories at any spatial and temporal resolution are readily added to HEMCO and can be accessed by the user without any preprocessing of the data files or modification of the source code. Emissions that depend on dynamic source types and local environmental variables such as wind speed or surface temperature are calculated in separate HEMCO extensions. HEMCO is fully compliant with the Earth System Modeling Framework (ESMF) environment. It is highly portable and can be deployed in a new model environment with only a few adjustments at the top-level interface. So far, we have implemented HEMCO in the NASA Goddard Earth Observing System (GEOS-5) Earth system model (ESM) and in the GEOS-Chem chemical transport model (CTM). By providing a widely applicable framework for specifying constituent emissions, HEMCO is designed to ease sensitivity studies and model comparisons, as well as inverse modeling in which emissions are adjusted iteratively. The HEMCO code, extensions, and the full set of emissions data files used in GEOS-Chem are available at http://wiki.geos-chem.org/HEMCO.
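The combine/overlay/scale behaviour described above can be sketched with plain arrays: a base inventory, higher-priority regional inventories that overwrite it where defined, and a scale factor applied at the end. The function name and the NaN-masking convention are illustrative, not HEMCO's actual interface (which is driven by its configuration file).

```python
import numpy as np

def combine_inventories(base, overlays, scale=1.0):
    """Start from a base gridded inventory, let higher-priority regional
    inventories overwrite the base wherever they are defined (non-NaN),
    then apply a uniform scale factor to the result."""
    out = np.asarray(base, dtype=float).copy()
    for inv in overlays:                       # in increasing priority order
        inv = np.asarray(inv, dtype=float)
        mask = ~np.isnan(inv)
        out[mask] = inv[mask]
    return out * scale

global_inv = [[1.0, 1.0], [1.0, 1.0]]          # base emissions per grid cell
regional = [[float("nan"), 5.0],               # regional inventory, defined
            [float("nan"), float("nan")]]      # over one grid cell only
combined = combine_inventories(global_inv, [regional], scale=2.0)
# combined == [[2.0, 10.0], [2.0, 2.0]]
```

Keeping the overlay logic data-driven like this is what lets a new inventory be dropped in without touching the host model's source code.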

  19. Detection of Earthquake-Induced Damage in a Framed Structure Using a Finite Element Model Updating Procedure

    Science.gov (United States)

    Kim, Seung-Nam; Park, Taewon; Lee, Sang-Hyun

    2014-01-01

    Damage of a 5-story framed structure was identified from two types of measured data, which are frequency response functions (FRF) and natural frequencies, using a finite element (FE) model updating procedure. In this study, a procedure to determine the appropriate weightings for different groups of observations was proposed. In addition, a modified frame element which included rotational springs was used to construct the FE model for updating to represent concentrated damage at the member ends (a formulation for plastic hinges in framed structures subjected to strong earthquakes). The results of the model updating and subsequent damage detection when the rotational springs (RS model) were used were compared with those obtained using the conventional frame elements (FS model). Comparisons indicated that the RS model gave more accurate results than the FS model. That is, the errors in the natural frequencies of the updated models were smaller, and the identified damage showed clearer distinctions between damaged and undamaged members and was more consistent with observed damage. PMID:24574888

  20. Detection of Earthquake-Induced Damage in a Framed Structure Using a Finite Element Model Updating Procedure

    Directory of Open Access Journals (Sweden)

    Eunjong Yu

    2014-01-01

    Damage of a 5-story framed structure was identified from two types of measured data, which are frequency response functions (FRF) and natural frequencies, using a finite element (FE) model updating procedure. In this study, a procedure to determine the appropriate weightings for different groups of observations was proposed. In addition, a modified frame element which included rotational springs was used to construct the FE model for updating to represent concentrated damage at the member ends (a formulation for plastic hinges in framed structures subjected to strong earthquakes). The results of the model updating and subsequent damage detection when the rotational springs (RS model) were used were compared with those obtained using the conventional frame elements (FS model). Comparisons indicated that the RS model gave more accurate results than the FS model. That is, the errors in the natural frequencies of the updated models were smaller, and the identified damage showed clearer distinctions between damaged and undamaged members and was more consistent with observed damage.

  1. Detection of earthquake-induced damage in a framed structure using a finite element model updating procedure.

    Science.gov (United States)

    Yu, Eunjong; Kim, Seung-Nam; Park, Taewon; Lee, Sang-Hyun

    2014-01-01

    Damage of a 5-story framed structure was identified from two types of measured data, which are frequency response functions (FRF) and natural frequencies, using a finite element (FE) model updating procedure. In this study, a procedure to determine the appropriate weightings for different groups of observations was proposed. In addition, a modified frame element which included rotational springs was used to construct the FE model for updating to represent concentrated damage at the member ends (a formulation for plastic hinges in framed structures subjected to strong earthquakes). The results of the model updating and subsequent damage detection when the rotational springs (RS model) were used were compared with those obtained using the conventional frame elements (FS model). Comparisons indicated that the RS model gave more accurate results than the FS model. That is, the errors in the natural frequencies of the updated models were smaller, and the identified damage showed clearer distinctions between damaged and undamaged members and was more consistent with observed damage.
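
The updating idea in the records above can be sketched on a 2-DOF shear-frame model: stiffness parameters are adjusted until the model's natural frequencies match "measured" ones, with damage appearing as reduced stiffness. This is a hedged illustration with assumed masses, stiffnesses, and a single weight, not the paper's 5-story FRF formulation:

```python
import numpy as np
from scipy.linalg import eigvalsh
from scipy.optimize import minimize

# 2-DOF shear frame: M = diag(m1, m2), K assembled from story stiffnesses.
M = np.diag([1.0, 1.0])  # masses (assumed)

def frequencies(k):
    k1, k2 = k
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    lam = np.maximum(eigvalsh(K, M), 0.0)  # guard against unphysical trials
    return np.sqrt(lam) / (2 * np.pi)      # natural frequencies in Hz

f_meas = frequencies([80.0, 100.0])        # synthetic "measured" data

def objective(k, w=1.0):
    # weighted sum of squared relative frequency residuals
    r = (frequencies(k) - f_meas) / f_meas
    return w * np.sum(r ** 2)

res = minimize(objective, x0=[120.0, 120.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12})
print(res.x)  # recovered stiffnesses approach [80, 100]
```

In the paper this scalar weight becomes a set of weightings balancing the FRF and natural-frequency observation groups; the optimization structure is the same.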

  2. A modified calculation model for groundwater flowing to horizontal ...

    Indian Academy of Sciences (India)

    well pipe and aquifer couples the turbulent flow inside the horizontal seepage well with laminar flow in the aquifer. .... In the well pipe, the relationship between hydraulic head loss and flow velocity .... the steady-state mathematic model is developed for groundwater flowing to the horizontal seepage well under a river valley.

  3. Source data for modeling of thermal engineering calculations

    Directory of Open Access Journals (Sweden)

    Charvátová Pavlína

    2018-01-01

    Demands on thermal insulation are increasing, and their more accurate assessment by computers leads to increasingly large differences between computational models and reality. The result is an increasingly problematic optimization of building design. One of the key initial parameters is climatological data.

  4. A calculation model for a HTR core seismic response

    International Nuclear Information System (INIS)

    Buland, P.; Berriaud, C.; Cebe, E.; Livolant, M.

    1975-01-01

    The paper presents the experimental results obtained at Saclay on a HTGR core model and comparisons with analytical results. Two series of horizontal tests have been performed on the shaking table VESUVE: sinusoidal test and time history response. Acceleration of graphite blocks, forces on the boundaries, relative displacement of the core and PCRV model, impact velocity of the blocks on the boundaries were recorded. These tests have shown the strongly non-linear dynamic behaviour of the core. The resonant frequency of the core is dependent on the level of the excitation. These phenomena have been explained by a computer code, which is a lumped mass non-linear model. Good correlation between experimental and analytical results was obtained for impact velocities and forces on the boundaries. This comparison has shown that the damping of the core is a critical parameter for the estimation of forces and velocities. Time history displacement at the level of PCRV was reproduced on the shaking table. The analytical model was applied to this excitation and good agreement was obtained for forces and velocities. (orig./HP) [de]

  5. Calculation of benchmarks with a shear beam model

    NARCIS (Netherlands)

    Hendriks, M.A.N.; Boer, A.; Rots, J.G.; Ferreira, D.

    2015-01-01

    Fiber models for beam and shell elements allow for relatively rapid finite element analysis of concrete structures and structural elements. This project aims at the development of the formulation of such elements and a pilot implementation. Standard nonlinear fiber beam formulations do not account

  6. Reactor accident calculation models in use in the Nordic countries

    International Nuclear Information System (INIS)

    Tveten, U.

    1984-01-01

    The report relates to a subproject under a Nordic project called ''Large reactor accidents - consequences and mitigating actions''. In the first part of the report short descriptions of the various models are given. A systematic list by subject is then given. In the main body of the report chapter and subchapter headings are by subject. (Auth.)

  7. Semiclassical calculation for collision induced dissociation. II. Morse oscillator model

    International Nuclear Information System (INIS)

    Rusinek, I.; Roberts, R.E.

    1978-01-01

    A recently developed semiclassical procedure for calculating collision-induced dissociation probabilities P^diss is applied to the collinear collision between a particle and a Morse-oscillator diatomic. The particle-diatom interaction is described with a repulsive exponential potential function. P^diss is reported for a system of three identical particles, as a function of collision energy E_t and initial vibrational state n_1 of the diatomic. The results are compared with the previously reported values for the collision between a particle and a truncated harmonic oscillator. The two studies show similar features, namely: (a) there is an oscillatory structure in the P^diss energy profiles, which is directly related to n_1; (b) P^diss becomes noticeable (≳ 10^-3) for E_t values appreciably higher than the energetic threshold; (c) vibrational enhancement (inhibition) of collision-induced dissociation persists at low (high) energies; and (d) good agreement between the classical and semiclassical results is found above the classical dynamic threshold. Finally, the convergence of P^diss for increasing box length is shown to be rapid and satisfactory.
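
The energetics behind features (b) and (c) can be made concrete with the analytic Morse level structure: E_n = w(n + 1/2) - (w^2/4D)(n + 1/2)^2, bound while E_n < D, so dissociation from level n_1 requires roughly D - E_(n_1) of collision energy. A dimensionless sketch with assumed well depth and frequency (not the paper's parameters):

```python
# Morse oscillator in units with hbar = 1: well depth D, harmonic frequency w.
D, w = 10.0, 2.0  # illustrative values

def level(n):
    """Analytic Morse energy of bound level n."""
    x = n + 0.5
    return w * x - (w ** 2 / (4 * D)) * x ** 2

n_max = int(2 * D / w - 0.5)      # index of the highest bound level
levels = [level(n) for n in range(n_max + 1)]
threshold = D - level(1)          # energy needed to dissociate from n_1 = 1
print(n_max, round(threshold, 3))
```

Higher n_1 lowers the threshold, which is the vibrational-enhancement effect at low energies noted in (c).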

  8. Approximate models for neutral particle transport calculations in ducts

    International Nuclear Information System (INIS)

    Ono, Shizuca

    2000-01-01

    The problem of neutral particle transport in evacuated ducts of arbitrary, but axially uniform, cross-sectional geometry and isotropic reflection at the wall is studied. The model makes use of basis functions to represent the transverse and azimuthal dependences of the particle angular flux in the duct. For the approximation in terms of two basis functions, an improvement in the method is implemented by decomposing the problem into uncollided and collided components. A new quadrature set, more suitable to the problem, is developed and generated by one of the techniques of the constructive theory of orthogonal polynomials. The approximation in terms of three basis functions is developed and implemented to improve the precision of the results. For both models of two and three basis functions, the energy dependence of the problem is introduced through the multigroup formalism. The results of sample problems are compared to literature results and to results of the Monte Carlo code, MCNP. (author)

  9. Generic model for calculating carbon footprint of milk using four different LCA modelling approaches

    DEFF Research Database (Denmark)

    Dalgaard, Randi; Schmidt, Jannick Højrup; Flysjö, Anna

    2014-01-01

    The aim of the study is to develop a tool which can be used for calculation of the carbon footprint (using a life cycle assessment (LCA) approach) of milk, both at a farm level and at a national level. The functional unit is ‘1 kg energy corrected milk (ECM) at farm gate’ and the applied methodology is LCA. The model includes switches that enable, within the same scope, transforming the results to comply with 1) consequential LCA, 2) allocation/average modelling (or ‘attributional LCA’), 3) PAS 2050 and 4) the International Dairy Federation's (IDF) guide to standard life cycle assessment…
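
The "switch" mechanism can be sketched as a function that post-processes one inventory result under either allocation-based or consequential rules. The numbers and the function itself are hypothetical, chosen only to show how a single scope can yield method-dependent results:

```python
# One cradle-to-farm-gate inventory, post-processed two ways (toy values).
def milk_cf(total_kg_co2e, milk_value, meat_value, avoided_beef_kg_co2e,
            method="allocation"):
    """Carbon footprint attributed to 1 kg ECM under a chosen modelling switch."""
    if method == "allocation":        # attributional-style economic allocation
        share = milk_value / (milk_value + meat_value)
        return total_kg_co2e * share
    if method == "consequential":     # system expansion: credit displaced beef
        return total_kg_co2e - avoided_beef_kg_co2e
    raise ValueError(method)

print(milk_cf(1.4, 90, 10, 0.25, "allocation"),
      milk_cf(1.4, 90, 10, 0.25, "consequential"))
```

The actual model also covers PAS 2050 and IDF rules; the design point is that all variants share one inventory and differ only in the final attribution step.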

  10. Worst case prediction of additives migration from polystyrene for food safety purposes: a model update.

    Science.gov (United States)

    Martínez-López, Brais; Gontard, Nathalie; Peyron, Stéphane

    2018-03-01

    A reliable prediction of migration levels of plastic additives into food requires a robust estimation of diffusivity. Predictive modelling of diffusivity as recommended by the EU Commission is carried out using a semi-empirical equation that relies on two polymer-dependent parameters. These parameters were determined for the polymers most used by the packaging industry (LLDPE, HDPE, PP, PET, PS, HIPS) from the diffusivity data available at that time. In the specific case of general-purpose polystyrene, the diffusivity data published since then show that use of the equation with the original parameters results in systematic underestimation of diffusivity. The goal of this study was therefore to propose an update of the aforementioned parameters for PS on the basis of up-to-date diffusivity data, so that the equation can be used for a reasoned overestimation of diffusivity.
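
The semi-empirical equation in question is commonly written in the Piringer form, D_P = 10^4 · exp(A_P' − τ/T − 0.1351·M^(2/3) + 0.003·M − 10454/T) cm²/s, with A_P' and τ the two polymer-dependent parameters. The sketch below uses the widely cited original PS values (A_P' = −1.0, τ = 0) as an assumption; these are exactly the parameters the study argues should be updated:

```python
import math

def piringer_D(M, T, Ap_prime, tau=0.0):
    """Upper-bound diffusion coefficient (cm^2/s) for a migrant of molar
    mass M (g/mol) in a polymer at temperature T (K), Piringer-type form."""
    Ap = Ap_prime - tau / T
    return 1e4 * math.exp(Ap - 0.1351 * M ** (2 / 3) + 0.003 * M - 10454.0 / T)

# Example: a 300 g/mol additive in general-purpose PS at 40 degC
print(f"{piringer_D(300.0, 313.15, -1.0):.3e}")
```

Raising A_P' (as an updated parameterization would) scales the whole estimate up, restoring the intended worst-case character.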

  11. [Social determinants of health and disability: updating the model for determination].

    Science.gov (United States)

    Tamayo, Mauro; Besoaín, Álvaro; Rebolledo, Jaime

    Social determinants of health (SDH) are the conditions in which people live. These conditions impact their lives, health status and level of social inclusion. In line with the conceptual and comprehensive progression of disability, it is important to update the SDH model because of its broad implications for implementing health interventions in society. This proposal supports incorporating disability in the model as a structural determinant, as it would lead to the same social inclusion/exclusion of people described for other structural SDH. This proposal encourages giving importance to designing and implementing public policies that improve societal conditions and contribute to social equity. This would be an act of reparation, justice and fulfilment of the Convention on the Rights of Persons with Disabilities. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  12. Towards a neural basis of music perception -- A review and updated model

    Directory of Open Access Journals (Sweden)

    Stefan eKoelsch

    2011-06-01

    Music perception involves acoustic analysis, auditory memory, auditory scene analysis, processing of interval relations, of musical syntax and semantics, and activation of (pre)motor representations of actions. Moreover, music perception potentially elicits emotions, thus giving rise to the modulation of emotional effector systems such as the subjective feeling system, the autonomic nervous system, the hormonal, and the immune system. Building on a previous article (Koelsch & Siebel, 2005), this review presents an updated model of music perception and its neural correlates. The article describes processes involved in music perception, and reports EEG and fMRI studies that inform about the time course of these processes, as well as about where in the brain these processes might be located.

  13. Finite element model validation of bridge based on structural health monitoring—Part I: Response surface-based finite element model updating

    Directory of Open Access Journals (Sweden)

    Zhouhong Zong

    2015-08-01

    In engineering practice, merging statistical analysis into structural evaluation and assessment is a future tendency. As a combination of mathematical and statistical techniques, response surface (RS) methodology has been successfully applied to design optimization, response prediction and model validation. With the aid of RS methodology, these two serial papers present a finite element (FE) model updating and validation method for bridge structures based on structural health monitoring. The key issues in implementing such a model updating are discussed in this paper, such as design of experiments, parameter screening, construction of a high-order polynomial response surface model, optimization methods and precision inspection of the RS model. The proposed procedure is illustrated by a prestressed concrete continuous rigid-frame bridge monitored under operational conditions. The results from the updated FE model have been compared with those obtained from the online health monitoring system. The real application to a full-size bridge has demonstrated that the FE model updating process is efficient and convenient. The updated FE model can reasonably reflect the actual condition of Xiabaishi Bridge in the design space of parameters and can be further applied to FE model validation and damage identification.
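
The core of the RS approach is replacing expensive FE runs with a cheap fitted polynomial. A minimal sketch (assumed quadratic form in two parameters; the stand-in `fe_frequency` function plays the role of the FE solver):

```python
import numpy as np

rng = np.random.default_rng(0)

def fe_frequency(x1, x2):  # stand-in for an expensive FE analysis
    return 5.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * x1 * x2 + 0.3 * x1 ** 2

# Design of experiments: sample the (normalized) parameter space
X = rng.uniform(-1, 1, size=(30, 2))
y = fe_frequency(X[:, 0], X[:, 1])

# Second-order polynomial basis: [1, x1, x2, x1*x2, x1^2, x2^2]
def basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)   # fit the RS model
pred = basis(np.array([[0.5, -0.5]])) @ coef          # cheap surrogate query
print(coef.round(3), pred)
```

During updating, the optimizer then searches over this surrogate instead of re-running the FE model, which is what makes parameter screening and repeated calibration affordable.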

  14. Metal-rich, Metal-poor: Updated Stellar Population Models for Old Stellar Systems

    Science.gov (United States)

    Conroy, Charlie; Villaume, Alexa; van Dokkum, Pieter G.; Lind, Karin

    2018-02-01

    We present updated stellar population models appropriate for old ages (>1 Gyr) and covering a wide range in metallicities (‑1.5 ≲ [Fe/H] ≲ 0.3). These models predict the full spectral variation associated with individual element abundance variation as a function of metallicity and age. The models span the optical–NIR wavelength range (0.37–2.4 μm), include a range of initial mass functions, and contain the flexibility to vary 18 individual elements including C, N, O, Mg, Si, Ca, Ti, and Fe. To test the fidelity of the models, we fit them to integrated light optical spectra of 41 Galactic globular clusters (GCs). The value of testing models against GCs is that their ages, metallicities, and detailed abundance patterns have been derived from the Hertzsprung–Russell diagram in combination with high-resolution spectroscopy of individual stars. We determine stellar population parameters from fits to all wavelengths simultaneously (“full spectrum fitting”), and demonstrate explicitly with mock tests that this approach produces smaller uncertainties at fixed signal-to-noise ratio than fitting a standard set of 14 line indices. Comparison of our integrated-light results to literature values reveals good agreement in metallicity, [Fe/H]. When restricting to GCs without prominent blue horizontal branch populations, we also find good agreement with literature values for ages, [Mg/Fe], [Si/Fe], and [Ti/Fe].

  15. The curvature calculation mechanism based on simple cell model.

    Science.gov (United States)

    Yu, Haiyang; Fan, Xingyu; Song, Aiqi

    2017-07-20

    A conclusion has not yet been reached on how exactly the human visual system detects curvature. This paper demonstrates how orientation-selective simple cells can be used to construct curvature-detecting neural units. Through fixed arrangements, multiple plurality cells were constructed to simulate curvature cells with a proportional output to their curvature. In addition, this paper offers a solution to the problem of narrow detection range under fixed resolution by selecting an output value under multiple resolution. Curvature cells can be treated as concrete models of an end-stopped mechanism, and they can be used to further understand "curvature-selective" characteristics and to explain basic psychophysical findings and perceptual phenomena in current studies.

  16. Accurate modeling of defects in graphene transport calculations

    Science.gov (United States)

    Linhart, Lukas; Burgdörfer, Joachim; Libisch, Florian

    2018-01-01

    We present an approach for embedding defect structures modeled by density functional theory into large-scale tight-binding simulations. We extract local tight-binding parameters for the vicinity of the defect site using Wannier functions. In the transition region between the bulk lattice and the defect the tight-binding parameters are continuously adjusted to approach the bulk limit far away from the defect. This embedding approach allows for an accurate high-level treatment of the defect orbitals using as many as ten nearest neighbors while keeping a small number of nearest neighbors in the bulk to render the overall computational cost reasonable. As an example of our approach, we consider an extended graphene lattice decorated with Stone-Wales defects, flower defects, double vacancies, or silicon substitutes. We predict distinct scattering patterns mirroring the defect symmetries and magnitude that should be experimentally accessible.
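
The transition-region idea can be sketched on a 1-D toy chain: hopping integrals take a "defect" value near the defect site and are smoothly interpolated toward the bulk value with distance. The geometry, hopping values, and decay length below are assumptions for illustration; the actual work uses Wannier-derived parameters on a graphene lattice:

```python
import numpy as np

N = 21                           # sites in the toy chain
t_bulk, t_defect = -2.7, -3.2    # eV, illustrative hopping integrals
center = N // 2                  # defect location

def hopping(i):
    """Hopping for the bond between sites i and i+1, interpolated from
    the defect value toward the bulk value with distance."""
    d = abs(i + 0.5 - center)    # bond-midpoint distance from the defect
    w = np.exp(-d / 3.0)         # weight -> 0 far into the bulk
    return w * t_defect + (1 - w) * t_bulk

# Assemble the tight-binding Hamiltonian and diagonalize
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = hopping(i)

energies = np.linalg.eigvalsh(H)
print(energies[:3].round(3))
```

The same blending, applied to Wannier-extracted multi-neighbor parameters in 2-D, is what lets a DFT-quality defect description sit inside a large cheap tight-binding lattice.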

  17. Automated detection of healthcare associated infections: external validation and updating of a model for surveillance of drain-related meningitis.

    Directory of Open Access Journals (Sweden)

    Maaike S M van Mourik

    OBJECTIVE: Automated surveillance of healthcare-associated infections can improve the efficiency and reliability of surveillance. The aim was to validate and update a previously developed multivariable prediction model for the detection of drain-related meningitis (DRM). DESIGN: Retrospective cohort study using traditional surveillance by infection control professionals as the reference standard. PATIENTS: Patients receiving an external cerebrospinal fluid drain, either ventricular (EVD) or lumbar (ELD), in a tertiary medical care center. Children, patients with simultaneous drains, <1 day of follow-up or pre-existing meningitis were excluded, leaving 105 patients in the validation set (2010-2011) and 653 in the updating set (2004-2011). METHODS: For validation, the original model was applied. Discrimination, classification and calibration were assessed. For updating, data from all available years were used to optimally re-estimate coefficients and determine whether extension with new predictors was necessary. The updated model was validated and adjusted for optimism (overfitting) using bootstrapping techniques. RESULTS: In model validation, the rate of DRM was 17.4/1000 days at risk. All cases were detected by the model. The area under the ROC curve was 0.951. The positive predictive value was 58.8% (95% CI 40.7-75.4) and calibration was good. The revised model also includes Gram stain results. The area under the ROC curve after correction for optimism was 0.963 (95% CI 0.953-0.974). Group-level prediction was adequate. CONCLUSIONS: The previously developed multivariable prediction model maintains discriminatory power and calibration in an independent patient population. The updated model incorporates all available data and performs well, also after elaborate adjustment for optimism.
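
The discrimination and predictive-value metrics reported above can be computed directly from model probabilities and case labels. A sketch on made-up data (rank-based AUC and PPV at an assumed 0.5 flagging threshold, not the study's dataset or cutoff):

```python
import numpy as np

# Toy surveillance data: 1 = DRM case, p = model-predicted probability
y = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 0])
p = np.array([.1, .2, .15, .45, .8, .9, .35, .95, .6, .4])

# AUC via the rank (Mann-Whitney) formulation
order = np.argsort(p)
ranks = np.empty_like(order, dtype=float)
ranks[order] = np.arange(1, len(p) + 1)
n1, n0 = y.sum(), (1 - y).sum()
auc = (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Positive predictive value at a 0.5 classification threshold
flagged = p >= 0.5
ppv = (y[flagged] == 1).sum() / flagged.sum()
print(auc, ppv)
```

AUC summarizes discrimination independent of threshold, while PPV depends on the chosen flagging cutoff, which is why the study reports both.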

  18. User Guide for GoldSim Model to Calculate PA/CA Doses and Limits

    International Nuclear Information System (INIS)

    Smith, F.

    2016-01-01

    A model to calculate doses for solid waste disposal at the Savannah River Site (SRS) and corresponding disposal limits has been developed using the GoldSim commercial software. The model implements the dose calculations documented in SRNL-STI-2015-00056, Rev. 0, "Dose Calculation Methodology and Data for Solid Waste Performance Assessment (PA) and Composite Analysis (CA) at the Savannah River Site".

  19. User Guide for GoldSim Model to Calculate PA/CA Doses and Limits

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-10-31

    A model to calculate doses for solid waste disposal at the Savannah River Site (SRS) and corresponding disposal limits has been developed using the GoldSim commercial software. The model implements the dose calculations documented in SRNL-STI-2015-00056, Rev. 0 “Dose Calculation Methodology and Data for Solid Waste Performance Assessment (PA) and Composite Analysis (CA) at the Savannah River Site”.

  20. Finite element model updating of a tied-arch bridge using Douglas-Reid method and Rosenbrock optimization algorithm

    Directory of Open Access Journals (Sweden)

    Tobia Zordan

    2014-08-01

    Condition assessment of bridges has become increasingly important. In order to accurately simulate the real bridge, the finite element (FE) model updating method is often applied. This paper presents the calibration of the FE model of a reinforced concrete tied-arch bridge using the Douglas-Reid method in combination with the Rosenbrock optimization algorithm. Based on original drawings and a topographic survey, an FE model of the investigated bridge is created. Eight global modes of vibration of the bridge are identified by ambient vibration tests and the frequency domain decomposition technique. Then, eight structural parameters are selected for the FE model updating procedure through sensitivity analysis. Finally, the optimal structural parameters are identified using the Rosenbrock optimization algorithm. Results show that although the identified parameters lead to a perfect agreement between approximate and measured natural frequencies, they may not be the optimal variables which minimize the differences between numerical and experimental modal data. However, a satisfactory agreement between them is still presented. Hence, FE model updating based on the Douglas-Reid method and the Rosenbrock optimization algorithm could be used as an alternative to other, more complex updating procedures.

  1. Investigation of a model to verify software for 3-D static force calculation

    OpenAIRE

    Takahashi, Norio; Nakata, Takayoshi; Morishige, H.

    1994-01-01

    Requirements for a model to verify software for 3-D static force calculation are examined, and a 3-D model for static force calculation is proposed. Some factors affecting the analysis and experiments are investigated in order to obtain accurate and reproducible results.

  2. A model for calculating expected performance of the Apollo unified S-band (USB) communication system

    Science.gov (United States)

    Schroeder, N. W.

    1971-01-01

    A model for calculating the expected performance of the Apollo unified S-band (USB) communication system is presented. The general organization of the Apollo USB is described. The mathematical model is reviewed and the computer program for implementation of the calculations is included.

  3. Cost calculation model concerning small-scale production of chips and split firewood

    International Nuclear Information System (INIS)

    Ryynaenen, S.; Naett, H.; Valkonen, J.

    1995-01-01

    The TTS-Institute's Forestry Department has developed a computer-based cost calculation model for the production of wood chips and split firewood. This development work was carried out in conjunction with the nation-wide BIOENERGY research programme. The calculation model eases and speeds up the calculation of unit costs and resource needs in harvesting systems for wood chips and split firewood. The model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit costs of the system as a whole. The undertaking was composed of the following parts: clarification and modification of productivity bases for application in the model as mathematical models, clarification of machine and device cost bases, design of the structure and functions of the calculation model, construction and testing of the model's 0-version, model calculations concerning typical chains, review of calculation bases, and charting of development needs focusing on the model. The calculation model was developed to serve research needs, but with further development it could be useful as a tool in forestry and agricultural extension work, related schools and colleges, and in the hands of firewood producers. (author)
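
The kind of unit-cost calculation such a model performs can be sketched in a few lines: hourly machine costs divided by productivity give cost per produced unit, summed over the harvesting chain. The figures and chain composition below are illustrative assumptions, not values from the TTS model:

```python
def unit_cost(fixed_cost_per_h, operating_cost_per_h, productivity_m3_per_h):
    """Cost (currency units) per solid cubic metre for one machine in the chain."""
    return (fixed_cost_per_h + operating_cost_per_h) / productivity_m3_per_h

chain = [  # (fixed cost/h, operating cost/h, productivity m3/h) - made-up values
    (35.0, 35.0, 8.0),   # chipper
    (20.0, 25.0, 9.0),   # tractor transport
]
total = sum(unit_cost(*machine) for machine in chain)
print(round(total, 2))
```

Because productivity sits in the denominator, the structure makes it easy to see how a productivity change in one machine propagates to the unit cost of the whole system, which is what the model is designed to explore.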

  4. Experimental liver fibrosis research: update on animal models, legal issues and translational aspects

    Science.gov (United States)

    2013-01-01

    Liver fibrosis is defined as excessive extracellular matrix deposition and is based on complex interactions between matrix-producing hepatic stellate cells and an abundance of liver-resident and infiltrating cells. Investigation of these processes requires in vitro and in vivo experimental work in animals. However, the use of animals in translational research will be increasingly challenged, at least in countries of the European Union, because of the adoption of new animal welfare rules in 2013. These rules will create an urgent need for optimized standard operating procedures regarding animal experimentation and improved international communication in the liver fibrosis community. This review gives an update on current animal models, techniques and underlying pathomechanisms with the aim of fostering a critical discussion of the limitations and potential of up-to-date animal experimentation. We discuss potential complications in experimental liver fibrosis and provide examples of how the findings of studies in which these models are used can be translated to human disease and therapy. In this review, we want to motivate the international community to design more standardized animal models which might help to address the legally requested replacement, refinement and reduction of animals in fibrosis research. PMID:24274743

  5. An updated conceptual model of Delta Smelt biology: Our evolving understanding of an estuarine fish

    Science.gov (United States)

    Baxter, Randy; Brown, Larry R.; Castillo, Gonzalo; Conrad, Louise; Culberson, Steven D.; Dekar, Matthew P.; Dekar, Melissa; Feyrer, Frederick; Hunt, Thaddeus; Jones, Kristopher; Kirsch, Joseph; Mueller-Solger, Anke; Nobriga, Matthew; Slater, Steven B.; Sommer, Ted; Souza, Kelly; Erickson, Gregg; Fong, Stephanie; Gehrts, Karen; Grimaldo, Lenny; Herbold, Bruce

    2015-01-01

    The main purpose of this report is to provide an up-to-date assessment and conceptual model of factors affecting Delta Smelt (Hypomesus transpacificus) throughout its primarily annual life cycle and to demonstrate how this conceptual model can be used for scientific and management purposes. The Delta Smelt is a small estuarine fish that only occurs in the San Francisco Estuary. Once abundant, it is now rare and has been protected under the federal and California Endangered Species Acts since 1993. The Delta Smelt listing was related to a step decline in the early 1980s; however, population abundance decreased even further with the onset of the “pelagic organism decline” (POD) around 2002. A substantial, albeit short-lived, increase in abundance of all life stages in 2011 showed that the Delta Smelt population can still rebound when conditions are favorable for spawning, growth, and survival. In this report, we update previous conceptual models for Delta Smelt to reflect new data and information since the release of the last synthesis report about the POD by the Interagency Ecological Program for the San Francisco Estuary (IEP) in 2010. Specific objectives include:

  6. Inverse calculation of biochemical oxygen demand models based on time domain for the tidal Foshan River.

    Science.gov (United States)

    Er, Li; Xiangying, Zeng

    2014-01-01

    To simulate the variation of biochemical oxygen demand (BOD) in the tidal Foshan River, inverse calculations based on time domain are applied to the longitudinal dispersion coefficient (E(x)) and BOD decay rate (K(x)) in the BOD model for the tidal Foshan River. The derivatives of the inverse calculation have been respectively established on the basis of different flow directions in the tidal river. The results of this paper indicate that the calculated values of BOD based on the inverse calculation developed for the tidal Foshan River match the measured ones well. According to the calibration and verification of the inversely calculated BOD models, K(x) is more sensitive to the models than E(x) and different data sets of E(x) and K(x) hardly affect the precision of the models.
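
The inverse-calculation idea can be sketched for the decay term alone: with synthetic first-order data BOD(t) = BOD0·exp(−K·t), K is recovered from "measured" concentrations by a log-linear least-squares fit. The numbers are hypothetical; the paper's model additionally estimates the dispersion coefficient E(x) and treats both tidal flow directions:

```python
import numpy as np

# Synthetic "observations" of BOD decline (mg/L) over travel time (days)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
K_true, BOD0 = 0.23, 12.0                     # 1/day, mg/L (assumed)
rng = np.random.default_rng(1)
obs = BOD0 * np.exp(-K_true * t) * np.exp(rng.normal(0, 0.01, t.size))

# Inverse step: log-transform linearizes the model, then fit a line
slope, intercept = np.polyfit(t, np.log(obs), 1)
K_est, BOD0_est = -slope, np.exp(intercept)
print(round(K_est, 3), round(BOD0_est, 2))
```

The sensitivity noted in the abstract follows from this structure: the fitted concentrations respond strongly to K, so small data errors map into K more directly than into the dispersion term.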

  7. Reference Models of Information Systems Constructed with the use of Technologies of Cloud Calculations

    Directory of Open Access Journals (Sweden)

    Darya Sergeevna Simonenkova

    2013-09-01

    The subject of the research is the analysis of various models of information systems constructed with the use of technologies of cloud calculations. This analysis is required for constructing a new reference model, which will be used to develop a security-threat model.

  8. Updated model for radionuclide transport in the near-surface till at Forsmark - Implementation of decay chains and sensitivity analyses

    Energy Technology Data Exchange (ETDEWEB)

    Pique, Angels; Pekala, Marek; Molinero, Jorge; Duro, Lara; Trinchero, Paolo; Vries, Luis Manuel de [Amphos 21 Consulting S.L., Barcelona (Spain)

    2013-02-15

    The Forsmark area has been proposed for potential siting of a deep underground (geological) repository for radioactive waste in Sweden. Safety assessment of the repository requires radionuclide transport from the disposal depth to recipients at the surface to be studied quantitatively. The near-surface quaternary deposits at Forsmark are considered a pathway for potential discharge of radioactivity from the underground facility to the biosphere, thus radionuclide transport in this system has been extensively investigated over the last years. The most recent work of Pique and co-workers (reported in SKB report R-10-30) demonstrated that in case of release of radioactivity the near-surface sedimentary system at Forsmark would act as an important geochemical barrier, retarding the transport of reactive radionuclides through a combination of retention processes. In this report the conceptual model of radionuclide transport in the quaternary till at Forsmark has been updated, by considering recent revisions regarding the near-surface lithology. In addition, the impact of important conceptual assumptions made in the model has been evaluated through a series of deterministic and probabilistic (Monte Carlo) sensitivity calculations. The sensitivity study focused on the following effects: 1. Radioactive decay of Cs-135, Ni-59, Th-230 and Ra-226 and effects on their transport. 2. Variability in key geochemical parameters, such as the composition of the deep groundwater, availability of sorbing materials in the till, and mineral equilibria. 3. Variability in hydraulic parameters, such as the definition of hydraulic boundaries, and values of hydraulic conductivity, dispersivity and the deep groundwater inflow rate.
The overarching conclusion from this study is that the current implementation of the model is robust (the model is largely insensitive to variations in the parameters within the studied ranges) and conservative (the Base Case calculations have a
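    The decay-chain handling described above can be illustrated with the analytic Bateman solution for a two-member chain such as {sup 230}Th → {sup 226}Ra. This is a generic sketch using the well-known half-lives of those nuclides, not the report's actual transport implementation.

```python
import math

# Half-lives in years (well-known values for 230Th and 226Ra).
T_HALF_TH230 = 75_380.0
T_HALF_RA226 = 1_600.0

def bateman_two_member(n1_0, t, t_half_1, t_half_2):
    """Analytic Bateman solution for the chain N1 -> N2 -> (decays away),
    starting from inventory n1_0 in the parent and none in the daughter."""
    l1 = math.log(2.0) / t_half_1
    l2 = math.log(2.0) / t_half_2
    n1 = n1_0 * math.exp(-l1 * t)
    n2 = n1_0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
    return n1, n2

# Parent and daughter inventories after 10,000 years, relative units.
n1, n2 = bateman_two_member(1.0, 10_000.0, T_HALF_TH230, T_HALF_RA226)
```

Because the daughter's half-life is much shorter than the parent's, the daughter inventory stays small and tracks the parent (secular-equilibrium behaviour), which is the kind of coupling a decay-chain transport model must reproduce.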

  9. Fena Valley Reservoir watershed and water-balance model updates and expansion of watershed modeling to southern Guam

    Science.gov (United States)

    Rosa, Sarah N.; Hay, Lauren E.

    2017-12-01

    In 2014, the U.S. Geological Survey, in cooperation with the U.S. Department of Defense’s Strategic Environmental Research and Development Program, initiated a project to evaluate the potential impacts of projected climate change on Department of Defense installations that rely on Guam’s water resources. A major task of that project was to develop a watershed model of southern Guam and a water-balance model for the Fena Valley Reservoir. The southern Guam watershed model provides a physically based tool to estimate surface-water availability in southern Guam. The U.S. Geological Survey’s Precipitation Runoff Modeling System, PRMS-IV, was used to construct the watershed model. The PRMS-IV code simulates different parts of the hydrologic cycle based on a set of user-defined modules. The southern Guam watershed model was constructed by updating a watershed model for the Fena Valley watersheds, and expanding the modeled area to include all of southern Guam. The Fena Valley watershed model was combined with a previously developed, but recently updated and recalibrated, Fena Valley Reservoir water-balance model. Two important surface-water resources for the U.S. Navy and the citizens of Guam were modeled in this study; the extended model now includes the Ugum River watershed and improves upon the previous model of the Fena Valley watersheds. Surface water from the Ugum River watershed is diverted and treated for drinking water, and the Fena Valley watersheds feed the largest surface-water reservoir on Guam. The southern Guam watershed model performed “very good,” according to the criteria of Moriasi and others (2007), in the Ugum River watershed above Talofofo Falls with monthly Nash-Sutcliffe efficiency statistic values of 0.97 for the calibration period and 0.93 for the verification period (a value of 1.0 represents perfect model fit). In the Fena Valley watershed, monthly simulated streamflow volumes from the watershed model compared reasonably well with the
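    The Nash-Sutcliffe efficiency statistic quoted above is straightforward to compute; a minimal sketch with made-up streamflow values (not the Guam data):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit, 0.0 means the
    model predicts no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Illustrative monthly streamflow volumes, observed vs. simulated.
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
sim = [1.1, 1.9, 3.2, 3.8, 5.1]
nse = nash_sutcliffe(obs, sim)  # close to 1.0 for a good fit
```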

  10. Avoiding drift related to linear analysis update with Lagrangian coordinate models

    Science.gov (United States)

    Wang, Yiguo; Counillon, Francois; Bertino, Laurent

    2015-04-01

    When applying data assimilation to Lagrangian-coordinate models, it is profitable to correct the model grid (position, volume) as well. In an isopycnal-coordinate ocean model, this information is provided by the layer thickness, which can be massless but must remain positive (a truncated Gaussian distribution). A linear Gaussian analysis does not guarantee positivity for such a variable. Methods have been proposed to handle this issue - e.g. post-processing, anamorphosis or resampling - but none ensures conservation of the mean, which is imperative in climate applications. Here, a framework is introduced to test a new method, which proceeds as follows. First, layers for which the analysis yields negative values are iteratively grouped with neighboring layers, resulting in a probability density function with a larger mean and smaller standard deviation, which prevents the appearance of negative values. Second, the analysis increments of the grouped layers are uniformly distributed, which prevents massless layers from becoming filled and vice versa. The new method is proven fully conservative with e.g. OI or 3DVAR, but a small drift remains with ensemble-based methods (e.g. EnKF, DEnKF, ...) during the update of the ensemble anomaly. However, the resulting drift with the latter is small (an order of magnitude smaller than with post-processing) and the increase in computational cost is moderate. The new method is demonstrated with a realistic application in the Norwegian Climate Prediction Model (NorCPM), which provides climate predictions by assimilating sea surface temperature with the Ensemble Kalman Filter in a fully coupled Earth system model (NorESM) with an isopycnal ocean model (MICOM). Over the 25-year analysis period, the new method does not impair the predictive skill of the system, corrects the artificial steric drift introduced by data assimilation, and provides estimates in good agreement with IPCC AR5.
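    The grouping-and-redistribution idea can be sketched in a toy form. This is one illustrative reading of the two steps described (and assumes the summed update of the whole column stays positive); it is not the NorCPM code.

```python
def conservative_update(thickness, increment):
    """Toy grouping scheme: apply analysis increments to layer
    thicknesses; any group of layers whose summed update would be
    negative is merged with its lower neighbour, and the grouped
    increment is then spread uniformly over the merged layers.
    The total column (sum of thicknesses) is conserved exactly."""
    groups = [[i] for i in range(len(thickness))]
    merged = True
    while merged:                      # merge until no group goes negative
        merged = False
        for k in range(len(groups) - 1):
            if sum(thickness[i] + increment[i] for i in groups[k]) < 0:
                groups[k:k + 2] = [groups[k] + groups[k + 1]]
                merged = True
                break
    out = list(thickness)
    for g in groups:
        total = sum(increment[i] for i in g)
        for i in g:
            out[i] = thickness[i] + total / len(g)   # uniform spread
    return out

thick = [1.0, 0.1, 2.0]
incr = [0.2, -0.5, 0.3]          # middle layer alone would go negative
new = conservative_update(thick, incr)
```

The middle layer (0.1 - 0.5 < 0) is merged with its neighbour, and the merged increment is shared, so no thickness goes negative while the column total is preserved.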

  11. Implementation of the neutronics model of HEXTRAN/HEXBU-3D into APROS for WWER calculations

    International Nuclear Information System (INIS)

    Rintala, J.

    2008-01-01

    A new three-dimensional nodal model for neutronics calculation is currently being implemented into APROS - the Advanced PROcess Simulation environment - to meet increasing accuracy requirements. The new model is based on the advanced nodal code HEXTRAN and its static version HEXBU-3D by VTT, Technical Research Centre of Finland. The new APROS is currently under a testing programme; a systematic validation will be performed later. In the first phase, the goal is to obtain a fully validated model for VVER-440 calculations, so all current test calculations are performed using the Loviisa NPP's VVER-440 model of APROS. In the future, the model is planned to be applied to calculations of VVER-1000 type reactors as well as to rectangular fuel geometry. The paper first outlines the general aspects of the method and then the current status of the implementation. Because the model is identical to those of HEXTRAN and HEXBU-3D, the test-calculation results are compared against the results of those codes. In the paper, results of two static test calculations are shown. The model already works well in static analyses. Only minor problems with the control assemblies of the VVER-440 type reactor remain, but the reasons are known and will be corrected in the near future. The dynamic characteristics of the model have so far been tested only empirically. (author)

  12. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2010

    Energy Technology Data Exchange (ETDEWEB)

    Rahmat Aryaeinejad; Douglas S. Crawford; Mark D. DeHart; George W. Griffith; D. Scott Lucas; Joseph W. Nielsen; David W. Nigg; James R. Parry; Jorge Navarro

    2010-09-01

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance and, to some extent, experiment management are obsolete, inconsistent with the state of modern nuclear engineering practice, and are becoming increasingly difficult to properly verify and validate (V&V). Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In 2009 the Idaho National Laboratory (INL) initiated a focused effort to address this situation through the introduction of modern high-fidelity computational software and protocols, with appropriate V&V, within the next 3-4 years via the ATR Core Modeling and Simulation and V&V Update (or “Core Modeling Update”) Project. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF).

  13. An Updated Geophysical Model for AMSR-E and SSMIS Brightness Temperature Simulations over Oceans

    Directory of Open Access Journals (Sweden)

    Elizaveta Zabolotskikh

    2014-03-01

    In this study, we considered the geophysical model for microwave brightness temperature (BT) simulation for the atmosphere-ocean system under non-precipitating conditions. The model is presented as a combination of atmospheric absorption and ocean emission models. We validated this model for two satellite instruments - the Advanced Microwave Sounding Radiometer-Earth Observing System (AMSR-E) onboard the Aqua satellite and the Special Sensor Microwave Imager/Sounder (SSMIS) onboard the F16 satellite of the Defense Meteorological Satellite Program (DMSP) series. We compared simulated BT values with satellite BT measurements for different combinations of various water vapor and oxygen absorption models and wind-induced ocean emission models. A dataset of clear-sky atmospheric and oceanic parameters, collocated in time and space with the satellite measurements, was used for the comparison. We found the model combination providing the least root mean square error between calculations and measurements; a single combination of models ensured the best results for all considered radiometric channels. We also obtained adjustments to the simulated BT values, as averaged differences between the model simulations and satellite measurements. These adjustments can be used in any research based on modeling data for removing model/calibration inconsistencies. We demonstrated the application of the model by developing a new algorithm for sea surface wind speed retrieval from AMSR-E data.
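    Selecting the best model combination by root mean square error, as described, reduces to a small search over candidate pairings; a schematic sketch with hypothetical model names and brightness-temperature values (not the study's data):

```python
import math

def rmse(pairs):
    """Root mean square error over (simulated, measured) pairs."""
    return math.sqrt(sum((sim - meas) ** 2 for sim, meas in pairs) / len(pairs))

# Hypothetical (absorption model, emission model) combinations, each with
# simulated-vs-measured BT pairs in kelvin for a set of match-ups.
candidates = {
    ("vapor_A", "wind_A"): [(210.3, 210.0), (151.2, 151.0)],
    ("vapor_A", "wind_B"): [(212.9, 210.0), (150.1, 151.0)],
    ("vapor_B", "wind_A"): [(209.1, 210.0), (152.4, 151.0)],
}

# The combination with the smallest RMSE is retained for all channels.
best = min(candidates, key=lambda k: rmse(candidates[k]))
```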

  14. Calculation of DC Arc Plasma Torch Voltage-Current Characteristics Based on Steenbeck Model

    International Nuclear Information System (INIS)

    Gnedenko, V.G.; Ivanov, A.A.; Pereslavtsev, A.V.; Tresviatsky, S.S.

    2006-01-01

    The work is devoted to the problem of determining plasma torch parameters and power source parameters (working voltage and current of the plasma torch) at the predesign stage. A sequence for calculating the voltage-current characteristics of a DC arc plasma torch is proposed. It is shown that the simple Steenbeck model of an arc discharge in a cylindrical channel makes it possible to carry out this calculation. The results of the calculation are confirmed by experiment
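    The abstract does not reproduce the Steenbeck derivation itself. As a minimal illustration of the falling voltage-current characteristic such a calculation yields, the classical Ayrton-type form V = a + b·l + (c + d·l)/I can be evaluated; the coefficients below are purely illustrative placeholders, not fitted torch values.

```python
def ayrton_voltage(current_a, arc_len_mm, a=20.0, b=10.0, c=20.0, d=10.0):
    """Arc voltage from an Ayrton-type relation V = a + b*l + (c + d*l)/I.
    Coefficients here are illustrative only; a real torch (or a Steenbeck
    channel-model calculation) supplies its own values."""
    return a + b * arc_len_mm + (c + d * arc_len_mm) / current_a

# The characteristic falls with current: higher current, lower voltage.
v_low = ayrton_voltage(5.0, 2.0)    # 5 A arc, 2 mm length
v_high = ayrton_voltage(20.0, 2.0)  # 20 A arc, same length
```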

  15. Database Structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    Gasco, C.; Anton, M. P.; Ampudia, J.

    2003-01-01

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a database. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology Group of CIEMAT (MARG) will be involved in new European projects, thus new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet, and c) the organization and structure of the database. (Author) 4 refs
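    One widely used 210Pb dating model that such macros typically implement is the CRS (constant rate of supply) model, in which the age of a horizon follows from the cumulative unsupported 210Pb inventory below it. A minimal sketch with illustrative inventories (not MARG's data):

```python
import math

LAMBDA_PB210 = math.log(2.0) / 22.3   # 210Pb decay constant, half-life ~22.3 y

def crs_ages(inventory_below):
    """CRS ages from the cumulative unsupported 210Pb inventory below
    each depth horizon (surface first): t(z) = (1/lambda)*ln(A_total/A(z))."""
    a_total = inventory_below[0]
    return [math.log(a_total / a) / LAMBDA_PB210 for a in inventory_below]

# Each halving of the remaining inventory adds one 210Pb half-life of age.
ages = crs_ages([1000.0, 500.0, 250.0])
```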

  16. Do lateral boundary condition update frequency and the resolution of the boundary data affect the regional model COSMO-CLM? A sensitivity study.

    Science.gov (United States)

    Pankatz, K.; Kerkweg, A.

    2014-12-01

    The work presented is part of the joint project "DecReg" ("Regional decadal predictability"), which is in turn part of the project "MiKlip" ("Decadal predictions"), an effort funded by the German Federal Ministry of Education and Research to improve decadal predictions on a global and regional scale. In regional climate modeling it is common to update the lateral boundary conditions (LBC) of the regional model every six hours. This is mainly because reference data sets like ERA are only available every six hours; additionally, for offline coupling procedures it would be too costly to store LBC data at higher temporal resolution for climate simulations. However, theoretically, the coupling frequency could be as high as the time step of the driving model. Meanwhile, it is unclear whether a more frequent update of the LBC has a significant effect on the climate in the domain of the regional climate model (RCM). This study uses the RCM COSMO-CLM/MESSy (Kerkweg and Jöckel, 2012) to couple COSMO-CLM offline to the GCM ECHAM5. One study examines a 30-year time slice experiment for three update frequencies of the LBC, namely six hours, one hour and six minutes. The evaluation of means, standard deviations and statistics of the climate in the regional domain shows only small deviations, some statistically significant though, in 2 m temperature, sea level pressure and precipitation. The second scope of the study assesses parameters linked to cyclone activity, which is affected by the LBC update frequency; differences in track density and strength are found when comparing the simulations. The second study examines the quality of decadal hind-casts of the decade 2001-2010 when the horizontal resolution of the driving model (T42, T63, T85, T106), from which the LBC are calculated, is altered. Two sets of simulations are evaluated.
For the first set of simulations, the GCM simulations are performed at different resolutions using the same boundary conditions for GHGs and SSTs, thus

  17. Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model

    International Nuclear Information System (INIS)

    Taylor, G. A.; Hiergesell, R. A.

    2013-01-01

    The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of the Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously - thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow
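    The decay and solid-fluid partitioning mechanisms compared above can be illustrated with a piston-flow toy model, where sorption retards transport (R = 1 + ρ_b·Kd/θ) and the nuclide decays over the travel time. All parameter values below are hypothetical; this is neither the GoldSim nor the PORFLOW implementation.

```python
import math

def retarded_arrival(c0, path_len_m, pore_velocity_m_per_y,
                     rho_b_kg_per_m3, kd_m3_per_kg, porosity, t_half_y):
    """1D piston-flow sketch: the retardation factor R = 1 + rho_b*Kd/theta
    slows a sorbing radionuclide, and first-order decay acts over the
    (retarded) travel time along the path."""
    r = 1.0 + rho_b_kg_per_m3 * kd_m3_per_kg / porosity
    t_travel = r * path_len_m / pore_velocity_m_per_y      # years
    lam = math.log(2.0) / t_half_y
    return c0 * math.exp(-lam * t_travel), r

# Hypothetical values: 10 m path, 1 m/y pore velocity, modest sorption,
# 1600 y half-life (226Ra-like).
c_out, r = retarded_arrival(1.0, 10.0, 1.0, 1800.0, 0.001, 0.3, 1600.0)
```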

  18. Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, G. A.; Hiergesell, R. A.

    2013-11-12

    The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of the Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously - thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow

  19. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhen [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Xia, Changliang [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Yan, Yan, E-mail: yanyan@tju.edu.cn [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Geng, Qiang [Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Shi, Tingna [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2017-08-01

    Highlights: • A hybrid analytical model is developed for field calculation of multilayer IPM machines. • The rotor magnetic field is calculated by the magnetic equivalent circuit method. • The field in the stator and air-gap is calculated by subdomain technique. • The magnetic scalar potential on rotor surface is modeled as trapezoidal distribution. - Abstract: Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of the multilayer IPM machines, coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower consumption of computational resources and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.
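    The MEC side of such a hybrid model amounts to solving a permeance network with Kirchhoff's law, exactly like a resistive circuit. A deliberately tiny sketch with one node and illustrative values (a real multilayer rotor MEC has many nodes and nonlinear bridge permeances):

```python
def mec_gap_flux(f_magnet, p_magnet, p_gap, p_leak):
    """One-node magnetic equivalent circuit: a magnet (MMF source
    f_magnet behind its internal permeance p_magnet) drives two parallel
    paths, the air-gap (p_gap) and a leakage/bridge path (p_leak).
    Kirchhoff's law for flux at the node gives the node potential u:
        (f_magnet - u)*p_magnet = u*p_gap + u*p_leak
    """
    u = f_magnet * p_magnet / (p_magnet + p_gap + p_leak)
    return u * p_gap   # flux delivered to the air-gap, Wb

# Illustrative values: 1000 A-turns of MMF, permeances in Wb/A-turn.
phi = mec_gap_flux(f_magnet=1000.0, p_magnet=2e-6, p_gap=1e-6, p_leak=1e-6)
```

In the full hybrid model, the node potentials on the rotor surface become the boundary condition coupling the MEC to the subdomain field solution.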

  20. COMPARISON OF VIRTUAL FIELDS METHOD, PARALLEL NETWORK MATERIAL MODEL AND FINITE ELEMENT UPDATING FOR MATERIAL PARAMETER DETERMINATION

    Directory of Open Access Journals (Sweden)

    Florian Dirisamer

    2016-12-01

    Extracting material parameters from test specimens is very cost- and time-intensive, especially for viscoelastic material models, where the parameters depend on time (frequency), temperature and environmental conditions. Therefore, three different methods for extracting these parameters were tested: firstly, digital image correlation combined with the virtual fields method; secondly, a parallel network material model; and thirdly, finite element updating. These three methods are presented and their results are compared in terms of accuracy and experimental effort.
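    Finite element updating, the third method, can be reduced to its essence with a one-parameter example: adjust a stiffness until the calculated natural frequency matches the measured one. This is an illustrative single-degree-of-freedom surrogate, not the paper's viscoelastic model.

```python
import math

def update_stiffness(f_measured_hz, mass_kg, k_lo, k_hi, tol=1e-8):
    """Model updating in miniature: find the stiffness k (N/m) of a
    1-DOF oscillator so that the calculated natural frequency
    f = sqrt(k/m)/(2*pi) matches the measured one, by bisection
    (f is monotonic in k, so bisection is sufficient here)."""
    def f_calc(k):
        return math.sqrt(k / mass_kg) / (2.0 * math.pi)
    while k_hi - k_lo > tol:
        k_mid = 0.5 * (k_lo + k_hi)
        if f_calc(k_mid) < f_measured_hz:
            k_lo = k_mid
        else:
            k_hi = k_mid
    return 0.5 * (k_lo + k_hi)

# "Measured" 2 Hz mode on a 1 kg mass; search k in [1, 1000] N/m.
k = update_stiffness(f_measured_hz=2.0, mass_kg=1.0, k_lo=1.0, k_hi=1000.0)
```

Real FE updating does the same thing with many parameters and many modal quantities at once, via an optimization loop around the FE solver.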

  1. HOMCOS: an updated server to search and model complex 3D structures.

    Science.gov (United States)

    Kawabata, Takeshi

    2016-12-01

    The HOMCOS server ( http://homcos.pdbj.org ) was updated for both searching and modeling the 3D complexes for all molecules in the PDB. As compared to the previous HOMCOS server, the current server targets all of the molecules in the PDB including proteins, nucleic acids, small compounds and metal ions. Their binding relationships are stored in the database. Five services are available for users. For the services "Modeling a Homo Protein Multimer" and "Modeling a Hetero Protein Multimer", a user can input one or two proteins as the queries, while for the service "Protein-Compound Complex", a user can input one chemical compound and one protein. The server searches similar molecules by BLAST and KCOMBU. Based on each similar complex found, a simple sequence-replaced model is quickly generated by replacing the residue names and numbers with those of the query protein. A target compound is flexibly superimposed onto the template compound using the program fkcombu. If monomeric 3D structures are input as the query, then template-based docking can be performed. For the service "Searching Contact Molecules for a Query Protein", a user inputs one protein sequence as the query, and then the server searches for its homologous proteins in PDB and summarizes their contacting molecules as the predicted contacting molecules. The results are summarized in "Summary Bars" or "Site Table" display. The latter shows the results as a one-site-one-row table, which is useful for annotating the effects of mutations. The service "Searching Contact Molecules for a Query Compound" is also available.

  2. An updated PREDICT breast cancer prognostication and treatment benefit prediction model with independent validation.

    Science.gov (United States)

    Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P

    2017-05-22

    PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status, resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear, with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets.
However, the calibration of v2 improved over v1 in patients diagnosed under the age

  3. Calculation Method of Kinetic Constants for the Mathematical Model of Peat Pyrolysis

    Directory of Open Access Journals (Sweden)

    Plakhova Tatyana

    2014-01-01

    The relevance of the work is related to the need to simplify the calculation of kinetic constants for the mathematical model of peat pyrolysis. Transformations of the Arrhenius-law formula are carried out, and the degree of conversion is expressed in terms of the mass change of the sample. The formulas obtained help to calculate the kinetic constants for any type of solid organic fuel
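    The kind of calculation the simplified formulas support can be sketched as follows: recover the activation energy and pre-exponential factor from rate constants at two temperatures via the Arrhenius law, and express the degree of conversion from sample mass. The numbers are illustrative, not the paper's peat data.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_params(k1, t1, k2, t2):
    """Activation energy E (J/mol) and pre-exponential factor A from two
    rate constants k1, k2 measured at temperatures t1, t2 (K), using
    k = A*exp(-E/(R*T))."""
    e_act = R * math.log(k2 / k1) / (1.0 / t1 - 1.0 / t2)
    a = k1 * math.exp(e_act / (R * t1))
    return e_act, a

def conversion(m0, m, m_final):
    """Degree of conversion from sample mass: alpha = (m0-m)/(m0-m_final)."""
    return (m0 - m) / (m0 - m_final)

# A tenfold rate increase between 500 K and 550 K (illustrative values).
e_act, a = arrhenius_params(k1=1e-3, t1=500.0, k2=1e-2, t2=550.0)
```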

  4. FEM Updating of the Heritage Court Building Structure

    DEFF Research Database (Denmark)

    Ventura, C. E.; Brincker, Rune; Dascotte, E.

    2001-01-01

    This paper describes results of a model updating study conducted on a 15-storey reinforced concrete shear core building. The output-only modal identification results obtained from ambient vibration measurements of the building were used to update a finite element model of the structure. The starting model of the structure was developed from the information provided in the design documentation of the building. Different parameters of the model were then modified using an automated procedure to improve the correlation between measured and calculated modal parameters. Careful attention was placed on the selection of the parameters to be modified by the updating software in order to ensure that the necessary changes to the model were realistic and physically realisable and meaningful. The paper highlights the model updating process and provides an assessment of the usefulness of using ...
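    A standard measure of the correlation between measured and calculated mode shapes in such updating studies is the Modal Assurance Criterion (MAC); a minimal sketch (generic formula, not this study's software):

```python
def mac(phi_test, phi_fem):
    """Modal Assurance Criterion between a measured and a calculated
    (real-valued) mode shape: 1.0 means perfectly correlated up to
    scaling, 0.0 means orthogonal shapes."""
    num = sum(a * b for a, b in zip(phi_test, phi_fem)) ** 2
    den = sum(a * a for a in phi_test) * sum(b * b for b in phi_fem)
    return num / den

# Identical shapes up to scale give MAC = 1.
m = mac([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

During updating, parameters are adjusted until MAC values (and frequency errors) between test and model improve across the identified modes.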

  5. Modeling for Dose Rate Calculation of the External Exposure to Gamma Emitters in Soil

    International Nuclear Information System (INIS)

    Allam, K. A.; El-Mongy, S. A.; El-Tahawy, M. S.; Mohsen, M. A.

    2004-01-01

    Based on the model proposed and developed in the Ph.D. thesis of the first author of this work, dose rate conversion factors (absorbed dose rate in air per unit specific activity of soil, in nGy·h⁻¹ per Bq·kg⁻¹) are calculated 1 m above the ground for photon emitters of natural radionuclides uniformly distributed in the soil. This new and simple dose rate calculation software was used to calculate the dose rate in air 1 m above the ground, and the results were compared with those obtained by five different groups. Although the developed model is extremely simple, the results of calculations based on it show excellent agreement with those obtained by the above-mentioned models, especially the one adopted by UNSCEAR. (authors)
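    A calculation of this type combines dose-rate conversion coefficients with measured specific activities. The sketch below uses the published UNSCEAR-style coefficients (0.462, 0.604 and 0.0417 nGy·h⁻¹ per Bq·kg⁻¹ for the 238U series, 232Th series and 40K respectively) with illustrative soil activities; it is not the thesis model itself.

```python
# Dose-rate conversion coefficients, nGy/h per Bq/kg, for natural
# radionuclides uniformly distributed in soil, evaluated 1 m above ground
# (UNSCEAR-style values).
COEFF = {"U238_series": 0.462, "Th232_series": 0.604, "K40": 0.0417}

def absorbed_dose_rate(activity_bq_per_kg):
    """Absorbed dose rate in air (nGy/h) from soil specific activities."""
    return sum(COEFF[n] * a for n, a in activity_bq_per_kg.items())

# Illustrative soil activities (Bq/kg).
d = absorbed_dose_rate({"U238_series": 35.0, "Th232_series": 30.0, "K40": 400.0})
```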

  6. Model to Calculate the Effectiveness of an Airborne Jammer on Analog Communications

    National Research Council Canada - National Science Library

    Vingson, Narciso A., Jr; Muhammad, Vaqar

    2005-01-01

    The objective of this study is to develop a statistical model to calculate the effectiveness of an airborne jammer on analog communication and broadcast receivers, such as AM and FM Broadcast Radio...
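    A core quantity in any jammer-effectiveness model is the jamming-to-signal ratio at the victim receiver. A free-space sketch with illustrative geometry (not the study's statistical model, which must also account for antenna gains, bandwidths and propagation effects):

```python
import math

def j_over_s_db(erp_jammer_w, erp_signal_w, d_jammer_m, d_signal_m):
    """Free-space jamming-to-signal power ratio at the receiver, in dB.
    Each received power scales as ERP/d^2 (isotropic receive gain and
    equal frequency assumed), so
        J/S = 10*log10(Pj/Ps) + 20*log10(d_signal/d_jammer)."""
    return (10.0 * math.log10(erp_jammer_w / erp_signal_w)
            + 20.0 * math.log10(d_signal_m / d_jammer_m))

# A 100 W jammer at 10 km vs. a 1 kW broadcast transmitter at 50 km:
# the range advantage outweighs the power deficit.
js = j_over_s_db(erp_jammer_w=100.0, erp_signal_w=1000.0,
                 d_jammer_m=10_000.0, d_signal_m=50_000.0)
```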

  7. On thermal vibration effects in diffusion model calculations of blocking dips

    International Nuclear Information System (INIS)

    Fuschini, E.; Ugozzoni, A.

    1983-01-01

    In the framework of the diffusion model, a method for calculating blocking dips is suggested that takes into account thermal vibrations of the crystal lattice. Results of calculations of the diffusion factor and the transverse energy distribution, taking into account scattering of the channeled particles by thermal vibrations of the lattice nuclei, are presented. Calculations are performed for α-particles with an energy of 2.12 MeV at 300 K scattered by an Al crystal. It is shown that calculations performed according to the above method prove the necessity of taking multiple-scattering effects into account under blocking conditions

  8. An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.

    Directory of Open Access Journals (Sweden)

    P Martin Sander

    Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were oviparity as well as no mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.

  9. An evolutionary cascade model for sauropod dinosaur gigantism--overview, update and tests.

    Science.gov (United States)

    Sander, P Martin

    2013-01-01

    Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other very recent published evidence. The model consists of five separate evolutionary cascades ("Reproduction", "Feeding", "Head and neck", "Avian-style lung", and "Metabolism"). Each cascade starts with observed or inferred basal traits that either may be plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait "Very high body mass". Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were oviparity as well as no mastication of food. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size.

  10. An Evolutionary Cascade Model for Sauropod Dinosaur Gigantism - Overview, Update and Tests

    Science.gov (United States)

    Sander, P. Martin

    2013-01-01

    Sauropod dinosaurs are a group of herbivorous dinosaurs which exceeded all other terrestrial vertebrates in mean and maximal body size. Sauropod dinosaurs were also the most successful and long-lived herbivorous tetrapod clade, but no abiological factors such as global environmental parameters conducive to their gigantism can be identified. These facts justify major efforts by evolutionary biologists and paleontologists to understand sauropods as living animals and to explain their evolutionary success and uniquely gigantic body size. Contributions to this research program have come from many fields and can be synthesized into a biological evolutionary cascade model of sauropod dinosaur gigantism (sauropod gigantism ECM). This review focuses on the sauropod gigantism ECM, providing an updated version based on the contributions to the PLoS ONE sauropod gigantism collection and on other recently published evidence. The model consists of five separate evolutionary cascades (“Reproduction”, “Feeding”, “Head and neck”, “Avian-style lung”, and “Metabolism”). Each cascade starts with observed or inferred basal traits that may be either plesiomorphic or derived at the level of Sauropoda. Each trait confers hypothetical selective advantages which permit the evolution of the next trait. Feedback loops in the ECM consist of selective advantages originating from traits higher in the cascades but affecting lower traits. All cascades end in the trait “Very high body mass”. Each cascade is linked to at least one other cascade. Important plesiomorphic traits of sauropod dinosaurs that entered the model were oviparity and the absence of mastication. Important evolutionary innovations (derived traits) were an avian-style respiratory system and an elevated basal metabolic rate. Comparison with other tetrapod lineages identifies factors limiting body size. PMID:24205267

  11. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2013

    Energy Technology Data Exchange (ETDEWEB)

    Nigg, David W. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2013-09-01

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for effective application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF).

  12. Updated comparison of groundwater flow model results and isotopic data in the Leon Valley, Mexico

    Science.gov (United States)

    Hernandez-Garcia, G. D.

    2015-12-01

    Northwest of Mexico City, the study area is located in the State of Guanajuato. The Leon Valley meets its water demand, estimated at 20.6 cubic meters per second, from groundwater. The constant increase of population and economic activities in the region, mainly in cities and automobile factories, has driven a corresponding growth in water needs. The associated extraction rate has produced an average water-level decline of approximately 1.0 m per year over the past two decades. This suggests that the present management of the groundwater should be reviewed. Management of groundwater in the study area involves the possibility of environmental impacts caused by extraction. With this vital resource under stress, studying its hydrogeological functioning becomes necessary to achieve scientifically based management of groundwater in the valley. This research was based on the analysis and integration of existing information and of field data generated by the authors. Building on updated knowledge of the geological structure of the area, the hydraulic parameters, and the delta-deuterium and delta-oxygen-18 composition, this research presents new results. This information was analyzed by applying a groundwater flow model with particle tracking: the model yields travel times and flow paths consistent with those derived from the isotopic data.
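
For context on the quantity being compared, a minimal sketch of an advective travel-time estimate via Darcy's law, the kind of figure on which a particle-tracking model and isotopic data can be checked against each other. All parameter values are illustrative, not the Leon Valley's.

```python
def travel_time_years(path_length_m, conductivity_m_per_day, gradient, effective_porosity):
    """Seepage velocity v = K * i / n_e (Darcy); travel time t = L / v, in years."""
    seepage_velocity = conductivity_m_per_day * gradient / effective_porosity
    return path_length_m / seepage_velocity / 365.25

# a 10 km flow path in a moderately permeable aquifer (illustrative values)
t = travel_time_years(10_000.0, conductivity_m_per_day=5.0,
                      gradient=0.002, effective_porosity=0.15)
```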

  13. Life cycle reliability assessment of new products—A Bayesian model updating approach

    International Nuclear Information System (INIS)

    Peng, Weiwen; Huang, Hong-Zhong; Li, Yanfeng; Zuo, Ming J.; Xie, Min

    2013-01-01

    The rapidly increasing pace and continuously evolving reliability requirements of new products have made life cycle reliability assessment of new products an imperative yet difficult task. While much work has been done to separately estimate the reliability of new products in specific stages, a gap exists in carrying out life cycle reliability assessment throughout all life cycle stages. We present a Bayesian model updating approach (BMUA) for life cycle reliability assessment of new products. Novel features of this approach are the development of Bayesian information toolkits by separately including a “reliability improvement factor” and an “information fusion factor”, which allow the integration of subjective information in a specific life cycle stage and the transition of integrated information between adjacent life cycle stages. They lead to the unique characteristic of the BMUA that information generated throughout the life cycle stages is integrated coherently. To illustrate the approach, an application to the life cycle reliability assessment of a newly developed Gantry Machining Center is shown.
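
A hedged sketch of the general idea, not the authors' exact formulation: a conjugate Beta prior on reliability is updated with stage test data, then carried to the next stage after applying assumed forms of the two factors named above.

```python
def update_stage(alpha, beta, successes, failures):
    """Standard Beta-Binomial conjugate update within one life cycle stage."""
    return alpha + successes, beta + failures

def transition(alpha, beta, fusion_factor=0.8, improvement_factor=1.1):
    """Carry information to the next stage: discount the evidence weight
    ("information fusion") and shift the odds toward success ("reliability
    improvement"). Both functional forms are assumptions for illustration."""
    return alpha * fusion_factor * improvement_factor, beta * fusion_factor

# design stage: 18 of 20 prototypes pass; move to the next stage
a, b = update_stage(1.0, 1.0, successes=18, failures=2)
a, b = transition(a, b)
posterior_mean = a / (a + b)   # reliability point estimate entering the next stage
```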

  14. An updated fracture-flow model for total-system performance assessment of Yucca Mountain

    Energy Technology Data Exchange (ETDEWEB)

    Gauthier, J.H. [Spectra Research Inst., Albuquerque, NM (United States)

    1994-07-01

    Improvements have been made to the fracture-flow model being used in the total-system performance assessment of a potential high-level radioactive waste repository at Yucca Mountain, Nevada. The "weeps model" now includes (1) weeps of varied sizes, (2) flow-pattern fluctuations caused by climate change, and (3) flow-pattern perturbations caused by repository heat generation. Comparison with the original weeps model indicates that allowing weeps of varied sizes substantially reduces the number of weeps and the number of containers contacted by weeps. However, flow-pattern perturbations caused by either climate change or repository heat generation greatly increase the number of containers contacted by weeps. In preliminary total-system calculations, using a phenomenological container-failure and radionuclide-release model, the weeps model predicts that radionuclide releases from a high-level radioactive waste repository at Yucca Mountain will be below the EPA standard specified in 40 CFR 191, but that the maximum radiation dose to an individual could be significant. Specific data from the site are required to determine the validity of the weep-flow mechanism and to better determine the parameters to which the dose calculation is sensitive.
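
The effect of weep-size variability on container contacts can be illustrated with a toy Monte Carlo sketch; all numbers and the contact geometry are hypothetical stand-ins, not TSPA parameters.

```python
import random

def containers_contacted(total_flow, mean_weep_flow, varied_sizes,
                         n_containers=10_000, contact_prob=0.3, seed=42):
    """Divide a fixed total flow among weeps, then let each weep wet one random
    container location with some probability (a crude stand-in geometry)."""
    rng = random.Random(seed)
    if varied_sizes:
        flows = []
        while sum(flows) < total_flow:      # a few large weeps carry most of the flow
            flows.append(rng.lognormvariate(0.0, 1.5) * mean_weep_flow)
        n_weeps = len(flows)
    else:
        n_weeps = round(total_flow / mean_weep_flow)
    hit = set()
    for _ in range(n_weeps):
        if rng.random() < contact_prob:
            hit.add(rng.randrange(n_containers))
    return n_weeps, len(hit)

uniform = containers_contacted(1000.0, 1.0, varied_sizes=False)
varied = containers_contacted(1000.0, 1.0, varied_sizes=True)
# varied sizes -> fewer, larger weeps -> fewer containers contacted
```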

  15. An updated fracture-flow model for total-system performance assessment of Yucca Mountain

    International Nuclear Information System (INIS)

    Gauthier, J.H.

    1994-01-01

    Improvements have been made to the fracture-flow model being used in the total-system performance assessment of a potential high-level radioactive waste repository at Yucca Mountain, Nevada. The "weeps model" now includes (1) weeps of varied sizes, (2) flow-pattern fluctuations caused by climate change, and (3) flow-pattern perturbations caused by repository heat generation. Comparison with the original weeps model indicates that allowing weeps of varied sizes substantially reduces the number of weeps and the number of containers contacted by weeps. However, flow-pattern perturbations caused by either climate change or repository heat generation greatly increase the number of containers contacted by weeps. In preliminary total-system calculations, using a phenomenological container-failure and radionuclide-release model, the weeps model predicts that radionuclide releases from a high-level radioactive waste repository at Yucca Mountain will be below the EPA standard specified in 40 CFR 191, but that the maximum radiation dose to an individual could be significant. Specific data from the site are required to determine the validity of the weep-flow mechanism and to better determine the parameters to which the dose calculation is sensitive.

  16. Declination Calculator

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...
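
Once a field model such as IGRF or WMM supplies the local horizontal field components, the declination itself is a one-line computation. A sketch, not the NOAA calculator:

```python
import math

def declination_deg(b_north_nt, b_east_nt):
    """Magnetic declination in degrees, positive east of true north."""
    return math.degrees(math.atan2(b_east_nt, b_north_nt))

# a field of 20000 nT north and 2000 nT east gives about 5.7 degrees east
d = declination_deg(20000.0, 2000.0)
```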

  17. Calculation of delayed-neutron energy spectra in a QRPA-Hauser-Feshbach model

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko [Los Alamos National Laboratory; Moller, Peter [Los Alamos National Laboratory; Wilson, William B [Los Alamos National Laboratory

    2008-01-01

    Theoretical β-delayed-neutron spectra are calculated based on the Quasiparticle Random-Phase Approximation (QRPA) and the Hauser-Feshbach statistical model. Neutron emission from an excited daughter nucleus after β decay to the granddaughter residual is calculated more accurately than in previous evaluations, including all the microscopic nuclear structure information, such as the Gamow-Teller strength distribution and discrete states in the granddaughter. The calculated delayed-neutron spectra agree reasonably well with the evaluations in the ENDF decay library, which are based on experimental data. The model was adopted to generate the delayed-neutron spectra for all 271 precursors.

  18. Shell model calculations for the mass 18 nuclei in the sd-shell

    International Nuclear Information System (INIS)

    Hamoudi, A.

    1997-01-01

    A simple effective nucleon-nucleon interaction for shell model calculations in the sd-shell is derived from the Reid soft-core potential folded with two-body correlation functions which take account of the strong short-range repulsion and the large tensor component in the Reid force. Calculations of binding energies and low-lying spectra are performed for the mass A=18, T=0 and 1 nuclei using this interaction. The results of these shell model calculations show reasonable agreement with experiment.

  19. Nuclear model calculations below 200 MeV and evaluation prospects

    International Nuclear Information System (INIS)

    Koning, A.J.; Bersillon, O.; Delaroche, J.P.

    1994-08-01

    A computational method is outlined for the quantum-mechanical prediction of the whole double-differential energy spectrum. Cross sections as calculated with the code system MINGUS are presented for (n,xn) and (p,xn) reactions on 208Pb and 209Bi. Our approach involves a dispersive optical model, comprehensive discrete state calculations, renormalized particle-hole state densities, a combined MSD/MSC model for pre-equilibrium reactions and compound nucleus calculations. The relation with the evaluation of nuclear data files is discussed. (orig.)

  20. Thermal-hydraulic feedback model to calculate the neutronic cross-section in PWR reactions

    International Nuclear Information System (INIS)

    Santiago, Daniela Maiolino Norberto

    2011-01-01

    In neutronic codes, it is important to have a thermal-hydraulic feedback module. This module calculates the thermal-hydraulic feedback of the fuel, which feeds the neutronic cross sections. In the neutronic code developed at PEN/COPPE/UFRJ, the fuel temperature is obtained through an empirical model. This work presents a physical model to calculate this temperature. We used the finite volume technique to discretize the equation for the temperature distribution, while the moderator heat transfer coefficient was calculated using the ASME steam tables, incorporating some of their routines into our program. The model allows one to calculate an average radial temperature per node, since the thermal-hydraulic feedback must follow the conditions imposed by the neutronic code. The results were compared to the empirical model. Our results show that for the fuel elements near the periphery, the empirical model overestimates the temperature in the fuel compared to our model, which may indicate that the physical model is more appropriate for calculating the thermal-hydraulic feedback temperatures. The proposed model was validated with the neutronic simulator developed at PEN/COPPE/UFRJ for the analysis of PWR reactors. (author)
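
A minimal sketch of the physical idea, not the PEN/COPPE/UFRJ code: a finite-volume solve of steady radial heat conduction in a cylindrical fuel pellet with uniform volumetric heat generation, returning the average radial temperature that would feed the cross-section update. All property values are illustrative.

```python
def fuel_avg_temperature(q_vol, k, radius, t_surface, n=50):
    """Finite-volume solve of (1/r) d/dr(r k dT/dr) + q = 0 on a fuel pellet,
    with zero flux at the centerline and a fixed surface temperature."""
    dr = radius / n
    centers = [(i + 0.5) * dr for i in range(n)]
    a = [0.0] * n; b = [0.0] * n; c = [0.0] * n; d = [0.0] * n
    for i in range(n):
        r_w, r_e = i * dr, (i + 1) * dr
        aw = k * r_w / dr                                # zero at the centerline face
        ae = k * r_e / dr if i < n - 1 else 2.0 * k * r_e / dr  # wall at half spacing
        a[i] = aw
        c[i] = ae if i < n - 1 else 0.0
        b[i] = -(aw + ae)
        d[i] = -q_vol * centers[i] * dr
        if i == n - 1:
            d[i] -= ae * t_surface
    # Thomas algorithm for the tridiagonal system
    for i in range(1, n):
        f = a[i] / b[i - 1]
        b[i] -= f * c[i - 1]
        d[i] -= f * d[i - 1]
    t = [0.0] * n
    t[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        t[i] = (d[i] - c[i] * t[i + 1]) / b[i]
    # area-weighted average over the pellet cross section
    return 2.0 * sum(ti * ri for ti, ri in zip(t, centers)) * dr / radius ** 2

# illustrative UO2-like numbers: q''' = 3e8 W/m^3, k = 3 W/m/K, R = 4.7 mm
t_avg = fuel_avg_temperature(q_vol=3e8, k=3.0, radius=0.0047, t_surface=673.0)
```

The result can be checked against the analytic average for a uniformly heated rod, T_avg = T_s + q'''R²/(8k).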

  1. Recent updates in the aerosol component of the C-IFS model run by ECMWF

    Science.gov (United States)

    Remy, Samuel; Boucher, Olivier; Hauglustaine, Didier; Kipling, Zak; Flemming, Johannes

    2017-04-01

    The Composition-Integrated Forecast System (C-IFS) is a global atmospheric composition forecasting tool, run by ECMWF within the framework of the Copernicus Atmosphere Monitoring Service (CAMS). The aerosol model of C-IFS is a simple bulk scheme that forecasts five species: dust, sea salt, black carbon, organic matter and sulfate. Dust and sea salt are each represented by three bins, for the super-coarse, coarse and fine modes of these species (Morcrette et al., 2009). This talk will present recent updates of the aerosol model and introduce forthcoming developments. It will also present the impact of these changes as measured by scores against AERONET Aerosol Optical Depth (AOD) and Airbase PM10 observations. The next cycle of C-IFS will include a mass fixer, because the semi-Lagrangian advection scheme used in C-IFS is not mass-conservative. C-IFS now offers the possibility to emit biomass-burning aerosols at an injection height provided by a new version of the Global Fire Assimilation System (GFAS). Secondary Organic Aerosol (SOA) production will be scaled on non-biomass-burning CO fluxes. This approach allows the anthropogenic contribution to SOA production to be represented; it brought a notable improvement in the skill of the model, especially over Europe. Lastly, the emissions of SO2 are now provided by the MACCity inventory instead of an older version of the EDGAR dataset. The seasonal and yearly variability of SO2 emissions is better captured by the MACCity dataset. Upcoming developments of the aerosol model of C-IFS consist mainly of the implementation of a nitrate and ammonium module, with two bins (fine and coarse) for nitrate. Nitrate and ammonium sulfate particle formation from gaseous precursors is represented following Hauglustaine et al. (2014); formation of coarse nitrate over pre-existing sea-salt or dust particles is also represented. This extension of the forward model improved scores over heavily populated areas such as Europe, China and Eastern

  2. Reducing the computational requirements for simulating tunnel fires by combining multiscale modelling and multiple processor calculation

    DEFF Research Database (Denmark)

    Vermesi, Izabella; Rein, Guillermo; Colella, Francesco

    2017-01-01

    in FDS version 6.0, a widely used fire-specific, open source CFD software. Furthermore, it compares the reduction in simulation time given by multiscale modelling with the one given by the use of multiple processor calculation. This was done using a 1200m long tunnel with a rectangular cross...... processor calculation (97% faster when using a single mesh and multiscale modelling; only 46% faster when using the full tunnel and multiple meshes). In summary, it was found that multiscale modelling with FDS v.6.0 is feasible, and the combination of multiple meshes and multiscale modelling was established...

  3. The Risoe model for calculating the consequences of the release of radioactive material to the atmosphere

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.

    1980-07-01

    A brief description is given of the model used at Risoe for calculating the consequences of releases of radioactive material to the atmosphere. The model is based on the Gaussian plume model, and it provides possibilities for calculation of: doses to individuals, collective doses, contamination of the ground, probability distribution of doses, and the consequences of doses for given dose-risk relationships. The model is implemented as a computer program PLUCON2, written in ALGOL for the Burroughs B6700 computer at Risoe. A short description of PLUCON2 is given. (author)
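
For a single receptor, the Gaussian plume model underlying PLUCON2 reduces to the textbook formula below. A sketch with ground reflection; the dispersion parameters are given directly rather than derived from stability-class correlations, and all values are illustrative.

```python
import math

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """Gaussian plume concentration (e.g. Bq/m^3) at crosswind offset y (m) and
    height z (m), for release rate q (Bq/s), wind speed u (m/s), effective
    release height h (m), and dispersion parameters sigma_y, sigma_z (m)."""
    lateral = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))  # ground reflection
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# ground-level concentration under the plume axis for a 50 m release height
c = plume_concentration(q=1e9, u=5.0, y=0.0, z=0.0, h=50.0, sigma_y=80.0, sigma_z=40.0)
```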

  4. An updated fracture-flow model for total-system performance assessment of Yucca Mountain

    International Nuclear Information System (INIS)

    Gauthier, J.H.

    1994-01-01

    Improvements have been made to the fracture-flow model being used in the total-system performance assessment of a potential high-level radioactive waste repository at Yucca Mountain, Nevada. The open-quotes weeps modelclose quotes now includes (1) weeps of varied sizes, (2) flow-pattern fluctuations caused by climate change, and (3) flow-pattern perturbations caused by repository heat generation. Comparison with the original weeps model indicates that allowing weeps of varied sizes substantially reduces the number of weeps and the number of containers contacted by weeps. However, flow-pattern perturbations caused by either climate change or repository heat generation greatly increases the number of containers contacted by weeps. In preliminary total-system calculations, using a phenomenological container-failure and radionuclide-release model, the weeps model predicts that radionuclide releases from a high-level radioactive waste repository at Yucca Mountain will be below the EPA standard specified in 40 CFR 191, but that the maximum radiation dose to an individual could be significant. Specific data from the site are required to determine the validity of the weep-flow mechanism and to better determine the parameters to which the dose calculation is sensitive

  5. Parabolic Trough Collector Cost Update for the System Advisor Model (SAM)

    Energy Technology Data Exchange (ETDEWEB)

    Kurup, Parthiv [National Renewable Energy Lab. (NREL), Golden, CO (United States); Turchi, Craig S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-11-01

    This report updates the baseline cost for parabolic trough solar fields in the United States within NREL's System Advisor Model (SAM). SAM, available at no cost at https://sam.nrel.gov/, is a performance and financial model designed to facilitate decision making for people involved in the renewable energy industry. SAM is the primary tool used by NREL and the U.S. Department of Energy (DOE) for estimating the performance and cost of concentrating solar power (CSP) technologies and projects. The study performed a bottom-up build and cost estimate for two state-of-the-art parabolic trough designs -- the SkyTrough and the Ultimate Trough. The SkyTrough analysis estimated the potential installed cost for a solar field of 1500 SCAs as $170/m2 +/- $6/m2. The investigation found that SkyTrough installed costs were sensitive to factors such as raw aluminum alloy cost and production volume. For example, in the case of the SkyTrough, the installed cost would rise to nearly $210/m2 if the aluminum alloy cost was $1.70/lb instead of $1.03/lb. Accordingly, one must be aware of fluctuations in the relevant commodities markets to track system cost over time. The estimated installed cost for the Ultimate Trough was only slightly higher at $178/m2, which includes an assembly facility of $11.6 million amortized over the required production volume. Considering the size and overall cost of a 700 SCA Ultimate Trough solar field, two parallel production lines in a fully covered assembly facility, each with the specific torque box, module and mirror jigs, would be justified for a full CSP plant.
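
The quoted aluminum-price sensitivity can be reproduced with a simple interpolation between the two reported points ($170/m² at $1.03/lb, about $210/m² at $1.70/lb); the linear form is an assumption for illustration.

```python
def skytrough_cost_per_m2(al_price_per_lb):
    """Linear interpolation between the two reported points:
    $170/m^2 at $1.03/lb and about $210/m^2 at $1.70/lb."""
    slope = (210.0 - 170.0) / (1.70 - 1.03)   # roughly $60 per m^2 per $/lb
    return 170.0 + slope * (al_price_per_lb - 1.03)

baseline = skytrough_cost_per_m2(1.03)   # baseline aluminum price
stressed = skytrough_cost_per_m2(1.70)   # stressed aluminum price
```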

  6. Bilinear slack span calculation model. Slack span calculations for high-temperature cables; Bilineares Berechnungsmodell fuer Durchhangberechnungen. Durchhangberechnungen bei Hochtemperaturleitern

    Energy Technology Data Exchange (ETDEWEB)

    Scheel, Joerg; Dib, Ramzi [Fachhochschule Giessen-Friedberg, Friedberg (Germany); Sassmannshausen, Achim [DB Energie GmbH, Frankfurt (Main) (Germany). Arbeitsgebiet Bahnstromleitungen Energieerzeugungs- und Uebertragungssysteme; Riedl, Markus [Eon Netz GmbH, Bayreuth (Germany). Systemtechnik Leitungen

    2010-12-13

    Increasingly, high-temperature cables are used in high-voltage grids. Beyond a given temperature level, their slack span cannot be calculated accurately by conventional simple linear methods. The contribution investigates the behaviour of composite cables at high operating temperatures and its influence on the slack span, and presents a more accurate, bilinear calculation method. (orig.)
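
For reference, the conventional single-line sag calculation that the article says breaks down for composite high-temperature conductors is the parabolic approximation (values illustrative):

```python
def sag_m(span_m, weight_n_per_m, horizontal_tension_n):
    """Parabolic approximation: sag = w * S^2 / (8 * H)."""
    return weight_n_per_m * span_m ** 2 / (8.0 * horizontal_tension_n)

# a 300 m span at 25 kN horizontal tension (illustrative values)
s = sag_m(span_m=300.0, weight_n_per_m=15.0, horizontal_tension_n=25_000.0)
```

Above the knee-point temperature the aluminium layers of a composite conductor go slack and the core alone carries the tension, so the sag-temperature relation changes slope; modelling the two regimes separately is what makes the method bilinear.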

  7. Combining Multi-Source Remotely Sensed Data and a Process-Based Model for Forest Aboveground Biomass Updating.

    Science.gov (United States)

    Lu, Xiaoman; Zheng, Guang; Miller, Colton; Alvarado, Ernesto

    2017-09-08

    Monitoring and understanding the spatio-temporal variations of forest aboveground biomass (AGB) is a key basis for quantitatively assessing the carbon sequestration capacity of a forest ecosystem. To map and update forest AGB in the Greater Khingan Mountains (GKM) of China, this work proposes a physically based approach. Based on the baseline forest AGB from Landsat Enhanced Thematic Mapper Plus (ETM+) images in 2008, we dynamically updated the annual forest AGB from 2009 to 2012 by adding the annual AGB increment (ABI) obtained from the simulated daily and annual net primary productivity (NPP) using the Boreal Ecosystem Productivity Simulator (BEPS) model. The 2012 result was validated by both field- and aerial laser scanning (ALS)-based AGBs. The predicted forest AGB for 2012 estimated from the process-based model can explain 31% (n = 35) of the variation in the validation data; the BEPS-based AGB tended to underestimate/overestimate the AGB for dense/sparse forests. Generally, our results showed that the remotely sensed forest AGB estimates could serve as the initial carbon pool to parameterize the process-based model for NPP simulation, and the combination of the baseline forest AGB and the BEPS model could effectively update the spatiotemporal distribution of forest AGB.
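
The updating scheme reduces to a per-pixel recurrence, AGB(t+1) = AGB(t) + ABI(t). A sketch, with a hypothetical NPP-to-ABI conversion factor (the paper derives the increment from simulated NPP in a more detailed way):

```python
def update_agb(agb_prev, annual_npp, npp_to_abi=0.45):
    """Roll the AGB map forward one year per pixel: AGB(t+1) = AGB(t) + ABI(t).
    The npp_to_abi conversion factor is a hypothetical placeholder."""
    return [agb + npp_to_abi * npp for agb, npp in zip(agb_prev, annual_npp)]

agb_2008 = [85.0, 120.0, 60.0]   # t/ha, three hypothetical pixels (baseline map)
npp_2009 = [6.0, 4.5, 7.2]       # t C/ha/yr from a BEPS-like simulation
agb_2009 = update_agb(agb_2008, npp_2009)
```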

  8. A heterogeneous model for burnup calculation in high temperature gas-cooled reactors

    International Nuclear Information System (INIS)

    Perfetti, C. M.; Angahie, S.; Baxter, A.; Ellis, C.

    2008-01-01

    A high-resolution MCNPX model is developed to simulate nuclear design characteristics and fuel cycle features of High Temperature Gas-Cooled Reactors. Contrary to the conventional approach in MCNPX modeling, fuel regions containing TRISO particles are not homogenized. A cube corner distribution approximation is used to directly model randomly dispersed TRISO fuel particles in a graphite matrix. The universe filling technique is used to cover the entire range of fuel particles in the core. The heterogeneous MCNPX model is applied to simulate and analyze the complete fuel cycle of the General Atomics Plutonium-Consumption Modular Helium Reactor (PC-MHR). The PC-MHR design is a variation of the General Atomics MHR design, intended for the consumption or burning of excess Russian weapons plutonium. The MCNPX burnup calculation of the PC-MHR includes the simulation of a 260 effective full-power day fuel cycle at 600 MWt. Results of the MCNPX calculations suggest that during the 260 effective full-power day cycle, a 40% reduction in the whole-core Pu-239 inventory could be achieved. Results of heterogeneous MCNPX burnup calculations for the PC-MHR are compared with deterministically calculated values obtained from the DIF3D code. For the 260 effective full-power day cycle, the difference in Pu-239 mass reduction between the heterogeneous MCNPX and homogeneous DIF3D models is 6%. The differences between MCNPX and DIF3D calculated results for higher actinides are mostly greater than 6%. (authors)

  9. Formation of decontamination cost calculation model for severe accident consequence assessment

    International Nuclear Information System (INIS)

    Silva, Kampanart; Promping, Jiraporn; Okamoto, Koji; Ishiwatari, Yuki

    2014-01-01

    In previous studies, the authors developed an index, “cost per severe accident”, to perform severe accident consequence assessments covering various kinds of accident consequences, namely health effects and economic, social and environmental impacts. Though decontamination cost was identified as a major component, it was taken into account using simple and conservative assumptions, which made further discussion difficult. The decontamination cost calculation model was therefore reconsidered. 99 parameters were selected to take into account all decontamination-related issues, and the decontamination cost calculation model was formed. The distributions of all parameters were determined. A sensitivity analysis using the Morris method was performed in order to identify important parameters that have a large influence on the cost per severe accident and a large extent of interaction with other parameters. We identified 25 important parameters, and fixed the negligible parameters to the medians of their distributions to form a simplified decontamination cost calculation model. Calculations of the cost per severe accident with the full model (all parameters distributed) and with the simplified model were performed and compared. The differences in the cost per severe accident and its components were not significant, which ensures the validity of the simplified model. The simplified model was used to perform a full-scope calculation of the cost per severe accident, which was compared with the previous study. The importance of the decontamination cost increased significantly. (author)
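
The Morris screening used here can be sketched compactly: perturb one parameter at a time along random trajectories and rank parameters by the mean magnitude of their elementary effects. The cost function below is a toy stand-in, not the 99-parameter decontamination model.

```python
import random

def morris_mu_star(model, n_params, n_traj=50, delta=0.1, seed=1):
    """Mean absolute elementary effect (mu*) per parameter over random trajectories."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(n_params)]
        for i in rng.sample(range(n_params), n_params):   # random perturbation order
            y0 = model(x)
            x[i] += delta
            effects[i].append((model(x) - y0) / delta)
    return [sum(abs(e) for e in es) / len(es) for es in effects]

# toy cost: strong in x0, weak in x1, interaction between x0 and x2
cost = lambda x: 10.0 * x[0] + 0.1 * x[1] + 5.0 * x[0] * x[2]
mu = morris_mu_star(cost, 3)   # mu[0] largest, mu[1] smallest
```

The full Morris method also uses the standard deviation of the elementary effects to flag interactions and nonlinearity; the sketch reports only mu*.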

  10. Updates on Modeling the Water Cycle with the NASA Ames Mars Global Climate Model

    Science.gov (United States)

    Kahre, M. A.; Haberle, R. M.; Hollingsworth, J. L.; Montmessin, F.; Brecht, A. S.; Urata, R.; Klassen, D. R.; Wolff, M. J.

    2017-01-01

    Global Circulation Models (GCMs) have made steady progress in simulating the current Mars water cycle. It is now widely recognized that clouds are a critical component that can significantly affect the nature of the simulated water cycle. Two processes in particular are key to implementing clouds in a GCM: the microphysical processes of formation and dissipation, and their radiative effects on heating/cooling rates. Together, these processes alter the thermal structure, change the dynamics, and regulate inter-hemispheric transport. We have made considerable progress representing these processes in the NASA Ames GCM, particularly in the presence of radiatively active water ice clouds. We present the current state of our group's water cycle modeling efforts, show results from selected simulations, highlight some of the issues, and discuss avenues for further investigation.

  11. Recent updates in the aerosol model of C-IFS and their impact on skill scores

    Science.gov (United States)

    Remy, Samuel; Boucher, Olivier; Hauglustaine, Didier

    2016-04-01

    The Composition-Integrated Forecast System (C-IFS) is a global atmospheric composition forecasting tool, run by ECMWF within the framework of the Copernicus Atmosphere Monitoring Service (CAMS). The aerosol model of C-IFS is a simple bulk scheme that forecasts five species: dust, sea salt, black carbon, organic matter and sulfate. Dust and sea salt are each represented by three bins, for the super-coarse, coarse and fine modes of these species (Morcrette et al., 2009). This talk will present recent updates of the aerosol model and introduce upcoming upgrades. It will also present evaluations of these changes against AERONET observations. The next cycle of C-IFS will include a mass fixer, because the semi-Lagrangian advection scheme used in C-IFS is not mass-conservative. This modification has a negligible impact for most species except black carbon and organic matter; it closes the budgets between sources and sinks in the diagnostics. Dust emissions have been tuned to favor the emission of large particles, which were under-represented. This brought an overall decrease in the burden of dust aerosol and improved scores, especially close to source regions. Biomass-burning aerosols are now emitted at an injection height provided by a new version of the Global Fire Assimilation System (GFAS). This brought a small increase in biomass-burning aerosols and a better representation of some large fire events. Lastly, SO2 emissions are now provided by the MACCity dataset instead of an older version of the EDGAR dataset. The seasonal and yearly variability of SO2 emissions is better captured by the MACCity dataset, the use of which brought significant improvements in the forecasts against observations. Upcoming upgrades of the aerosol model of C-IFS consist mainly of an overhaul of the representation of secondary aerosols. Secondary Organic Aerosol (SOA) production will be dynamically estimated by scaling it on CO fluxes. This approach has been

  12. Medical Updates Number 5 to the International Space Station Probability Risk Assessment (PRA) Model Using the Integrated Medical Model

    Science.gov (United States)

    Butler, Doug; Bauman, David; Johnson-Throop, Kathy

    2011-01-01

    The Integrated Medical Model (IMM) Project has been developing a probabilistic risk assessment tool, the IMM, to help evaluate in-flight crew health needs and impacts to the mission due to medical events. This package is a follow-up to a data package provided in June 2009. The IMM currently represents 83 medical conditions and associated ISS resources required to mitigate medical events. IMM end state forecasts relevant to the ISS PRA model include evacuation (EVAC) and loss of crew life (LOCL). The current version of the IMM provides the basis for the operational version of IMM expected in the January 2011 timeframe. The objectives of this data package are: 1. To provide a preliminary understanding of medical risk data used to update the ISS PRA Model. The IMM has had limited validation and an initial characterization of maturity has been completed using NASA STD 7009 Standard for Models and Simulation. The IMM has been internally validated by IMM personnel but has not been validated by an independent body external to the IMM Project. 2. To support a continued dialogue between the ISS PRA and IMM teams. To ensure accurate data interpretation, and that IMM output format and content meets the needs of the ISS Risk Management Office and ISS PRA Model, periodic discussions are anticipated between the risk teams. 3. To help assess the differences between the current ISS PRA and IMM medical risk forecasts of EVAC and LOCL. Follow-on activities are anticipated based on the differences between the current ISS PRA medical risk data and the latest medical risk data produced by IMM.

  13. Characterization of model errors in the calculation of tangent heights for atmospheric infrared limb measurements

    Directory of Open Access Journals (Sweden)

    M. Ridolfi

    2014-12-01

    Full Text Available We review the main factors driving the calculation of the tangent height of spaceborne limb measurements: the ray-tracing method, the refractive index model and the assumed atmosphere. We find that commonly used ray-tracing and refraction models are very accurate, at least in the mid-infrared. The factor with the largest effect on the tangent height calculation is the assumed atmosphere. Using a climatological model in place of the real atmosphere may cause tangent height errors of up to ± 200 m. Depending on the adopted retrieval scheme, these errors may have a significant impact on the derived profiles.
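
One way to see how the assumed atmosphere enters the tangent-height calculation is through Bouguer's invariant for a spherically symmetric atmosphere, n(h)(R + h) = const along the ray. A minimal sketch, with an illustrative exponential refractivity profile whose constants are not those of the paper:

```python
import math

R_EARTH = 6371.0e3   # mean Earth radius, m
N0 = 2.7e-4          # surface refractivity n - 1 (illustrative)
H = 8.0e3            # refractivity scale height, m (illustrative)

def n(h):
    """Exponential refractive-index profile of the assumed atmosphere."""
    return 1.0 + N0 * math.exp(-h / H)

def tangent_height(p):
    """Solve n(h) * (R + h) = p for h by bisection; the left side is
    monotonically increasing for this profile."""
    lo, hi = 0.0, 100.0e3
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if n(mid) * (R_EARTH + mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A ray whose vacuum (unrefracted) tangent height would be 20 km:
p = R_EARTH + 20.0e3
h = tangent_height(p)
print(round(20.0e3 - h))  # refractive lowering of the tangent point, metres
```

Changing the refractivity profile (i.e. the assumed atmosphere) shifts the recovered tangent height directly, which is the sensitivity the abstract quantifies.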

  14. Power Loss Calculation and Thermal Modelling for a Three Phase Inverter Drive System

    Directory of Open Access Journals (Sweden)

    Z. Zhou

    2005-12-01

    Full Text Available Power loss calculation and thermal modelling for a three-phase inverter power system are presented in this paper. Aiming at long real-time thermal simulation, an accurate average power loss calculation based on a PWM reconstruction technique is proposed. To carry out the thermal simulation, a compact thermal model of a three-phase inverter power module is built. The thermal interference of adjacent heat sources is analysed using 3D thermal simulation. The proposed model provides accurate power losses with a large simulation time step and is suitable for long real-time thermal simulation of a three-phase inverter drive system for hybrid vehicle applications.

  15. Sensitivity study of a method for updating a finite element model of a nuclear power station cooling tower; Recalage d'un modele de refrigerant atmospherique: etude de sensibilite

    Energy Technology Data Exchange (ETDEWEB)

    Billet, L.

    1994-12-31

    The Research and Development Division of Electricite de France is developing a surveillance method for cooling towers based on on-site measurements of the wind-induced response. The method is intended to detect structural damage in the tower. The damage is identified by tuning a finite element model of the tower on experimental mode shapes and eigenfrequencies. The sensitivity of the method was evaluated through numerical tests. First, the dynamic response of a damaged tower was simulated by varying the stiffness of some area of the model shell (from 1% to 24% of the total shell area). Second, the structural parameters of the undamaged cooling tower model were updated in order to make the output of the undamaged model as close as possible to the synthetic experimental data. The updating method, based on the minimization of the differences between experimental modal energies and modal energies calculated by the model, did not detect a stiffness change over less than 3% of the shell area. Such a sensitivity is thought to be insufficient to detect tower cracks, which behave like highly localized defects. (author). 8 refs., 9 figs., 6 tabs.

  16. 3D Printing of Molecular Models with Calculated Geometries and p Orbital Isosurfaces

    Science.gov (United States)

    Carroll, Felix A.; Blauch, David N.

    2017-01-01

    3D printing was used to prepare models of the calculated geometries of unsaturated organic structures. Incorporation of p orbital isosurfaces into the models enables students in introductory organic chemistry courses to have hands-on experience with the concept of orbital alignment in strained and unstrained π systems.

  17. A model for bootstrap current calculations with bounce averaged Fokker-Planck codes

    NARCIS (Netherlands)

    Westerhof, E.; Peeters, A.G.

    1996-01-01

    A model is presented that allows the calculation of the neoclassical bootstrap current originating from the radial electron density and pressure gradients in standard (2+1)D bounce averaged Fokker-Planck codes. The model leads to an electron momentum source located almost exclusively at the

  18. Development of a risk-based mine closure cost calculation model

    CSIR Research Space (South Africa)

    Du

    2006-06-01

    Full Text Available The study summarised in this paper focused on expanding existing South African mine closure cost calculation models to provide a new model that incorporates risks, which could have an effect on the closure costs during the life cycle of the mine...

  19. On the applicability of nearly free electron model for resistivity calculations in liquid metals

    International Nuclear Information System (INIS)

    Gorecki, J.; Popielawski, J.

    1982-09-01

    The calculations of resistivity based on the nearly free electron model are presented for many noble and transition liquid metals. The triple-ion correlation is included in the resistivity formula according to the SCQCA approximation. Two different methods for describing the conduction band are used. The problem of the applicability of the nearly free electron model to different metals is discussed. (author)

  20. Diameter structure modeling and the calculation of plantation volume of black poplar clones

    Directory of Open Access Journals (Sweden)

    Andrašev Siniša

    2004-01-01

    Full Text Available A method of diameter structure modeling was applied in the calculation of the plantation (stand) volume of two black poplar clones in the section Aigeiros (Duby): 618 (Lux) and S1-8. Diameter structure modeling by the Weibull function makes it possible to calculate the plantation volume by the volume line. Based on the comparison of the proposed method with the existing methods, the obtained error of plantation volume was less than 2%. Diameter structure modeling, and the calculation of plantation volume from the diameter structure model, exploiting the regularity of the diameter distribution, enable a better analysis of the production level and assortment structure, and can be used in the construction of yield and increment tables.
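
The approach can be sketched as follows: fit a two-parameter Weibull distribution to the diameters, then integrate a volume line over it. The Weibull parameters, tree count and volume line below are illustrative, not the clone data of the study:

```python
import math

def weibull_pdf(d, shape, scale):
    """Two-parameter Weibull diameter distribution (d >= 0, shape > 1)."""
    return (shape / scale) * (d / scale) ** (shape - 1.0) * math.exp(-(d / scale) ** shape)

def stand_volume(n_trees, shape, scale, volume_line, d_max=100.0, steps=2000):
    """Stand volume = N * integral of v(d) * f(d) dd (trapezoidal rule)."""
    h = d_max / steps
    total = 0.0
    for i in range(steps + 1):
        d = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * volume_line(d) * weibull_pdf(d, shape, scale)
    return n_trees * total * h

# Illustrative volume line v(d) in m^3 for breast-height diameter d in cm.
v = lambda d: 0.02 + 0.00045 * d * d
print(round(stand_volume(500, 3.2, 28.0, v), 1))  # total stand volume, m^3
```
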

  1. Calculation method of water injection forward modeling and inversion process in oilfield water injection network

    Science.gov (United States)

    Liu, Long; Liu, Wei

    2018-04-01

    A forward modeling and inversion algorithm is adopted to determine the water injection plan in an oilfield water injection network. The main idea of the algorithm is as follows: first, the oilfield water injection network is calculated inversely and the pumping station demand flow is obtained. Then, a forward modeling calculation is carried out to judge whether all water injection wells meet the injection allocation requirements. If they do, the calculation stops; otherwise, the demanded injection allocation flow of the wells that do not meet the requirements is reduced by a certain step size, and the next iteration is started. The algorithm does not need to be embedded in the overall water injection network system algorithm and can be realized easily. An iterative method is used, which is suitable for computer programming. Experimental results show that the algorithm is fast and accurate.
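
A minimal sketch of this kind of forward/inverse iteration, with the hydraulic forward calculation reduced to a toy pump-station capacity rule; the well names, flows, capacity and step size are hypothetical, not from the paper:

```python
# Forward/inverse iteration for injection allocation. The "forward"
# network calculation is replaced here by a simple capacity check.

def allocate_injection(demands, capacity, step=5.0):
    """Cut back infeasible wells by `step` until the forward check passes."""
    alloc = dict(demands)
    while True:
        total = sum(alloc.values())            # inverse step: station demand
        # forward step: what each well would actually receive at this load
        scale = min(1.0, capacity / total)
        bad = [w for w, q in alloc.items() if q - q * scale > 1e-9]
        if not bad:                            # every well meets its allocation
            return alloc
        for w in bad:                          # reduce demand and iterate
            alloc[w] = max(alloc[w] - step, 0.0)

demands = {"W1": 120.0, "W2": 80.0, "W3": 60.0}     # requested, m^3/d
print(allocate_injection(demands, capacity=220.0))  # -> {'W1': 105.0, 'W2': 65.0, 'W3': 45.0}
```
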

  2. Modeling of the water clarification process and calculation of the reactor-clarifier to improve energy efficiency

    Science.gov (United States)

    Skolubovich, Yuriy; Skolubovich, Aleksandr; Voitov, Evgeniy; Soppa, Mikhail; Chirkunov, Yuriy

    2017-10-01

    The article considers current questions of the technological modeling and calculation of a new facility for the purification of natural waters, the clarifier reactor, for its optimal operating mode; the facility was developed at Novosibirsk State University of Architecture and Civil Engineering (SibSTRIN). A calculation technique based on well-known hydraulic dependences is presented, and a calculation example of a structure based on experimental data is considered. The maximum possible velocity of the ascending flow of purified water was determined, based on a 24-hour clarification cycle. The fractional composition of the contact mass was determined with minimal expansion of the contact mass layer, which ensured the elimination of stagnant zones. The duration of the clarification cycle was refined from the parameters of the technological modeling by recalculating the maximum possible upward flow velocity of the clarified water, and the thickness of the contact mass layer was determined. Clarifier reactors can likewise be calculated for any other clarification conditions.

  3. Updating the U.S. Life Cycle GHG Petroleum Baseline to 2014 with Projections to 2040 Using Open-Source Engineering-Based Models.

    Science.gov (United States)

    Cooney, Gregory; Jamieson, Matthew; Marriott, Joe; Bergerson, Joule; Brandt, Adam; Skone, Timothy J

    2017-01-17

    The National Energy Technology Laboratory produced a well-to-wheels (WTW) life cycle greenhouse gas analysis of petroleum-based fuels consumed in the U.S. in 2005, known as the NETL 2005 Petroleum Baseline. This study uses a set of engineering-based, open-source models combined with publicly available data to calculate baseline results for 2014. The increase between the 2005 baseline and the 2014 results presented here (e.g., 92.4 vs 96.2 g CO2e/MJ gasoline, +4.1%) is due to changes both in the modeling platform and in the U.S. petroleum sector. An updated result for 2005 was calculated to minimize the effect of the change in modeling platform; on that basis, emissions for gasoline in 2014 were about 2% lower than in 2005 (98.1 vs 96.2 g CO2e/MJ gasoline). The same methods were utilized to forecast emissions from fuels out to 2040, indicating maximum changes from the 2014 gasoline result between +2.1% and -1.4%. The changing baseline values lead to potential compliance challenges with frameworks such as the Energy Independence and Security Act (EISA) Section 526, which states that Federal agencies should not purchase alternative fuels unless their life cycle GHG emissions are less than those of conventionally produced, petroleum-derived fuels.
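
The quoted percentage changes follow directly from the carbon intensities given in the abstract:

```python
# Reproducing the percentage changes quoted above from the WTW carbon
# intensities (all in g CO2e/MJ gasoline).
baseline_2005_original = 92.4   # NETL 2005 Petroleum Baseline
result_2014 = 96.2              # 2014 result on the new platform
baseline_2005_updated = 98.1    # 2005 recalculated on the new platform

change_vs_original = (result_2014 - baseline_2005_original) / baseline_2005_original
change_vs_updated = (result_2014 - baseline_2005_updated) / baseline_2005_updated

print(f"{change_vs_original:+.1%}")  # +4.1%
print(f"{change_vs_updated:+.1%}")   # -1.9%, i.e. about 2% lower
```
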

  4. Cassini Spacecraft Uncertainty Analysis Data and Methodology Review and Update/Volume 1: Updated Parameter Uncertainty Models for the Consequence Analysis

    Energy Technology Data Exchange (ETDEWEB)

    WHEELER, TIMOTHY A.; WYSS, GREGORY D.; HARPER, FREDERICK T.

    2000-11-01

    Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.

  5. Modelling precipitation extremes in the Czech Republic: update of intensity–duration–frequency curves

    Directory of Open Access Journals (Sweden)

    Michal Fusek

    2016-11-01

    Full Text Available Precipitation records from six stations of the Czech Hydrometeorological Institute were subjected to statistical analysis with the objectives of updating the intensity–duration–frequency (IDF) curves, by applying extreme value distributions, and of comparing the updated curves against those produced by an empirical procedure in 1958. Another objective was to investigate differences between the two sets of curves, which could be explained by such factors as different measuring instruments, measuring station altitudes and data analysis methods. It has been shown that the differences between the two sets of IDF curves are significantly influenced by the chosen method of data analysis.
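
An IDF update of this kind typically fits an extreme value distribution to annual maxima and reads intensities off the fitted return-level formula. A minimal sketch using a Gumbel (EV1) distribution and made-up data; the paper's stations, durations and chosen distributions may differ:

```python
import math
import statistics

# Made-up annual-maximum 30-min rainfall intensities (mm/h) for one station.
annual_maxima = [24.1, 31.5, 28.2, 45.0, 33.7, 26.9, 38.4, 29.8, 35.2, 41.0]

# Method-of-moments fit of the Gumbel (EV1) distribution.
beta = statistics.stdev(annual_maxima) * math.sqrt(6.0) / math.pi   # scale
mu = statistics.mean(annual_maxima) - 0.5772 * beta                 # location

def return_level(T):
    """Intensity exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

for T in (2, 10, 100):
    print(T, round(return_level(T), 1))  # one IDF point per return period
```
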

  6. Updated U.S. Geothermal Supply Characterization and Representation for Market Penetration Model Input

    Energy Technology Data Exchange (ETDEWEB)

    Augustine, C.

    2011-10-01

    The U.S. Department of Energy (DOE) Geothermal Technologies Program (GTP) tasked the National Renewable Energy Laboratory (NREL) with conducting the annual geothermal supply curve update. This report documents the approach taken to identify geothermal resources, determine the electrical producing potential of these resources, and estimate the levelized cost of electricity (LCOE), capital costs, and operating and maintenance costs from these geothermal resources at present and future timeframes under various GTP funding levels. Finally, this report discusses the resulting supply curve representation and how improvements can be made to future supply curve updates.

  7. Efficient matrix-vector products for large-scale nuclear Shell-Model calculations

    OpenAIRE

    Toivanen, J.

    2006-01-01

    A method to accelerate the matrix-vector products of j-scheme nuclear Shell-Model Configuration Interaction (SMCI) calculations is presented. The method takes advantage of the matrix product form of the j-scheme proton-neutron Hamiltonian matrix. It is shown that the method can speed up unrestricted large-scale pf-shell calculations by up to two orders of magnitude compared to previously existing related j-scheme method. The new method allows unrestricted SMCI calculations up to j-scheme dime...

  8. SITE-94. Adaptation of mechanistic sorption models for performance assessment calculations

    International Nuclear Information System (INIS)

    Arthur, R.C.

    1996-10-01

    Sorption is considered in most predictive models of radionuclide transport in geologic systems. Most models simulate the effects of sorption in terms of empirical parameters, which can however be criticized because the data are only strictly valid under the experimental conditions at which they were measured. An alternative is to adopt a more mechanistic modeling framework based on recent advances in understanding the electrical properties of oxide mineral-water interfaces. It has recently been proposed that these 'surface-complexation' models may be directly applicable to natural systems. A possible approach for adapting mechanistic sorption models for use in performance assessments, using this 'surface-film' concept, is described in this report. Surface-acidity parameters in the Generalized Two-Layer surface complexation model are combined with surface-complexation constants for Np(V) sorption on hydrous ferric oxide to derive an analytical model enabling direct calculation of the corresponding intrinsic distribution coefficients as a function of pH and of the Ca2+, Cl-, and HCO3- concentrations. The surface-film concept is then used to calculate whole-rock distribution coefficients for Np(V) sorption by altered granitic rocks coexisting with a hypothetical, oxidized Aespoe groundwater. The calculated results suggest that the distribution coefficients for Np adsorption on these rocks could range from 10 to 100 ml/g. Independent estimates of Kd for Np sorption in similar systems, based on an extensive review of experimental data, are consistent, though slightly conservative, with respect to the calculated values. 31 refs
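
The flavor of such a calculation can be sketched with a single-site mass-action law at trace loading, ignoring the electrostatic terms of a full two-layer model; the reaction, equilibrium constant and site density below are invented for illustration and are not the report's parameters:

```python
# Single-site surface complexation at trace loading:
#   >SOH + NpO2+  <->  >SONpO2 + H+     (intrinsic constant K_INT)
# which gives Kd = K_INT * [sites] / [H+]. Constants are assumed.

K_INT = 10.0 ** -3.5   # intrinsic equilibrium constant (assumed)
SITES = 1.0e-5         # sorption sites per gram of rock, mol/g (assumed)

def kd_ml_per_g(pH):
    """Trace-loading distribution coefficient in mL/g."""
    h = 10.0 ** (-pH)                  # [H+] in mol/L
    return K_INT * SITES / h * 1000.0  # L/g -> mL/g

# Under this mass-action law Kd rises tenfold per pH unit.
for pH in (7.0, 8.0):
    print(pH, round(kd_ml_per_g(pH), 1))
```
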

  9. Assessment model validity document. NAMMU: A program for calculating groundwater flow and transport through porous media

    International Nuclear Information System (INIS)

    Cliffe, K.A.; Morris, S.T.; Porter, J.D.

    1998-05-01

    NAMMU is a computer program for modelling groundwater flow and transport through porous media. This document provides an overview of the use of the program for geosphere modelling in performance assessment calculations and gives a detailed description of the program itself. The aim of the document is to give an indication of the grounds for having confidence in NAMMU as a performance assessment tool. In order to achieve this, the following topics are discussed. The basic premises of the assessment approach and the purpose and nature of the calculations that can be undertaken using NAMMU are outlined. The concepts of the validation of models and the considerations that can lead to increased confidence in models are described. The physical processes that can be modelled using NAMMU and the mathematical models and numerical techniques that are used to represent them are discussed in some detail. Finally, the grounds that would lead one to have confidence that NAMMU is fit for purpose are summarised.

  10. A revised oceanographic model to calculate the limiting capacity of the ocean to accept radioactive waste

    International Nuclear Information System (INIS)

    Webb, G.A.M.; Grimwood, P.D.

    1976-12-01

    This report describes an oceanographic model developed for use in calculating the capacity of the oceans to accept radioactive wastes. One component is a relatively short-term diffusion model based on that described in an earlier report (Webb et al., NRPB-R14 (1973)), but generalised to some extent. Another component is a compartment model used to calculate long-term widespread water concentrations; this addition overcomes some of the shortcomings of the earlier diffusion model. Incorporation of radioactivity into deep ocean sediments is included in this long-term model as a removal mechanism. The combined model provides a conservative (safe) estimate of the maximum concentrations of radioactivity in water as a function of time after the start of a continuous disposal operation. These results can then be used to assess the limiting capacity of an ocean to accept radioactive waste. (author)
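
The long-term compartment component can be illustrated with a one-box sketch: constant disposal into a well-mixed volume, with removal by radioactive decay and scavenging to sediments. All numbers are illustrative, not those of the NRPB model:

```python
import math

# One-box compartment model of long-term ocean concentrations.
Q = 1.0e12                  # disposal rate, Bq/yr (illustrative)
V = 1.0e17                  # compartment volume, m^3 (illustrative)
lam = math.log(2) / 30.0    # decay constant for a 30-yr half-life, 1/yr
k_sed = 0.005               # removal to deep-sea sediments, 1/yr
k = lam + k_sed             # total removal rate, 1/yr

c_limit = Q / (V * k)       # limiting (steady-state) concentration, Bq/m^3

def conc(t_years):
    """Concentration builds up as C(t) = C_limit * (1 - exp(-k t))."""
    return c_limit * (1.0 - math.exp(-k * t_years))

print(f"{c_limit:.3e}")                 # the limiting concentration
print(round(conc(100.0) / c_limit, 2))  # fraction of the limit after 100 yr
```

The limiting concentration, compared against a radiological criterion, is what bounds the acceptable disposal rate Q.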

  11. Comparison of Steady-State SVC Models in Load Flow Calculations

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Chen, Zhe; Bak-Jensen, Birgitte

    2008-01-01

    This paper compares in a load flow calculation three existing steady-state models of static var compensator (SVC), i.e. the generator-fixed susceptance model, the total susceptance model and the firing angle model. The comparison is made in terms of the voltage at the SVC regulated bus, equivalent...... SVC susceptance at the fundamental frequency and the load flow convergence rate both when SVC is operating within and on the limits. The latter two models give inaccurate results of the equivalent SVC susceptance as compared to the generator model due to the assumption of constant voltage when the SVC...... of the calculated SVC susceptance while retaining acceptable load flow convergence rate....

  12. Study on Finite Element Model Updating in Highway Bridge Static Loading Test Using Spatially-Distributed Optical Fiber Sensors.

    Science.gov (United States)

    Wu, Bitao; Lu, Huaxi; Chen, Bo; Gao, Zhicheng

    2017-07-19

    A finite element model updating method that combines dynamic and static long-gauge strain responses is proposed for highway bridge static loading tests. For this method, an objective function consisting of the static long-gauge strains and the first-order macro-strain modal parameter (frequency) is established, wherein the local bending stiffness, density and boundary conditions of the structure are selected as the design variables. The relationship between the macro-strain and the local element stiffness was studied first. It is revealed that the macro-strain is inversely proportional to the local stiffness covered by the long-gauge strain sensor. This corresponding relation is important for the modification of the local stiffness based on the macro-strain. The local and global parameters can be updated simultaneously. A series of numerical simulations and experiments was then conducted to verify the effectiveness of the proposed method. The results show that the static deformation, macro-strain and macro-strain modes can be predicted well using the proposed updated model.
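
The combined static/modal objective described above can be sketched as follows; the toy model, "measurements" and weights are hypothetical, and the real method updates several local and global variables rather than a single stiffness scaling:

```python
import math

# Objective: weighted squared relative residuals of long-gauge strains
# and of the first macro-strain modal frequency.

def objective(theta, model, strains_meas, freq_meas, w_freq=1.0):
    """model(theta) returns (predicted strains, predicted frequency)."""
    strains_pred, freq_pred = model(theta)
    r_static = sum(((sp - sm) / sm) ** 2
                   for sp, sm in zip(strains_pred, strains_meas))
    r_modal = ((freq_pred - freq_meas) / freq_meas) ** 2
    return r_static + w_freq * r_modal

# Toy structure: macro-strain inversely proportional to the local
# stiffness scaling k (as the abstract notes); frequency grows as sqrt(k).
def toy_model(theta):
    k = theta[0]
    return [100.0 / k, 80.0 / k], 2.5 * math.sqrt(k)

# "Measured" data consistent with a 20% local stiffness loss (k = 0.8).
strains_meas, freq_meas = [125.0, 100.0], 2.24

# Crude grid search over the stiffness scaling.
best = min((objective([k / 100.0], toy_model, strains_meas, freq_meas), k / 100.0)
           for k in range(50, 151))
print(best[1])  # recovered stiffness scaling -> 0.8
```
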

  13. Weather Correlations to Calculate Infiltration Rates for U. S. Commercial Building Energy Models.

    Science.gov (United States)

    Ng, Lisa C; Quiles, Nelson Ojeda; Dols, W Stuart; Emmerich, Steven J

    2018-01-01

    As building envelope performance improves, a greater percentage of building energy loss will occur through envelope leakage. Although the energy impacts of infiltration on building energy use can be significant, current energy simulation software has limited ability to accurately account for envelope infiltration and the impacts of improved airtightness. This paper extends previous work by the National Institute of Standards and Technology that developed a set of EnergyPlus inputs for modeling infiltration in several commercial reference buildings using Chicago weather. The current work includes cities in seven additional climate zones and uses the updated versions of the prototype commercial building types developed by the Pacific Northwest National Laboratory for the U.S. Department of Energy. Comparisons were made between the predicted infiltration rates using three representations of the commercial building types: PNNL EnergyPlus models, CONTAM models, and EnergyPlus models using the infiltration inputs developed in this paper. The newly developed infiltration inputs in EnergyPlus yielded average annual increases of 3% and 8% in the HVAC electrical and gas use, respectively, over the original infiltration inputs in the PNNL EnergyPlus models. When analyzing the benefits of building envelope airtightening, greater HVAC energy savings were predicted using the newly developed infiltration inputs in EnergyPlus compared with using the original infiltration inputs. These results indicate that the effects of infiltration on HVAC energy use can be significant and that infiltration can and should be better accounted for in whole-building energy models.
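
For context, EnergyPlus design-flow infiltration modulates a design flow with temperature difference and wind speed, Q = Q_design (A + B·|ΔT| + C·Ws + D·Ws²); developing better inputs amounts to choosing these coefficients carefully. The coefficients and operating point below are illustrative, not the values derived in the paper:

```python
# EnergyPlus-style design-flow infiltration (ZoneInfiltration:DesignFlowRate
# form). Coefficients A-D are illustrative placeholders.

def infiltration_m3s(q_design, dT, wind, A=0.0, B=0.03, C=0.12, D=0.0):
    """Infiltration air flow in m^3/s for a given indoor-outdoor
    temperature difference dT (K) and wind speed (m/s)."""
    return q_design * (A + B * abs(dT) + C * wind + D * wind * wind)

# 0.05 m^3/s design flow, 15 K indoor-outdoor difference, 4 m/s wind:
print(round(infiltration_m3s(0.05, 15.0, 4.0), 4))  # -> 0.0465
```
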

  14. Calculation of atmospheric neutrino flux using the interaction model calibrated with atmospheric muon data

    International Nuclear Information System (INIS)

    Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.; Sanuki, T.

    2007-01-01

    Using the 'modified DPMJET-III' model explained in the previous paper [T. Sanuki et al., preceding Article, Phys. Rev. D 75, 043005 (2007).], we calculate the atmospheric neutrino flux. The calculation scheme is almost the same as HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004).], but the usage of the 'virtual detector' is improved to reduce the error due to it. Then we study the uncertainty of the calculated atmospheric neutrino flux summarizing the uncertainties of individual components of the simulation. The uncertainty of K-production in the interaction model is estimated using other interaction models: FLUKA'97 and FRITIOF 7.02, and modifying them so that they also reproduce the atmospheric muon flux data correctly. The uncertainties of the flux ratio and zenith angle dependence of the atmospheric neutrino flux are also studied

  15. New model for mines and transportation tunnels external dose calculation using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Allam, Kh. A.

    2017-01-01

    In this work, a new methodology based on Monte Carlo simulation is developed for external dose calculation in tunnels and mines. The model evaluates the external dose in a tunnel of cylindrical shape and finite thickness, with an entrance and with or without an exit. A photon transport model was applied for the exposure dose calculations, and new software based on the Monte Carlo solution was designed and programmed in the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3-19.9%. The variation of the specific external dose rate with position in the tunnel, and with the density and composition of the building material, was studied. The new model offers greater flexibility for calculating the real external dose in any cylindrical tunnel structure. (authors)
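
The core of such a calculation can be sketched as a Monte Carlo estimate of the uncollided photon fluence at a point on the tunnel axis from wall-distributed activity. The geometry and source strength are illustrative, and scatter and attenuation, which a full transport model treats, are neglected here:

```python
import math
import random

# Monte Carlo estimate of the uncollided fluence rate at the centre of a
# cylindrical tunnel whose lateral wall emits uniformly.

def axis_fluence(radius, length, source_per_m2, n_samples=200_000, seed=1):
    """Fluence = S * area * mean(1 / (4 pi r^2)) over sampled wall points."""
    rng = random.Random(seed)
    area = 2.0 * math.pi * radius * length            # lateral wall area, m^2
    acc = 0.0
    for _ in range(n_samples):
        z = rng.uniform(-length / 2.0, length / 2.0)  # axial source position
        # by symmetry the angle around the axis does not matter for a
        # detector on the axis:
        r2 = radius * radius + z * z                  # squared source-detector distance
        acc += 1.0 / (4.0 * math.pi * r2)
    return source_per_m2 * area * acc / n_samples

# 2 m radius, 50 m long tunnel, 1e3 photons/(m^2 s) emitted from the wall.
print(f"{axis_fluence(2.0, 50.0, 1.0e3):.3e}")        # photons/(m^2 s)
```
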

  16. A model for calculating the quantum potential for time-varying multi-slit systems

    CERN Document Server

    Bracken, P

    2003-01-01

    A model is proposed and applied to the single- and double-slit experiments. The model is designed to take into account a change in the experimental setup. This includes opening and closing the slits in some way, or introducing some object which can be thought of as having a perturbing effect on the space-time background. The single and double slits could be closed simultaneously or one after the other, in such a way as to transform from one arrangement to the other. The model consists of using modified free-particle propagators in such a way that the integrals required for calculating the overall wave function can be evaluated. It is supposed that the constants in these modified propagators reflect the ambient structure as the experimental situation is modified, and might be calculable from a more fundamental theory.

  17. AMORPHOUS SILICON ELECTRONIC STRUCTURE MODELING AND BASIC ELECTRO-PHYSICAL PARAMETERS CALCULATION

    Directory of Open Access Journals (Sweden)

    B. A. Golodenko

    2014-01-01

    Full Text Available Summary. Amorphous semiconductors have unique processing characteristics and are promising materials for electronic engineering. However, reliable information about their atomic structure is lacking, and such information is essential for calculating their electronic states and electrophysical properties. The authors' method addresses this problem: it allows calculation of the Cartesian atomic coordinates of a model amorphous silicon cluster, determination of the spectrum and density of its electronic states, and calculation of the basic electrophysical properties of the model cluster. In particular, numerical values of the energy gap, the Fermi energy, and the electron concentrations in the valence and conduction bands were determined for the model cluster. The results provide a real possibility of purposefully controlling the type and charge-carrier concentration of amorphous semiconductors, and also relate the atomic structure to other physical properties of amorphous substances, for example heat capacity, magnetic susceptibility and other thermodynamic quantities.

  18. Development of a model for the primary system CAREM reactor's stationary thermohydraulic calculation

    International Nuclear Information System (INIS)

    Gaspar, C.; Abbate, P.

    1990-01-01

    The ESCAREM program, oriented to the stationary thermohydraulic calculation of CAREM reactors, is presented. As CAREM departs from the models for BWR (Boiling Water Reactors) and PWR (Pressurized Water Reactors), it was decided to develop a suitable model which allows calculating: a) whether the steam generator design is adequate to transfer the required power; b) the circulation flow that occurs in the primary system; c) the temperature at the entrance (cold branch); and d) the contribution of each component to the pressure drop in the circulation loop. Results were verified against manual calculations and alternative numerical models. An experimental validation at the Thermohydraulic Essays Laboratory is suggested. A series of parametric analyses of the CAREM 25 reactor is presented, showing operating conditions at different power levels as well as the influence of different design aspects. (Author) [es

  19. Tabulation of Mie scattering calculation results for microwave radiative transfer modeling

    Science.gov (United States)

    Yeh, Hwa-Young M.; Prasad, N.

    1988-01-01

    In microwave radiative transfer model simulations, the Mie calculations usually consume the majority of the computer time necessary for the calculations (70 to 86 percent for frequencies ranging from 6.6 to 183 GHz). For a large array of atmospheric profiles, the repeated calculations of the Mie codes make the radiative transfer computations not only expensive, but sometimes impossible. It is desirable, therefore, to develop a set of Mie tables to replace the Mie codes for the designated ranges of temperature and frequency in the microwave radiative transfer calculation. Results of using the Mie tables in the transfer calculations show that the total CPU time (IBM 3081) used for the modeling simulation is reduced by a factor of 7 to 16, depending on the frequency. The tables are tested by computing the upwelling radiance of 144 atmospheric profiles generated by a 3-D cloud model (Tao, 1986). Results are compared with those using Mie quantities computed from the Mie codes. The bias and root-mean-square deviation (RMSD) of the model results using the Mie tables, in general, are less than 1 K except for 37 and 90 GHz. Overall, neither the bias nor RMSD is worse than 1.7 K for any frequency and any viewing angle.
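
The table idea generalizes to any expensive, smooth function: precompute once on a grid, then interpolate at run time. A minimal sketch with a hypothetical stand-in for a Mie code; the real tables are indexed by both temperature and frequency, and the interpolation details may differ:

```python
from bisect import bisect_right

def mie_extinction(freq_ghz):
    """Hypothetical stand-in for an expensive Mie calculation."""
    return 0.1 * freq_ghz ** 1.8 / (1.0 + 0.01 * freq_ghz)

# Build the table once, at 1 GHz resolution over an illustrative range.
grid = [float(f) for f in range(1, 201)]
table = [mie_extinction(f) for f in grid]

def lookup(freq_ghz):
    """Replace the expensive call with linear interpolation in the table."""
    i = min(max(bisect_right(grid, freq_ghz) - 1, 0), len(grid) - 2)
    t = (freq_ghz - grid[i]) / (grid[i + 1] - grid[i])
    return table[i] * (1.0 - t) + table[i + 1] * t

# The interpolated value tracks the direct calculation closely, e.g. 37.3 GHz:
print(round(lookup(37.3), 2), round(mie_extinction(37.3), 2))
```

The speed-up reported in the abstract comes from replacing many such direct calls with table lookups inside the radiative transfer loop.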

  20. Model representations of kerogen structures: An insight from density functional theory calculations and spectroscopic measurements.

    Science.gov (United States)

    Weck, Philippe F; Kim, Eunja; Wang, Yifeng; Kruichak, Jessica N; Mills, Melissa M; Matteo, Edward N; Pellenq, Roland J-M

    2017-08-01

    Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures. These models have been widely used for the prediction of gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we here show that a large gap may still remain between the existing model representations and actual kerogen structures, therefore calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from Mancos, Woodford and Marcellus formations representing a wide range of kerogen origin and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features in structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR)-fingerprints for tracing kerogen evolution.

  1. Application of the mathematical modelling and human phantoms for calculation of the organ doses

    International Nuclear Information System (INIS)

    Kluson, J.; Cechak, T.

    2005-01-01

    Increasing power of the computers hardware and new versions of the software for the radiation transport simulation and modelling of the complex experimental setups and geometrical arrangement enable to dramatically improve calculation of organ or target volume doses ( dose distributions) in the wide field of medical physics and radiation protection applications. Increase of computers memory and new software features makes it possible to use not only analytical (mathematical) phantoms but also allow constructing the voxel models of human or phantoms with voxels fine enough (e.g. 1·1·1 mm) to represent all required details. CT data can be used for the description of such voxel model geometry .Advanced scoring methods are available in the new software versions. Contribution gives the overview of such new possibilities in the modelling and doses calculations, discusses the simulation/approximation of the dosimetric quantities ( especially dose ) and calculated data interpretation. Some examples of application and demonstrations will be shown, compared and discussed. Present computational tools enables to calculate organ or target volumes doses with new quality of large voxel models/phantoms (including CT based patient specific model ), approximating the human body with high precision. Due to these features has more and more importance and use in the fields of medical and radiological physics, radiation protection, etc. (authors)

  2. Calculation of the band structure of 2d conducting polymers using the network model

    International Nuclear Information System (INIS)

    Sabra, M. K.; Suman, H.

    2007-01-01

    The network model has been used to calculate the band structure, the gap energy, and the Fermi level of conducting polymers in two dimensions. For this purpose, a geometrical classification of the possible polymer chain configurations in two dimensions has been introduced, leading to a classification of the unit cells based on the number of bonds they contain. The model has been applied to graphite in 2D, represented by a three-bond unit cell, and, as a new case, to anti-parallel polyacetylene (PA) chains in two dimensions, represented by a four-bond unit cell. The results are in good agreement with first-principles calculations. (author)
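
    The three-bond unit cell for 2D graphite corresponds to the familiar nearest-neighbour tight-binding picture. As a rough orientation only (a textbook sketch, not the authors' network model), the dispersion E(k) = ±t|1 + exp(ik·a1) + exp(ik·a2)| can be evaluated directly; it closes the gap at the K point, consistent with 2D graphite being a zero-gap conductor. The hopping energy t and bond length a below are illustrative textbook values:

```python
import cmath
import math

def graphene_bands(kx, ky, t=2.7, a=1.42):
    """Nearest-neighbour tight-binding bands of 2D graphite (graphene).

    t : hopping energy in eV, a : C-C bond length in Angstrom
    (illustrative textbook values, not taken from the paper).
    """
    # primitive lattice vectors of the honeycomb lattice
    a1 = (1.5 * a, math.sqrt(3) / 2 * a)
    a2 = (1.5 * a, -math.sqrt(3) / 2 * a)
    f = 1 + cmath.exp(1j * (kx * a1[0] + ky * a1[1])) \
          + cmath.exp(1j * (kx * a2[0] + ky * a2[1]))
    return -t * abs(f), t * abs(f)  # valence and conduction bands

# the two bands touch at the K point of the Brillouin zone
a = 1.42
kK = (2 * math.pi / (3 * a), 2 * math.pi / (3 * math.sqrt(3) * a))
Ev, Ec = graphene_bands(*kK)  # both -> 0 (gapless)
```

At the Γ point the splitting is maximal (±3t), while at K the three phase factors sum to zero, reproducing the well-known gapless band structure of 2D graphite.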

  3. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    DEFF Research Database (Denmark)

    Stovgaard, Kasper; Andreetta, Christian; Ferkinghoff-Borg, Jesper

    2010-01-01

    scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program...... CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, Torus...
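
    The coarse-grained Debye calculation referred to above rests on the generic Debye formula I(q) = Σ_ij f_i f_j sin(q r_ij)/(q r_ij). A minimal sketch with constant dummy form factors and toy coordinates (the paper instead estimates effective form factors for two scattering bodies per amino acid):

```python
import math

def debye_intensity(q, points, f):
    """Debye formula: I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij).

    points : list of (x, y, z) scattering-body positions
    f      : list of form factors (constants here; the paper estimates
             effective form factors for the coarse-grained bodies)
    """
    I = 0.0
    for pi, fi in zip(points, f):
        for pj, fj in zip(points, f):
            qr = q * math.dist(pi, pj)
            # sin(qr)/qr -> 1 as qr -> 0 (self terms and q = 0)
            I += fi * fj * (1.0 if qr == 0.0 else math.sin(qr) / qr)
    return I

# toy example: three scattering bodies on a line, unit form factors
pts = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)]
ff = [1.0, 1.0, 1.0]
I0 = debye_intensity(0.0, pts, ff)  # forward scattering = (sum f_i)^2 = 9
```

The double sum makes the naive cost O(N²) per q value, which is exactly why coarse-graining to one or two bodies per residue pays off relative to full atomic detail.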

  4. A calculation of the ZH → γ H decay in the Littlest Higgs Model

    International Nuclear Information System (INIS)

    Aranda, J I; Ramirez-Zavaleta, F; Tututi, E S; Cortés-Maldonado, I

    2016-01-01

    New heavy neutral gauge bosons are predicted in many extensions of the Standard Model; these new bosons are associated with additional gauge symmetries. We present a preliminary calculation of the branching ratio for the decay of heavy neutral gauge bosons (Z_H) into γH in the most popular version of the Little Higgs models. The calculation involves the main contributions at the one-loop level, induced by fermions, scalars, and gauge bosons. Preliminary results show a very suppressed branching ratio, of the order of 10^-6. (paper)

  5. Recoilless fractions calculated with the nearest-neighbour interaction model by Kagan and Maslow

    Science.gov (United States)

    Kemerink, G. J.; Pleiter, F.

    1986-08-01

    The recoilless fraction is calculated for a number of Mössbauer atoms that are natural constituents of HfC, TaC, NdSb, FeO, NiO, EuO, EuS, EuSe, EuTe, SnTe, PbTe and CsF. The calculations are based on a model developed by Kagan and Maslow for binary compounds with rocksalt structure. With the exception of SnTe and, to a lesser extent, PbTe, the results are in reasonable agreement with the available experimental data and values derived from other models.
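
    For orientation, the recoilless (Lamb-Mössbauer) fraction is often first estimated in the simpler Debye approximation, f = exp{-(6E_R/k_Bθ_D)[1/4 + (T/θ_D)² ∫₀^{θ_D/T} x dx/(eˣ-1)]}, rather than with the Kagan-Maslow nearest-neighbour model used in the paper. A sketch with illustrative ⁵⁷Fe parameters (θ_D ≈ 470 K for α-iron is an assumed, not source, value):

```python
import math

def debye_integral(y, n=2000):
    """Trapezoidal estimate of  int_0^y x / (e^x - 1) dx;  integrand -> 1 as x -> 0."""
    def g(x):
        return 1.0 if x == 0.0 else x / math.expm1(x)
    h = y / n
    return h * (0.5 * (g(0.0) + g(y)) + sum(g(i * h) for i in range(1, n)))

def recoilless_fraction(E_gamma_keV, mass_u, theta_D, T):
    """Lamb-Mossbauer factor f(T) in the Debye approximation (not Kagan-Maslow)."""
    # free-atom recoil energy E_R = E_gamma^2 / (2 M c^2), in eV
    E_R = (E_gamma_keV * 1e3) ** 2 / (2 * mass_u * 931.494e6)
    kB = 8.617333e-5  # Boltzmann constant, eV/K
    bracket = 0.25 + (T / theta_D) ** 2 * debye_integral(theta_D / T)
    return math.exp(-6 * E_R / (kB * theta_D) * bracket)

# illustrative check: 14.41 keV line of 57Fe in alpha-iron (theta_D ~ 470 K assumed)
f = recoilless_fraction(14.41, 57.0, 470.0, 300.0)  # ~0.8 at room temperature
```

The Debye estimate reproduces the familiar ~0.8 room-temperature value for α-iron; nearest-neighbour models of the Kagan-Maslow type refine this by tying the lattice dynamics to the two constituents of the binary compound.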

  6. Numerical calculation of flashing from long pipes using a two-field model

    International Nuclear Information System (INIS)

    Rivard, W.C.; Torrey, M.D.

    1976-05-01

    A two-field model for two-phase flows, in which the vapor and liquid phases have different densities, velocities, and temperatures, has been used to calculate the flashing of water from long pipes. The IMF (Implicit Multifield) technique is used to numerically solve the transient equations that govern the dynamics of each phase. The flow physics is described with finite rate phase transitions, interfacial friction, heat transfer, pipe wall friction, and appropriate state equations. The results of the calculations are compared with measured histories of pressure, temperature, and void fraction. A parameter study indicates the relative sensitivity of the results to the various physical models that are used

  7. Dayside ionosphere of Titan: Impact on calculated plasma densities due to variations in the model parameters

    Science.gov (United States)

    Mukundan, Vrinda; Bhardwaj, Anil

    2018-01-01

    A one-dimensional photochemical model for the dayside ionosphere of Titan has been developed for calculating the density profiles of ions and electrons under steady-state photochemical equilibrium conditions. We concentrated on the T40 flyby of the Cassini orbiter and used in-situ measurements from instruments onboard Cassini as input to the model. An energy deposition model is employed for calculating the attenuated photon flux and photoelectron flux at different altitudes in Titan's ionosphere. We used the Analytical Yield Spectrum approach for calculating the photoelectron fluxes. Volume production rates of the major primary ions, such as N2+, N+, CH4+, CH3+, etc., due to photon and photoelectron impact are calculated and used as input to the model. The modeled profiles are compared with the Cassini Ion Neutral Mass Spectrometer (INMS) and Langmuir Probe (LP) measurements. The calculated electron density is higher than the observations by a factor of 2 to 3 around the peak. We studied the impact of different model parameters, viz. photoelectron flux, ion production rates, electron temperature, dissociative recombination rate coefficients, neutral densities of minor species, and solar flux, on the calculated electron density to understand the possible reasons for this discrepancy. Recent studies have shown that there is an overestimation in the modeled photoelectron flux and N2+ ion production rates, which may contribute to this disagreement. However, decreasing the photoelectron flux (by a factor of 3) and the N2+ ion production rate (by a factor of 2) decreases the electron density by only 10 to 20%. Reducing the measured electron temperature by a factor of 5 provides good agreement between the modeled and observed electron densities. Changes in the HCN and NH3 densities affect the calculated densities of the major ions (HCNH+, C2H5+, and CH5+); however, the overall impact on the electron density is not appreciable (< 20%). Even though increasing the dissociative

  8. Quantum-mechanical calculation of H on Ni(001) using a model potential based on first-principles calculations

    DEFF Research Database (Denmark)

    Mattsson, T.R.; Wahnström, G.; Bengtsson, L.

    1997-01-01

    First-principles density-functional calculations of hydrogen adsorption on the Ni (001) surface have been performed in order to get a better understanding of adsorption and diffusion of hydrogen on metal surfaces. We find good agreement with experiments for the adsorption energy, binding distance...

  9. A steady-state target calculation method based on "point" model for integrating processes.

    Science.gov (United States)

    Pang, Qiang; Zou, Tao; Zhang, Yanyan; Cong, Qiumei

    2015-05-01

    To eliminate the influence of model uncertainty on steady-state target calculation for integrating processes, this paper presents an optimization method based on a "point" model, together with a method for determining whether a feasible steady-state target exists. The optimization method solves the steady-state optimization problem of integrating processes within a two-stage structure: it builds a simple "point" model for steady-state prediction and compensates the error between the "point" model and the real process in each sampling interval. Simulation results illustrate that the outputs of the integrating variables can be kept within their constraints and that the errors between the actual outputs and the optimal set-points are small, indicating that the steady-state prediction model accurately predicts the future outputs of the integrating variables. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Planned updates and refinements to the Central Valley hydrologic model with an emphasis on improving the simulation of land subsidence in the San Joaquin Valley

    Science.gov (United States)

    Faunt, Claudia C.; Hanson, Randall T.; Martin, Peter; Schmid, Wolfgang

    2011-01-01

    California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence.

  11. Calculations of thermophysical properties of cubic carbides and nitrides using the Debye-Grueneisen model

    Energy Technology Data Exchange (ETDEWEB)

    Lu Xiaogang [Department of Materials Science and Engineering, Royal Institute of Technology, SE-100 44 Stockholm (Sweden)]. E-mail: xiaogang@thermocalc.se; Selleby, Malin [Department of Materials Science and Engineering, Royal Institute of Technology, SE-100 44 Stockholm (Sweden); Sundman, Bo [Department of Materials Science and Engineering, Royal Institute of Technology, SE-100 44 Stockholm (Sweden)

    2007-02-15

    The thermal expansivities and heat capacities of MX (M = Ti, Zr, Hf, V, Nb, Ta; X = C, N) carbides and nitrides with NaCl structure were calculated using the Debye-Grueneisen model combined with ab initio calculations. Two different approximations for the Grueneisen parameter γ were used in the Debye-Grueneisen model, i.e. the expressions proposed by Slater and by Dugdale and MacDonald. The thermal electronic contribution was evaluated from ab initio calculations of the electronic density of states. The calculated results were compared with CALPHAD assessments and experimental data. It was found that the calculations using the Dugdale-MacDonald γ can account for most of the experimental data. By fitting experimental heat capacity and thermal expansivity data below the Debye temperatures, an estimation of Poisson's ratio was obtained and Young's and shear moduli were evaluated. In order to reach a reasonable agreement with experimental data, it was necessary to use the logarithmic averaged mass of the constituent atoms. The agreements between the calculated and the experimental values for the bulk and Young's moduli are generally better than the agreement for shear modulus.
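
    The vibrational backbone of such calculations is the Debye-model heat capacity, C_V = 9R(T/θ_D)³ ∫₀^{θ_D/T} x⁴eˣ/(eˣ-1)² dx. A self-contained sketch using simple quadrature in molar units (the Grueneisen thermal-expansion and electronic terms of the paper are omitted):

```python
import math

def debye_cv(T, theta_D, n=2000):
    """Molar heat capacity C_V of the Debye model,
    C_V = 9 R (T/theta_D)^3 * int_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx,
    evaluated with the trapezoidal rule; result in J/(mol K).
    """
    R = 8.314462618
    y = theta_D / T
    def g(x):
        if x == 0.0:
            return 0.0  # integrand behaves like x^2 as x -> 0
        e = math.expm1(x)
        return x ** 4 * math.exp(x) / (e * e)
    h = y / n
    s = 0.5 * (g(0.0) + g(y)) + sum(g(i * h) for i in range(1, n))
    return 9.0 * R * (T / theta_D) ** 3 * s * h

# high-temperature (Dulong-Petit) limit: C_V -> 3R ~ 24.94 J/(mol K)
cv_hot = debye_cv(10_000.0, 500.0)
# low-temperature T^3 regime: C_V is tiny well below theta_D
cv_cold = debye_cv(10.0, 500.0)
```

Fitting measured C_V and thermal expansivity below θ_D, as the paper does, amounts to tuning θ_D (and hence Poisson's ratio via the elastic moduli) so that this curve matches experiment.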

  13. Random geometry model in criticality calculations of solutions containing Raschig rings

    International Nuclear Information System (INIS)

    Teng, S.P.; Lindstrom, D.G.

    1979-01-01

    The criticality constants of fissile solutions containing borated Raschig rings are evaluated using the Monte Carlo code KENO IV with various geometry models. In addition to those used by other investigators, a new geometry model, the random geometry model, is presented to simulate a system of randomly oriented Raschig rings in solution. A technique to obtain the material-thickness distribution functions of solution and rings for use in the random geometry model is also presented. Comparison between the experimental data and the results calculated with the Monte Carlo method using the various geometry models indicates that the random geometry model is a reasonable alternative to the models previously used to describe Raschig-ring-filled solutions. The random geometry model also solves the problem of describing an array of Raschig-ring-filled tanks, which is not tractable with techniques based on the other models.

  14. Propagation of Uncertainty in System Parameters of a LWR Model by Sampling MCNPX Calculations - Burnup Analysis

    Science.gov (United States)

    Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.

    2014-06-01

    For every physical component of a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best-estimate calculations, which have been replacing conservative model calculations as computational power increases. Propagating uncertainty through a Monte Carlo simulation by sampling the input parameters is a recent development because of the huge computational effort required. In this work, a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for the 95th percentile and a two-sided statistical tolerance interval of 95%. The input-parameter uncertainties considered for the reactor included geometry dimensions and densities. The results demonstrate the capability of the sampling-based method for burnup calculations when the sample size is optimized and many parameter uncertainties are investigated together in the same input.
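
    The Wilks sample-size determination can be sketched as follows: for a two-sided tolerance interval bounded by the extreme order statistics (min, max) of the sample, one seeks the smallest N with 1 - β^N - N(1-β)β^(N-1) ≥ γ (the first-order Wilks formula; the paper's exact implementation is not spelled out here):

```python
def wilks_two_sided(beta=0.95, gamma=0.95):
    """Smallest sample size N such that (min, max) of N independent runs
    form a two-sided tolerance interval covering at least a fraction beta
    of the output population with confidence gamma:
        1 - beta**N - N * (1 - beta) * beta**(N - 1) >= gamma
    """
    n = 2  # a two-sided interval needs at least two order statistics
    while 1.0 - beta ** n - n * (1.0 - beta) * beta ** (n - 1) < gamma:
        n += 1
    return n

n95 = wilks_two_sided()  # -> 93, the classic 95%/95% two-sided result
```

For β = γ = 0.95 this yields N = 93 code runs, independent of how many input parameters are perturbed simultaneously, which is what makes the sampling approach affordable for burnup calculations.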

  15. Hydroelastic model of PWR reactor internals SAFRAN 1 - Validation of a vibration calculation method

    International Nuclear Information System (INIS)

    Epstein, A.; Gibert, R.J.; Jeanpierre, F.; Livolant, M.

    1978-01-01

    The SAFRAN 1 test loop is a hydroelastic similitude, at 1/8 scale, of a 3-loop PWR. Vibrations of the main internals (thermal shield and core barrel), as well as pressure fluctuations in the thin water sections between the vessel and the internals and in the inlet and outlet pipes, have been measured. The calculation method consists of: an evaluation of the main vibration and acoustic sources due to the flow (unsteady jet impingement on the core barrel, turbulent flow in a thin water section); a calculation of the internals' modal parameters taking into account the inertial effects of the fluid (the computer codes AQUAMODE and TRISTANA were used); and a calculation of the acoustic response of the circuit (the computer code VIBRAPHONE was used). The good agreement between the calculations and the experimental results allows this method to be used with greater confidence for predicting the vibration levels of full-scale PWR internals.

  16. Unified description of pf-shell nuclei by the Monte Carlo shell model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio

    1998-03-01

    Attempts to solve the shell model by new methods are briefly reviewed. The shell-model calculation by quantum Monte Carlo diagonalization proposed by the authors is a more practical method, and it has been shown to solve the problem with sufficiently good accuracy. Regarding the treatment of angular momentum, the authors' method uses deformed Slater determinants as the basis, and a projection operator is applied to obtain angular-momentum eigenstates. The dynamically determined space is treated mainly stochastically, and the many-body energies of the resulting basis states are evaluated and the states selectively adopted. Symmetry is discussed, and a method was devised for decomposing the shell-model space into a dynamically determined space and the product of spin and isospin spaces. The calculation process is illustrated with the example of {sup 50}Mn nuclei. The level structure of {sup 48}Cr, for which the exact energies are known, can be calculated with an accuracy in absolute energy within 200 keV. {sup 56}Ni is the self-conjugate nucleus with Z=N=28. The results of shell-model calculations of the structure of {sup 56}Ni using nuclear-model interactions are reported. (K.I.)

  17. Preliminary integrated calculation of radionuclide cation and anion transport at Yucca Mountain using a geochemical model

    International Nuclear Information System (INIS)

    Birdsell, K.H.; Campbell, K.; Eggert, K.G.; Travis, B.J.

    1989-01-01

    This paper presents preliminary transport calculations for radionuclide movement at Yucca Mountain using preliminary data for mineral distributions, retardation parameter distributions, and hypothetical recharge scenarios. These calculations are not performance assessments but are used to study the effectiveness of the geochemical barriers at the site at a mechanistic level. The preliminary calculations presented have many shortcomings and should be viewed only as a demonstration of the modeling methodology. The simulations were run with TRACRN, a finite-difference porous flow and radionuclide transport code developed for the Yucca Mountain Project. Approximately 30,000 finite-difference nodes are used to represent the unsaturated and saturated zones underlying the repository in three dimensions. Sorption ratios for the radionuclides modeled are assumed to be functions of the mineralogic assemblages of the underlying rock. These transport calculations consider a representative radionuclide cation, 135 Cs, and anion, 99 Tc. The effects on transport of many of the processes thought to be active at Yucca Mountain may be examined using this approach. The model provides a method for examining the integration of flow scenarios, transport, and retardation processes as currently understood for the site. It will also form the basis for estimates of the sensitivity of transport calculations to retardation processes. 11 refs., 17 figs., 1 tab

  18. Program realization of mathematical model of kinetostatical calculation of flat lever mechanisms

    Directory of Open Access Journals (Sweden)

    M. A. Vasechkin

    2016-01-01

    Full Text Available Global computerization has made analytical methods dominant in the study of mechanisms. As a result, kinetostatic analysis of mechanisms using software packages is an important part of the scientific and practical work of engineers and designers, and a software implementation of mathematical models for the kinetostatic calculation of mechanisms is of practical interest. The mathematical model was obtained in [1]. A computer procedure was developed in Turbo Pascal that calculates the forces in the kinematic pairs of Assur groups (GA) and the balancing force at the primary link. Before using the computational procedures, all external forces and moments acting on the GA must be known, and the inertial forces and moments of inertia forces must be determined. The process of calculating and constructing the positions of the mechanism can be summarized as follows. A cycle is organized in which the position of the initial link of the mechanism is calculated. The positions of the remaining links are calculated by calling the relevant procedures of the module DIADA for the GA [2,3]. The positions of the mechanism are displayed using the computer's graphics mode. The inertial forces and moments of inertia forces are computed. By calling the corresponding procedures of the module, all forces in the kinematic pairs and the balancing force at the primary link are calculated. In each kinematic pair, the forces and their directions are drawn with the help of simple graphical procedures; their magnitudes and directions are displayed in a special text-mode window. This work contains listings of the test program MyTest, an example of using the computational capabilities of the developed module. As a check on the calculation procedures of the module, the program reproduces an example of calculating the balancing force according to Zhukovsky's method (the Zhukovsky lever).

  19. Microscopic calculation of level densities: the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Alhassid, Yoram

    2012-01-01

    The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in a recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension ≈ 10^29. We find good agreement with experimental results for both state densities and ⟨J²⟩ (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies). A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of 162 Dy and found it to agree well with experiments.

  20. Spin-splitting calculation for zincblende semiconductors using an atomic bond-orbital model.

    Science.gov (United States)

    Kao, Hsiu-Fen; Lo, Ikai; Chiang, Jih-Chen; Chen, Chun-Nan; Wang, Wan-Tsang; Hsu, Yu-Chi; Ren, Chung-Yuan; Lee, Meng-En; Wu, Chieh-Lung; Gau, Ming-Hong

    2012-10-17

    We develop a 16-band atomic bond-orbital model (16ABOM) to compute the spin splitting induced by bulk inversion asymmetry in zincblende materials. This model is derived from the linear combination of atomic orbitals (LCAO) scheme such that the characteristics of the real atomic orbitals can be preserved in calculating the spin splitting. The Hamiltonian of the 16ABOM is based on a similarity transformation performed on the nearest-neighbor LCAO Hamiltonian with a second-order Taylor expansion in k at the Γ point. The spin-splitting energies in the bulk zincblende semiconductors GaAs and InSb are calculated, and the results agree with the LCAO and first-principles calculations. However, we find that the spin-orbit coupling between bonding and antibonding p-like states, evaluated by the 16ABOM, dominates the spin splitting of the lowest conduction bands in the zincblende materials.

  1. Calculational model for condensation of water vapor during an underground nuclear detonation

    International Nuclear Information System (INIS)

    Knox, R.J.

    1975-01-01

    An empirically derived mathematical model was developed to calculate the pressure and temperature history during condensation of water vapor in an underground-nuclear-explosion cavity. The condensation process is non-isothermal. The Clapeyron-Clausius equation has been used as the basis for development of the model. Analytic fits to the vapor pressure and the latent heat of vaporization for saturated water vapor, together with an estimated value for the heat-transfer coefficient, have been used to describe the phenomena. The calculated pressure history during condensation has been determined to be exponential, with a time constant somewhat less than that observed during the cooling of the superheated steam from the explosion. The behavior of the calculated condensation pressure compares well with the observed pressure record (until just prior to cavity collapse) for a particular nuclear-detonation event for which data is available.
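
    The Clapeyron-Clausius equation underlying the model integrates to an exponential in 1/T when the latent heat is held constant. A sketch with textbook water properties (constant L near 100 °C; the report instead uses analytic fits for both the vapor pressure and the latent heat):

```python
import math

# Clapeyron-Clausius with constant latent heat:
#   dP/dT = L P / (R_v T^2)  =>  P(T) = P0 * exp(-(L/R_v) * (1/T - 1/T0))
L_VAP = 2.26e6  # latent heat of vaporization of water, J/kg (near 100 C)
R_V = 461.5     # specific gas constant of water vapor, J/(kg K)

def saturation_pressure(T, T0=373.15, P0=101325.0):
    """Saturation pressure of water vapor (Pa) from the integrated
    Clapeyron-Clausius relation in the constant-L approximation."""
    return P0 * math.exp(-(L_VAP / R_V) * (1.0 / T - 1.0 / T0))

p90 = saturation_pressure(363.15)  # ~7.0e4 Pa at 90 C
```

The constant-L result lands within roughly 1% of steam-table values near 100 °C; the report's analytic fits for P_sat(T) and L(T) remove this restriction over the wider cavity temperature range.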

  2. Land Boundary Conditions for the Goddard Earth Observing System Model Version 5 (GEOS-5) Climate Modeling System: Recent Updates and Data File Descriptions

    Science.gov (United States)

    Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.

    2015-01-01

    The Earth's land-surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean-lakes-ice mask; (ii) incorporation of a 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30-arcsec soil mineral and carbon data in conjunction with a highly refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec from the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) to mkCatchParam that allow it to produce tile-space parameters efficiently for high resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition and mean annual 2m air temperature to be used with the future Catchment CN model and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and data file format of each updated data set.

  3. Influence of delayed neutron parameter calculation accuracy on results of modeled WWER scram experiments

    International Nuclear Information System (INIS)

    Artemov, V.G.; Gusev, V.I.; Zinatullin, R.E.; Karpov, A.S.

    2007-01-01

    Using modeled WWER scram rod drop experiments performed at the Rostov NPP as an example, the influence of delayed neutron parameters on the modeling results was investigated. The delayed neutron parameter values were taken from both domestic and foreign nuclear databases. Numerical modeling was carried out on the basis of the SAPFIR_95&WWER program package. Parameters of delayed neutrons were acquired from the ENDF/B-VI and BNAB-78 validated data files. It was demonstrated that using delayed-neutron fraction data from different databases in reactivity meters led to significantly different reactivity results. Based on the results of the numerically modeled experiments, the delayed neutron parameters providing the best agreement between calculated and measured data were selected and recommended for use in reactor calculations (Authors)
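
    The sensitivity described above can be illustrated with the inhour equation, ρ = Λ/T + Σ_i β_i/(1 + λ_i T), which reactivity meters effectively invert: the same measured asymptotic period T yields different reactivities for different delayed-neutron datasets. The six-group constants below are the classic Keepin values for thermal fission of 235U, used purely for illustration (not the ENDF/B-VI or BNAB-78 files compared in the paper):

```python
# Classic Keepin six-group delayed-neutron data, thermal fission of 235U
# (illustrative only; the paper compares ENDF/B-VI and BNAB-78 data files)
BETA = [0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273]
LAM = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]  # decay constants, 1/s

def reactivity_from_period(T, beta=BETA, lam=LAM, Lambda=2.0e-5):
    """Inhour equation: absolute reactivity from the asymptotic period T (s).
    Lambda is the prompt-neutron generation time (illustrative value)."""
    return Lambda / T + sum(b / (1.0 + l * T) for b, l in zip(beta, lam))

rho = reactivity_from_period(100.0)       # positive period -> rho > 0
rho_neg = reactivity_from_period(-100.0)  # rod drop: negative period -> rho < 0
```

Because each term scales linearly with β_i, two datasets with different group fractions convert the same period (or measured flux transient) into visibly different reactivities, which is exactly the effect reported for the scram experiments.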

  4. Investigation of the influence of the open cell foam models geometry on hydrodynamic calculation

    Science.gov (United States)

    Soloveva, O. V.; Solovev, S. A.; Khusainov, R. R.; Popkova, O. S.; Panenko, D. O.

    2018-01-01

    A geometrical model of the open cell foam was created as an ordered set of intersecting spheres. The proposed model closely describes a real porous cellular structure. The hydrodynamics of the flow through this simple model were calculated in the ANSYS Fluent software package. The pressure drop was determined, and its value was compared with the experimental data of other authors. As a result of the studies, we found that a porous structure with smoothed faces provides the smallest pressure drop for the same packing porosity. Analysis of the calculated data demonstrated that approximating an elementary porous cell substantially distorts the flow field, which is undesirable in detailed modeling of open cell foam.

  5. Calculation model for 16N transit time in the secondary side of steam generators

    International Nuclear Information System (INIS)

    Liu Songyu; Xu Jijun; Xu Ming

    1998-01-01

    The 16 N transit time is essential for determining the leak rate of steam generator tube leaks with a 16 N monitoring system, which is a new technique. A model was developed for calculating the 16 N transit time in the secondary side of steam generators. According to the flow characteristics of the secondary-side fluid, the transit time is divided into four sectors, from the tube sheet to the sensor on the steam line. The model assumes that 16 N moves with the vapor phase in the secondary side, so a model for the vapor velocity distribution in the tube bundle is presented in detail. The 16 N transit times calculated with this model are compared with those of EDF for the steam generators of the Qinshan NPP.

  6. Calculations of Inflaton Decays and Reheating: with Applications to No-Scale Inflation Models

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V; Olive, Keith A

    2015-01-01

    We discuss inflaton decays and reheating in no-scale Starobinsky-like models of inflation, calculating the effective equation-of-state parameter, $w$, during the epoch of inflaton decay, the reheating temperature, $T_{\rm reh}$, and the number of inflationary e-folds, $N_*$, comparing analytical approximations with numerical calculations. We then illustrate these results with applications to models based on no-scale supergravity and motivated by generic string compactifications, including scenarios where the inflaton is identified as an untwisted-sector matter field with direct Yukawa couplings to MSSM fields, and where the inflaton decays via gravitational-strength interactions. Finally, we use our results to discuss the constraints on these models imposed by present measurements of the scalar spectral index $n_s$ and the tensor-to-scalar perturbation ratio $r$, converting them into constraints on $N_*$, the inflaton decay rate and other parameters of specific no-scale inflationary models.
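
    The reheating temperature enters through the standard estimate obtained by equating the inflaton decay rate Γ to the Hubble rate, T_reh ≈ (90/π²g_*)^{1/4}√(Γ M_P). A back-of-envelope sketch with assumed inputs (g_* = 100 as a round illustrative number, and Γ = m³/(192π M_P²) as a conventional gravitational-strength decay rate; the papers perform a full numerical treatment):

```python
import math

M_P = 2.435e18  # reduced Planck mass, GeV

def T_reh(Gamma, g_star=100.0):
    """Instantaneous-reheating estimate: set Gamma = H(T_reh) with
    H^2 = (pi^2 g_* / 90) T^4 / M_P^2, giving
    T_reh = (90 / (pi^2 g_*))**0.25 * sqrt(Gamma * M_P).
    g_* is the effective number of relativistic degrees of freedom
    (100 is an illustrative round number, not a model-specific value)."""
    return (90.0 / (math.pi ** 2 * g_star)) ** 0.25 * math.sqrt(Gamma * M_P)

# gravitational-strength decay of a ~3e13 GeV Starobinsky-like inflaton
m_inf = 3e13  # GeV, illustrative
Gamma_grav = m_inf ** 3 / (192.0 * math.pi * M_P ** 2)
T = T_reh(Gamma_grav)  # of order 1e9 GeV for these inputs
```

Since T_reh ∝ √Γ, a Yukawa-coupled untwisted-sector inflaton (larger Γ) reheats far hotter than one decaying only via gravitational-strength interactions, which is what drives the different N_* predictions of the two scenarios.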

  7. Utilization of ''RASO'' for a historical survey of Angra-1 power from 26.10.88 for updating the inventory calculation

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.

    1988-12-01

The ''RASO'', an activity and operational situation report for Angra 1, is transmitted daily by phone to CNEN by inspectors living at the Angra 1 site, and shows the parameters describing the operational status of the nuclear power plant. In the period from 26.10.88 to 04.12.88, a discretized power series was determined for larger time intervals than the original series, and new concentrations were then calculated using the ORIGEN2 computer code. (author)

  8. Comments on the Updated Tetrapartite Pallium Model in the Mouse and Chick, Featuring a Homologous Claustro-Insular Complex.

    Science.gov (United States)

    Puelles, Luis

    2017-01-01

    This essay reviews step by step the conceptual changes of the updated tetrapartite pallium model from its tripartite and early tetrapartite antecedents. The crucial observations in mouse material are explained first in the context of assumptions, tentative interpretations, and literature data. Errors and the solutions offered to resolve them are made explicit. Next, attention is centered on the lateral pallium sector of the updated model, whose definition is novel in incorporating a claustro-insular complex distinct from both olfactory centers (ventral pallium) and the isocortex (dorsal pallium). The general validity of the model is postulated at least for tetrapods. Genoarchitectonic studies performed to check the presence of a claustro-insular field homolog in the avian brain are reviewed next. These studies have indeed revealed the existence of such a complex in the avian mesopallium (though stratified outside-in rather than inside-out as in mammals), and there are indications that the same pattern may be found in reptiles as well. Peculiar pallio-pallial tangential migratory phenomena are apparently shared as well between mice and chicks. The issue of whether the avian mesopallium has connections that are similar to the known connections of the mammalian claustro-insular complex is considered next. Accrued data are consistent with similar connections for the avian insula homolog, but they are judged to be insufficient to reach definitive conclusions about the avian claustrum. An aside discusses that conserved connections are not a necessary feature of field-homologous neural centers. Finally, the present scenario on the evolution of the pallium of sauropsids and mammals is briefly visited, as highlighted by the updated tetrapartite model and present results. © 2017 S. Karger AG, Basel.

  9. Updating the Cornell Net Carbohydrate and Protein System feed library and analyzing model sensitivity to feed inputs.

    Science.gov (United States)

    Higgs, R J; Chase, L E; Ross, D A; Van Amburgh, M E

    2015-09-01

The Cornell Net Carbohydrate and Protein System (CNCPS) is a nutritional model that evaluates the environmental and nutritional resources available in an animal production system and enables the formulation of diets that closely match the predicted animal requirements. The model includes a library of approximately 800 different ingredients that provide the platform for describing the chemical composition of the diet to be formulated. Each feed in the feed library was evaluated against data from 2 commercial laboratories and updated when required to enable more precise predictions of dietary energy and protein supply. A multistep approach was developed to predict uncertain values using linear regression, matrix regression, and optimization. The approach provided an efficient and repeatable way of evaluating and refining the composition of a large number of different feeds against commercially generated data similar to that used by CNCPS users on a daily basis. The protein A fraction in the CNCPS, formerly classified as nonprotein nitrogen, was reclassified to ammonia for ease and availability of analysis and to provide a better prediction of the contribution of metabolizable protein from free AA and small peptides. Amino acid profiles were updated using contemporary data sets and now represent the profile of AA in the whole feed rather than the insoluble residue. Model sensitivity to variation in feed library inputs was investigated using Monte Carlo simulation. Results showed the prediction of metabolizable energy was most sensitive to variation in feed chemistry and fractionation, whereas predictions of metabolizable protein were most sensitive to variation in digestion rates. Regular laboratory analysis of samples taken on-farm remains the recommended approach to characterizing the chemical components of feeds in a ration. However, updates to the CNCPS feed library provide a database of ingredients that are consistent with current feed chemistry information and
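The Monte Carlo sensitivity analysis can be illustrated on a toy scale. The sketch below is not the CNCPS equations: it propagates assumed variation in a digestion rate (kd) and a passage rate (kp) through the standard first-order ruminal escape model, digestibility = kd/(kd + kp), and summarizes the spread of the prediction:

```python
import random

# Illustrative Monte Carlo sensitivity sketch (not the CNCPS equations):
# vary a feed fraction's digestion rate (kd) and passage rate (kp), push
# them through the first-order escape model kd / (kd + kp), and summarize
# the spread of the predicted ruminal digestibility.

def ruminal_digestibility(kd, kp):
    return kd / (kd + kp)

random.seed(1)
samples = []
for _ in range(10000):
    kd = random.gauss(0.06, 0.01)   # /h, assumed mean and SD
    kp = random.gauss(0.05, 0.005)  # /h, assumed mean and SD
    if kd > 0 and kp > 0:
        samples.append(ruminal_digestibility(kd, kp))

mean = sum(samples) / len(samples)
sd = (sum((x - mean) ** 2 for x in samples) / (len(samples) - 1)) ** 0.5
print(f"digestibility: mean = {mean:.3f}, sd = {sd:.3f}")
```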

  10. A new simulation model for calculating the internal exposure of some radionuclides

    Directory of Open Access Journals (Sweden)

    Mahrous Ayman

    2009-01-01

A new model based on a series of mathematical functions for estimating excretion rates following the intake of nine different radionuclides is presented in this work. The radionuclides under investigation are cobalt, iodine, cesium, strontium, ruthenium, radium, thorium, plutonium, and uranium. The committed effective dose has been calculated with our model so as to obtain the urinary and faecal excretion rates for each radionuclide. The model is further validated by comparison with the widely used Mondal software and a simulation program. The results show good agreement between the Mondal package and the model we have constructed.
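The generic functional form of such a model, an excretion rate built from a series of exponential terms, can be sketched as follows; the amplitudes and decay constants here are invented for illustration, not the paper's fitted values for any radionuclide:

```python
import math

# Generic sketch of the functional form used by biokinetic excretion
# models: r(t) = sum_i a_i * exp(-lambda_i * t). The coefficients below
# are invented (a fast and a slow compartment), not fitted values.

def excretion_rate(t_days, terms):
    """terms: list of (amplitude, decay_const_per_day) pairs."""
    return sum(a * math.exp(-lam * t_days) for a, lam in terms)

terms = [(0.30, 0.80), (0.05, 0.05)]  # fast + slow compartments (assumed)
for t in (1, 7, 30):
    print(f"day {t:2d}: excreted fraction per day = {excretion_rate(t, terms):.4f}")
```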

  11. A computer code for calculations in the algebraic collective model of the atomic nucleus

    OpenAIRE

    Welsh, T. A.; Rowe, D. J.

    2014-01-01

    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functi...

  12. Calculation of spherical models of lead with a source of 14 MeV-neutrons

    International Nuclear Information System (INIS)

    Markovskij, D.V.; Borisov, A.A.

    1989-01-01

Neutron transport calculations for spherical models of lead have been performed with the one-dimensional code BLANK, which implements the direct Monte Carlo method over the whole range of neutron energies; the results are compared with experimental data. 6 refs, 10 figs, 3 tabs
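In the same spirit as the direct simulation described above, a toy one-speed Monte Carlo estimate of the neutron leakage fraction from a homogeneous sphere with a central source can be sketched as follows (the cross sections are invented; BLANK itself follows the full energy dependence):

```python
import math
import random

# Toy one-speed Monte Carlo transport in a homogeneous sphere with a
# central point source: exponential free flights, isotropic scattering,
# survival test at each collision. Cross sections are invented.

def leakage_fraction(radius, sigma_t, scatter_prob, n=20000, seed=2):
    rng = random.Random(seed)
    leaked = 0
    for _ in range(n):
        pos = [0.0, 0.0, 0.0]
        while True:
            mu = 2.0 * rng.random() - 1.0             # isotropic direction
            phi = 2.0 * math.pi * rng.random()
            s = math.sqrt(1.0 - mu * mu)
            direction = [s * math.cos(phi), s * math.sin(phi), mu]
            step = -math.log(rng.random()) / sigma_t  # free-flight length
            pos = [p + step * d for p, d in zip(pos, direction)]
            if math.dist(pos, (0.0, 0.0, 0.0)) > radius:
                leaked += 1
                break
            if rng.random() > scatter_prob:           # absorbed
                break
    return leaked / n

frac = leakage_fraction(radius=5.0, sigma_t=0.3, scatter_prob=0.9)
print(f"leakage fraction: {frac:.3f}")
```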

  13. Fast and accurate calculations for cumulative first-passage time distributions in Wiener diffusion models

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Kesselmeier, M.; Gondan, Matthias

    2012-01-01

    We propose an improved method for calculating the cumulative first-passage time distribution in Wiener diffusion models with two absorbing barriers. This distribution function is frequently used to describe responses and error probabilities in choice reaction time tasks. The present work extends ...
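A baseline against which such improved methods are compared is the standard large-time series for the first-passage density at the lower of the two barriers (drift v, barrier separation a, relative starting point w), integrated numerically to obtain the cumulative distribution. The sketch below implements that textbook series, not the paper's improved method:

```python
import math

# Standard large-time series for the Wiener two-barrier first-passage
# density at the lower boundary, integrated by the trapezoidal rule to
# give the cumulative distribution. This is the textbook baseline.

def fpt_density_lower(t, v, a, w, k_max=200):
    if t <= 0.0:
        return 0.0
    series = sum(k * math.sin(k * math.pi * w)
                 * math.exp(-(k * k * math.pi ** 2 * t) / (2.0 * a * a))
                 for k in range(1, k_max + 1))
    return (math.pi / a ** 2) * math.exp(-v * a * w - v * v * t / 2.0) * series

def fpt_cdf_lower(t, v, a, w, steps=4000):
    h = t / steps                        # trapezoidal rule on [0, t]
    vals = [fpt_density_lower(i * h, v, a, w) for i in range(steps + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Driftless case, symmetric start: each barrier eventually absorbs 1/2.
p = fpt_cdf_lower(5.0, v=0.0, a=2.0, w=0.5)
print(f"P(absorbed at lower barrier by t = 5) ≈ {p:.3f}")
```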

  14. Black Hole Entropy Calculation in a Modified Thin Film Model

    Indian Academy of Sciences (India)

    Abstract. The thin film model is modified to calculate the black hole entropy. The difference from the original method is that the Parikh–Wilczek tunnelling framework is introduced and the self-gravitation of the emission particles is taken into account. In terms of our improvement, if the entropy is still proportional to the area, ...

  15. Model-Independent Calculation of Radiative Neutron Capture on Lithium-7

    NARCIS (Netherlands)

    Rupak, Gautam; Higa, Renato

    2011-01-01

    The radiative neutron capture on lithium-7 is calculated model independently using a low-energy halo effective field theory. The cross section is expressed in terms of scattering parameters directly related to the S-matrix elements. It depends on the poorly known p-wave effective range parameter

  16. Scheme for calculation of multi-layer cloudiness and precipitation for climate models of intermediate complexity

    NARCIS (Netherlands)

    Eliseev, A. V.; Coumou, D.; Chernokulsky, A. V.; Petoukhov, V.; Petri, S.

    2013-01-01

    In this study we present a scheme for calculating the characteristics of multi-layer cloudiness and precipitation for Earth system models of intermediate complexity (EMICs). This scheme considers three-layer stratiform cloudiness and single-column convective clouds. It distinguishes between ice and

  17. SHARC, a model for calculating atmospheric infrared radiation under non-equilibrium conditions

    Science.gov (United States)

    Sundberg, R. L.; Duff, J. W.; Gruninger, J. H.; Bernstein, L. S.; Matthew, M. W.; Adler-Golden, S. M.; Robertson, D. C.; Sharma, R. D.; Brown, J. H.; Healey, R. J.

    A new computer model, SHARC, has been developed by the U.S. Air Force for calculating high-altitude atmospheric IR radiance and transmittance spectra with a resolution of better than 1 cm-1. Comprehensive coverage of the 2 to 40 μm (250 to 5,000 cm-1) wavelength region is provided for arbitrary lines of sight in the 50-300 km altitude regime. SHARC accounts for the deviation from local thermodynamic equilibrium (LTE) in state populations by explicitly modeling the detailed production, loss, and energy transfer processes among the contributing molecular vibrational states. The calculated vibrational populations are found to be similar to those obtained from other non-LTE codes. The radiation transport algorithm is based on a single-line equivalent width approximation along with a statistical correction for line overlap. This approach calculates line-of-sight (LOS) radiance values which are accurate to ±10% and is roughly two orders of magnitude faster than traditional line-by-line (LBL) methods which explicitly integrate over individual line shapes. In addition to quiescent atmospheric processes, this model calculates the auroral production and excitation of CO2, NO, and NO+ in localized regions of the atmosphere. Illustrative comparisons of SHARC predictions to other models and to data from the CIRRIS, SPIRE and FWI field experiments are presented.
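The single-line equivalent width at the heart of such band models can be illustrated generically as W = ∫(1 − exp(−S·u·φ(ν)))dν for line strength S, absorber amount u, and line shape φ. The sketch below uses a Lorentz profile and is generic; SHARC additionally applies its statistical line-overlap correction on top of this building block:

```python
import math

# Generic single-line equivalent width, W = ∫ (1 - exp(-S*u*φ(ν))) dν,
# evaluated by trapezoidal quadrature for a Lorentz line shape. This is
# the building block of equivalent-width band models, not SHARC's full
# algorithm (which adds a statistical line-overlap correction).

def lorentz(nu, nu0, gamma):
    return (gamma / math.pi) / ((nu - nu0) ** 2 + gamma ** 2)

def equivalent_width(strength, amount, nu0, gamma, half_range=50.0, steps=20000):
    h = 2.0 * half_range / steps
    total = 0.0
    for i in range(steps + 1):
        nu = nu0 - half_range + i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * (1.0 - math.exp(-strength * amount * lorentz(nu, nu0, gamma)))
    return total * h

# Weak-line limit: W ≈ S·u, a quick sanity check of the quadrature.
w_weak = equivalent_width(strength=0.01, amount=1.0, nu0=1000.0, gamma=0.1)
print(f"weak-line W = {w_weak:.5f} (S*u = 0.01)")
```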

  18. Recursive calculation of matrix elements for the generalized seniority shell model

    International Nuclear Information System (INIS)

    Luo, F.Q.; Caprio, M.A.

    2011-01-01

    A recursive calculational scheme is developed for matrix elements in the generalized seniority scheme for the nuclear shell model. Recurrence relations are derived which permit straightforward and efficient computation of matrix elements of one-body and two-body operators and basis state overlaps.

  19. Ab initio calculation of the sound velocity of dense hydrogen: implications for models of Jupiter

    NARCIS (Netherlands)

    Alavi, A.; Parrinello, M.; Frenkel, D.

    1995-01-01

    First-principles molecular dynamics simulations were used to calculate the sound velocity of dense hydrogen, and the results were compared with extrapolations of experimental data that currently conflict with either astrophysical models or data obtained from recent global oscillation measurements of

  20. Improved method for the cutting coefficients calculation in micromilling force modeling

    NARCIS (Netherlands)

    Li, P.; Oosterling, J.A.J.; Hoogstrate, A.M.; Langen, H.H.

    2008-01-01

    This paper discusses the influence of runout on the calculation of the coefficients of mechanistic force models in micromilling. A runout model is used to study the change of chip thickness, tool angles, and immersion period of the two cutting edges of micro endmills due to runout. A new method to find

  1. A new timing model for calculating the intrinsic timing resolution of a scintillator detector

    International Nuclear Information System (INIS)

    Shao Yiping

    2007-01-01

    The coincidence timing resolution is a critical parameter which to a large extent determines the system performance of positron emission tomography (PET). This is particularly true for time-of-flight (TOF) PET that requires an excellent coincidence timing resolution (<<1 ns) in order to significantly improve the image quality. The intrinsic timing resolution is conventionally calculated with a single-exponential timing model that includes two parameters of a scintillator detector: scintillation decay time and total photoelectron yield from the photon-electron conversion. However, this calculation has led to significant errors when the coincidence timing resolution reaches 1 ns or less. In this paper, a bi-exponential timing model is derived and evaluated. The new timing model includes an additional parameter of a scintillator detector: scintillation rise time. The effect of rise time on the timing resolution has been investigated analytically, and the results reveal that the rise time can significantly change the timing resolution of fast scintillators that have short decay time constants. Compared with measured data, the calculations have shown that the new timing model significantly improves the accuracy in the calculation of timing resolutions
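The bi-exponential pulse shape is the density of the sum of two exponential delays (rise time τr and decay time τd), which makes a Monte Carlo sketch of the model straightforward: sample photoelectron arrival times and timestamp each event with the earliest one. The parameter values below are illustrative (roughly LSO-like), not the paper's:

```python
import random

# Monte Carlo sketch of the bi-exponential timing model: each
# photoelectron time is Exp(tau_r) + Exp(tau_d), whose density is the
# bi-exponential pulse shape, and the detector timestamp is taken as the
# earliest photoelectron. Parameters are illustrative, roughly LSO-like.

def first_photon_time(n_pe, tau_r, tau_d, rng):
    return min(rng.expovariate(1.0 / tau_r) + rng.expovariate(1.0 / tau_d)
               for _ in range(n_pe))

rng = random.Random(3)
times = [first_photon_time(n_pe=500, tau_r=0.1, tau_d=40.0, rng=rng)  # ns
         for _ in range(1000)]
mean = sum(times) / len(times)
sd = (sum((t - mean) ** 2 for t in times) / (len(times) - 1)) ** 0.5
print(f"single-detector spread: sigma ≈ {sd * 1e3:.0f} ps, FWHM ≈ {2.355 * sd * 1e3:.0f} ps")
```

Setting tau_r near zero recovers the single-exponential limit with a markedly smaller spread, which is the rise-time effect the abstract describes.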

  2. On large-scale shell-model calculations in {sup 4}He

    Energy Technology Data Exchange (ETDEWEB)

    Bishop, R.F.; Flynn, M.F. (Manchester Univ. (UK). Inst. of Science and Technology); Bosca, M.C.; Buendia, E.; Guardiola, R. (Granada Univ. (Spain). Dept. de Fisica Moderna)

    1990-03-01

    Most shell-model calculations of {sup 4}He require very large basis spaces for the energy spectrum to stabilise. Coupled cluster methods and an exact treatment of the centre-of-mass motion dramatically reduce the number of configurations. We thereby obtain almost exact results with small bases, but which include states of very high excitation energy. (author).

  3. Covariance matrices for nuclear cross sections derived from nuclear model calculations

    International Nuclear Information System (INIS)

    Smith, D. L.

    2005-01-01

    The growing need for covariance information to accompany the evaluated cross section data libraries utilized in contemporary nuclear applications is spurring the development of new methods to provide this information. Many of the current general purpose libraries of evaluated nuclear data used in applications are derived either almost entirely from nuclear model calculations or from nuclear model calculations benchmarked by available experimental data. Consequently, a consistent method for generating covariance information under these circumstances is required. This report discusses a new approach to producing covariance matrices for cross sections calculated using nuclear models. The present method involves establishing uncertainty information for the underlying parameters of nuclear models used in the calculations and then propagating these uncertainties through to the derived cross sections and related nuclear quantities by means of a Monte Carlo technique rather than the more conventional matrix error propagation approach used in some alternative methods. The formalism to be used in such analyses is discussed in this report along with various issues and caveats that need to be considered in order to proceed with a practical implementation of the methodology
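The Monte Carlo error propagation described here can be reduced to a minimal sketch: assign uncertainties to the model parameters, sample them, recompute the cross section on an energy grid, and accumulate the covariance matrix of the results. The exponential "model" below is a toy stand-in for a real nuclear-model code, with invented parameter uncertainties:

```python
import math
import random

# Minimal sketch of the Monte Carlo covariance approach: sample model
# parameters from assumed uncertainties, recompute the cross section on
# an energy grid per history, and build the covariance matrix of the
# results. The exponential model is a toy stand-in, not a reaction code.

def toy_cross_section(energy, a, b):
    return a * math.exp(-energy / b)

energies = [1.0, 2.0, 5.0, 10.0]          # MeV grid
rng = random.Random(4)
n_hist = 5000
samples = []
for _ in range(n_hist):
    a = rng.gauss(2.0, 0.1)               # barns: assumed 5% uncertainty
    b = rng.gauss(8.0, 0.8)               # MeV: assumed 10% uncertainty
    samples.append([toy_cross_section(e, a, b) for e in energies])

n_e = len(energies)
means = [sum(s[i] for s in samples) / n_hist for i in range(n_e)]
cov = [[sum((s[i] - means[i]) * (s[j] - means[j]) for s in samples) / (n_hist - 1)
        for j in range(n_e)] for i in range(n_e)]
for row in cov:
    print("  ".join(f"{c:.2e}" for c in row))
```

Because every energy point in a history shares the same parameter draw, the off-diagonal elements come out strongly positive, which is exactly the cross-energy correlation information an evaluation needs to carry.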

  4. «Soft Power»: the Updated Theoretical Concept and Russian Assembly Model

    Directory of Open Access Journals (Sweden)

    Владимир Сергеевич Изотов

    2011-12-01

The article is dedicated to the critically important informational and ideological aspects of Russia's foreign policy. The goal is to revise and specify the notion of «soft power» in the context of the rapidly changing space of global politics. In recent years the international isolation of Russia, including in the informational and ideological sphere, has been increasing. The way to overcome this negative trend is modernization of the foreign policy strategy on the basis of updated operational tools and ideological accents. It is becoming obvious that real foreign policy success in the global world system is achieved through the use of soft power. The author seeks to specify and conceptualize the phenomenon of Russia's soft power as a purposeful external ideology facing an urgent need for updating.

  5. SHINE Virtual Machine Model for In-flight Updates of Critical Mission Software

    Science.gov (United States)

    Plesea, Lucian

    2008-01-01

    This software is a new target for the Spacecraft Health Inference Engine (SHINE) knowledge base that compiles a knowledge base to a language called Tiny C - an interpreted version of C that can be embedded on flight processors. This new target allows portions of a running SHINE knowledge base to be updated on a "live" system without needing to halt and restart the containing SHINE application. This enhancement will directly provide this capability without the risk of software validation problems and can also enable complete integration of BEAM and SHINE into a single application. This innovation enables SHINE deployment in domains where autonomy is used during flight-critical applications that require updates. This capability eliminates the need for halting the application and performing potentially serious total system uploads before resuming the application with the loss of system integrity. This software enables additional applications at JPL (microsensors, embedded mission hardware) and increases the marketability of these applications outside of JPL.

  6. MODEL OF TAKEOFF AND LANDING OPERATIONS FOR CALCULATING AERODROME CAPACITY

    Directory of Open Access Journals (Sweden)

    I. Yu. Agafonova

    2014-01-01

The procedures for the takeoff and landing of an aircraft flow are discussed. An approach to the construction of a model for calculating aerodrome capacity is proposed. The model is decomposed and one of its elements, the approach mode, is investigated. The estimation of the time interval for this mode and the limitations on the minimum distances between aircraft in the stream are shown.
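The approach-mode interval can be sketched with the classic landing-interval relations: when the trailing aircraft is faster than its leader, the minimum separation is binding at the runway threshold; otherwise it is binding at the approach-gate entry, and capacity is the reciprocal of the interval. All numbers below are illustrative, not regulatory separation minima:

```python
# Hypothetical sketch of the approach-mode element using the classic
# landing-interval relations. Closing case (faster trailer): separation
# binds at the threshold. Opening case (slower trailer): separation
# binds at the approach-gate entry. Numbers are illustrative only.

def min_interval_s(sep_km, v_lead_kmh, v_trail_kmh, path_km):
    """Minimum time gap (s) between successive threshold crossings."""
    if v_trail_kmh > v_lead_kmh:
        # closing case: tightest at the threshold
        return 3600.0 * sep_km / v_lead_kmh
    # opening case: tightest at the approach-gate entry
    return 3600.0 * (sep_km / v_trail_kmh
                     + path_km * (1.0 / v_trail_kmh - 1.0 / v_lead_kmh))

interval = min_interval_s(sep_km=9.3, v_lead_kmh=260.0, v_trail_kmh=220.0, path_km=20.0)
print(f"interval ≈ {interval:.0f} s, capacity ≈ {3600.0 / interval:.1f} landings/h")
```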

  7. Significance of predictive models/risk calculators for HBV-related hepatocellular carcinoma

    OpenAIRE

    DONG Jing

    2015-01-01

    Hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) is a major public health problem in Southeast Asia. In recent years, researchers from Hong Kong and Taiwan have reported predictive models or risk calculators for HBV-associated HCC by studying its natural history, which, to some extent, predicts the possibility of HCC development. Generally, risk factors of each model involve age, sex, HBV DNA level, and liver cirrhosis. This article discusses the evolution and clinical significa...

  8. A three-dimensional model for calculating the micro disk laser resonant-modes

    International Nuclear Information System (INIS)

    Sabetjoo, H.; Bahrampor, A.; Farrahi-Moghaddam, R.

    2006-01-01

    In this article, a semi-analytical model for the theoretical analysis of micro disk lasers is presented. Using this model, the necessary conditions for the existence of lossless and low-loss modes of micro-resonators are obtained. The resonance frequencies of the resonant modes and the attenuation of the low-loss modes are calculated. By comparing these results with those of the finite-difference method, their validity is verified.

  9. A mathematical model of the nine-month pregnant woman for calculating specific absorbed fractions

    International Nuclear Information System (INIS)

    Watson, E.E.; Stabin, M.G.

    1986-01-01

    Existing models that allow calculation of internal doses from radionuclide intakes by both men and women are based on a mathematical model of Reference Man. No attempt has been made to allow for the changing geometric relationships that occur during pregnancy which would affect the doses to the mother's organs and to the fetus. As pregnancy progresses, many of the mother's abdominal organs are repositioned, and their shapes may be somewhat changed. Estimation of specific absorbed fractions requires that existing mathematical models be modified to accommodate these changes. Specific absorbed fractions for Reference Woman at three, six, and nine months of pregnancy should be sufficient for estimating the doses to the pregnant woman and the fetus. This report describes a model for the pregnant woman at nine months. An enlarged uterus was incorporated into a model for Reference Woman. Several abdominal organs as well as the exterior of the trunk were modified to accommodate the new uterus. This model will allow calculation of specific absorbed fractions for the fetus from photon emitters in maternal organs. Specific absorbed fractions for the repositioned maternal organs from other organs can also be calculated. 14 refs., 2 figs

  10. PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0

    Science.gov (United States)

    Sa, Ben-Hao; Zhou, Dai-Mei; Yan, Yu-Liang; Dong, Bao-Guo; Cai, Xu

    2013-05-01

    We have updated the parton and hadron cascade model PACIAE 2.0 (cf. Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Xiao-Mei Li, Sheng-Qin Feng, Bao-Guo Dong, Xu Cai, Comput. Phys. Comm. 183 (2012) 333) to the new issue PACIAE 2.1. The PACIAE model is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum pT is randomly sampled in the string fragmentation, the px and py components are originally placed at random on the circle of radius pT. They are now placed on the circumference of an ellipse with semi-major and semi-minor axes pT(1+δp) and pT(1-δp), respectively, in order to better investigate the final-state transverse momentum anisotropy. New version program summary. Manuscript title: PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0. Authors: Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Bao-Guo Dong, and Xu Cai. Program title: PACIAE version 2.1. Licensing provisions: none. Programming language: FORTRAN 77 or GFORTRAN. Computer: DELL Studio XPS and others with a FORTRAN 77 or GFORTRAN compiler. Operating system: Linux or Windows with a FORTRAN 77 or GFORTRAN compiler. RAM: ≈ 1 GB. Keywords: relativistic nuclear collision; PYTHIA model; PACIAE model. Classification: 11.1, 17.8. Catalogue identifier of previous version: aeki_v1_0*. Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 333. Does the new version supersede the previous version?: Yes. Nature of problem: PACIAE is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum (pT) is randomly sampled in the string fragmentation, the px and py components are randomly placed on the circle with radius pT. This strongly cancels the final-state transverse momentum asymmetry developed dynamically. Solution method: The px and py components of a hadron in the string fragmentation are now randomly placed on the circumference of an ellipse with
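The 2.0 → 2.1 physics change fits in a few lines: instead of placing (px, py) uniformly on a circle of radius pT, place it on an ellipse with semi-axes pT(1+δp) and pT(1−δp). A sketch (the δp value here is illustrative):

```python
import math
import random

# The PACIAE 2.0 -> 2.1 change in miniature: place (px, py) on an
# ellipse with semi-axes pT(1+dp) and pT(1-dp) instead of a circle of
# radius pT. The delta_p value below is illustrative.

def sample_pt_components(pt, delta_p, rng):
    phi = 2.0 * math.pi * rng.random()
    return (pt * (1.0 + delta_p) * math.cos(phi),
            pt * (1.0 - delta_p) * math.sin(phi))

rng = random.Random(5)
pairs = [sample_pt_components(1.0, 0.2, rng) for _ in range(20000)]
mean_px2 = sum(px * px for px, _ in pairs) / len(pairs)
mean_py2 = sum(py * py for _, py in pairs) / len(pairs)
print(f"<px^2> = {mean_px2:.3f}, <py^2> = {mean_py2:.3f}")
```

With δp = 0 the two averages coincide at pT²/2; any δp > 0 builds an x-y anisotropy into the fragmentation output, which is the point of the update.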

  11. In the absence of physical practice, observation and imagery do not result in updating of internal models for aiming.

    Science.gov (United States)

    Ong, Nicole T; Larssen, Beverley C; Hodges, Nicola J

    2012-04-01

    The presence of after-effects in adaptation tasks implies that an existing internal model has been updated. Previously, we showed that although observers adapted to a visuomotor perturbation, they did not show after-effects. In this experiment, we tested 2 further observer groups and an actor group. Observers were now actively engaged in watching (encouraged through imagery and movement estimation), with one group physically practising for 25% of the trials (mixed). Participants estimated the hand movements that produced various cursor trajectories and/or their own hand movement from a preceding trial. These trials also allowed us to assess the development of explicit knowledge as a function of the three practice conditions. The pure observation group did not show after-effects, whereas the actor and mixed groups did. The pure observation group improved their ability to estimate hand movement of the video model. Although the actor and mixed groups improved in actual reaching accuracy, they did not improve in explicit estimation. The mixed group was more accurate in reaching during adaptation and showed larger after-effects than the actors. We suggest that observation encourages an explicit mode of learning, enabling performance benefits without corresponding changes to an internal model of the mapping between output and sensory input. However, some physical practice interspersed with observation can change the manner with which learning is achieved, encouraging implicit learning and the updating of an existing internal model.

  12. Calculational analysis of errors for various models of an experiment on measuring leakage neutron spectra

    International Nuclear Information System (INIS)

    Androsenko, A.A.; Androsenko, P.A.; Deeva, V.V.; Prokof'eva, Z.A.

    1990-01-01

    An analysis is made of the effect of the accuracy of the mathematical model of the system on the calculation results obtained with the BRAND program system. Consideration is given to the impact of the following factors: the accuracy with which the energy-angular characteristics of the neutron source are described, various degrees of approximation of the system geometry, and the adequacy of the Monte Carlo estimator to a real physical neutron detector. The calculated results are analysed on the basis of experiments measuring leakage neutron spectra in spherical lead assemblies with a 14 MeV neutron source at the centre. 4 refs.; 2 figs.; 10 tabs

  13. Review of calculational models and computer codes for environmental dose assessment of radioactive releases

    International Nuclear Information System (INIS)

    Strenge, D.L.; Watson, E.C.; Droppo, J.G.

    1976-06-01

    The development of technological bases for siting nuclear fuel cycle facilities requires calculational models and computer codes for the evaluation of risks and the assessment of environmental impact of radioactive effluents. A literature search and review of available computer programs revealed that no one program was capable of performing all of the great variety of calculations (i.e., external dose, internal dose, population dose, chronic release, accidental release, etc.). Available literature on existing computer programs has been reviewed and a description of each program reviewed is given

  15. Shell model calculation for Te and Sn isotopes in the vicinity of {sup 100}Sn

    Energy Technology Data Exchange (ETDEWEB)

    Yakhelef, A.; Bouldjedri, A. [Physics Department, Farhat abbas University, Setif (Algeria); Physics Department, Hadj Lakhdar University, Batna (Algeria)

    2012-06-27

    New shell model calculations for the even-even isotopes {sup 104-108}Sn and {sup 106,108}Te, in the vicinity of {sup 100}Sn, have been performed. The calculations were carried out using the Windows version of NuShell-MSU. The two-body matrix elements (TBMEs) of the effective interaction between valence nucleons are obtained from the renormalized two-body effective interaction based on a G-matrix derived from the CD-Bonn nucleon-nucleon potential. The single-particle energies of the proton and neutron valence-space orbitals are taken from the available spectra of the lightest odd isotopes of Sb and Sn, respectively.

  16. A brief look at model-based dose calculation principles, practicalities, and promise.

    Science.gov (United States)

    Sloboda, Ron S; Morrison, Hali; Cawston-Grant, Brie; Menon, Geetha V

    2017-02-01

    Model-based dose calculation algorithms (MBDCAs) have recently emerged as potential successors to the highly practical, but sometimes inaccurate TG-43 formalism for brachytherapy treatment planning. So named for their capacity to more accurately calculate dose deposition in a patient using information from medical images, these approaches to solve the linear Boltzmann radiation transport equation include point kernel superposition, the discrete ordinates method, and Monte Carlo simulation. In this overview, we describe three MBDCAs that are commercially available at the present time, and identify guidance from professional societies and the broader peer-reviewed literature intended to facilitate their safe and appropriate use. We also highlight several important considerations to keep in mind when introducing an MBDCA into clinical practice, and look briefly at early applications reported in the literature and selected from our own ongoing work. The enhanced dose calculation accuracy offered by a MBDCA comes at the additional cost of modelling the geometry and material composition of the patient in treatment position (as determined from imaging), and the treatment applicator (as characterized by the vendor). The adequacy of these inputs and of the radiation source model, which needs to be assessed for each treatment site, treatment technique, and radiation source type, determines the accuracy of the resultant dose calculations. Although new challenges associated with their familiarization, commissioning, clinical implementation, and quality assurance exist, MBDCAs clearly afford an opportunity to improve brachytherapy practice, particularly for low-energy sources.

  17. A model for the calculation of the radiation dose from natural radionuclides in The Netherlands

    International Nuclear Information System (INIS)

    Ackers, J.G.

    1986-02-01

    A model has been developed to calculate the radiation dose incurred from natural radioactivity indoors and outdoors, expressed as effective dose equivalent per year. The model is applied to a three-room dwelling characterized by interconnecting air flows and to a dwelling with a crawl space. In this model the individual parameters can be varied in order to investigate their relative influence. The effective dose equivalent for an adult in the dwelling was calculated to be about 1.7 mSv/year, composed of 15% from cosmic radiation, 35% from terrestrial radioactivity, 20% from radioactivity in the body and 30% from natural radionuclides in building materials. The calculations show an enhancement of about a factor of two in the radon concentration in the air of a room which is ventilated by air from an adjacent room. It is also shown that the attachment rate of radon decay products to aerosols and the plate-out effect are relatively important parameters influencing the magnitude of the dose rate. (Auth.)
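The roughly factor-of-two enhancement for series ventilation follows from a simple steady-state mass balance: a room ventilated with air drawn from an adjacent room inherits that room's radon and adds its own. A minimal sketch with invented values (radioactive decay neglected against the air-exchange rate):

```python
# Minimal steady-state mass balance behind the factor-of-two remark:
# airflow*c_in + entry_rate = airflow*c_room, so a room fed by air from
# an adjacent room inherits that room's radon and adds its own.
# Values are invented; decay is neglected against air exchange.

def steady_radon(c_in_bq_m3, entry_rate_bq_h, airflow_m3_h):
    """Steady-state radon concentration (Bq/m3) of a ventilated room."""
    return c_in_bq_m3 + entry_rate_bq_h / airflow_m3_h

outdoor = 5.0                                               # Bq/m3
room1 = steady_radon(outdoor, entry_rate_bq_h=2000.0, airflow_m3_h=50.0)
room2 = steady_radon(room1, entry_rate_bq_h=2000.0, airflow_m3_h=50.0)
print(f"room 1: {room1:.0f} Bq/m3, room 2: {room2:.0f} Bq/m3")
```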

  18. The development of early pediatric models and their application to radiation absorbed dose calculations

    International Nuclear Information System (INIS)

    Poston, J.W.

    1989-01-01

    This presentation will review and describe the development of pediatric phantoms for use in radiation dose calculations. The development of pediatric models for dose calculations essentially paralleled that of the adult. In fact, Snyder and Fisher at the Oak Ridge National Laboratory reported on a series of phantoms for such calculations in 1966, about two years before the first MIRD publication on the adult human phantom. These phantoms, for a newborn and for one-, five-, ten-, and fifteen-year-olds, were derived from the adult phantom. The ''pediatric'' models were obtained through a series of transformations applied to the major dimensions of the adult, which were specified in a Cartesian coordinate system. These phantoms suffered from the fact that no real consideration was given to the influence of these mathematical transformations on the actual organ sizes in the resulting models, nor to the relation of the resulting organ masses to those in humans of the particular age. Later, an extensive effort was invested in designing ''individual'' pediatric phantoms for each age based upon a careful review of the literature. Unfortunately, these phantoms had limited use, and only a small number of calculations were made available to the user community. Examples of the phantoms, their typical dimensions, common weaknesses, etc. will be discussed.

  19. A new model for the accurate calculation of natural gas viscosity

    Directory of Open Access Journals (Sweden)

    Xiaohong Yang

    2017-03-01

    Full Text Available Viscosity of natural gas is a basic and important parameter, of theoretical and practical significance in the domains of natural gas recovery, transmission and processing. In order to obtain accurate viscosity data efficiently and at low cost, a new model and its corresponding functional relation are derived on the basis of the relationship among viscosity, temperature and density given by the kinetic theory of gases. After the model parameters were optimized against a large body of experimental data, a diagram showing the variation of viscosity with temperature and density was prepared, showing that: ① the gas viscosity increases with increasing density, and with increasing temperature in the low-density region; ② the gas viscosity increases with decreasing temperature in the high-density region. With this new model, the viscosity of 9 natural gas samples was calculated precisely. The average relative deviation between these calculated values and 1539 experimental data points measured at 250–450 K and 0.10–140.0 MPa is less than 1.9%. Compared with the 793 experimental data points with a measurement error of less than 0.5%, the maximum relative deviation is less than 0.98%. It is concluded that this new model is more advantageous than the previous 8 models in terms of simplicity, accuracy, speed of calculation, and direct applicability to CO2-bearing gas samples.
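The abstract does not reproduce the model's functional form, but the deviation statistic it reports can be illustrated with a generic benchmark helper. The viscosity figures below are invented for illustration only.

```python
def avg_relative_deviation(predicted, measured):
    """Mean absolute relative deviation (%) used to benchmark a
    viscosity correlation against experimental data."""
    devs = [abs(p - m) / m for p, m in zip(predicted, measured)]
    return 100.0 * sum(devs) / len(devs)

# Hypothetical viscosities in micropascal-seconds at four (T, rho) states.
measured = [11.0, 13.5, 18.2, 25.4]
predicted = [11.2, 13.3, 18.5, 25.0]
ard = avg_relative_deviation(predicted, measured)
```

The paper's "average relative deviation less than 1.9%" is exactly this statistic evaluated over its 1539 experimental points.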

  20. Development of a model to calculate the economic implications of improving the indoor climate

    DEFF Research Database (Denmark)

    Jensen, Kasper Lynge

    in the indoor environment. Office workers exposed to the same indoor environment conditions will in many cases wear different clothing, have different metabolic rates, experience micro environment differences etc. all factors that make it difficult to estimate the effects of the indoor environment...... have been developed; one model estimating the effects of indoor temperature on mental performance and one model estimating the effects of air quality on mental performance. Combined with dynamic building simulations and dose-response relationships, the derived models were used to calculate the total...... on performance. The Bayesian Network uses a probabilistic approach by which a probability distribution can take this variation of the different indoor variables into account. The result from total building economy calculations indicated that depending on the indoor environmental change (improvement...

  1. OPT13B and OPTIM4 - computer codes for optical model calculations

    International Nuclear Information System (INIS)

    Pal, S.; Srivastava, D.K.; Mukhopadhyay, S.; Ganguly, N.K.

    1975-01-01

    OPT13B is a computer code in FORTRAN for optical model calculations with automatic search. A summary of the different formulae used for computation is given. Numerical methods are discussed. The 'search' technique followed to obtain the set of optical model parameters which produces the best fit to experimental data in a least-squares sense is also discussed. The different subroutines of the program are briefly described. Input-output specifications are given in detail. A modified version of OPT13B is OPTIM4. It can be used for optical model calculations where the form factors of different parts of the optical potential are known point by point. A brief description of the modifications is given. (author)
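The automatic least-squares 'search' such codes perform can be sketched with a toy chi-square minimisation over a parameter grid. The model and data below are placeholders for illustration, not an actual optical potential or cross-section set.

```python
def chi_square(params, data, model):
    """Least-squares figure of merit between model and data points
    (x, y, sigma), as minimised by an automatic parameter search."""
    return sum((model(x, params) - y) ** 2 / sigma ** 2
               for x, y, sigma in data)

def grid_search(data, model, grid_a, grid_b):
    """Exhaustive two-parameter grid search, a stand-in for the more
    sophisticated search loop in a code like OPT13B (illustrative)."""
    best = None
    for a in grid_a:
        for b in grid_b:
            c2 = chi_square((a, b), data, model)
            if best is None or c2 < best[0]:
                best = (c2, (a, b))
    return best

# Hypothetical "data" generated from a linear toy model with 10% errors.
toy = lambda x, p: p[0] + p[1] * x
data = [(x, 2.0 + 0.5 * x, 0.1) for x in range(5)]
best_chi2, best_params = grid_search(data, toy, [1.5, 2.0, 2.5], [0.0, 0.5, 1.0])
```

A production search would iterate gradient or simplex steps between grid points; the figure of merit is the same.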

  2. Animal models of autism with a particular focus on the neural basis of changes in social behaviour: an update article.

    Science.gov (United States)

    Olexová, Lucia; Talarovičová, Alžbeta; Lewis-Evans, Ben; Borbélyová, Veronika; Kršková, Lucia

    2012-12-01

    Research on autism has been gaining more and more attention. However, its aetiology is not entirely known, and several factors are thought to contribute to the development of this neurodevelopmental disorder. These potential contributing factors range from genetic heritability to environmental effects. A significant number of reviews have already been published on different aspects of autism research, as well as on the use of animal models to help expand current knowledge of its aetiology. However, the diverse range of symptoms and possible causes of autism have resulted in an equally wide variety of animal models of autism. In this update article we focus only on the animal models with neurobehavioural characteristics of social deficit related to autism and present an overview of the animal models with alterations in brain regions, neurotransmitters, or hormones that are involved in a decrease in sociability. Copyright © 2012 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  3. A four-equation friction model for water hammer calculation in quasi-rigid pipelines

    International Nuclear Information System (INIS)

    Ghodhbani, Abdelaziz; Haj Taïeb, Ezzeddine

    2017-01-01

    Friction coupling affects water hammer evolution in pipelines according to the initial flow regime. Unsteady friction models have only been validated with the uncoupled formulation. On the other hand, coupled models such as the four-equation model provide a more accurate prediction of water hammer, since fluid-structure interaction (FSI) is taken into account, but they are limited to a steady-state friction formulation. This paper deals with the creation of a “four-equation friction model”, which is based on the incorporation of the unsteady head loss given by an unsteady friction model into the four-equation model. For transient laminar flow cases, the Zielke model is considered. The proposed model is applied to a quasi-rigid pipe with an axially moving valve, and solved by the method of characteristics (MOC). Damping and shape of the numerical solution are in good agreement with experimental data. Thus, the proposed model can be incorporated into a new computer code. - Highlights: • Both the Zielke model and the four-equation model are insufficient to predict water hammer. • The proposed four-equation friction model is obtained by incorporating the unsteady head loss into the four-equation model. • The solution obtained by the proposed model is in good agreement with experimental data. • The wave-speed adjustment scheme is more efficient than interpolation schemes.
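Leaving FSI aside, the MOC update underlying such solvers can be sketched for the classical two-equation water hammer model; the four-equation model adds axial pipe motion on top of this scheme. The frictionless Joukowsky head rise a·V0/g gives a built-in correctness check. Pipe parameters below are hypothetical.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def moc_step(h, q, a, area, diam, f, dx):
    """One time step (dt = dx/a) of the classical two-equation water
    hammer model by the method of characteristics, with a fixed-head
    reservoir upstream and an instantaneously closed valve downstream.
    B and R follow the standard MOC notation."""
    b = a / (G * area)
    r = f * dx / (2 * G * diam * area ** 2)
    cp = [h[i] + b * q[i] - r * q[i] * abs(q[i]) for i in range(len(h))]  # C+ invariants
    cm = [h[i] - b * q[i] + r * q[i] * abs(q[i]) for i in range(len(h))]  # C- invariants
    h_new, q_new = h[:], q[:]
    for i in range(1, len(h) - 1):                  # interior nodes
        h_new[i] = 0.5 * (cp[i - 1] + cm[i + 1])
        q_new[i] = (cp[i - 1] - cm[i + 1]) / (2 * b)
    h_new[0] = h[0]                                 # reservoir holds its head
    q_new[0] = (h_new[0] - cm[1]) / b               # from the C- characteristic
    q_new[-1] = 0.0                                 # valve closed
    h_new[-1] = cp[-2]                              # C+ characteristic with Q_P = 0
    return h_new, q_new

# Frictionless check against the Joukowsky head rise a*V0/g.
area = math.pi * 0.1 ** 2 / 4                       # 0.1 m pipe diameter
h1, q1 = moc_step([50.0] * 11, [area * 1.0] * 11,   # V0 = 1 m/s everywhere
                  a=1200.0, area=area, diam=0.1, f=0.0, dx=10.0)
```

After one step the head at the closed valve rises by a·V0/g ≈ 122 m, while the interior nodes, not yet reached by the wave, are unchanged.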

  4. The High Level Mathematical Models in Calculating Aircraft Gas Turbine Engine Parameters

    Directory of Open Access Journals (Sweden)

    Yu. A. Ezrokhi

    2017-01-01

    Full Text Available The article describes high-level mathematical models developed to solve special problems arising at later stages of design with regard to calculation of the aircraft gas turbine engine (GTE) under real operating conditions. The use of blade row mathematical models, as well as mathematical models of a higher level, including 2D and 3D descriptions of the working process in the engine units and components, makes it possible to determine parameters and characteristics of the aircraft engine under conditions significantly different from the calculated ones. The paper considers the application of mathematical modelling methods (MMM) for solving a wide range of practical problems, such as forcing the engine by injection of water into the flow path, estimating the effect of thermal instability on the GTE characteristics, and simulating engine start-up and windmilling conditions. It shows that the use of MMM when optimizing the laws of compressor stator control, as well as the supply of cooling air to the hot turbine components, can significantly improve the integral thrust and economic characteristics of the engine in terms of its gas-dynamic stability, reliability and service life. It ought to be borne in mind that blade row mathematical models of the engine are designed to solve purely "engine" problems and do not replace the existing models of various complexity levels used in the calculation and design of compressors and turbines, because the "quality" of their description of the working processes in these units is inevitably inferior to such specialized models. It is shown that the choice of the mathematical modelling level of an aircraft engine for solving a particular problem arising in its design and computational study is to a large extent a compromise problem, despite the significantly higher "resolution" and information content of the engine mathematical models containing 2D and 3D approaches to the calculation of flow in blade machines.

  5. ISICS2011, an updated version of ISICS: A program for calculation K-, L-, and M-shell cross sections from PWBA and ECPSSR theories using a personal computer

    Science.gov (United States)

    Cipolla, Sam J.

    2011-11-01

    In this new version of ISICS, called ISICS2011, a few omissions and incorrect entries in the built-in file of electron binding energies have been corrected, and operational situations leading to un-physical behavior have been identified and flagged.
    New version program summary
    Program title: ISICS2011
    Catalogue identifier: ADDS_v5_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADDS_v5_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 6011
    No. of bytes in distributed program, including test data, etc.: 130 587
    Distribution format: tar.gz
    Programming language: C
    Computer: 80486 or higher-level PCs
    Operating system: WINDOWS XP and all earlier operating systems
    Classification: 16.7
    Catalogue identifier of previous version: ADDS_v4_0
    Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1716.
    Does the new version supersede the previous version?: Yes
    Nature of problem: Ionization and X-ray production cross section calculations for ion-atom collisions.
    Solution method: Numerical integration of form factor using a logarithmic transform and Gaussian quadrature, plus exact integration limits.
    Reasons for new version: General need for higher precision in output format for projectile energies; some built-in binding energies needed correcting; some anomalous results occur due to faulty read-in data or calculated parameters becoming un-physical; erroneous calculations could result for the L and M shells when restricted K-shell options are inadvertently chosen; to achieve general compatibility with ISICSoo, a companion C++ version that is portable to Linux and MacOS platforms, has been submitted for publication in the CPC Program Library approximately at the same time as this present new standalone version of ISICS [1].
    Summary of revisions: The format field for

  6. Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model.

    Science.gov (United States)

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-01-01

    The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy.
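The double-Gaussian lateral model described here amounts to a weighted sum of a narrow core Gaussian and a wide, low-amplitude halo Gaussian. A sketch with hypothetical beam parameters (not fitted to any actual beam line):

```python
import math

def gauss2d(r, sigma):
    """Radially symmetric 2D Gaussian, normalised to unit integral."""
    return math.exp(-r ** 2 / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)

def lateral_dose(r, sigma_core, sigma_halo, w_halo):
    """Double-Gaussian lateral fluence model: a narrow core plus a wide
    halo from secondary particles; the two weights sum to one."""
    return (1 - w_halo) * gauss2d(r, sigma_core) + w_halo * gauss2d(r, sigma_halo)

# Hypothetical pencil beam: 5 mm core, 20 mm halo carrying 10% of the dose.
core_only = lateral_dose(15.0, 5.0, 20.0, 0.0)
with_halo = lateral_dose(15.0, 5.0, 20.0, 0.1)
```

Several sigma away from the axis, the halo term dominates, which is why a single-Gaussian model underestimates the dose there; summing this low-dose tail over many neighbouring spots is what makes the correction clinically relevant.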

  7. Fast pencil beam dose calculation for proton therapy using a double-Gaussian beam model

    Directory of Open Access Journals (Sweden)

    Joakim eda Silva

    2015-12-01

    Full Text Available The highly conformal dose distributions produced by scanned proton pencil beams are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a pencil beam algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such pencil beam algorithm for proton therapy running on a GPU. We employ two different parametrizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of pencil beams in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included whilst prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Further, the calculation time is relatively unaffected by the parametrization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy.

  8. Measurement-based aerosol forcing calculations: The influence of model complexity

    Directory of Open Access Journals (Sweden)

    Manfred Wendisch

    2001-03-01

    Full Text Available On the basis of ground-based microphysical and chemical aerosol measurements, a simple 'two-layer-single-wavelength' and a complex 'multiple-layer-multiple-wavelength' radiative transfer model are used to calculate the local solar radiative forcing of black carbon (BC) and (NH4)2SO4 (ammonium sulfate) particles and mixtures (external and internal) of both materials. The focal points of our approach are (a) that the radiative forcing calculations are based on detailed aerosol measurements with special emphasis on particle absorption, and (b) that the results of the radiative forcing calculations with two different types of models (with regard to model complexity) are compared using identical input data. The sensitivity of the radiative forcing to key input parameters (type of particle mixture, particle growth due to humidity, surface albedo, solar zenith angle, boundary layer height) is investigated. It is shown that the model results for external particle mixtures (wet and dry) only slightly differ from those of the corresponding internal mixture. This conclusion is valid for the results of both model types and for both surface albedo scenarios considered (grass and snow). Furthermore, it is concluded that the results of the two model types approximately agree if it is assumed that the aerosol particles are composed of pure BC. As soon as a mainly scattering substance is included, alone or in (internal or external) mixture with BC, the differences between the radiative forcings of both models become significant. This discrepancy results from neglecting multiple scattering effects in the simple radiative transfer model.

  9. Study on the Calculation Models of Bus Delay at Bays Using Queueing Theory and Markov Chain

    Directory of Open Access Journals (Sweden)

    Feng Sun

    2015-01-01

    Full Text Available Traffic congestion at bus bays has seriously decreased the service efficiency of public transit in China, so it is crucial to study its theory and methods systematically. However, the existing studies lack a theoretical model for computing efficiency. Therefore, the calculation models of bus delay at bays are studied. Firstly, the process by which buses are delayed at bays is analyzed, and it is found that the delay can be divided into entering delay and exiting delay. Secondly, the queueing models of bus bays are formed, and the equilibrium distribution functions are proposed by applying the embedded Markov chain to the traditional queueing theory model in the steady state; the calculation models of entering delay at bays are then derived. Thirdly, the exiting delay is studied using queueing theory and gap acceptance theory. Finally, the proposed models are validated using field-measured data, and the influencing factors are then discussed. With these models the delay is easily assessed given the characteristics of the dwell time distribution and the traffic volume in the curb lane at different locations and in different periods. This can provide a basis for the efficiency evaluation of bus bays.
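For intuition, the entering delay in the simplest steady-state special case, an M/M/1 bay rather than the embedded-Markov-chain model of the paper, is W_q = ρ/(μ - λ). A sketch with hypothetical peak-hour figures:

```python
def mm1_entering_delay(arrival_rate, service_rate):
    """Mean queueing (entering) delay W_q = rho / (mu - lambda) for an
    M/M/1 bus bay in steady state; requires utilisation rho < 1."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilisation must be < 1")
    return rho / (service_rate - arrival_rate)

# Hypothetical figures: 60 buses/h arriving, dwell service capacity 90/h.
delay_h = mm1_entering_delay(60.0, 90.0)
delay_s = delay_h * 3600.0
```

At two-thirds utilisation this already gives an average entering delay of 80 s per bus, illustrating how sharply delay grows as the bay approaches saturation.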

  10. Generalized Born and Explicit Solvent Models for Free Energy Calculations in Organic Solvents: Cyclodextrin Dimerization.

    Science.gov (United States)

    Zhang, Haiyang; Tan, Tianwei; van der Spoel, David

    2015-11-10

    Evaluation of solvation (binding) free energies with implicit solvent models in different dielectric environments, for biological simulations as well as high-throughput ligand screening, remains a challenging endeavor. In order to address how well implicit solvent models approximate explicit ones, we examined four generalized Born models (GB(Still), GB(HCT), GB(OBC)I, and GB(OBC)II) for determining the dimerization free energy (ΔG(0)) of β-cyclodextrin monomers in 17 implicit solvents with dielectric constants (D) ranging from 5 to 80, and compared the results to previous free energy calculations with explicit solvents (Zhang et al., J. Phys. Chem. B 2012, 116, 12684-12693). The comparison indicates that neglecting the environmental dependence of the Born radii appears acceptable for such calculations involving cyclodextrin, and that the GB(Still) and GB(OBC)I models yield a reasonable estimation of ΔG(0), although the details of binding are quite different from explicit solvents. Large discrepancies between implicit and explicit solvent models occur in high-dielectric media with strong hydrogen bond (HB) interruption properties. ΔG(0) with the GB models is shown to correlate strongly with 2(D-1)/(2D+1) (R(2) ∼ 0.90), in line with the Onsager reaction field (Onsager, J. Am. Chem. Soc. 1936, 58, 1486-1493), but to be very sensitive to D (D J. Chem. Inf. Model. 2015, 55, 1192-1201) reproduce the weak experimental correlations with 2(D-1)/(2D+1) very well.
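The reported correlation of ΔG(0) with the Onsager reaction-field factor 2(D-1)/(2D+1) is an ordinary least-squares fit across solvents. A sketch follows; the free-energy values are invented for illustration, not the paper's data.

```python
def onsager_factor(d):
    """Onsager reaction-field factor 2(D-1)/(2D+1) for dielectric constant D."""
    return 2.0 * (d - 1.0) / (2.0 * d + 1.0)

def linear_fit_r2(xs, ys):
    """Ordinary least-squares slope, intercept and coefficient of
    determination R^2 for paired data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical dimerization free energies (kJ/mol) in solvents of rising D.
ds = [5.0, 10.0, 20.0, 40.0, 80.0]
dg = [-30.0, -26.5, -24.8, -24.0, -23.4]
slope, intercept, r2 = linear_fit_r2([onsager_factor(d) for d in ds], dg)
```

Note how the factor saturates quickly: it spans only about 0.73 to 0.98 between D = 5 and D = 80, so most of the dielectric sensitivity is concentrated at low D.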

  11. DIDEM - An integrated model for comparative health damage costs calculation of air pollution

    Science.gov (United States)

    Ravina, Marco; Panepinto, Deborah; Zanetti, Maria Chiara

    2018-01-01

    Air pollution represents a continuous hazard to human health. Administration, companies and population need efficient indicators of the possible effects given by a change in decision, strategy or habit. The monetary quantification of health effects of air pollution through the definition of external costs is increasingly recognized as a useful indicator to support decision and information at all levels. The development of modelling tools for the calculation of external costs can provide support to analysts in the development of consistent and comparable assessments. In this paper, the DIATI Dispersion and Externalities Model (DIDEM) is presented. The DIDEM model calculates the delta-external costs of air pollution comparing two alternative emission scenarios. This tool integrates CALPUFF's advanced dispersion modelling with the latest WHO recommendations on concentration-response functions. The model is based on the impact pathway method. It was designed to work with a fine spatial resolution and a local or national geographic scope. The modular structure allows users to input their own data sets. The DIDEM model was tested on a real case study, represented by a comparative analysis of the district heating system in Turin, Italy. Additional advantages and drawbacks of the tool are discussed in the paper. A comparison with other existing models worldwide is reported.
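The impact pathway method on which DIDEM is based chains together a concentration change, the exposed population, a concentration-response function (CRF) and a monetary unit cost. A minimal sketch of that chain, with hypothetical input values (not DIDEM parameters):

```python
def delta_external_cost(delta_conc, population, crf, unit_cost):
    """One impact pathway step: concentration change (ug/m^3) times
    exposed population times CRF (cases per person per ug/m^3) gives
    attributable cases; times unit cost gives the monetary damage."""
    cases = delta_conc * population * crf
    return cases * unit_cost

# Hypothetical comparison of two emission scenarios: 0.5 ug/m^3 PM2.5
# reduction over 800,000 people, CRF 6e-6, 50,000 EUR per case.
saving = delta_external_cost(0.5, 800_000, 6e-6, 50_000)
```

A full model repeats this product per pollutant, health endpoint and receptor grid cell, with the concentration change supplied by the dispersion model; the delta-cost between scenarios is the sum over all of them.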

  12. Iron -chromium alloys and free surfaces: from ab initio calculations to thermodynamic modeling

    International Nuclear Information System (INIS)

    Levesque, M.

    2010-11-01

    Ferritic steels, possibly strengthened by oxide dispersion, are candidate structural materials for Generation IV and fusion nuclear reactors. Their use is limited by incomplete knowledge of the iron-chromium phase diagram at low temperatures and of the phenomena inducing preferential segregation of one element at grain boundaries or at surfaces. In this context, this work contributes to the multi-scale study of model iron-chromium alloys and their free surfaces by numerical simulation. The study begins with ab initio calculations of properties related to the mixing of iron and chromium atoms. We highlight a complex dependency of the magnetic moments of the chromium atoms on their local chemical environment. Surface properties also prove sensitive to magnetism; this is the case for the segregation of chromium impurities in iron and for their interactions near the surface. In a second step, we construct a simple energy model of high numerical efficiency. It is based on pair interactions on a rigid lattice to which local chemical environment and temperature dependencies are given. With this model, we reproduce the ab initio results at zero temperature and experimental results at high temperature. We also deduce the solubility limits at all intermediate temperatures with mean-field approximations that we compare to Monte Carlo simulations. The last step of our work is to introduce free surfaces into the model. We then study the effect of ab initio calculated bulk and surface properties on surface segregation. Finally, we calculate segregation isotherms and thereby propose an evolution model of the surface composition of iron-chromium alloys as a function of bulk composition.

  13. Optimal Calculation of Residuals for ARMAX Models with Applications to Model Verification

    DEFF Research Database (Denmark)

    Knudsen, Torben

    1997-01-01

    Residual tests for sufficient model orders are based on the assumption that prediction errors are white when the model is correct. If an ARMAX system has zeros in the MA part which are close to the unit circle, then the standard predictor can have large transients. Even when the correct model...

  14. Development and application of the PBMR fission product release calculation model

    International Nuclear Information System (INIS)

    Merwe, J.J. van der; Clifford, I.

    2008-01-01

    At PBMR, long-lived fission product release from spherical fuel spheres is calculated using the German legacy software product GETTER. GETTER is a good tool when performing calculations for fuel spheres under controlled operating conditions, including irradiation tests and post-irradiation heat-up experiments. It has proved itself as a versatile reactor analysis tool, but is rather cumbersome when used for accident and sensitivity analysis. Developments in depressurized loss of forced cooling (DLOFC) accident analysis using GETTER led to the creation of FIssion Product RElease under accident (X) conditions (FIPREX), and later FIPREX-GETTER. FIPREX-GETTER is designed as a wrapper around GETTER so that calculations can be carried out for large numbers of fuel spheres with design and operating parameters that can be stochastically varied. This allows full Monte Carlo sensitivity analyses to be performed for representative cores containing many fuel spheres. The development process and application of FIPREX-GETTER in reactor analysis at PBMR is explained and the requirements for future developments of the code are discussed. Results are presented for a sample PBMR core design under normal operating conditions as well as a suite of design-base accident events, illustrating the functionality of FIPREX-GETTER. Monte Carlo sensitivity analysis principles are explained and presented for each calculation type. The plan and current status of verification and validation (V and V) is described. This is an important and necessary process for all software and calculation model development at PBMR

  15. MATHEMATICAL MODEL FOR CALCULATION OF INFORMATION RISKS FOR INFORMATION AND LOGISTICS SYSTEM

    Directory of Open Access Journals (Sweden)

    A. G. Korobeynikov

    2015-05-01

    Full Text Available Subject of research. The paper deals with a mathematical model for the assessment of information risks arising during the transport and distribution of material resources under conditions of uncertainty. Here, information risks mean the danger of losses or damage arising from the company's use of information technologies. Method. The solution is based on the ideology of the transport problem in its stochastic statement, drawing on methods of mathematical modeling theory, graph theory, probability theory, and Markov chains. The mathematical model is created in several stages. At the initial stage, the capacity of the different sites as a function of time is calculated on the basis of information received from the information and logistics system, the weight matrix is formed, and the digraph is constructed. Then the minimum route covering all specified vertices is found by means of Dijkstra's algorithm. At the second stage, systems of Kolmogorov differential equations are formed using information about the calculated route. The resulting solutions give the probabilities that the resources are located at a particular vertex as a function of time. At the third stage, the overall probability of traversing the whole route as a function of time is calculated on the basis of the multiplication theorem of probabilities. The information risk, as a function of time, is defined as the product of the greatest possible damage and the overall probability of traversing the whole route. In this case the information risk is measured in units of damage, corresponding to the monetary unit in which the information and logistics system operates. Main results. The operability of the presented mathematical model is shown on a concrete example of the transportation of material resources, in which the places of shipment and delivery, the routes and their capacity, the greatest possible damage and the admissible risk are specified.
The calculations presented on a diagram showed
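The first-stage route search described in this record is Dijkstra's algorithm on a weighted digraph. A compact sketch on a made-up transport network (node names and edge costs are hypothetical):

```python
import heapq

def dijkstra(graph, start, goal):
    """Minimum-cost route on a weighted digraph with Dijkstra's
    algorithm; graph maps a node to a list of (neighbour, cost) pairs.
    Returns (total cost, route as a list of nodes)."""
    dist, prev = {start: 0.0}, {}
    pq, seen = [(0.0, start)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    route, node = [goal], goal
    while node != start:
        node = prev[node]
        route.append(node)
    return dist[goal], route[::-1]

# Hypothetical network: edge weights stand in for transport route costs.
net = {"A": [("B", 4.0), ("C", 2.0)],
       "C": [("B", 1.0), ("D", 7.0)],
       "B": [("D", 3.0)]}
cost, route = dijkstra(net, "A", "D")
```

The model then attaches Kolmogorov (continuous-time Markov chain) equations to the chosen route to obtain the time-dependent traversal probability that scales the maximum damage.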

  16. Analytical calculation of detailed model parameters of cast resin dry-type transformers

    International Nuclear Information System (INIS)

    Eslamian, M.; Vahidi, B.; Hosseinian, S.H.

    2011-01-01

    Highlights: → In this paper the high-frequency behavior of cast resin dry-type transformers was simulated. → Parameters of the detailed model were calculated using an analytical method and compared with FEM results. → A laboratory transformer was constructed in order to compare theoretical and experimental results. -- Abstract: The non-flammable characteristics of cast resin dry-type transformers make them suitable for many kinds of applications. This paper presents an analytical method for obtaining the parameters of the detailed model of these transformers. The calculated parameters are compared with and verified against the corresponding FEM results and, where necessary, correction factors are introduced to modify the analytical solutions. Transient voltages under full and chopped test impulses are calculated using the obtained detailed model. In order to validate the model, a setup was constructed for testing the high-voltage winding of a cast resin dry-type transformer. The simulation results were compared with the experimental data measured from FRA and impulse tests.

  17. SHARC, a model for calculating atmospheric and infrared radiation under non-equilibrium conditions

    Science.gov (United States)

    Sundberg, R. L.; Duff, J. W.; Gruninger, J. H.; Bernstein, L. S.; Sharma, R. D.

    1994-01-01

    A new computer model, SHARC, has been developed by the Air Force for calculating high-altitude atmospheric IR radiance and transmittance spectra with a resolution of better than 1/cm. Comprehensive coverage of the 2 to 40 microns (250/cm to 5,000/cm) wavelength region is provided for arbitrary lines of sight in the 50-300 km altitude regime. SHARC accounts for the deviation from local thermodynamic equilibrium (LTE) in vibrational state populations by explicitly modeling the detailed production, loss, and energy transfer process among the important molecular vibrational states. The calculated vibrational populations are found to be similar to those obtained from other non-LTE codes. The radiation transport algorithm is based on a single-line equivalent width approximation along with a statistical correction for line overlap. This approach is reasonably accurate for most applications and is roughly two orders of magnitude faster than the traditional LBL methods which explicitly integrate over individual line shapes. In addition to quiescent atmospheric processes, this model calculates the auroral production and excitation of CO2, NO, and NO(+) in localized regions of the atmosphere. Illustrative comparisons of SHARC predictions to other models and to data from the CIRRIS, SPIRE, and FWI field experiments are presented.

  18. Modelling of electron contamination in clinical photon beams for Monte Carlo dose calculation

    International Nuclear Information System (INIS)

    Yang, J; Li, J S; Qin, L; Xiong, W; Ma, C-M

    2004-01-01

    The purpose of this work is to model electron contamination in clinical photon beams and to commission the source model using measured data for Monte Carlo treatment planning. In this work, a planar source is used to represent the contaminant electrons at a plane above the upper jaws. The source size depends on the dimensions of the field size at the isocentre. The energy spectra of the contaminant electrons are predetermined using Monte Carlo simulations for photon beams from different clinical accelerators. A 'random creep' method is employed to derive the weight of the electron contamination source by matching Monte Carlo calculated monoenergetic photon and electron percent depth-dose (PDD) curves with measured PDD curves. We have integrated this electron contamination source into a previously developed multiple source model and validated the model for photon beams from Siemens PRIMUS accelerators. The EGS4-based Monte Carlo user codes BEAM and MCSIM were used for linac head simulation and dose calculation. The Monte Carlo calculated dose distributions were compared with measured data. Our results showed good agreement (within 2% or 2 mm) for 6, 10 and 18 MV photon beams.
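
    The weight-derivation step can be illustrated with a toy random-search fit: perturb one component weight at a time and keep the change only if the weighted sum of pre-computed component PDD curves matches the measured PDD better. The function name, step size, and stopping rule are hypothetical, a sketch of the general idea rather than the actual BEAM/MCSIM commissioning procedure.

```python
import random

def random_creep_weights(component_pdds, measured_pdd, iters=20000, step=0.05, seed=1):
    """Fit non-negative weights w so that sum_i w_i * component_pdds[i]
    approximates measured_pdd (least squares), by randomly 'creeping'
    one weight at a time and accepting only improvements."""
    random.seed(seed)
    n = len(component_pdds)
    w = [1.0 / n] * n  # start from equal weights

    def cost(weights):
        return sum((sum(weights[i] * component_pdds[i][d] for i in range(n))
                    - measured_pdd[d]) ** 2
                   for d in range(len(measured_pdd)))

    best = cost(w)
    for _ in range(iters):
        i = random.randrange(n)
        trial = list(w)
        trial[i] = max(0.0, trial[i] + random.uniform(-step, step))
        c = cost(trial)
        if c < best:
            w, best = trial, c
    return w, best
```

Because each accepted move strictly reduces the mismatch, the search never diverges; for a well-conditioned set of component curves it settles close to the least-squares weights.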

  19. An Updated Subsequent Injury Categorisation Model (SIC-2.0): Data-Driven Categorisation of Subsequent Injuries in Sport.

    Science.gov (United States)

    Toohey, Liam A; Drew, Michael K; Fortington, Lauren V; Finch, Caroline F; Cook, Jill L

    2018-03-03

    Accounting for subsequent injuries is critical for sports injury epidemiology. The subsequent injury categorisation (SIC-1.0) model was developed to create a framework for accurate categorisation of subsequent injuries but its operationalisation has been challenging. The objective of this study was to update the subsequent injury categorisation (SIC-1.0 to SIC-2.0) model to improve its utility and application to sports injury datasets, and to test its applicability to a sports injury dataset. The SIC-1.0 model was expanded to include two levels of categorisation describing how previous injuries relate to subsequent events. A data-driven classification level was established containing eight discrete injury categories identifiable without clinical input. A sequential classification level that sub-categorised the data-driven categories according to their level of clinical relatedness has 16 distinct subsequent injury types. Manual and automated SIC-2.0 model categorisation were applied to a prospective injury dataset collected for elite rugby sevens players over a 2-year period. Absolute agreement between the two coding methods was assessed. An automated script for automatic data-driven categorisation and a flowchart for manual coding were developed for the SIC-2.0 model. The SIC-2.0 model was applied to 246 injuries sustained by 55 players (median four injuries, range 1-12), 46 (83.6%) of whom experienced more than one injury. The majority of subsequent injuries (78.7%) were sustained to a different site and were of a different nature. Absolute agreement between the manual coding and automated statistical script category allocation was 100%. The updated SIC-2.0 model provides a simple flowchart and automated electronic script to allow both an accurate and efficient method of categorising subsequent injury data in sport.
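
    A heavily simplified, hypothetical sketch of what an automated data-driven coding step might look like: comparing a subsequent injury with a previous one on body site and injury nature alone. The real SIC-2.0 data-driven level defines eight discrete categories (and the sequential level sixteen); the four labels below are placeholders, not the published category names.

```python
def sic_category(prev, curr):
    """Toy site/nature comparison between a previous injury record and a
    subsequent one. Each record is a dict with 'site' and 'nature' keys;
    the category labels here are illustrative only."""
    same_site = prev["site"] == curr["site"]
    same_nature = prev["nature"] == curr["nature"]
    if same_site and same_nature:
        return "same site, same nature"
    if same_site:
        return "same site, different nature"
    if same_nature:
        return "different site, same nature"
    return "different site, different nature"
```

The appeal of a purely data-driven level is exactly what this sketch shows: it can be computed by script from routinely recorded fields, with no clinician judgement required, which is what made 100% agreement between manual and automated coding achievable.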

  20. Implementation of a model of atmospheric dispersion and dose calculation in the release of radioactive effluents in the Nuclear Centre

    International Nuclear Information System (INIS)

    Cruz L, C. A.

    2015-01-01

    In the present thesis, the software DERA (Dispersion of Radioactive Effluents into the Atmosphere) was developed in order to calculate the equivalent dose, external and internal, associated with the release of radioactive effluents into the atmosphere from a nuclear facility. The software describes such emissions under normal operation, without considering exceptional situations such as accidents. Several tools were integrated for describing the dispersion of radioactive effluents using site meteorological information (average wind speed and direction, and the stability profile). Starting from the calculated concentration of the effluent as a function of position, DERA estimates equivalent doses using a set of EPA and ICRP coefficients. The software contains a module that integrates a database of these coefficients for 825 different radioisotopes and uses the Gaussian method to calculate the dispersion of the effluents. This work analyzes how adequate the Gaussian model is for describing puff-type emissions. Chapter 4 concludes, on the basis of a comparison of the recommended correlations for puff-type emissions, that under certain conditions (in particular with intermittent emissions) an adequate description is possible using the Gaussian model. The dispersion coefficients (σy and σz) used in the Gaussian model were obtained from different correlations given in the literature. Chapter 5 also presents the construction of a particular correlation using Lagrange polynomials, which takes its information from the Pasquill-Gifford-Turner (PGT) curves. This work also contains a state-of-the-art review of the coefficients that relate concentration to equivalent dose. This topic is discussed in Chapter 6, including a brief description of the biological compartmental models developed by the ICRP. The software was developed in the programming language Python 2.7, for the Windows operating system (the XP
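
    The Gaussian method referred to above can be sketched as the standard ground-reflected plume formula, C(x,y,z) = Q/(2π·u·σy·σz)·exp(-y²/2σy²)·[exp(-(z-H)²/2σz²) + exp(-(z+H)²/2σz²)], with σy and σz taken from power-law fits σ = a·x^b. The coefficients below are rough illustrative values for one stability class, not the correlations or the Lagrange-polynomial PGT fit actually constructed in the thesis.

```python
import math

# Illustrative power-law fits sigma = a * x**b to Pasquill-Gifford-Turner-style
# curves; placeholder coefficients for stability class D only.
SIGMA = {"D": {"y": (0.08, 0.90), "z": (0.06, 0.85)}}

def plume_concentration(Q, u, x, y, z, H, stability="D"):
    """Ground-reflected Gaussian plume concentration at (x, y, z) for a
    source of strength Q at effective height H, mean wind speed u.
    x is downwind distance [m]; units of the result follow Q/(u*m^2)."""
    ay, by = SIGMA[stability]["y"]
    az, bz = SIGMA[stability]["z"]
    sy, sz = ay * x ** by, az * x ** bz
    lateral = math.exp(-y * y / (2 * sy * sy))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sz * sz))
                + math.exp(-(z + H) ** 2 / (2 * sz * sz)))
    return Q / (2 * math.pi * u * sy * sz) * lateral * vertical
```

A dose estimate of the kind DERA produces then follows by multiplying the time-integrated concentration by the nuclide-specific dose coefficient from the EPA/ICRP tables.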