WorldWideScience

Sample records for validate computational models

  1. Validation of Computer Models for Homeland Security Purposes

    International Nuclear Information System (INIS)

    Schweppe, John E.; Ely, James; Kouzes, Richard T.; McConn, Ronald J.; Pagh, Richard T.; Robinson, Sean M.; Siciliano, Edward R.; Borgardt, James D.; Bender, Sarah E.; Earnhart, Alison H.

    2005-01-01

    At Pacific Northwest National Laboratory, we are developing computer models of radiation portal monitors for screening vehicles and cargo. Detailed models of the radiation detection equipment, vehicles, cargo containers, cargos, and radioactive sources have been created. These are used to determine the optimal configuration of detectors and the best alarm algorithms for the detection of items of interest while minimizing nuisance alarms due to the presence of legitimate radioactive material in the commerce stream. Most of the modeling is done with the Monte Carlo code MCNP to describe the transport of gammas and neutrons from extended sources through large, irregularly shaped absorbers to large detectors. A fundamental prerequisite is the validation of the computational models against field measurements. We describe the first step of this validation process, the comparison of the models to measurements with bare static sources.

  2. Validation of a phytoremediation computer model

    Energy Technology Data Exchange (ETDEWEB)

    Corapcioglu, M Y; Sung, K; Rhykerd, R L; Munster, C; Drew, M [Texas A and M Univ., College Station, TX (United States)]

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg⁻¹ TNT, PBB and chrysene.

  3. Importance of Computer Model Validation in Pyroprocessing Technology Development

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Y. E.; Li, Hui; Yim, M. S. [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)]

    2014-05-15

    Molten salt-based pyroprocessing technology is being examined internationally as an alternative to aqueous technology for treating spent nuclear fuel. The central process in pyroprocessing is electrorefining (ER), which separates uranium from the transuranic elements and fission products present in spent nuclear fuel. ER is a widely used process in the minerals industry to purify impure metals. Studies of ER using actual spent nuclear fuel materials are problematic for both technical and political reasons, so the initial effort toward ER process optimization relies on computer models. A number of models have been developed for this purpose, but because validation of these models is incomplete and often problematic, their simulation results are inherently uncertain. In this research, we developed a plan for experimental validation of one of the computer models developed for ER process modeling, the ERAD code. Several candidate surrogate materials were selected for the experiment considering their chemical and physical properties.

  4. Effects of Task Performance and Task Complexity on the Validity of Computational Models of Attention

    NARCIS (Netherlands)

    Koning, L. de; Maanen, P.P. van; Dongen, K. van

    2008-01-01

    Computational models of attention can be used as a component of decision support systems. For accurate support, a computational model of attention has to be valid and robust. The effects of task performance and task complexity on the validity of three different computational models of attention were

  5. Validation of a phytoremediation computer model

    International Nuclear Information System (INIS)

    Corapcioglu, M.Y.; Sung, K.; Rhykerd, R.L.; Munster, C.; Drew, M.

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg⁻¹ TNT, PBB and chrysene. Vegetated and unvegetated treatments were conducted in triplicate to obtain data regarding contaminant concentrations in the soil, plant roots, root distribution, microbial activity, plant water use and soil moisture. When given the parameters of time and depth, the model successfully predicted contaminant concentrations under actual field conditions. Other model parameters are currently being evaluated. 15 refs., 2 figs.

  6. Validation of computational model ALDERSON/EGSnrc for chest radiography

    International Nuclear Information System (INIS)

    Muniz, Bianca C.; Santos, André L. dos; Menezes, Claudio J.M.

    2017-01-01

    To perform dose studies in situations of exposure to radiation without exposing individuals, numerical dosimetry uses Computational Exposure Models (ECMs). Composed essentially of a radioactive source simulation algorithm, a voxel phantom representing the human anatomy, and a Monte Carlo code, ECMs must be validated to determine the reliability of the physical array representation. The objective of this work is to validate the ALDERSON/EGSnrc ECM through comparisons between experimental measurements obtained with an ionization chamber and virtual simulations using the Monte Carlo method to determine the ratio of the input and output radiation dose. Preliminary results of these comparisons showed that the ECM reproduced the results of the experimental measurements performed with the physical phantom with a relative error of less than 10%, validating the use of this model for simulations of chest radiographs and estimates of radiation doses in the tissues of the irradiated structures.

  7. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid.

  8. Mathematical Capture of Human Crowd Behavioral Data for Computational Model Building, Verification, and Validation

    Science.gov (United States)

    2011-03-21

    throughout the experimental runs. Reliable and validated measures of anxiety (Spielberger, 1983), as well as custom-constructed questionnaires about... Crowd modeling and simulation technologies. Transactions on Modeling and Computer Simulation, 20(4). Spielberger, C. D. (1983

  9. Computational Fluid Dynamics Modeling of the Human Pulmonary Arteries with Experimental Validation.

    Science.gov (United States)

    Bordones, Alifer D; Leroux, Matthew; Kheyfets, Vitaly O; Wu, Yu-An; Chen, Chia-Yuan; Finol, Ender A

    2018-05-21

    Pulmonary hypertension (PH) is a chronic progressive disease characterized by elevated pulmonary arterial pressure, caused by an increase in pulmonary arterial impedance. Computational fluid dynamics (CFD) can be used to identify metrics representative of the stage of PH disease. However, experimental validation of CFD models is often not pursued due to the geometric complexity of the model or uncertainties in the reproduction of the required flow conditions. The goal of this work is to validate experimentally a CFD model of a pulmonary artery phantom using a particle image velocimetry (PIV) technique. Rapid prototyping was used for the construction of the patient-specific pulmonary geometry, derived from chest computed tomography angiography images. CFD simulations were performed with the pulmonary model with a Reynolds number matching those of the experiments. Flow rates, the velocity field, and shear stress distributions obtained with the CFD simulations were compared to their counterparts from the PIV flow visualization experiments. Computationally predicted flow rates were within 1% of the experimental measurements for three of the four branches of the CFD model. The mean velocities in four transversal planes of study were within 5.9 to 13.1% of the experimental mean velocities. Shear stresses were qualitatively similar between the two methods with some discrepancies in the regions of high velocity gradients. The fluid flow differences between the CFD model and the PIV phantom are attributed to experimental inaccuracies and the relative compliance of the phantom. This comparative analysis yielded valuable information on the accuracy of CFD predicted hemodynamics in pulmonary circulation models.

  10. Experimental validation of a kilovoltage x-ray source model for computing imaging dose

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick, E-mail: yannick.poirier@cancercare.mb.ca [CancerCare Manitoba, 675 McDermot Ave, Winnipeg, Manitoba R3E 0V9 (Canada); Kouznetsov, Alexei; Koger, Brandon [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4 (Canada); Tambasco, Mauro, E-mail: mtambasco@mail.sdsu.edu [Department of Physics, San Diego State University, San Diego, California 92182-1233 and Department of Physics and Astronomy and Department of Oncology, University of Calgary, Calgary, Alberta T2N 1N4 (Canada)

    2014-04-15

    Purpose: To introduce and validate a kilovoltage (kV) x-ray source model and characterization method to compute absorbed dose accrued from kV x-rays. Methods: The authors propose a simplified virtual point source model and characterization method for a kV x-ray source. The source is modeled by: (1) characterizing the spatial spectral and fluence distributions of the photons at a plane at the isocenter, and (2) creating a virtual point source from which photons are generated to yield the derived spatial spectral and fluence distribution at the isocenter of an imaging system. The spatial photon distribution is determined by in-air relative dose measurements along the transverse (x) and radial (y) directions. The spectrum is characterized using transverse-axis half-value layer measurements and the nominal peak potential (kVp). This source modeling approach is used to characterize a Varian® On-Board Imager (OBI®) for four default cone-beam CT beam qualities: beams using a half bowtie filter (HBT) with 110 and 125 kVp, and a full bowtie filter (FBT) with 100 and 125 kVp. The source model and characterization method were validated by comparing dose computed by the authors' in-house software (kVDoseCalc) to relative dose measurements in a homogeneous and a heterogeneous block phantom comprised of tissue-, bone-, and lung-equivalent materials. Results: The characterized beam qualities and spatial photon distributions are comparable to reported values in the literature. Agreement between computed and measured percent depth-dose curves is ⩽2% in the homogeneous block phantom and ⩽2.5% in the heterogeneous block phantom. Transverse-axis profiles taken at depths of 2 and 6 cm in the homogeneous block phantom show an agreement within 4%. All transverse-axis dose profiles in water and in bone- and lung-equivalent materials for beams using an HBT have an agreement within 5%. Measured profiles of FBT beams in bone- and lung-equivalent materials were higher than their
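
    As a rough illustration of the half-value-layer (HVL) characterization step mentioned above, the sketch below interpolates the added filtration at which measured transmission falls to 50%. The filtration thicknesses and transmission values are made-up placeholders, not data from the paper.

```python
import numpy as np

def half_value_layer(thickness_mm_al, relative_transmission):
    """Estimate the HVL by log-linear interpolation of transmission data.

    thickness_mm_al: added Al filtration (mm), in increasing order.
    relative_transmission: measured signal normalized to the open-beam value.
    """
    t = np.asarray(thickness_mm_al, dtype=float)
    rt = np.asarray(relative_transmission, dtype=float)
    # Interpolate ln(transmission) vs. thickness and find where it crosses ln(0.5).
    return float(np.interp(np.log(0.5), np.log(rt)[::-1], t[::-1]))

# Hypothetical measurements for a ~110 kVp beam (placeholder values only).
thickness = [0.0, 1.0, 2.0, 3.0, 4.0, 6.0]
transmission = [1.00, 0.82, 0.68, 0.57, 0.48, 0.35]
print(f"Estimated HVL: {half_value_layer(thickness, transmission):.2f} mm Al")
```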

  11. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  12. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety...

  13. High-resolution computational algorithms for simulating offshore wind turbines and farms: Model development and validation

    Energy Technology Data Exchange (ETDEWEB)

    Calderer, Antoni [Univ. of Minnesota, Minneapolis, MN (United States); Yang, Xiaolei [Stony Brook Univ., NY (United States); Angelidis, Dionysios [Univ. of Minnesota, Minneapolis, MN (United States); Feist, Chris [Univ. of Minnesota, Minneapolis, MN (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States); Ruehl, Kelley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guo, Xin [Univ. of Minnesota, Minneapolis, MN (United States); Boomsma, Aaron [Univ. of Minnesota, Minneapolis, MN (United States); Shen, Lian [Univ. of Minnesota, Minneapolis, MN (United States); Sotiropoulos, Fotis [Stony Brook Univ., NY (United States)

    2015-10-30

    The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.

  14. Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation

    Science.gov (United States)

    Maiti, Raman

    2016-06-01

    The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results obtained, in the form of internal-external rotations and anterior-posterior displacements for a new and an experimentally simulated specimen of the patella femoral joint under standard gait conditions, were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and past studies was observed when the ligament load was removed and the medial-lateral displacement was constrained. The model is sensitive to a ±5% change in kinematics, frictional, force, and stiffness coefficients and insensitive to time step.

  15. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them). Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inference) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  16. Verification and validation in computational fluid dynamics

    Science.gov (United States)

    Oberkampf, William L.; Trucano, Timothy G.

    2002-04-01

    Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized. The fundamental strategy of validation is to assess how accurately the computational results compare with the experimental data, with quantified error and uncertainty estimates for both. This strategy employs a hierarchical methodology that segregates and simplifies the physical and coupling phenomena involved in the complex engineering system of interest. A hypersonic cruise missile is used as an example of how this hierarchical structure is formulated. The discussion of validation assessment also encompasses a number of other important topics. A set of guidelines is proposed for designing and conducting validation experiments, supported by an explanation of how validation experiments are different

  17. Validation of the vault computer model 'VERMIN' for post-closure behaviour of repositories in geological formations

    International Nuclear Information System (INIS)

    Laurens, J.-M.

    1987-10-01

    A validation methodology is proposed and applied to the computer model VERMIN, which is used to simulate radionuclide release from repositories for solid radioactive wastes. The processes included in the validation exercise are leaching from the waste matrix, diffusive transport and advective transport combined with sorption. Suggestions are made for new experimental studies relevant to VERMIN validation. In addition, VERMIN was used to simulate the behaviour of a repository in a clay formation and thus provide input for the geosphere model being used in the PACOMA study. Results from these simulations are reported. (author)

  18. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target of this work is to validate the component functions of model output between physical observations and the computational model with the area metric. Based on the theory of high-dimensional model representation (HDMR) of independent input variables, conditional expectations are component functions of the model output, and these conditional expectations reflect partial information of the model output. Therefore, model validation of the conditional expectations reveals the discrepancy between the partial information of the computational model output and that of the observations. A calibration of the conditional expectations is then carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that reducing the discrepancy in the conditional expectations helps decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology for both single and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration processes are applied at single and multiple sites. • The validation and calibration process shows superiority over existing methods.
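
    A minimal sketch of an area-metric comparison of the kind described above: it measures the area between the empirical CDF of model predictions and the empirical CDF of observations. The data, and the use of plain output samples rather than HDMR conditional expectations, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def area_metric(model_samples, observed_samples):
    """Area between the empirical CDFs of model predictions and observations."""
    m = np.sort(np.asarray(model_samples, dtype=float))
    d = np.sort(np.asarray(observed_samples, dtype=float))
    grid = np.union1d(m, d)
    cdf_m = np.searchsorted(m, grid, side="right") / m.size
    cdf_d = np.searchsorted(d, grid, side="right") / d.size
    gap = np.abs(cdf_m - cdf_d)
    # Trapezoidal integration of |F_model - F_obs| over the pooled support.
    return float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(grid)))

rng = np.random.default_rng(0)
model = rng.normal(10.0, 1.0, 500)   # hypothetical model output samples
obs = rng.normal(10.4, 1.2, 40)      # hypothetical observations
print(f"Area metric: {area_metric(model, obs):.3f}")
```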

  19. Validation of models with multivariate output

    International Nuclear Information System (INIS)

    Rebba, Ramesh; Mahadevan, Sankaran

    2006-01-01

    This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities. Multiple univariate comparisons may give conflicting inferences. Therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated in the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading
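
    As a sketch of one classical multivariate test in the spirit described above, the example below runs a Hotelling T²-style comparison of a model prediction vector against repeated multivariate measurements. It is illustrative only; the paper's metrics also cover Bayesian testing and transformations for non-normal data, which are not shown here.

```python
import numpy as np
from scipy import stats

def hotelling_t2_one_sample(measurements, model_prediction):
    """One-sample Hotelling's T^2 test of measurements against a predicted mean vector."""
    x = np.asarray(measurements, dtype=float)       # shape (n, p)
    mu0 = np.asarray(model_prediction, dtype=float)
    n, p = x.shape
    diff = x.mean(axis=0) - mu0
    s_inv = np.linalg.inv(np.cov(x, rowvar=False))
    t2 = n * diff @ s_inv @ diff
    f_stat = (n - p) / (p * (n - 1)) * t2
    p_value = stats.f.sf(f_stat, p, n - p)
    return t2, p_value

rng = np.random.default_rng(1)
pred = np.array([1.0, 2.0, 3.0])                    # hypothetical model predictions
data = pred + rng.normal(0, 0.2, size=(15, 3))      # hypothetical repeated experiments
t2, p = hotelling_t2_one_sample(data, pred)
print(f"T^2 = {t2:.2f}, p-value = {p:.3f}")
```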

  20. Use of the FDA nozzle model to illustrate validation techniques in computational fluid dynamics (CFD) simulations.

    Science.gov (United States)

    Hariharan, Prasanna; D'Souza, Gavin A; Horner, Marc; Morrison, Tina M; Malinauskas, Richard A; Myers, Matthew R

    2017-01-01

    A "credible" computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing "model credibility" is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a "threshold-based" validation approach that provides a well-defined acceptance criteria, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criteria developed following the threshold approach is not only a function of Comparison Error, E (which is the difference between experiments and simulations) but also takes in to account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results ("S") of velocity and viscous shear stress were compared with inter-laboratory experimental measurements ("D"). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student's t-test. However, following the threshold-based approach, a Student's t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be

  1. Bibliography for Verification and Validation in Computational Simulation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1998-01-01

    A bibliography has been compiled dealing with the verification and validation of computational simulations. The references listed in this bibliography are concentrated in the field of computational fluid dynamics (CFD). However, references from the following fields are also included: operations research, heat transfer, solid dynamics, software quality assurance, software accreditation, military systems, and nuclear reactor safety. This bibliography, containing 221 references, is not meant to be comprehensive. It was compiled during the last ten years in response to the author's interest and research in the methodology for verification and validation. The emphasis in the bibliography is in the following areas: philosophy of science underpinnings, development of terminology and methodology, high accuracy solutions for CFD verification, experimental datasets for CFD validation, and the statistical quantification of model validation. This bibliography should provide a starting point for individual researchers in many fields of computational simulation in science and engineering

  2. Bibliography for Verification and Validation in Computational Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, W.L.

    1998-10-01

    A bibliography has been compiled dealing with the verification and validation of computational simulations. The references listed in this bibliography are concentrated in the field of computational fluid dynamics (CFD). However, references from the following fields are also included: operations research, heat transfer, solid dynamics, software quality assurance, software accreditation, military systems, and nuclear reactor safety. This bibliography, containing 221 references, is not meant to be comprehensive. It was compiled during the last ten years in response to the author's interest and research in the methodology for verification and validation. The emphasis in the bibliography is in the following areas: philosophy of science underpinnings, development of terminology and methodology, high accuracy solutions for CFD verification, experimental datasets for CFD validation, and the statistical quantification of model validation. This bibliography should provide a starting point for individual researchers in many fields of computational simulation in science and engineering.

  3. Measures of agreement between computation and experiment: validation metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Oberkampf, William Louis

    2005-08-01

    With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables and sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric and also features that should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
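
    A minimal sketch, under simplifying assumptions, of a confidence-interval-based comparison in the spirit of the metric described above: it brackets the difference between a model prediction and the experimental mean with a t-based confidence interval. It is not the interpolation or regression machinery developed in the report.

```python
import numpy as np
from scipy import stats

def error_with_confidence_interval(experimental, model_value, confidence=0.95):
    """Estimated model error (model - experimental mean) with a t-based CI half-width."""
    y = np.asarray(experimental, dtype=float)
    n = y.size
    error = model_value - y.mean()
    half_width = stats.t.ppf(0.5 + confidence / 2.0, n - 1) * y.std(ddof=1) / np.sqrt(n)
    return error, half_width

measurements = [101.2, 98.7, 100.4, 99.1, 102.3, 100.8]   # hypothetical experiments
prediction = 103.0                                         # hypothetical model output
e, hw = error_with_confidence_interval(measurements, prediction)
print(f"Estimated error: {e:.2f} ± {hw:.2f} (95% CI)")
```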

  4. R&D for computational cognitive and social models : foundations for model evaluation through verification and validation (final LDRD report).

    Energy Technology Data Exchange (ETDEWEB)

    Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.; McNamara, Laura A.; Trucano, Timothy Guy

    2008-09-01

    Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model

  5. Development and validation of rear impact computer simulation model of an adult manual transit wheelchair with a seated occupant.

    Science.gov (United States)

    Salipur, Zdravko; Bertocci, Gina

    2010-01-01

    It has been shown that ANSI WC19 transit wheelchairs that are crashworthy in frontal impact exhibit catastrophic failures in rear impact and may not be able to provide stable seating support and thus occupant protection for the wheelchair occupant. Thus far only limited sled test and computer simulation data have been available to study rear impact wheelchair safety. Computer modeling can be used as an economic and comprehensive tool to gain critical knowledge regarding wheelchair integrity and occupant safety. This study describes the development and validation of a computer model simulating an adult wheelchair-seated occupant subjected to a rear impact event. The model was developed in MADYMO and validated rigorously using the results of three similar sled tests conducted to specifications provided in the draft ISO/TC 173 standard. Outcomes from the model can provide critical wheelchair loading information to wheelchair and tiedown manufacturers, resulting in safer wheelchair designs for rear impact conditions. (c) 2009 IPEM. Published by Elsevier Ltd. All rights reserved.

  6. Computational modelling of an operational wind turbine and validation with LIDAR

    Science.gov (United States)

    Creech, Angus; Fruh, Wolf-Gerrit; Clive, Peter

    2010-05-01

    We present a computationally efficient method to model the interaction of wind turbines with the surrounding flow, where the interaction provides information on the power generation of the turbine and the wake generated behind the turbine. The turbine representation is based on the principle of an actuator volume, whereby the energy extraction and the balancing forces on the fluid are formulated as body forces, which avoids the extremely high computational costs of boundary conditions and forces. Depending on the turbine information available, those forces can be derived either from published turbine performance specifications or from the rotor and blade design. This turbine representation is then coupled to a computational fluid dynamics package, in this case the hr-adaptive finite-element code Fluidity from Imperial College London. Here we present a simulation of an operational 950 kW NEG Micon NM54 wind turbine installed in the west of Scotland. The calculated wind is compared with LIDAR measurements using a Galion LIDAR from SgurrEnergy. The computational domain extends over an area of 6 km by 6 km and a height of 750 m, centred on the turbine. The lower boundary includes the orography of the terrain and surface roughness values representing the vegetation - some forested areas and some grassland. The boundary conditions on the sides are relaxed Dirichlet conditions, relaxed to an observed prevailing wind speed and direction. Within instrumental errors and model limitations, the overall flow field in general, and the wake behind the turbine in particular, show a very high degree of agreement, demonstrating the validity and value of this approach. The computational costs of this approach are such that it is possible to extend this single-turbine example to a full wind farm, as the number of required mesh nodes is given by the domain and then increases only linearly with the number of turbines.
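
    As an illustration of the actuator-style body-force idea mentioned above (not the authors' actual Fluidity implementation), the sketch below distributes a thrust-based momentum sink over the cells of an actuator volume. The thrust coefficient, air density, and geometry are placeholder values.

```python
import numpy as np

def actuator_volume_forces(u_inf, rotor_radius, volume_cells, rho=1.225, c_t=0.8):
    """Distribute a rotor thrust force over the cell volumes of an actuator region.

    u_inf: reference (upstream) wind speed, m/s
    volume_cells: array of cell volumes (m^3) making up the actuator region
    Returns the axial body force per cell (N), opposing the flow.
    """
    cells = np.asarray(volume_cells, dtype=float)
    area = np.pi * rotor_radius**2
    thrust = 0.5 * rho * c_t * area * u_inf**2     # total thrust from the thrust coefficient
    return -thrust * cells / cells.sum()           # share the force by cell volume

# Hypothetical 27 m radius rotor in a 10 m/s wind, actuator region of 100 equal cells.
forces = actuator_volume_forces(u_inf=10.0, rotor_radius=27.0, volume_cells=np.full(100, 5.0))
print(f"Total axial force: {forces.sum():.0f} N")
```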

  7. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation
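
    A highly simplified sketch of scoring stochastic realizations against validation data, in the spirit of (but much simpler than) the five-metric, decision-tree procedure described above. The acceptance rule and the data are invented for illustration only.

```python
import numpy as np

def count_acceptable_realizations(realizations, validation_data, rmse_limit):
    """Count realizations whose RMSE against the validation data is within a limit."""
    sims = np.asarray(realizations, dtype=float)    # shape (n_realizations, n_wells)
    obs = np.asarray(validation_data, dtype=float)  # shape (n_wells,)
    rmse = np.sqrt(np.mean((sims - obs) ** 2, axis=1))
    return int(np.sum(rmse <= rmse_limit)), rmse

rng = np.random.default_rng(3)
obs = np.array([12.0, 9.5, 14.2, 11.1])             # hypothetical heads at 4 wells
sims = obs + rng.normal(0, 1.0, size=(200, 4))      # 200 hypothetical realizations
n_ok, _ = count_acceptable_realizations(sims, obs, rmse_limit=1.2)
print(f"{n_ok} of 200 realizations acceptable")
```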

  8. Computational Modeling of Space Physiology

    Science.gov (United States)

    Lewandowski, Beth E.; Griffin, Devon W.

    2016-01-01

    The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long-duration spaceflight. Over the past decade, DAP developed models to provide insights into spaceflight-related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long-duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short- and long-duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the model.

  9. Experimentally validated multiphysics computational model of focusing and shock wave formation in an electromagnetic lithotripter.

    Science.gov (United States)

    Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei

    2013-08-01

    A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.
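
    For reference, a short sketch of a commonly used Tait-form equation of state for water like the one named above. The constants are typical literature values and should be treated as assumptions; they may differ from those used in the paper.

```python
def tait_pressure(rho, rho0=998.0, p0=101_325.0, B=3.046e8, n=7.15):
    """Tait equation of state: p = p0 + B * ((rho/rho0)**n - 1).

    rho0: reference density (kg/m^3); B, n: stiffness constant (Pa) and exponent
    (typical values for water; treat them as assumptions).
    """
    return p0 + B * ((rho / rho0) ** n - 1.0)

# A ~1% compression of water already produces a very large pressure rise.
print(f"p at 1% compression: {tait_pressure(1.01 * 998.0) / 1e6:.1f} MPa")
```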

  10. Validation of the STAFF-5 computer model

    International Nuclear Information System (INIS)

    Fletcher, J.F.; Fields, S.R.

    1981-04-01

    STAFF-5 is a dynamic heat-transfer-fluid-flow stress model designed for computerized prediction of the temperature-stress performance of spent LWR fuel assemblies under storage/disposal conditions. Validation of the temperature-calculating abilities of this model was performed by comparing temperature calculations under specified conditions to experimental data from the Engine Maintenance and Disassembly (EMAD) Fuel Temperature Test Facility and to calculations performed by Battelle Pacific Northwest Laboratory (PNL) using the HYDRA-1 model. The comparisons confirmed the ability of STAFF-5 to calculate representative fuel temperatures over a considerable range of conditions, as a first step in the evaluation and prediction of fuel temperature-stress performance.

  11. Application of a computational situation assessment model to human system interface design and experimental validation of its effectiveness

    International Nuclear Information System (INIS)

    Lee, Hyun-Chul; Koh, Kwang-Yong; Seong, Poong-Hyun

    2013-01-01

    Highlights: ► We validate the effectiveness of a proposed procedure through an experiment. ► The proposed procedure addresses the salience coding of the key information. ► It was found that salience coding affects operators' attention significantly. ► The first observation of the key information quickly guided operators to correct situation awareness. ► It was validated that the proposed procedure is effective for better situation awareness. - Abstract: To evaluate the effects of human cognitive characteristics on situation awareness, a computational situation assessment model of nuclear power plant operators has been developed, as well as a procedure to apply the developed model to the design of human system interfaces (HSIs). The concept of the proposed procedure is to identify the key information source, which is expected to guarantee fast and accurate diagnosis when operators attend to it. The developed computational model is used to search the diagnostic paths and the key information source. In this study, an experiment with twelve trained participants was executed to validate the effectiveness of the proposed procedure. Eighteen scenarios covering various accidents were administered twice for each subject, and experimental data were collected and analyzed. As a result of the data analysis, it was validated that the salience level of information sources significantly influences the attention of operators, and the first observation of the key information sources leads operators to a quick and correct situation assessment. Therefore, we conclude that the proposed procedure for applying the developed model to HSI design is effective.

  12. Development and validation of a computational model of the knee joint for the evaluation of surgical treatments for osteoarthritis.

    Science.gov (United States)

    Mootanah, R; Imhauser, C W; Reisse, F; Carpanen, D; Walker, R W; Koff, M F; Lenhoff, M W; Rozbruch, S R; Fragomen, A T; Dewan, Z; Kirane, Y M; Cheah, K; Dowell, J K; Hillstrom, H J

    2014-01-01

    A three-dimensional (3D) knee joint computational model was developed and validated to predict knee joint contact forces and pressures for different degrees of malalignment. A 3D computational knee model was created from high-resolution radiological images to emulate passive sagittal rotation (full-extension to 65°-flexion) and weight acceptance. A cadaveric knee mounted on a six-degree-of-freedom robot was subjected to matching boundary and loading conditions. A ligament-tuning process minimised kinematic differences between the robotically loaded cadaver specimen and the finite element (FE) model. The model was validated by measured intra-articular force and pressure measurements. Percent full scale error between FE-predicted and in vitro-measured values in the medial and lateral compartments were 6.67% and 5.94%, respectively, for normalised peak pressure values, and 7.56% and 4.48%, respectively, for normalised force values. The knee model can accurately predict normalised intra-articular pressure and forces for different loading conditions and could be further developed for subject-specific surgical planning.
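
    A trivial sketch of the percent full-scale error metric quoted above, with invented numbers. Taking "full scale" as the maximum measured value is an assumption about the normalization, not a detail confirmed by the abstract.

```python
import numpy as np

def percent_full_scale_error(predicted, measured):
    """Percent full-scale error: |prediction - measurement| / max(|measurement|) * 100."""
    p = np.asarray(predicted, dtype=float)
    m = np.asarray(measured, dtype=float)
    return np.abs(p - m) / np.max(np.abs(m)) * 100.0

# Hypothetical peak contact pressures (MPa) in medial and lateral compartments.
measured = np.array([4.2, 3.1])
predicted = np.array([4.48, 3.28])
print(percent_full_scale_error(predicted, measured))   # percent of full scale per compartment
```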

  13. Validation of models in an imaging infrared simulation

    CSIR Research Space (South Africa)

    Willers, C

    2007-10-01

    Full Text Available [Figure residue: diagram of the three processes for transforming information between the entities Reality/Problem Entity, Conceptual Model, and Computerized Model, via Model Qualification (Analysis and Modelling), Model Verification (Computer Implementation), and Model Validation (Simulation and Experimentation); "Substantiation that a..." Cited works include J.C. Refsgaard, Modelling Guidelines - terminology and guiding principles, Advances in Water Resources, Vol. 27, No. 1, January 2004, pp. 71-82, Elsevier; and [5] N. Oreskes et al., Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences, Science, Vol. 263, Number...

  14. Computational Fluid Dynamics Modeling of Bubbling in a Viscous Fluid for Validation of Waste Glass Melter Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Abboud, Alexander William [Idaho National Laboratory]; Guillen, Donna Post [Idaho National Laboratory]

    2016-01-01

    At the Hanford site, radioactive waste stored in underground tanks is slated for vitrification for final disposal. A comprehensive knowledge of the glass batch melting process will be useful in optimizing the process, which could potentially reduce the cost and duration of this multi-billion dollar cleanup effort. We are developing a high-fidelity heat transfer model of a Joule-heated ceramic lined melter to improve the understanding of the complex, inter-related processes occurring with the melter. The glass conversion rates in the cold cap layer are dependent on promoting efficient heat transfer. In practice, heat transfer is augmented by inserting air bubblers into the molten glass. However, the computational simulations must be validated to provide confidence in the solutions. As part of a larger validation procedure, it is beneficial to split the physics of the melter into smaller systems to validate individually. The substitution of molten glass for a simulant liquid with similar density and viscosity at room temperature provides a way to study mixing through bubbling as an isolated effect without considering the heat transfer dynamics. The simulation results are compared to experimental data obtained by the Vitreous State Laboratory at the Catholic University of America using bubblers placed within a large acrylic tank that is similar in scale to a pilot glass waste melter. Comparisons are made for surface area of the rising air bubbles between experiments and CFD simulations for a variety of air flow rates and bubble injection depths. Also, computed bubble rise velocity is compared to a well-accepted expression for bubble terminal velocity.

  15. Model Validation Using Coordinate Distance with Performance Sensitivity

    Directory of Open Access Journals (Sweden)

    Jiann-Shiun Lew

    2008-01-01

    Full Text Available This paper presents an innovative approach to model validation for a structure with significant parameter variations. Model uncertainty of the structural dynamics is quantified with the use of a singular value decomposition technique to extract the principal components of parameter change, and an interval model is generated to represent the system with parameter uncertainty. The coordinate vector, corresponding to the identified principal directions, of the validation system is computed. The coordinate distance between the validation system and the identified interval model is used as a metric for model validation. A beam structure with an attached subsystem, which has significant parameter uncertainty, is used to demonstrate the proposed approach.
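
    A minimal numpy sketch of the idea described above: extract the principal directions of parameter variation with an SVD, express an identified interval in those coordinates, and measure how far a validation system's coordinates fall outside that interval. All matrices and bounds are invented placeholders, not the paper's beam-structure data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical identified parameter sets (rows) for systems with parameter variation.
params = rng.normal(0.0, 1.0, size=(20, 6))
mean_params = params.mean(axis=0)
_, _, vt = np.linalg.svd(params - mean_params, full_matrices=False)
principal_dirs = vt[:3]                                  # keep 3 principal directions

# Interval model: coordinate bounds spanned by the identified systems.
coords = (params - mean_params) @ principal_dirs.T
lower, upper = coords.min(axis=0), coords.max(axis=0)

# Coordinate distance of a validation system to the interval (0 if inside).
validation_params = rng.normal(0.3, 1.0, size=6)         # hypothetical validation system
v = (validation_params - mean_params) @ principal_dirs.T
distance = np.linalg.norm(np.maximum(0.0, np.maximum(lower - v, v - upper)))
print(f"Coordinate distance to interval model: {distance:.3f}")
```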

  16. Uncertainty in biology: a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows researchers to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  17. Validation of a multidimensional computational fluid dynamics model for subcooled flow boiling analysis

    Energy Technology Data Exchange (ETDEWEB)

    Braz Filho, Francisco A.; Caldeira, Alexandre D.; Borges, Eduardo M., E-mail: fbraz@ieav.cta.b, E-mail: alexdc@ieav.cta.b, E-mail: eduardo@ieav.cta.b [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil). Div. de Energia Nuclear]

    2011-07-01

    In a heated vertical channel, the subcooled flow boiling regime occurs when the bulk fluid temperature is lower than the saturation temperature, but the fluid temperature reaches the saturation point near the channel wall. This phenomenon produces a significant increase in heat flux, limited by the critical heat flux. This study is particularly important to the thermal-hydraulics analysis of pressurized water reactors. The purpose of this work is the validation of a multidimensional model to analyze the subcooled flow boiling comparing the results with experimental data found in literature. The computational fluid dynamics code FLUENT was used with Eulerian multiphase model option. The calculated values of wall temperature in the liquid-solid interface presented an excellent agreement when compared to the experimental data. Void fraction calculations presented satisfactory results in relation to the experimental data in pressures of 15, 30 and 45 bars. (author)

  18. Validation of a multidimensional computational fluid dynamics model for subcooled flow boiling analysis

    International Nuclear Information System (INIS)

    Braz Filho, Francisco A.; Caldeira, Alexandre D.; Borges, Eduardo M.

    2011-01-01

    In a heated vertical channel, the subcooled flow boiling regime occurs when the bulk fluid temperature is lower than the saturation temperature, but the fluid temperature reaches the saturation point near the channel wall. This phenomenon produces a significant increase in heat flux, limited by the critical heat flux. This study is particularly important to the thermal-hydraulics analysis of pressurized water reactors. The purpose of this work is the validation of a multidimensional model to analyze the subcooled flow boiling comparing the results with experimental data found in literature. The computational fluid dynamics code FLUENT was used with Eulerian multiphase model option. The calculated values of wall temperature in the liquid-solid interface presented an excellent agreement when compared to the experimental data. Void fraction calculations presented satisfactory results in relation to the experimental data in pressures of 15, 30 and 45 bars. (author)

  19. Validity, reliability, and reproducibility of linear measurements on digital models obtained from intraoral and cone-beam computed tomography scans of alginate impressions

    NARCIS (Netherlands)

    Wiranto, Matthew G.; Engelbrecht, W. Petrie; Nolthenius, Heleen E. Tutein; van der Meer, W. Joerd; Ren, Yijin

    INTRODUCTION: Digital 3-dimensional models are widely used for orthodontic diagnosis. The aim of this study was to assess the validity, reliability, and reproducibility of digital models obtained from the Lava Chairside Oral scanner (3M ESPE, Seefeld, Germany) and cone-beam computed tomography scans

  20. Validated Computational Model to Compute Re-apposition Pressures for Treating Type-B Aortic Dissections

    Directory of Open Access Journals (Sweden)

    Aashish Ahuja

    2018-05-01

    Full Text Available The use of endovascular treatment in the thoracic aorta has revolutionized the clinical approach for treating Stanford type B aortic dissection. The endograft procedure is a minimally invasive alternative to traditional surgery for the management of complicated type-B patients. The endograft is first deployed to exclude the proximal entry tear to redirect blood flow toward the true lumen and then a stent graft is used to push the intimal flap against the false lumen (FL wall such that the aorta is reconstituted by sealing the FL. Although endovascular treatment has reduced the mortality rate in patients compared to those undergoing surgical repair, more than 30% of patients who were initially successfully treated require a new endovascular or surgical intervention in the aortic segments distal to the endograft. One reason for failure of the repair is persistent FL perfusion from distal entry tears. This creates a patent FL channel which can be associated with FL growth. Thus, it is necessary to develop stents that can promote full re-apposition of the flap leading to complete closure of the FL. In the current study, we determine the radial pressures required to re-appose the mid and distal ends of a dissected porcine thoracic aorta using a balloon catheter under static inflation pressure. The same analysis is simulated using finite element analysis (FEA models by incorporating the hyperelastic properties of porcine aortic tissues. It is shown that the FEA models capture the change in the radial pressures required to re-appose the intimal flap as a function of pressure. The predictions from the simulation models match closely the results from the bench experiments. The use of validated computational models can support development of better stents by calculating the proper radial pressures required for complete re-apposition of the intimal flap.

  1. Hurricane Sandy Economic Impacts Assessment: A Computable General Equilibrium Approach and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Boero, Riccardo [Los Alamos National Laboratory; Edwards, Brian Keith [Los Alamos National Laboratory

    2017-08-07

    Economists use computable general equilibrium (CGE) models to assess how economies react and self-organize after changes in policies, technology, and other exogenous shocks. CGE models are equation-based, empirically calibrated, and inspired by Neoclassical economic theory. The focus of this work was to validate the National Infrastructure Simulation and Analysis Center (NISAC) CGE model and apply it to the problem of assessing the economic impacts of severe events. We used the 2012 Hurricane Sandy event as our validation case. In particular, this work first introduces the model and then describes the validation approach and the empirical data available for studying the event of focus. Shocks to the model are then formalized and applied. Finally, model results and limitations are presented and discussed, pointing out both the model degree of accuracy and the assessed total damage caused by Hurricane Sandy.

  2. WSRC approach to validation of criticality safety computer codes

    International Nuclear Information System (INIS)

    Finch, D.R.; Mincey, J.F.

    1991-01-01

    Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (K_eff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be: (1) repeatable; (2) demonstrated with defined confidence; and (3) identify the range of neutronic conditions (area of applicability) for which the correlations are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope 236U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed
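
    The broad-validation approach described above ultimately reduces to computing the bias of calculated K_eff, and the spread of that bias, over a set of benchmark experiments. The sketch below is a minimal illustration of such a bias calculation; the benchmark pairs, coverage factor, and administrative margin are hypothetical placeholders, not WSRC data or methodology.

```python
import statistics

# Hypothetical (K_eff calculated, K_eff benchmark) pairs from critical experiments;
# in a real validation these span the full range of reflection, concentration and
# moderation conditions covered by the code's area of applicability.
pairs = [
    (0.9982, 1.0000), (1.0015, 1.0000), (0.9968, 1.0000),
    (1.0021, 1.0000), (0.9990, 1.0000), (0.9975, 1.0000),
]

biases = [calc - bench for calc, bench in pairs]
mean_bias = statistics.mean(biases)   # systematic over/under-prediction
bias_std = statistics.stdev(biases)   # spread of the correlation

# Illustrative upper subcritical limit: unity corrected for bias, a coverage factor
# times the observed spread, and a hypothetical administrative margin.
k_margin = 0.05
usl = 1.0 + mean_bias - 2.0 * bias_std - k_margin

print(f"mean bias = {mean_bias:+.4f}, std = {bias_std:.4f}, USL = {usl:.4f}")
```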

  3. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the ''philosophy'' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  4. Validation of the transportation computer codes HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND

    International Nuclear Information System (INIS)

    Maheras, S.J.; Pippen, H.K.

    1995-05-01

    The computer codes HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND were used to estimate radiation doses from the transportation of radioactive material in the Department of Energy Programmatic Spent Nuclear Fuel Management and Idaho National Engineering Laboratory Environmental Restoration and Waste Management Programs Environmental Impact Statement. HIGHWAY and INTERLINE were used to estimate transportation routes for truck and rail shipments, respectively. RADTRAN 4 was used to estimate collective doses from incident-free transportation and the risk (probability x consequence) from transportation accidents. RISKIND was used to estimate incident-free radiation doses for maximally exposed individuals and the consequences from reasonably foreseeable transportation accidents. The purpose of this analysis is to validate the estimates made by these computer codes; critiques of the conceptual models used in RADTRAN 4 are also discussed. Validation is defined as ''the test and evaluation of the completed software to ensure compliance with software requirements.'' In this analysis, validation means that the differences between the estimates generated by these codes and independent observations are small (i.e., within the acceptance criterion established for the validation analysis). In some cases, the independent observations used in the validation were measurements; in other cases, the independent observations used in the validation analysis were generated using hand calculations. The results of the validation analyses performed for HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND show that the differences between the estimates generated using the computer codes and independent observations were small. Based on the acceptance criterion established for the validation analyses, the codes yielded acceptable results; in all cases the estimates met the requirements for successful validation

  5. PEMFC modeling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, J.V.C. [Federal University of Parana (UFPR), Curitiba, PR (Brazil). Dept. of Mechanical Engineering], E-mail: jvargas@demec.ufpr.br; Ordonez, J.C.; Martins, L.S. [Florida State University, Tallahassee, FL (United States). Center for Advanced Power Systems], Emails: ordonez@caps.fsu.edu, martins@caps.fsu.edu

    2009-07-01

    In this paper, a simplified and comprehensive PEMFC mathematical model introduced in previous studies is experimentally validated. Numerical results are obtained for an existing set of commercial unit PEM fuel cells. The model accounts for pressure drops in the gas channels, and for temperature gradients with respect to space in the flow direction, that are investigated by direct infrared imaging, showing that even at low current operation such gradients are present in fuel cell operation, and therefore should be considered by a PEMFC model, since large coolant flow rates are limited due to induced high pressure drops in the cooling channels. The computed polarization and power curves are directly compared to the experimentally measured ones with good qualitative and quantitative agreement. The combination of accuracy and low computational time allow for the future utilization of the model as a reliable tool for PEMFC simulation, control, design and optimization purposes. (author)
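
    As an illustration of how computed and measured polarization curves can be compared quantitatively, the sketch below evaluates a generic, textbook-style empirical polarization relation (open-circuit value minus activation, ohmic, and concentration losses) against hypothetical measurements and reports the root-mean-square error; the functional form, parameters, and data are assumptions, not the authors' model or results.

```python
import numpy as np

def polarization(i, e0=0.95, b=0.05, r=2.0e-4, m=1.0e-4, n=8.0e-3):
    """Generic empirical cell voltage [V] vs current density i [mA/cm^2]:
    open-circuit value minus Tafel (activation), ohmic and concentration losses."""
    i = np.asarray(i, dtype=float)
    return e0 - b * np.log10(i) - r * i - m * np.exp(n * i)

# Hypothetical measured polarization data (current density [mA/cm^2], voltage [V])
i_meas = np.array([10, 50, 100, 200, 400, 600, 800])
v_meas = np.array([0.90, 0.85, 0.83, 0.79, 0.73, 0.68, 0.57])

v_model = polarization(i_meas)
rmse = np.sqrt(np.mean((v_model - v_meas) ** 2))
p_model = v_model * i_meas / 1000.0   # power density curve [W/cm^2]

print(f"RMSE between computed and measured polarization curves: {rmse:.3f} V")
print(f"peak computed power density: {p_model.max():.3f} W/cm^2")
```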

  6. Irrigant flow in the root canal: experimental validation of an unsteady Computational Fluid Dynamics model using high-speed imaging.

    Science.gov (United States)

    Boutsioukis, C; Verhaagen, B; Versluis, M; Kastrinakis, E; van der Sluis, L W M

    2010-05-01

    To compare the results of a Computational Fluid Dynamics (CFD) simulation of the irrigant flow within a prepared root canal, during final irrigation with a syringe and a needle, with experimental high-speed visualizations and theoretical calculations of an identical geometry and to evaluate the effect of off-centre positioning of the needle inside the root canal. A CFD model was created to simulate irrigant flow from a side-vented needle inside a prepared root canal. Calculations were carried out for four different positions of the needle inside a prepared root canal. An identical root canal model was made from poly-dimethyl-siloxane (PDMS). High-speed imaging of the flow seeded with particles and Particle Image Velocimetry (PIV) were combined to obtain the velocity field inside the root canal experimentally. Computational, theoretical and experimental results were compared to assess the validity of the computational model. Comparison between CFD computations and experiments revealed good agreement in the velocity magnitude and vortex location and size. Small lateral displacements of the needle inside the canal had a limited effect on the flow field. High-speed imaging experiments together with PIV of the flow inside a simulated root canal showed a good agreement with the CFD model, even though the flow was unsteady. Therefore, the CFD model is able to predict reliably the flow in similar domains.
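
    A comparison of this kind, CFD velocity fields against PIV measurements, can be reduced to simple point-by-point error metrics once both fields are sampled at common locations. The sketch below is a generic illustration with made-up velocity arrays, not the authors' data or code.

```python
import numpy as np

# Hypothetical velocity magnitudes [m/s] sampled at the same points on the
# measurement plane: one array from the CFD solution, one from PIV.
v_cfd = np.array([0.10, 0.45, 1.20, 2.05, 1.60, 0.80, 0.25])
v_piv = np.array([0.12, 0.43, 1.15, 2.10, 1.55, 0.84, 0.22])

abs_err = np.abs(v_cfd - v_piv)
rel_err = abs_err / np.maximum(np.abs(v_piv), 1e-9)

print(f"mean absolute error : {abs_err.mean():.3f} m/s")
print(f"mean relative error : {100 * rel_err.mean():.1f} %")
print(f"max  relative error : {100 * rel_err.max():.1f} %")
```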

  7. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    Science.gov (United States)

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to high-stakes in the clinical setting, it is critical to calculate the effect of these assumptions in the CFD simulation results. However, existing CFD validation approaches do not quantify error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software star-ccm+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the star-ccm+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset, and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
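
    The key idea, parsing out error sources so that the solver's model error is not conflated with experimental and setup errors, can be illustrated with a simple quadrature-subtraction sketch. The numbers, the chosen error sources, and the assumption that they are independent and combine in quadrature are illustrative only and are not taken from the paper.

```python
import math

# Hypothetical error budget (all expressed as % of a reference velocity).
total_discrepancy = 8.0   # CFD vs PIV difference along the validation line
piv_uncertainty   = 3.5   # measurement (experimental) error
geometry_error    = 2.0   # segmentation / phantom-fabrication error
bc_error          = 2.5   # inflow boundary-condition mismatch

# If the sources are independent, the model error is what remains after removing
# the other contributions in quadrature (illustrative assumption).
other_sq = piv_uncertainty**2 + geometry_error**2 + bc_error**2
model_error = math.sqrt(max(total_discrepancy**2 - other_sq, 0.0))

print(f"estimated CFD model error ~ {model_error:.2f} %")
```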

  8. Valence-Dependent Belief Updating: Computational Validation

    Directory of Open Access Journals (Sweden)

    Bojana Kuzmanovic

    2017-06-01

    People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for the data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on
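
    The reinforcement-learning variant mentioned above, with separate learning rates for good and bad news, can be sketched in a few lines. The update rule below (the self-risk estimate moves toward the presented base rate with a valence-dependent learning rate) is a simplified illustration of that idea; the parameter values are arbitrary, not fitted values from the study.

```python
def update_risk(prior_risk, base_rate, lr_good=0.6, lr_bad=0.3):
    """Asymmetric (valence-dependent) belief update.

    Good news: the presented base rate is lower than the prior self-risk estimate.
    Bad news : the presented base rate is higher than the prior self-risk estimate.
    The estimation error is scaled by a larger learning rate for good news.
    """
    error = base_rate - prior_risk
    lr = lr_good if error < 0 else lr_bad   # optimistic asymmetry
    return prior_risk + lr * error

# One hypothetical trial per valence condition (risks in %)
print(update_risk(prior_risk=40.0, base_rate=20.0))   # good news -> large update
print(update_risk(prior_risk=40.0, base_rate=60.0))   # bad news  -> small update
```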

  9. Bayesian risk-based decision method for model validation under uncertainty

    International Nuclear Information System (INIS)

    Jiang Xiaomo; Mahadevan, Sankaran

    2007-01-01

    This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs, and the likelihood and prior of each hypothesis. The risk is minimized through correctly assigning experimental data to two decision regions based on the comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models in a tension bar and an engine blade subjected to high cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment
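
    The core of the metric, a likelihood ratio of the data under "model valid" versus "model invalid" hypotheses compared against a risk-based decision threshold, can be sketched as follows. The Gaussian likelihoods, the offset defining the alternative hypothesis, the costs, and the data are hypothetical placeholders for the quantities defined in the paper.

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical model prediction and measured system response
prediction, meas, meas_sigma = 100.0, 104.0, 5.0

# H0: model valid   -> data centred on the prediction
# H1: model invalid -> data centred away from the prediction (illustrative offset)
like_h0 = gaussian_pdf(meas, prediction, meas_sigma)
like_h1 = gaussian_pdf(meas, prediction + 3.0 * meas_sigma, meas_sigma)
likelihood_ratio = like_h0 / like_h1

# Decision threshold from (hypothetical) decision costs and prior probabilities
cost_accept_bad, cost_reject_good = 10.0, 1.0
prior_h0, prior_h1 = 0.5, 0.5
threshold = (cost_accept_bad * prior_h1) / (cost_reject_good * prior_h0)

decision = "accept model" if likelihood_ratio > threshold else "reject / collect more data"
print(f"LR = {likelihood_ratio:.2f}, threshold = {threshold:.2f} -> {decision}")
```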

  10. Validation of containment thermal hydraulic computer codes for VVER reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jiri Macek; Lubomir Denk [Nuclear Research Institute Rez plc Thermal-Hydraulic Analyses Department CZ 250 68 Husinec-Rez (Czech Republic)

    2005-07-01

    Full text of publication follows: The Czech Republic operates 4 VVER-440 units, two VVER-1000 units are being finalized (one of them is undergoing commissioning). The Thermal-Hydraulics Department of the Nuclear Research Institute Rez performs accident analyses for these plants using a number of computer codes. To model the primary and secondary circuit behaviour, the system codes ATHLET, CATHARE, RELAP and TRAC are applied. The containment and pressure-suppression system are modelled with the COCOSYS and MELCOR codes, the reactor power calculations (point and space-neutron kinetics) are made with DYN3D and NESTLE, and CFD codes (FLUENT, TRIO) are used for some specific problems. An integral part of the current Czech project 'New Energy Sources' is selection of a new nuclear source. Within this and the preceding projects financed by the Czech Ministry of Industry and Trade and the EU PHARE, the Department carries and has carried out the systematic validation of thermal-hydraulic and reactor physics computer codes applying data obtained on several experimental facilities as well as real operational data. One of the important components of the VVER 440/213 NPP is its containment with pressure suppression system (bubble condenser). For safety analyses of this system, computer codes of the MELCOR and COCOSYS type are used in the Czech Republic. These codes were developed for containments of classic PWRs or BWRs. In order to apply these codes to VVER 440 systems, their validation on experimental facilities must be performed. The paper provides concise information on these activities of the NRI and its Thermal-Hydraulics Department. The containment system of the VVER 440/213, its functions and approaches to the solution of its safety are described, with definition of acceptance criteria. A detailed example of containment code validation on the EREC Test facility (LOCA and MSLB) and the consequent utilisation of the results for real NPP purposes is included. An approach to

  11. Verification and validation of models

    International Nuclear Information System (INIS)

    Herbert, A.W.; Hodgkinson, D.P.; Jackson, C.P.; Lever, D.A.; Robinson, P.C.

    1986-12-01

    The numerical accuracy of the computer models for groundwater flow and radionuclide transport that are to be used in repository safety assessment must be tested, and their ability to describe experimental data assessed: they must be verified and validated respectively. Also appropriate ways to use the codes in performance assessments, taking into account uncertainties in present data and future conditions, must be studied. These objectives are being met by participation in international exercises, by developing bench-mark problems, and by analysing experiments. In particular the project has funded participation in the HYDROCOIN project for groundwater flow models, the Natural Analogues Working Group, and the INTRAVAL project for geosphere models. (author)

  12. Concepts of Model Verification and Validation

    International Nuclear Information System (INIS)

    Thacker, B.H.; Doebling, S.W.; Hemez, F.M.; Anderson, M.C.; Pepin, J.E.; Rodriguez, E.A.

    2004-01-01

    Model verification and validation (V&V) is an enabling methodology for the development of computational models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost, and risk associated with full-scale testing of products, materials, and weapon systems. Quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions. The development of guidelines and procedures for conducting a model V&V program is currently being defined by a broad spectrum of researchers. This report reviews the concepts involved in such a program. Model V&V is a current topic of great interest to both government and industry. In response to a ban on the production of new strategic weapons and nuclear testing, the Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship Program (SSP). An objective of the SSP is to maintain a high level of confidence in the safety, reliability, and performance of the existing nuclear weapons stockpile in the absence of nuclear testing. This objective has challenged the national laboratories to develop high-confidence tools and methods that can be used to provide credible models needed for stockpile certification via numerical simulation. There has been a significant increase in activity recently to define V&V methods and procedures. The U.S. Department of Defense (DoD) Modeling and Simulation Office (DMSO) is working to develop fundamental concepts and terminology for V&V applied to high-level systems such as ballistic missile defense and battle management simulations. The American Society of Mechanical Engineers (ASME) has recently formed a Standards Committee for the development of V&V procedures for computational solid mechanics models. The Defense Nuclear Facilities Safety Board (DNFSB) has been a proponent of model

  13. Computer-Based CPR Simulation Towards Validation of AHA/ERC Guidelines.

    Science.gov (United States)

    John, Alka Rachel; Manivannan, M; Ramakrishnan, T V

    2017-06-01

    As per the AHA 2015 and ERC 2015 guidelines for resuscitation, chest compression depth should be between 5 and 6 cm with a rate of 100-120 compressions per minute. Theoretical validation of these guidelines is still elusive. We developed a computer model of the cardiopulmonary resuscitation (CPR) system to validate these guidelines. A lumped element computer model of the cardiovascular system was developed to simulate cardiac arrest and CPR. Cardiac output was compared for a range of compression pressures and frequencies. It was observed from our investigation that there is an optimum compression pressure and rate. The maximum cardiac output occurred at 100 mmHg, which corresponds to a compression depth of approximately 5.7 cm, and in the range of 100 to 120 compressions per minute with an optimum value at 110 compressions per minute, validating the guidelines. Increasing the pressure or the depth of compression beyond the optimum limits the blood flow by depleting the volume in the cardiac chambers and not allowing for an effective stroke volume. Similarly, increasing the compression rate beyond the optimum degrades the ability of the chambers to pump blood. The results also bring out the importance of complete recoil of the chest after each compression, with more than a 400% increase in cardiac output from 90% recoil to 100% recoil. Our simulation predicts that the recommendation to compress harder and faster is not the best counsel, as there is an optimum compression pressure and rate for high-quality CPR.

  14. Modelling computer networks

    International Nuclear Information System (INIS)

    Max, G

    2011-01-01

    Traffic in computer networks can be described as a complicated system. These systems show non-linear features, and simulating their behaviour is also difficult. Before implementing network equipment, users want to know the capability of their computer network. They do not want the servers to be overloaded during temporary traffic peaks when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents the setting up of a non-linear simulation model that helps us to observe dataflow problems of the networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. In this paper, we also focus on measuring the bottleneck of the network, which was defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates well the main behaviours and critical parameters of the network. Based on this study, we propose to develop a new algorithm, which experimentally determines and predicts the available parameters of the network modelled.
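
    The bottleneck definition used above, the difference between link capacity and competing traffic on the link that limits end-to-end throughput, is easy to state in code. The sketch below uses made-up link data purely for illustration.

```python
# Each link on the path: (name, capacity in Mbit/s, competing traffic in Mbit/s)
path = [
    ("access",   100.0, 20.0),
    ("backbone", 1000.0, 850.0),
    ("peering",  400.0, 310.0),
]

# Available headroom per link; the path bottleneck is the link with the smallest headroom.
headroom = {name: capacity - competing for name, capacity, competing in path}
bottleneck_link = min(headroom, key=headroom.get)

print(headroom)
print(f"bottleneck: {bottleneck_link}, "
      f"end-to-end available bandwidth ~ {headroom[bottleneck_link]:.0f} Mbit/s")
```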

  15. Computer system validation: an overview of official requirements and standards.

    Science.gov (United States)

    Hoffmann, A; Kähny-Simonius, J; Plattner, M; Schmidli-Vckovski, V; Kronseder, C

    1998-02-01

    A brief overview of the relevant documents for companies in the pharmaceutical industry, which are to be taken into consideration to fulfil computer system validation requirements, is presented. We concentrate on official requirements and valid standards in the USA, European Community and Switzerland. There are basically three GMP guidelines, their interpretations by associations of interest such as APV and PDA, as well as the GAMP Suppliers Guide. However, the three GMP guidelines imply the same philosophy about computer system validation. They describe more of a what-to-do approach to validation, whereas the GAMP Suppliers Guide describes how to do validation. Nevertheless, they do not contain major discrepancies.

  16. Getting computer models to communicate

    International Nuclear Information System (INIS)

    Caremoli, Ch.; Erhard, P.

    1999-01-01

    Today's computers have the processing power to deliver detailed and global simulations of complex industrial processes such as the operation of a nuclear reactor core. So should we be producing new, global numerical models to take full advantage of this new-found power? If so, it would be a long-term job. There is, however, another solution; to couple the existing validated numerical models together so that they work as one. (authors)

  17. Computer-controlled mechanical lung model for application in pulmonary function studies

    NARCIS (Netherlands)

    A.F.M. Verbraak (Anton); J.E.W. Beneken; J.M. Bogaard (Jan); A. Versprille (Adrian)

    1995-01-01

    textabstractA computer controlled mechanical lung model has been developed for testing lung function equipment, validation of computer programs and simulation of impaired pulmonary mechanics. The construction, function and some applications are described. The physical model is constructed from two

  18. Computer arithmetic and validity theory, implementation, and applications

    CERN Document Server

    Kulisch, Ulrich

    2013-01-01

    This is the revised and extended second edition of the successful basic book on computer arithmetic. It is consistent with the newest recent standard developments in the field. The book shows how the arithmetic capability of the computer can be enhanced. The work is motivated by the desire and the need to improve the accuracy of numerical computing and to control the quality of the computed results (validity). The accuracy requirements for the elementary floating-point operations are extended to the customary product spaces of computations including interval spaces. The mathematical properties
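
    Interval arithmetic of the kind treated in the book, computing with enclosures so that the result is guaranteed to contain the exact value, can be illustrated with a minimal interval type. Directed rounding, which a rigorous implementation would need, is deliberately omitted here for brevity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

# Enclosures for two uncertain quantities
x = Interval(1.9, 2.1)
y = Interval(-0.3, 0.4)

print(x + y)   # roughly Interval(lo=1.6, hi=2.5), up to float rounding
print(x * y)   # roughly Interval(lo=-0.63, hi=0.84)
```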

  19. Computational quench model applicable to the SMES/CICC

    Science.gov (United States)

    Luongo, Cesar A.; Chang, Chih-Lien; Partain, Kenneth D.

    1994-07-01

    A computational quench model accounting for the hydraulic peculiarities of the 200 kA SMES cable-in-conduit conductor has been developed. The model is presented and used to simulate the quench on the SMES-ETM. Conclusions are drawn concerning quench detection and protection. A plan for quench model validation is presented.

  20. A computer literacy scale for newly enrolled nursing college students: development and validation.

    Science.gov (United States)

    Lin, Tung-Cheng

    2011-12-01

    Increasing application and use of information systems and mobile technologies in the healthcare industry require increasing nurse competency in computer use. Computer literacy is defined as basic computer skills, whereas computer competency is defined as the computer skills necessary to accomplish job tasks. Inadequate attention has been paid to computer literacy and computer competency scale validity. This study developed a computer literacy scale with good reliability and validity and investigated the current computer literacy of newly enrolled students to develop computer courses appropriate to students' skill levels and needs. This study referenced Hinkin's process to develop a computer literacy scale. Participants were newly enrolled first-year undergraduate students, with nursing or nursing-related backgrounds, currently attending a course entitled Information Literacy and Internet Applications. Researchers examined reliability and validity using confirmatory factor analysis. The final version of the developed computer literacy scale included six constructs (software, hardware, multimedia, networks, information ethics, and information security) and 22 measurement items. Confirmatory factor analysis showed that the scale possessed good content validity, reliability, convergent validity, and discriminant validity. This study also found that participants earned the highest scores for the network domain and the lowest score for the hardware domain. With increasing use of information technology applications, courses related to hardware topic should be increased to improve nurse problem-solving abilities. This study recommends that emphases on word processing and network-related topics may be reduced in favor of an increased emphasis on database, statistical software, hospital information systems, and information ethics.

  1. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  2. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  3. An integrative computational modelling of music structure apprehension

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2014-01-01

    An objectivization of music analysis requires a detailed formalization of the underlying principles and methods. The formalization of the most elementary structural processes is hindered by the complexity of music, both in terms of profusions of entities (such as notes) and of tight interactions between a large number of dimensions. Computational modeling would enable systematic and exhaustive tests on sizeable pieces of music, yet current researches cover particular musical dimensions with limited success. The aim of this research is to conceive a computational modeling of music analysis. The computational model, by virtue of its generality, extensiveness and operationality, is suggested as a blueprint for the establishment of a cognitively validated model of music structure apprehension. Available as a Matlab module, it can be used for practical musicological uses.

  4. Online In-Core Thermal Neutron Flux Measurement for the Validation of Computational Methods

    International Nuclear Information System (INIS)

    Mohamad Hairie Rabir; Muhammad Rawi Mohamed Zin; Yahya Ismail

    2016-01-01

    In order to verify and validate the computational methods for neutron flux calculation in RTP, a series of thermal neutron flux measurements has been performed. The Self Powered Neutron Detector (SPND) was used to measure the thermal neutron flux in order to verify the calculated neutron flux distribution in the TRIGA reactor. Measurement results were obtained online for different power levels of the reactor. The experimental results were compared to calculations performed with the Monte Carlo code MCNP using a detailed geometrical model of the reactor. The calculated and measured thermal neutron fluxes in the core are in very good agreement, indicating that the material and geometrical properties of the reactor core are modelled well. In conclusion, one can state that our computational model describes the neutron flux distribution in the reactor core very well. Since the computational model properly describes the reactor core, it can be used for calculations of reactor core parameters and for optimization of RTP utilization. (author)
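
    Verification of calculated thermal flux against SPND measurements is typically summarised with calculated-to-experimental (C/E) ratios at each measurement position and power level. The sketch below shows such a summary with hypothetical positions and flux values, not the RTP data.

```python
# Hypothetical thermal neutron flux [n/cm^2/s] at in-core positions:
# (position, MCNP-calculated, SPND-measured)
data = [
    ("A1", 2.10e12, 2.05e12),
    ("B3", 1.75e12, 1.80e12),
    ("C5", 9.60e11, 9.40e11),
    ("D2", 5.20e11, 5.35e11),
]

ratios = {pos: calc / meas for pos, calc, meas in data}
mean_ce = sum(ratios.values()) / len(ratios)

for pos, ce in ratios.items():
    print(f"{pos}: C/E = {ce:.3f}")
print(f"mean C/E = {mean_ce:.3f}")
```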

  5. Validation and comparison of dispersion models of RTARC DSS

    International Nuclear Information System (INIS)

    Duran, J.; Pospisil, M.

    2004-01-01

    RTARC DSS (Real Time Accident Release Consequences - Decision Support System) is a computer code developed at the VUJE Trnava, Inc. (Stubna, M. et al, 1993). The code calculations include atmospheric transport and diffusion, dose assessment, evaluation and displaying of the affected zones, evaluation of the early health effects, concentration and dose rate time dependence in the selected sites etc. The simulation of the protective measures (sheltering, iodine administration) is involved. The aim of this paper is to present the process of validation of the RTARC dispersion models. RTARC includes models for calculations of release for very short (Method Monte Carlo - MEMOC), short (Gaussian Straight-Line Model) and long distances (Puff Trajectory Model - PTM). Validation of the code RTARC was performed using the results of the comparisons and experiments summarized in Table 1 (experiment or comparison - distance range - model): wind tunnel experiments (Universitaet der Bundeswehr, Muenchen) - area of the NPP - Monte Carlo method; INEL (Idaho National Engineering Laboratory) multi-tracer atmospheric experiment - short/medium distances - Gaussian model and PTM; Model Validation Kit - short distances - Gaussian model; STEP II.b 'Realistic Case Studies' - long distances - PTM; ENSEMBLE comparison - long distances - PTM. (orig.)

  6. Cross-validation pitfalls when selecting and assessing regression and classification models.

    Science.gov (United States)

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
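
    As a concrete illustration of the repeated grid-search cross-validation and repeated nested cross-validation described above, the sketch below uses scikit-learn on a synthetic regression set; the estimator, parameter grid, and number of repeats are arbitrary choices for demonstration, not those of the paper.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)
param_grid = {"alpha": [0.01, 0.1, 1.0, 10.0]}

outer_scores = []
for repeat in range(5):                                    # repeated nested CV
    outer = KFold(n_splits=5, shuffle=True, random_state=repeat)
    inner = KFold(n_splits=5, shuffle=True, random_state=100 + repeat)
    model = GridSearchCV(Ridge(), param_grid, cv=inner,    # grid-search CV for tuning
                         scoring="neg_mean_squared_error")
    scores = cross_val_score(model, X, y, cv=outer,        # outer loop for assessment
                             scoring="neg_mean_squared_error")
    outer_scores.extend(-scores)

# The spread across repeats shows how much the error estimate depends on the splits.
print(f"prediction error: {np.mean(outer_scores):.1f} +/- {np.std(outer_scores):.1f} (MSE)")
```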

  7. Validation of physics and thermalhydraulic computer codes for advanced Candu reactor applications

    International Nuclear Information System (INIS)

    Wren, D.J.; Popov, N.; Snell, V.G.

    2004-01-01

    Atomic Energy of Canada Ltd. (AECL) is developing an Advanced Candu Reactor (ACR) that is an evolutionary advancement of the currently operating Candu 6 reactors. The ACR is being designed to produce electrical power for a capital cost and at a unit-energy cost significantly less than that of the current reactor designs. The ACR retains the modular Candu concept of horizontal fuel channels surrounded by a heavy water moderator. However, ACR uses slightly enriched uranium fuel compared to the natural uranium used in Candu 6. This achieves the twin goals of improved economics (via large reductions in the heavy water moderator volume and replacement of the heavy water coolant with light water coolant) and improved safety. AECL has developed and implemented a software quality assurance program to ensure that its analytical, scientific and design computer codes meet the required standards for software used in safety analyses. Since the basic design of the ACR is equivalent to that of the Candu 6, most of the key phenomena associated with the safety analyses of ACR are common, and the Candu industry standard tool-set of safety analysis codes can be applied to the analysis of the ACR. A systematic assessment of computer code applicability addressing the unique features of the ACR design was performed covering the important aspects of the computer code structure, models, constitutive correlations, and validation database. Arising from this assessment, limited additional requirements for code modifications and extensions to the validation databases have been identified. This paper provides an outline of the AECL software quality assurance program process for the validation of computer codes used to perform physics and thermal-hydraulics safety analyses of the ACR. It describes the additional validation work that has been identified for these codes and the planned, and ongoing, experimental programs to extend the code validation as required to address specific ACR design

  8. Development and validation of a new dynamic computer-controlled model of the human stomach and small intestine.

    Science.gov (United States)

    Guerra, Aurélie; Denis, Sylvain; le Goff, Olivier; Sicardi, Vincent; François, Olivier; Yao, Anne-Françoise; Garrait, Ghislain; Manzi, Aimé Pacifique; Beyssac, Eric; Alric, Monique; Blanquet-Diot, Stéphanie

    2016-06-01

    For ethical, regulatory, and economic reasons, in vitro human digestion models are increasingly used as an alternative to in vivo assays. This study aims to present the new Engineered Stomach and small INtestine (ESIN) model and its validation for pharmaceutical applications. This dynamic computer-controlled system reproduces, according to in vivo data, the complex physiology of the human stomach and small intestine, including pH, transit times, chyme mixing, digestive secretions, and passive absorption of digestion products. Its innovative design allows a progressive meal intake and the differential gastric emptying of solids and liquids. The pharmaceutical behavior of two model drugs (paracetamol immediate release form and theophylline sustained release tablet) was studied in ESIN during liquid digestion. The results were compared to those found with a classical compendial method (paddle apparatus) and in human volunteers. Paracetamol and theophylline tablets showed similar absorption profiles in ESIN and in healthy subjects. For theophylline, a level A in vitro-in vivo correlation could be established between the results obtained in ESIN and in humans. Interestingly, using a pharmaceutical basket, the swelling and erosion of the theophylline sustained release form was followed during transit throughout ESIN. ESIN emerges as a relevant tool for pharmaceutical studies but once further validated may find many other applications in nutritional, toxicological, and microbiological fields. Biotechnol. Bioeng. 2016;113: 1325-1335. © 2015 Wiley Periodicals, Inc.
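
    A level A in vitro-in vivo correlation of the kind reported for theophylline is, in its simplest form, a point-to-point linear relation between the fraction dissolved in vitro and the fraction absorbed in vivo at matched times. The sketch below fits and scores such a relation on hypothetical fractions; it does not reproduce the ESIN data.

```python
import numpy as np

# Hypothetical matched time points: fraction dissolved in vitro vs fraction absorbed in vivo
f_dissolved = np.array([0.05, 0.15, 0.32, 0.51, 0.68, 0.84, 0.95])
f_absorbed  = np.array([0.04, 0.13, 0.30, 0.49, 0.70, 0.86, 0.97])

slope, intercept = np.polyfit(f_dissolved, f_absorbed, 1)   # point-to-point linear IVIVC
r = np.corrcoef(f_dissolved, f_absorbed)[0, 1]

print(f"f_absorbed ~ {slope:.2f} * f_dissolved {intercept:+.2f}, r^2 = {r**2:.3f}")
```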

  9. Signal validation with control-room information-processing computers

    International Nuclear Information System (INIS)

    Belblidia, L.A.; Carlson, R.W.; Russell, J.L. Jr.

    1985-01-01

    One of the 'lessons learned' from the Three Mile Island accident focuses upon the need for a validated source of plant-status information in the control room. The utilization of computer-generated graphics to display the readings of the major plant instrumentation has introduced the capability of validating signals prior to their presentation to the reactor operations staff. The current operations philosophies allow the operator a quick look at the gauges to form an impression of the fraction of full scale as the basis for knowledge of the current plant conditions. After the introduction of a computer-based information-display system such as the Safety Parameter Display System (SPDS), operational decisions can be based upon precise knowledge of the parameters that define the operation of the reactor and auxiliary systems. The principal impact of this system on the operator will be to remove the continuing concern for the validity of the instruments which provide the information that governs the operator's decisions. (author)

  10. A framework to establish credibility of computational models in biology.

    Science.gov (United States)

    Patterson, Eann A; Whelan, Maurice P

    2017-10-01

    Computational models in biology and biomedical science are often constructed to aid people's understanding of phenomena or to inform decisions with socioeconomic consequences. Model credibility is the willingness of people to trust a model's predictions and is often difficult to establish for computational biology models. A 3 × 3 matrix has been proposed to allow such models to be categorised with respect to their testability and epistemic foundation in order to guide the selection of an appropriate process of validation to supply evidence to establish credibility. Three approaches to validation are identified that can be deployed depending on whether a model is deemed untestable, testable or lies somewhere in between. In the latter two cases, the validation process involves the quantification of uncertainty which is a key output. The issues arising due to the complexity and inherent variability of biological systems are discussed and the creation of 'digital twins' proposed as a means to alleviate the issues and provide a more robust, transparent and traceable route to model credibility and acceptance. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Validation of thermal hydraulic computer codes for advanced light water reactor

    International Nuclear Information System (INIS)

    Macek, J.

    2001-01-01

    The Czech Republic operates 4 WWER-440 units, two WWER-1000 units are being finalised (one of them is undergoing commissioning). The Thermal-Hydraulics Department of the Nuclear Research Institute Rez performs accident analyses for these plants using a number of computer codes. To model the primary and secondary circuit behaviour, the system codes ATHLET, CATHARE, RELAP and TRAC are applied. The containment and pressure-suppression system are modelled with the RALOC and MELCOR codes, the reactor power calculations (point and space-neutron kinetics) are made with DYN3D and NESTLE, and CFD codes (FLUENT, TRIO) are used for some specific problems. An integral part of the current Czech project 'New Energy Sources' is selection of a new nuclear source. Within this and the preceding projects financed by the Czech Ministry of Industry and Trade and the EU PHARE, the Department carries and has carried out the systematic validation of thermal-hydraulic and reactor physics computer codes applying data obtained on several experimental facilities as well as real operational data. The paper provides concise information on these activities of the NRI and its Thermal-Hydraulics Department. A detailed example of system code validation and the consequent utilisation of the results for real NPP purposes is included. (author)

  12. Fault-tolerant clock synchronization validation methodology. [in computer systems

    Science.gov (United States)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
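
    The experimental part of the method, estimating the probability that the clock read error exceeds the bound assumed in the design proof, amounts in its simplest form to counting exceedances in measured data. The sketch below shows that counting estimate with made-up measurements; a real analysis with few or no observed exceedances would instead fit a tail distribution, as the abstract implies.

```python
# Hypothetical measured clock read errors [microseconds] and the proof's assumed bound
read_errors = [0.8, 1.2, 0.5, 1.9, 0.7, 2.4, 1.1, 0.9, 1.6, 0.6]
assumed_bound = 2.0

exceedances = sum(1 for e in read_errors if e > assumed_bound)
p_exceed = exceedances / len(read_errors)

# This empirical probability is the quantity that feeds the system reliability analysis.
print(f"P(read error > {assumed_bound} us) ~ {p_exceed:.2f} "
      f"({exceedances}/{len(read_errors)} samples)")
```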

  13. Development of Computational Models for Pyrochemical Electrorefiners of Nuclear Waste Transmutation Systems

    International Nuclear Information System (INIS)

    Kim, K. R.; Lee, H. S.; Hwang, I. S.

    2010-12-01

    The objective of this project is to develop multi-dimensional computational models in order to improve the operation of uranium electrorefiners currently used in pyroprocessing technology. These 2-D (US) and 3-D (ROK) mathematical models are based on the fundamental physical and chemical properties of the electrorefiner processes. The models, validated against compiled and evaluated experimental data, could provide better information for developing advanced electrorefiners for uranium recovery. The research results in this period are as follows: - Successfully assessed a common computational platform for the modeling work and identified spatial characterization requirements. - Successfully developed a 3-D electro-fluid dynamic electrorefiner model. - Successfully validated and benchmarked the two multi-dimensional models with compiled experimental data sets

  14. Climate models on massively parallel computers

    International Nuclear Information System (INIS)

    Vitart, F.; Rouvillois, P.

    1993-01-01

    First results obtained on massively parallel computers (Multiple Instruction Multiple Data and Single Instruction Multiple Data) make it possible to consider building coupled models with high resolutions. This would make possible the simulation of thermohaline circulation and other interaction phenomena between atmosphere and ocean. The increase in computer power, and the resulting improvement in resolution, will lead us to revise our approximations. The hydrostatic approximation (in ocean circulation) will no longer be valid when the grid mesh is smaller than a few kilometers: we shall have to find other models. The expertise in numerical analysis gained at the Center of Limeil-Valenton (CEL-V) will be used again to devise global models taking into account atmosphere, ocean, ice floe and biosphere, allowing climate simulation down to a regional scale

  15. Explicit validation of a surface shortwave radiation balance model over snow-covered complex terrain

    Science.gov (United States)

    Helbig, N.; Löwe, H.; Mayer, B.; Lehning, M.

    2010-09-01

    A model that computes the surface radiation balance for all sky conditions in complex terrain is presented. The spatial distribution of direct and diffuse sky radiation is determined from observations of incident global radiation, air temperature, and relative humidity at a single measurement location. Incident radiation under cloudless sky is spatially derived from a parameterization of the atmospheric transmittance. Direct and diffuse sky radiation for all sky conditions are obtained by decomposing the measured global radiation value. Spatial incident radiation values under all atmospheric conditions are computed by adjusting the spatial radiation values obtained from the parametric model with the radiation components obtained from the decomposition model at the measurement site. Topographic influences such as shading are accounted for. The radiosity approach is used to compute anisotropic terrain reflected radiation. Validations of the shortwave radiation balance model are presented in detail for a day with cloudless sky. For a day with overcast sky a first validation is presented. Validation of a section of the horizon line as well as of individual radiation components is performed with high-quality measurements. A new measurement setup was designed to determine terrain reflected radiation. There is good agreement between the measurements and the modeled terrain reflected radiation values as well as with incident radiation values. A comparison of the model with a fully three-dimensional radiative transfer Monte Carlo model is presented. That validation reveals a good agreement between modeled radiation values.
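
    The decomposition step, splitting measured global radiation into direct and diffuse components before distributing it over the terrain, is commonly done with a clearness-index correlation. The sketch below uses the Erbs et al. correlation as a representative example; the decomposition model actually used by the authors may differ.

```python
def diffuse_fraction_erbs(kt):
    """Diffuse fraction of global radiation from the clearness index kt (Erbs et al.)."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

def split_global(global_rad, extraterrestrial_rad):
    """Split measured global radiation [W/m^2] into (direct, diffuse) on the horizontal."""
    kt = global_rad / extraterrestrial_rad          # clearness index
    diffuse = diffuse_fraction_erbs(kt) * global_rad
    return global_rad - diffuse, diffuse

direct, diffuse = split_global(global_rad=650.0, extraterrestrial_rad=1100.0)
print(f"direct = {direct:.0f} W/m^2, diffuse = {diffuse:.0f} W/m^2")
```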

  16. Architecture and VHDL behavioural validation of a parallel processor dedicated to computer vision

    International Nuclear Information System (INIS)

    Collette, Thierry

    1992-01-01

    Speeding up image processing is mainly achieved using parallel computers; SIMD processors (single instruction stream, multiple data stream) have been developed and have proven highly efficient for low-level image processing operations. Nevertheless, their performance drops for most intermediate- or high-level operations, mainly when random data reorganisations in processor memories are involved. The aim of this thesis was to extend the SIMD computer capabilities to allow it to perform more efficiently at the intermediate level of image processing. The study of some representative algorithms of this class points out the limits of this computer. Nevertheless, these limits can be removed by architectural modifications. This leads us to propose SYMPATIX, a new SIMD parallel computer. To validate its new concept, a behavioural model written in VHDL - Hardware Description Language - has been elaborated. With this model, the new computer's performance has been estimated by running image processing algorithm simulations. The VHDL modeling approach allows the top-down electronic design of the system, giving an easy coupling between architectural modifications and their electronic cost. The obtained results show SYMPATIX to be an efficient computer for low- and intermediate-level image processing. It can be connected to a high-level computer, opening up the development of new computer vision applications. This thesis also presents a top-down design method, based on VHDL, intended for electronic system architects. (author) [fr

  17. Validation of a loss of vacuum accident (LOVA) Computational Fluid Dynamics (CFD) model

    International Nuclear Information System (INIS)

    Bellecci, C.; Gaudio, P.; Lupelli, I.; Malizia, A.; Porfiri, M.T.; Quaranta, R.; Richetta, M.

    2011-01-01

    Intense thermal loads in fusion devices occur during plasma disruptions, Edge Localized Modes (ELMs) and Vertical Displacement Events (VDEs). They will result in macroscopic erosion of the plasma-facing materials and consequent accumulation of activated dust in the ITER Vacuum Vessel (VV). A recognized safety issue for future fusion reactors fueled with deuterium and tritium is the generation of sizeable quantities of dust. In case of a LOVA, air inlet occurs due to the pressure difference between the atmospheric condition and the internal condition. It causes mobilization of the dust, which can exit the VV and threaten public safety because it may contain tritium, may be radioactive from activation products, and may be chemically reactive and/or toxic (Sharpe et al.; Sharpe and Humrickhouse). Several experiments have been conducted with the STARDUST facility in order to reproduce a low pressurization rate (300 Pa/s) LOVA event in ITER due to a small air leakage, for two different positions of the leak, at the equatorial port level and at the divertor port level, in order to evaluate the velocity magnitude during a LOVA, which is strictly connected with dust mobilization phenomena. A two-dimensional (2D) model of STARDUST, made with the commercial CFD code FLUENT, has been developed. The results of these simulations were compared against the experimental data for CFD code validation. For validation purposes, the CFD simulation data were extracted at the same locations as the experimental data were collected. In this paper, the authors present and discuss the computer-simulation data and compare them with data collected during the laboratory studies at the University of Rome 'Tor Vergata' Quantum Electronics and Plasmas lab.

  18. Computed simulation of radiographies of pipes - validation of techniques for wall thickness measurements

    International Nuclear Information System (INIS)

    Bellon, C.; Tillack, G.R.; Nockemann, C.; Wenzel, L.

    1995-01-01

    A macroscopic model of radiographic NDE methods and applications is given. A computer-aided approach for determination of wall thickness from radiographs is presented, guaranteeing high accuracy and reproducibility of wall thickness determination by means of projection radiography. The algorithm was applied to computed simulations of radiographies. The simulation thus offers an effective means for testing such automated wall thickness determination as a function of imaging conditions, pipe geometries, coatings, and media tracking, and likewise is a tool for validation and optimization of the method. (orig.) [de

  19. Model-based verification and validation of the SMAP uplink processes

    Science.gov (United States)

    Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.

    Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increases geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.

  20. Development and validation of a two-dimensional fast-response flood estimation model

    Energy Technology Data Exchange (ETDEWEB)

    Judi, David R [Los Alamos National Laboratory; Mcpherson, Timothy N [Los Alamos National Laboratory; Burian, Steven J [UNIV OF UTAH]

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed, maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimations of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
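
    To make the numerical approach concrete, the sketch below solves a one-dimensional dam-break problem for the shallow water equations with a first-order upwind-type (Rusanov) flux in Python. It is a minimal illustration only, not the authors' two-dimensional formulation; the grid size, initial depths and end time are arbitrary assumptions.

    import numpy as np

    def shallow_water_1d(h_left=2.0, h_right=1.0, nx=200, t_end=0.05, g=9.81):
        """Illustrative 1D dam-break solve with a first-order Rusanov (upwind-type) flux."""
        dx = 1.0 / nx
        x = (np.arange(nx) + 0.5) * dx
        h = np.where(x < 0.5, h_left, h_right)   # initial step in water depth
        q = np.zeros(nx)                         # discharge q = h*u, fluid initially at rest
        t = 0.0
        while t < t_end:
            u = q / h
            c = np.sqrt(g * h)
            dt = min(0.4 * dx / np.max(np.abs(u) + c), t_end - t)   # CFL-limited time step
            f_h, f_q = q, q**2 / h + 0.5 * g * h**2                 # physical fluxes
            a = np.maximum(np.abs(u[:-1]) + c[:-1], np.abs(u[1:]) + c[1:])
            F_h = 0.5 * (f_h[:-1] + f_h[1:]) - 0.5 * a * (h[1:] - h[:-1])
            F_q = 0.5 * (f_q[:-1] + f_q[1:]) - 0.5 * a * (q[1:] - q[:-1])
            h[1:-1] -= dt / dx * (F_h[1:] - F_h[:-1])   # conservative update, interior cells
            q[1:-1] -= dt / dx * (F_q[1:] - F_q[:-1])   # end cells held fixed as simple BCs
            t += dt
        return x, h, q / h

    x, depth, velocity = shallow_water_1d()
    print(f"max depth {depth.max():.3f} m, max velocity {velocity.max():.3f} m/s")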

  1. Computational Design and Discovery of Ni-Based Alloys and Coatings: Thermodynamic Approaches Validated by Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zi-Kui [Pennsylvania State University; Gleeson, Brian [University of Pittsburgh; Shang, Shunli [Pennsylvania State University; Gheno, Thomas [University of Pittsburgh; Lindwall, Greta [Pennsylvania State University; Zhou, Bi-Cheng [Pennsylvania State University; Liu, Xuan [Pennsylvania State University; Ross, Austin [Pennsylvania State University

    2018-04-23

    This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagram) modeling, and experimental investigations on compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stabilities. The developed description includes composition ranges typical for coating alloys and hence allows prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fractions, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools that are required to meet the increasing demands for strong, ductile and environmentally-protective coatings. Specifically, a suitable thermodynamic description for the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool called ESPEI/pycalphad for more rapid discovery and development of new materials.

  2. Developing and validating an instrument for measuring mobile computing self-efficacy.

    Science.gov (United States)

    Wang, Yi-Shun; Wang, Hsiu-Yuan

    2008-08-01

    IT-related self-efficacy has been found to have a critical influence on system use. However, traditional measures of computer self-efficacy and Internet-related self-efficacy are perceived to be inapplicable in the context of mobile computing and commerce because they are targeted primarily at either desktop computer or wire-based technology contexts. Based on previous research, this study develops and validates a multidimensional instrument for measuring mobile computing self-efficacy (MCSE). This empirically validated instrument will be useful to researchers in developing and testing the theories of mobile user behavior, and to practitioners in assessing the mobile computing self-efficacy of users and promoting the use of mobile commerce systems.

  3. Validation of computational model ALDERSON/EGSnrc for chest radiography; Validação do modelo computacional Alderson/EGSnrc para radiografias de tórax

    Energy Technology Data Exchange (ETDEWEB)

    Muniz, Bianca C. [Instituto Federal de Educação, Ciência e Tecnologia de Pernambuco - IFPE, Recife, PE (Brazil); Santos, André L. dos; Menezes, Claudio J.M., E-mail: andre.luiz_76@yahoo.com.br, E-mail: cjmm@cnen.gov.br [Centro Regional de Ciências Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2017-07-01

    To perform dose studies in situations of exposure to radiation without exposing individuals, numerical dosimetry uses Computational Exposure Models (ECMs). Composed essentially of an algorithm simulating the radioactive source, a voxel phantom representing the human anatomy and a Monte Carlo code, ECMs must be validated to determine how reliably they represent the physical set-up. The objective of this work is to validate the ALDERSON/EGSnrc ECM through comparisons between the experimental measurements obtained with an ionization chamber and virtual simulations using the Monte Carlo method to determine the ratio of the input and output radiation doses. Preliminary results of these comparisons showed that the ECM reproduced the results of the experimental measurements performed with the physical phantom with a relative error of less than 10%, validating the use of this model for simulations of chest radiographs and estimates of radiation doses to tissues in the irradiated structures.

  4. Empirical Validation and Application of the Computing Attitudes Survey

    Science.gov (United States)

    Dorn, Brian; Elliott Tew, Allison

    2015-01-01

    Student attitudes play an important role in shaping learning experiences. However, few validated instruments exist for measuring student attitude development in a discipline-specific way. In this paper, we present the design, development, and validation of the computing attitudes survey (CAS). The CAS is an extension of the Colorado Learning…

  5. Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models

    DEFF Research Database (Denmark)

    Vehtari, Aki; Mononen, Tommi; Tolvanen, Ville

    2016-01-01

    The future predictive performance of a Bayesian model can be estimated using Bayesian cross-validation. In this article, we consider Gaussian latent variable models where the integration over the latent values is approximated using the Laplace method or expectation propagation (EP). We study the properties of several Bayesian leave-one-out (LOO) cross-validation approximations that in most cases can be computed with a small additional cost after forming the posterior approximation given the full data. Our main objective is to assess the accuracy of the approximative LOO cross-validation estimators...
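
    As a point of reference for the quantity being approximated, the sketch below evaluates the exact (brute-force) leave-one-out log predictive density for a conjugate Gaussian mean model. It is an illustration only; it does not implement the Laplace or EP approximations discussed in the record, and the prior and data values are arbitrary.

    import numpy as np
    from scipy import stats

    def loo_lpd_gaussian(y, sigma=1.0, mu0=0.0, tau0=10.0):
        """Brute-force LOO log predictive density for y_i ~ N(mu, sigma^2), mu ~ N(mu0, tau0^2)."""
        y = np.asarray(y, dtype=float)
        lpd = 0.0
        for i in range(len(y)):
            y_rest = np.delete(y, i)
            # conjugate posterior for mu given the remaining points
            prec = 1.0 / tau0**2 + len(y_rest) / sigma**2
            mu_post = (mu0 / tau0**2 + y_rest.sum() / sigma**2) / prec
            var_post = 1.0 / prec
            # posterior predictive density of the held-out observation
            pred_sd = np.sqrt(var_post + sigma**2)
            lpd += stats.norm.logpdf(y[i], loc=mu_post, scale=pred_sd)
        return lpd

    rng = np.random.default_rng(0)
    data = rng.normal(1.5, 1.0, size=30)
    print("exact LOO log predictive density:", round(loo_lpd_gaussian(data), 2))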

  6. Getting computer models to communicate; Faire communiquer les modeles numeriques

    Energy Technology Data Exchange (ETDEWEB)

    Caremoli, Ch. [Electricite de France (EDF), 75 - Paris (France). Dept. Mecanique et Modeles Numeriques; Erhard, P. [Electricite de France (EDF), 75 - Paris (France). Dept. Physique des Reacteurs

    1999-07-01

    Today's computers have the processing power to deliver detailed and global simulations of complex industrial processes such as the operation of a nuclear reactor core. So should we be producing new, global numerical models to take full advantage of this new-found power? If so, it would be a long-term job. There is, however, another solution: to couple the existing validated numerical models together so that they work as one. (authors)

  7. Validation and testing of the VAM2D computer code

    International Nuclear Information System (INIS)

    Kool, J.B.; Wu, Y.S.

    1991-10-01

    This document describes two modeling studies conducted by HydroGeoLogic, Inc. for the US NRC under contract no. NRC-04089-090, entitled "Validation and Testing of the VAM2D Computer Code." VAM2D is a two-dimensional, variably saturated flow and transport code, with applications for performance assessment of nuclear waste disposal. The computer code itself is documented in a separate NUREG document (NUREG/CR-5352, 1989). The studies presented in this report involve application of the VAM2D code to two diverse subsurface modeling problems. The first one involves modeling of infiltration and redistribution of water and solutes in an initially dry, heterogeneous field soil. This application involves detailed modeling over a relatively short, 9-month time period. The second problem pertains to the application of VAM2D to the modeling of a waste disposal facility in a fractured clay, over much larger space and time scales and with particular emphasis on the applicability and reliability of using an equivalent porous medium approach for simulating flow and transport in fractured geologic media. Reflecting the separate and distinct nature of the two problems studied, this report is organized in two separate parts. 61 refs., 31 figs., 9 tabs

  8. A reliable and valid questionnaire was developed to measure computer vision syndrome at the workplace.

    Science.gov (United States)

    Seguí, María del Mar; Cabrero-García, Julio; Crespo, Ana; Verdú, José; Ronda, Elena

    2015-06-01

    To design and validate a questionnaire to measure visual symptoms related to exposure to computers in the workplace. Our computer vision syndrome questionnaire (CVS-Q) was based on a literature review and validated through discussion with experts and performance of a pretest, pilot test, and retest. Content validity was evaluated by occupational health, optometry, and ophthalmology experts. Rasch analysis was used in the psychometric evaluation of the questionnaire. Criterion validity was determined by calculating the sensitivity and specificity, receiver operating characteristic curve, and cutoff point. Test-retest repeatability was assessed using the intraclass correlation coefficient (ICC) and concordance by Cohen's kappa (κ). The CVS-Q was developed with wide consensus among experts and was well accepted by the target group. It assesses the frequency and intensity of 16 symptoms using a single rating scale (symptom severity) that fits the Rasch rating scale model well. The questionnaire has sensitivity and specificity over 70% and achieved good test-retest repeatability both for the scores obtained [ICC = 0.802; 95% confidence interval (CI): 0.673, 0.884] and CVS classification (κ = 0.612; 95% CI: 0.384, 0.839). The CVS-Q has acceptable psychometric properties, making it a valid and reliable tool to control the visual health of computer workers, and can potentially be used in clinical trials and outcome research. Copyright © 2015 Elsevier Inc. All rights reserved.
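
    For readers unfamiliar with the criterion-validity statistics quoted above, the short sketch below computes sensitivity, specificity and Cohen's kappa from two binary vectors. It is a generic illustration with invented data, not the CVS-Q analysis itself (which additionally used Rasch modelling, ROC analysis and the ICC).

    import numpy as np

    def diagnostic_metrics(test_positive, disease_present):
        """Sensitivity, specificity and Cohen's kappa from two binary vectors."""
        t = np.asarray(test_positive, dtype=bool)
        d = np.asarray(disease_present, dtype=bool)
        tp, fn = np.sum(t & d), np.sum(~t & d)
        tn, fp = np.sum(~t & ~d), np.sum(t & ~d)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        # Cohen's kappa: observed agreement corrected for chance agreement
        n = len(t)
        p_obs = (tp + tn) / n
        p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
        kappa = (p_obs - p_chance) / (1 - p_chance)
        return sensitivity, specificity, kappa

    # invented test results and reference classification for ten subjects
    test = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
    truth = [1, 1, 0, 0, 0, 0, 1, 1, 0, 0]
    print(diagnostic_metrics(test, truth))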

  9. The concept of validation of numerical models for consequence analysis

    International Nuclear Information System (INIS)

    Borg, Audun; Paulsen Husted, Bjarne; Njå, Ove

    2014-01-01

    Numerical models such as computational fluid dynamics (CFD) models are increasingly used in life safety studies and other types of analyses to calculate the effects of fire and explosions. The validity of these models is usually established by benchmark testing. This is done to quantitatively measure the agreement between the predictions provided by the model and the real world represented by observations in experiments. This approach assumes that all variables in the real world relevant for the specific study are adequately measured in the experiments and in the predictions made by the model. In this paper the various definitions of validation for CFD models used for hazard prediction are investigated to assess their implication for consequence analysis in a design phase. In other words, how is uncertainty in the prediction of future events reflected in the validation process? The sources of uncertainty are viewed from the perspective of the safety engineer. An example of the use of a CFD model is included to illustrate the assumptions the analyst must make and how these affect the prediction made by the model. The assessments presented in this paper are based on a review of standards and best practice guides for CFD modeling and the documentation from two existing CFD programs. Our main thrust has been to assess how validation work is performed and communicated in practice. We conclude that the concept of validation adopted for numerical models is adequate in terms of model performance. However, it does not address the main sources of uncertainty from the perspective of the safety engineer. Uncertainty in the input quantities describing future events, which are determined by the model user, outweighs the inaccuracies in the model as reported in validation studies. - Highlights: • Examine the basic concept of validation applied to models for consequence analysis. • Review standards and guides for validation of numerical models. • Comparison of the validation

  10. Some guidance on preparing validation plans for the DART Full System Models.

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Genetha Anne; Hough, Patricia Diane; Hills, Richard Guy (Sandia National Laboratories, Albuquerque, NM)

    2009-03-01

    Planning is an important part of computational model verification and validation (V&V) and the requisite planning document is vital for effectively executing the plan. The document provides a means of communicating intent to the typically large group of people, from program management to analysts to test engineers, who must work together to complete the validation activities. This report provides guidelines for writing a validation plan. It describes the components of such a plan and includes important references and resources. While the initial target audience is the DART Full System Model teams in the nuclear weapons program, the guidelines are generally applicable to other modeling efforts. Our goal in writing this document is to provide a framework for consistency in validation plans across weapon systems, different types of models, and different scenarios. Specific details contained in any given validation plan will vary according to application requirements and available resources.

  11. Development and validation of a computational model to study the effect of foot constraint on ankle injury due to external rotation.

    Science.gov (United States)

    Wei, Feng; Hunley, Stanley C; Powell, John W; Haut, Roger C

    2011-02-01

    Recent studies, using two different manners of foot constraint, potted and taped, document altered failure characteristics in the human cadaver ankle under controlled external rotation of the foot. The posterior talofibular ligament (PTaFL) was commonly injured when the foot was constrained in potting material, while the frequency of deltoid ligament injury was higher for the taped foot. In this study an existing multibody computational modeling approach was validated to include the influence of foot constraint, determine the kinematics of the joint under external foot rotation, and consequently obtain strains in various ligaments. It was hypothesized that the location of ankle injury due to excessive levels of external foot rotation is a function of foot constraint. The results from this model simulation supported this hypothesis and helped to explain the mechanisms of injury in the cadaver experiments. An excessive external foot rotation might generate a PTaFL injury for a rigid foot constraint, and an anterior deltoid ligament injury for a pliant foot constraint. The computational models may be further developed and modified to simulate the human response for different shoe designs, as well as on various athletic shoe-surface interfaces, so as to provide a computational basis for optimizing athletic performance with minimal injury risk.

  12. Vortex-Concept for Radioactivity Release Prevention at NPP: Development of Computational Model of Lab-Scale Experimental Setup

    Energy Technology Data Exchange (ETDEWEB)

    Ullah, Sana; Sung, Yim Man; Park, Jin Soo; Sung Hyung Jin [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The experimental validation of the vortex-like air curtain concept and the use of an appropriate CFD modelling approach for analyzing the problem become crucial. A lab-scale experimental setup was designed to validate the proposed concept and the CFD modeling approach as part of the validation process. In this study, a computational model of this lab-scale experimental setup was developed using the open source CFD code OpenFOAM. The computational results will be compared with experimental data for validation purposes in the future, when experimental data become available. 1) A computational model of a lab-scale experimental setup, designed to validate the concept of artificial vortex-like airflow generation for application to radioactivity dispersion prevention in the event of a severe accident, was developed. 2) A mesh sensitivity study was performed and a mesh of about 2 million cells was found to be sufficient for this setup.

  13. Coupling of EIT with computational lung modeling for predicting patient-specific ventilatory responses.

    Science.gov (United States)

    Roth, Christian J; Becher, Tobias; Frerichs, Inéz; Weiler, Norbert; Wall, Wolfgang A

    2017-04-01

    Providing optimal personalized mechanical ventilation for patients with acute or chronic respiratory failure is still a challenge within a clinical setting for each case anew. In this article, we integrate electrical impedance tomography (EIT) monitoring into a powerful patient-specific computational lung model to create an approach for personalizing protective ventilatory treatment. The underlying computational lung model is based on a single computed tomography scan and able to predict global airflow quantities, as well as local tissue aeration and strains for any ventilation maneuver. For validation, a novel "virtual EIT" module is added to our computational lung model, allowing to simulate EIT images based on the patient's thorax geometry and the results of our numerically predicted tissue aeration. Clinically measured EIT images are not used to calibrate the computational model. Thus they provide an independent method to validate the computational predictions at high temporal resolution. The performance of this coupling approach has been tested in an example patient with acute respiratory distress syndrome. The method shows good agreement between computationally predicted and clinically measured airflow data and EIT images. These results imply that the proposed framework can be used for numerical prediction of patient-specific responses to certain therapeutic measures before applying them to an actual patient. In the long run, definition of patient-specific optimal ventilation protocols might be assisted by computational modeling. NEW & NOTEWORTHY In this work, we present a patient-specific computational lung model that is able to predict global and local ventilatory quantities for a given patient and any selected ventilation protocol. For the first time, such a predictive lung model is equipped with a virtual electrical impedance tomography module allowing real-time validation of the computed results with the patient measurements. First promising results

  14. Validation of the newborn larynx modeling with aerodynamical experimental data.

    Science.gov (United States)

    Nicollas, R; Giordano, J; Garrel, R; Medale, M; Caminat, P; Giovanni, A; Ouaknine, M; Triglia, J M

    2009-06-01

    Many authors have studied modelling of the adult larynx, but the mechanisms of newborn voice production have very rarely been investigated. After validating a numerical model with acoustic data, studies were performed on larynges of human fetuses in order to validate this model with aerodynamical experiments. Anatomical measurements were performed and a simplified numerical model was built using Fluent(R) with the vocal folds in phonatory position. The results obtained are in good agreement with those obtained by laser Doppler velocimetry (LDV) and high-frame-rate particle image velocimetry (HFR-PIV) on an experimental bench with excised human fetus larynges. It appears that computing with first-cry physiological parameters leads to a model which is close to those obtained in experiments with real organs.

  15. Development and Validation of Computational Fluid Dynamics Models for Prediction of Heat Transfer and Thermal Microenvironments of Corals

    Science.gov (United States)

    Ong, Robert H.; King, Andrew J. C.; Mullins, Benjamin J.; Cooper, Timothy F.; Caley, M. Julian

    2012-01-01

    We present Computational Fluid Dynamics (CFD) models of the coupled dynamics of water flow, heat transfer and irradiance in and around corals to predict temperatures experienced by corals. These models were validated against controlled laboratory experiments, under constant and transient irradiance, for hemispherical and branching corals. Our CFD models agree very well with experimental studies. A linear relationship between irradiance and coral surface warming was evident in both the simulation and experimental results, agreeing with heat transfer theory. However, CFD models for the steady state simulation produced a better fit to the linear relationship than the experimental data, likely due to experimental error in the empirical measurements. The consistency of our modelling results with experimental observations demonstrates the applicability of CFD simulations, such as the models developed here, to coral bleaching studies. A study of the influence of coral skeletal porosity and skeletal bulk density on surface warming was also undertaken, demonstrating boundary layer behaviour, and interstitial flow magnitude and temperature profiles in coral cross sections. Our models complement recent studies showing systematic changes in these parameters in some coral colonies and have utility in the prediction of coral bleaching. PMID:22701582

  16. Preliminary validation of computational model for neutron flux prediction of Thai Research Reactor (TRR-1/M1)

    Science.gov (United States)

    Sabaibang, S.; Lekchaum, S.; Tipayakul, C.

    2015-05-01

    This study is a part of an on-going work to develop a computational model of Thai Research Reactor (TRR-1/M1) which is capable of accurately predicting the neutron flux level and spectrum. The computational model was created by MCNPX program and the CT (Central Thimble) in-core irradiation facility was selected as the location for validation. The comparison was performed with the typical flux measurement method routinely practiced at TRR-1/M1, that is, the foil activation technique. In this technique, gold foil is irradiated for a certain period of time and the activity of the irradiated target is measured to derive the thermal neutron flux. Additionally, the flux measurement with SPND (self-powered neutron detector) was also performed for comparison. The thermal neutron flux from the MCNPX simulation was found to be 1.79×10¹³ neutron/cm²·s while that from the foil activation measurement was 4.68×10¹³ neutron/cm²·s. On the other hand, the thermal neutron flux from the measurement using SPND was 2.47×10¹³ neutron/cm²·s. An assessment of the differences among the three methods was done. The difference of the MCNPX with the foil activation technique was found to be 67.8% and the difference of the MCNPX with the SPND was found to be 27.8%.

  17. A qualitatively validated mathematical-computational model of the immune response to the yellow fever vaccine.

    Science.gov (United States)

    Bonin, Carla R B; Fernandes, Guilherme C; Dos Santos, Rodrigo W; Lobosco, Marcelo

    2018-05-25

    Although a safe and effective yellow fever vaccine was developed more than 80 years ago, several issues regarding its use remain unclear. For example, what is the minimum dose that can provide immunity against the disease? A useful tool that can help researchers answer this and other related questions is a computational simulator that implements a mathematical model describing the human immune response to vaccination against yellow fever. This work uses a system of ten ordinary differential equations to represent a few important populations in the response process generated by the body after vaccination. The main populations include viruses, APCs, CD8+ T cells, short-lived and long-lived plasma cells, B cells and antibodies. In order to qualitatively validate our model, four experiments were carried out, and their computational results were compared to experimental data obtained from the literature. The four experiments were: a) simulation of a scenario in which an individual was vaccinated against yellow fever for the first time; b) simulation of a booster dose ten years after the first dose; c) simulation of the immune response to the yellow fever vaccine in individuals with different levels of naïve CD8+ T cells; and d) simulation of the immune response to distinct doses of the yellow fever vaccine. This work shows that the simulator was able to qualitatively reproduce some of the experimental results reported in the literature, such as the amount of antibodies and viremia throughout time, as well as to reproduce other behaviors of the immune response reported in the literature, such as those that occur after a booster dose of the vaccine.
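
    To illustrate the kind of model and solver workflow this record describes, the sketch below integrates a deliberately simplified three-compartment system (attenuated virus, B cells, antibodies) with SciPy. It is not the ten-equation model of the study; all rate constants and initial conditions are invented for demonstration.

    import numpy as np
    from scipy.integrate import solve_ivp

    def simple_vaccine_response(t, y, r=1.5, k=2e-4, a=0.8, d_b=0.05, p=10.0, d_a=0.1):
        """Toy sketch of post-vaccination dynamics, NOT the paper's ten-ODE model."""
        V, B, A = y
        dV = r * V - k * V * A      # viral replication minus antibody-mediated clearance
        dB = a * V - d_b * B        # antigen-driven B-cell expansion and decay
        dA = p * B - d_a * A        # antibody secretion and decay
        return [dV, dB, dA]

    sol = solve_ivp(simple_vaccine_response, (0, 60), [1.0, 0.0, 0.0],
                    dense_output=True, max_step=0.1)
    days = np.linspace(0, 60, 7)
    print("viremia over time:", np.round(sol.sol(days)[0], 3))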

  18. Validity of two methods to assess computer use: Self-report by questionnaire and computer use software

    NARCIS (Netherlands)

    Douwes, M.; Kraker, H.de; Blatter, B.M.

    2007-01-01

    A long duration of computer use is known to be positively associated with Work Related Upper Extremity Disorders (WRUED). Self-report by questionnaire is commonly used to assess a worker's duration of computer use. The aim of the present study was to assess the validity of self-report and computer

  19. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur on large scales and exhibit dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena within the framework of a model, in order to simulate the real phenomena, is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validating and verifying it. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  20. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur on large scales and exhibit dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena within the framework of a model, in order to simulate the real phenomena, is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validating and verifying it. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  1. A real-time computational model for estimating kinematics of ankle ligaments.

    Science.gov (United States)

    Zhang, Mingming; Davies, T Claire; Zhang, Yanxin; Xie, Sheng Quan

    2016-01-01

    An accurate assessment of ankle ligament kinematics is crucial in understanding the injury mechanisms and can help to improve the treatment of an injured ankle, especially when used in conjunction with robot-assisted therapy. A number of computational models have been developed and validated for assessing the kinematics of ankle ligaments. However, few of them can do real-time assessment to allow for an input into robotic rehabilitation programs. An ankle computational model was proposed and validated to quantify the kinematics of ankle ligaments as the foot moves in real time. This model consists of three bone segments with three rotational degrees of freedom (DOFs) and 12 ankle ligaments. The model takes as input three position variables that can be measured by sensors in many ankle robotic devices that detect postures within the foot-ankle environment, and outputs the kinematics of the ankle ligaments. Validation of this model in terms of ligament length and strain was conducted by comparing it with published data on cadaver anatomy and magnetic resonance imaging. The ligament lengths and strains predicted by the model are in agreement with those from the published studies but are sensitive to ligament attachment positions. This ankle computational model has the potential to be used in robot-assisted therapy for real-time assessment of ligament kinematics. The results provide information regarding the quantification of kinematics associated with ankle ligaments related to the disability level and can be used for optimizing the robotic training trajectory.
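
    The core geometric computation such a model performs can be sketched as follows: given the attachment points of one ligament and a rotation of the talus about a joint axis, the ligament length and engineering strain follow from the distance between the rotated and fixed attachments. All coordinates, the rest length and the axis in this sketch are hypothetical values for illustration, not anatomical data from the study.

    import numpy as np

    def rotation_matrix(axis, angle_deg):
        """Rotation matrix about a unit axis by an angle (Rodrigues' formula)."""
        axis = np.asarray(axis, dtype=float)
        axis = axis / np.linalg.norm(axis)
        a = np.radians(angle_deg)
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        return np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)

    def ligament_strain(origin_tibia, insertion_talus, rest_length, axis, angle_deg):
        """Length and strain of one ligament when the talus rotates about a joint axis."""
        R = rotation_matrix(axis, angle_deg)
        insertion_moved = R @ insertion_talus          # talus attachment follows the bone
        length = np.linalg.norm(insertion_moved - origin_tibia)
        return length, (length - rest_length) / rest_length

    # hypothetical attachment points (metres), rest length, and a 10-degree rotation
    L, eps = ligament_strain(np.array([0.00, 0.01, 0.04]),
                             np.array([0.01, 0.02, 0.00]),
                             rest_length=0.038,
                             axis=[1, 0, 0], angle_deg=10)
    print(f"ligament length {L*1000:.1f} mm, strain {eps*100:.1f}%")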

  2. Validation of spectral gas radiation models under oxyfuel conditions

    Energy Technology Data Exchange (ETDEWEB)

    Becher, Johann Valentin

    2013-05-15

    Combustion of hydrocarbon fuels with pure oxygen results in a different flue gas composition than combustion with air. Standard computational-fluid-dynamics (CFD) spectral gas radiation models for air combustion are therefore out of their validity range in oxyfuel combustion. This thesis provides a common spectral basis for the validation of new spectral models. A literature review about fundamental gas radiation theory, spectral modeling and experimental methods provides the reader with a basic understanding of the topic. In the first results section, this thesis validates detailed spectral models with high resolution spectral measurements in a gas cell with the aim of recommending one model as the best benchmark model. In the second results section, spectral measurements from a turbulent natural gas flame - as an example for a technical combustion process - are compared to simulated spectra based on measured gas atmospheres. The third results section compares simplified spectral models to the benchmark model recommended in the first results section and gives a ranking of the proposed models based on their accuracy. A concluding section gives recommendations for the selection and further development of simplified spectral radiation models. Gas cell transmissivity spectra in the spectral range of 2.4-5.4 µm of water vapor and carbon dioxide in the temperature range from 727 °C to 1500 °C and at different concentrations were compared in the first results section at a nominal resolution of 32 cm⁻¹ to line-by-line models from different databases, two statistical-narrow-band models and the exponential-wide-band model. The two statistical-narrow-band models EM2C and RADCAL showed good agreement with a maximal band transmissivity deviation of 3%. The exponential-wide-band model showed a deviation of 6%. The new line-by-line database HITEMP2010 had the lowest band transmissivity deviation of 2.2% and was therefore recommended as a reference model for the

  3. Ex Vivo Methods for Informing Computational Models of the Mitral Valve

    OpenAIRE

    Bloodworth, Charles H.; Pierce, Eric L.; Easley, Thomas F.; Drach, Andrew; Khalighi, Amir H.; Toma, Milan; Jensen, Morten O.; Sacks, Michael S.; Yoganathan, Ajit P.

    2016-01-01

    Computational modeling of the mitral valve (MV) has potential applications for determining optimal MV repair techniques and risk of recurrent mitral regurgitation. Two key concerns for informing these models are (1) sensitivity of model performance to the accuracy of the input geometry, and, (2) acquisition of comprehensive data sets against which the simulation can be validated across clinically relevant geometries. Addressing the first concern, ex vivo micro-computed tomography (microCT) wa...

  4. Validation and computing and performance studies for the ATLAS simulation

    CERN Document Server

    Marshall, Z; The ATLAS collaboration

    2009-01-01

    We present the validation of the ATLAS simulation software project. Software development is controlled by nightly builds and several levels of automatic tests to ensure stability. Computing validation, including CPU time, memory, and disk space required per event, is benchmarked for all software releases. Several different physics processes and event types are checked to thoroughly test all aspects of the detector simulation. The robustness of the simulation software is demonstrated by the production of 500 million events on the Worldwide LHC Computing Grid in the last year.

  5. Thermal hydraulic model validation for HOR mixed core fuel management

    International Nuclear Information System (INIS)

    Gibcus, H.P.M.; Vries, J.W. de; Leege, P.F.A. de

    1997-01-01

    A thermal-hydraulic core management model has been developed for the Hoger Onderwijsreactor (HOR), a 2 MW pool-type university research reactor. The model was adopted for safety analysis purposes in the framework of HEU/LEU core conversion studies. It is applied in the thermal-hydraulic computer code SHORT (Steady-state HOR Thermal-hydraulics) which is presently in use in designing core configurations and for in-core fuel management. An elaborate measurement program was performed to establish the core hydraulic characteristics for a variety of conditions. The hydraulic data were obtained with a dummy fuel element with special equipment allowing, among other things, direct measurement of the true core flow rate. Using these data the thermal-hydraulic model was validated experimentally. The model, experimental tests, and model validation are discussed. (author)

  6. A Perspective on Computational Human Performance Models as Design Tools

    Science.gov (United States)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  7. Phantom-based experimental validation of computational fluid dynamics simulations on cerebral aneurysms

    Energy Technology Data Exchange (ETDEWEB)

    Sun Qi; Groth, Alexandra; Bertram, Matthias; Waechter, Irina; Bruijns, Tom; Hermans, Roel; Aach, Til [Philips Research Europe, Weisshausstrasse 2, 52066 Aachen (Germany) and Institute of Imaging and Computer Vision, RWTH Aachen University, Sommerfeldstrasse 24, 52074 Aachen (Germany); Philips Research Europe, Weisshausstrasse 2, 52066 Aachen (Germany); Philips Healthcare, X-Ray Pre-Development, Veenpluis 4-6, 5684PC Best (Netherlands); Institute of Imaging and Computer Vision, RWTH Aachen University, Sommerfeldstrasse 24, 52074 Aachen (Germany)

    2010-09-15

    Purpose: Recently, image-based computational fluid dynamics (CFD) simulation has been applied to investigate the hemodynamics inside human cerebral aneurysms. The knowledge of the computed three-dimensional flow fields is used for clinical risk assessment and treatment decision making. However, the reliability of the application specific CFD results has not been thoroughly validated yet. Methods: In this work, by exploiting a phantom aneurysm model, the authors therefore aim to prove the reliability of the CFD results obtained from simulations with sufficiently accurate input boundary conditions. To confirm the correlation between the CFD results and the reality, virtual angiograms are generated by the simulation pipeline and are quantitatively compared to the experimentally acquired angiograms. In addition, a parametric study has been carried out to systematically investigate the influence of the input parameters associated with the current measuring techniques on the flow patterns. Results: Qualitative and quantitative evaluations demonstrate good agreement between the simulated and the real flow dynamics. Discrepancies of less than 15% are found for the relative root mean square errors of time intensity curve comparisons from each selected characteristic position. The investigated input parameters show different influences on the simulation results, indicating the desired accuracy in the measurements. Conclusions: This study provides a comprehensive validation method of CFD simulation for reproducing the real flow field in the cerebral aneurysm phantom under well controlled conditions. The reliability of the CFD is well confirmed. Through the parametric study, it is possible to assess the degree of validity of the associated CFD model based on the parameter values and their estimated accuracy range.
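
    The agreement metric quoted above, a relative root mean square error between simulated and measured time-intensity curves, can be computed along the lines of the sketch below. The normalisation by the peak of the measured curve and the synthetic curves are assumptions for illustration; the record does not state the exact convention used.

    import numpy as np

    def relative_rmse(simulated, measured):
        """Relative RMSE between two time-intensity curves (peak-normalised, by assumption)."""
        simulated = np.asarray(simulated, dtype=float)
        measured = np.asarray(measured, dtype=float)
        rmse = np.sqrt(np.mean((simulated - measured) ** 2))
        return rmse / np.max(np.abs(measured))

    t = np.linspace(0, 5, 200)
    measured = np.exp(-((t - 2.0) / 0.8) ** 2)        # synthetic bolus-like curve
    simulated = np.exp(-((t - 2.1) / 0.85) ** 2)      # slightly shifted/broadened version
    print(f"relative RMSE: {100 * relative_rmse(simulated, measured):.1f}%")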

  8. Phantom-based experimental validation of computational fluid dynamics simulations on cerebral aneurysms

    International Nuclear Information System (INIS)

    Sun Qi; Groth, Alexandra; Bertram, Matthias; Waechter, Irina; Bruijns, Tom; Hermans, Roel; Aach, Til

    2010-01-01

    Purpose: Recently, image-based computational fluid dynamics (CFD) simulation has been applied to investigate the hemodynamics inside human cerebral aneurysms. The knowledge of the computed three-dimensional flow fields is used for clinical risk assessment and treatment decision making. However, the reliability of the application specific CFD results has not been thoroughly validated yet. Methods: In this work, by exploiting a phantom aneurysm model, the authors therefore aim to prove the reliability of the CFD results obtained from simulations with sufficiently accurate input boundary conditions. To confirm the correlation between the CFD results and the reality, virtual angiograms are generated by the simulation pipeline and are quantitatively compared to the experimentally acquired angiograms. In addition, a parametric study has been carried out to systematically investigate the influence of the input parameters associated with the current measuring techniques on the flow patterns. Results: Qualitative and quantitative evaluations demonstrate good agreement between the simulated and the real flow dynamics. Discrepancies of less than 15% are found for the relative root mean square errors of time intensity curve comparisons from each selected characteristic position. The investigated input parameters show different influences on the simulation results, indicating the desired accuracy in the measurements. Conclusions: This study provides a comprehensive validation method of CFD simulation for reproducing the real flow field in the cerebral aneurysm phantom under well controlled conditions. The reliability of the CFD is well confirmed. Through the parametric study, it is possible to assess the degree of validity of the associated CFD model based on the parameter values and their estimated accuracy range.

  9. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  10. Models, validation, and applied geochemistry: Issues in science, communication, and philosophy

    International Nuclear Information System (INIS)

    Kirk Nordstrom, D.

    2012-01-01

    Models have become so fashionable that many scientists and engineers cannot imagine working without them. The predominant use of computer codes to execute model calculations has blurred the distinction between code and model. The recent controversy regarding model validation has brought into question what we mean by a ‘model’ and by ‘validation.’ It has become apparent that the usual meaning of validation may be common in engineering practice and seems useful in legal practice but it is contrary to scientific practice and brings into question our understanding of science and how it can best be applied to such problems as hazardous waste characterization, remediation, and aqueous geochemistry in general. This review summarizes arguments against using the phrase model validation and examines efforts to validate models for high-level radioactive waste management and for permitting and monitoring open-pit mines. Part of the controversy comes from a misunderstanding of ‘prediction’ and the need to distinguish logical from temporal prediction. Another problem stems from the difference in the engineering approach contrasted with the scientific approach. The reductionist influence on the way we approach environmental investigations also limits our ability to model the interconnected nature of reality. Guidelines are proposed to improve our perceptions and proper utilization of models. Use of the word ‘validation’ is strongly discouraged when discussing model reliability.

  11. A Bayesian framework for adaptive selection, calibration, and validation of coarse-grained models of atomistic systems

    Energy Technology Data Exchange (ETDEWEB)

    Farrell, Kathryn, E-mail: kfarrell@ices.utexas.edu; Oden, J. Tinsley, E-mail: oden@ices.utexas.edu; Faghihi, Danial, E-mail: danial@ices.utexas.edu

    2015-08-15

    A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.

  12. A Bayesian framework for adaptive selection, calibration, and validation of coarse-grained models of atomistic systems

    Science.gov (United States)

    Farrell, Kathryn; Oden, J. Tinsley; Faghihi, Danial

    2015-08-01

    A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.
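
    A minimal sketch of one ingredient of such a Bayesian framework, the conversion of model evidences into posterior model plausibilities, is given below. The log-evidence values and the uniform prior are invented for illustration; computing the evidences themselves (and the sensitivities and Occam categories mentioned above) is not shown.

    import numpy as np

    def posterior_model_plausibilities(log_evidences, prior_probs=None):
        """Posterior plausibility of each candidate model: p(M_j | data) proportional to p(data | M_j) p(M_j)."""
        log_evidences = np.asarray(log_evidences, dtype=float)
        if prior_probs is None:
            prior_probs = np.full(len(log_evidences), 1.0 / len(log_evidences))
        log_post = log_evidences + np.log(prior_probs)
        log_post -= log_post.max()                 # work in log space to avoid underflow
        post = np.exp(log_post)
        return post / post.sum()

    # three hypothetical coarse-grained models with made-up log evidences
    print(posterior_model_plausibilities([-152.3, -148.9, -149.1]))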

  13. Validation of Slosh Modeling Approach Using STAR-CCM+

    Science.gov (United States)

    Benson, David J.; Ng, Wanyi

    2018-01-01

    Without an adequate understanding of propellant slosh, the spacecraft attitude control system may be inadequate to control the spacecraft or there may be an unexpected loss of science observation time due to higher slosh settling times. Computational fluid dynamics (CFD) is used to model propellant slosh. STAR-CCM+ is a commercially available CFD code. This paper seeks to validate the CFD modeling approach via a comparison between STAR-CCM+ liquid slosh modeling results and experimentally, empirically, and analytically derived results. The geometries examined are a bare right-cylinder tank and a right cylinder with a single ring baffle.

  14. Testing and Validation of Computational Methods for Mass Spectrometry.

    Science.gov (United States)

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  15. Toward a computational model of hemostasis

    Science.gov (United States)

    Leiderman, Karin; Danes, Nicholas; Schoeman, Rogier; Neeves, Keith

    2017-11-01

    Hemostasis is the process by which a blood clot forms to prevent bleeding at a site of injury. The formation time, size and structure of a clot depends on the local hemodynamics and the nature of the injury. Our group has previously developed computational models to study intravascular clot formation, a process confined to the interior of a single vessel. Here we present the first stage of an experimentally-validated, computational model of extravascular clot formation (hemostasis) in which blood through a single vessel initially escapes through a hole in the vessel wall and out a separate injury channel. This stage of the model consists of a system of partial differential equations that describe platelet aggregation and hemodynamics, solved via the finite element method. We also present results from the analogous, in vitro, microfluidic model. In both models, formation of a blood clot occludes the injury channel and stops flow from escaping while blood in the main vessel retains its fluidity. We discuss the different biochemical and hemodynamic effects on clot formation using distinct geometries representing intra- and extravascular injuries.

  16. A briefing to verification and validation of computer software

    International Nuclear Information System (INIS)

    Zhang Aisen; Xie Yalian

    2012-01-01

    Nowadays, computer equipment and information processing technology are entering the engineering of instrumentation and process control. Owing to their convenience and other advantages, more and more utilities are happy to use them. After initial use in basic control functions, computer equipment and information processing technology are now widely used in safety-critical control. Consequently, people pay more attention to the quality of computer software. How to assess and ensure that quality are the questions of greatest concern. Verification and validation of computer software are important steps in quality assurance. (authors)

  17. NRPB models for calculating the transfer of radionuclides through the environment. Verification and validation

    International Nuclear Information System (INIS)

    Attwood, C.; Barraclough, I.; Brown, J.

    1998-06-01

    There is a wide range of models available at NRPB to predict the transfer of radionuclides through the environment. Such models form an essential part of assessments of the radiological impact of releases of radionuclides into the environment. These models cover: the atmosphere; the aquatic environment; the geosphere; the terrestrial environment including foodchains. It is important that the models used for radiological impact assessments are robust, reliable and suitable for the assessment being undertaken. During model development it is, therefore, important that the model is both verified and validated. Verification of a model involves ensuring that it has been implemented correctly, while validation consists of demonstrating that the model is an adequate representation of the real environment. The extent to which a model can be verified depends on its complexity and whether similar models exist. For relatively simple models verification is straightforward, but for more complex models verification has to form part of the development, coding and testing of the model within quality assurance procedures. Validation of models should ideally consist of comparisons between the results of the models and experimental or environmental measurement data that were not used to develop the model. This is more straightforward for some models than for others depending on the quantity and type of data available. Validation becomes increasingly difficult for models which are intended to predict environmental transfer at long times or at great distances. It is, therefore, necessary to adopt qualitative validation techniques to ensure that the model is an adequate representation of the real environment. This report summarises the models used at NRPB to predict the transfer of radionuclides through the environment as part of a radiological impact assessment. It outlines the work carried out to verify and validate the models. The majority of these models are not currently available

  18. Airfoil computations using the gamma-Retheta model; Wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, Niels N.

    2009-05-15

    The present work addresses the validation of the implementation of the Menter, Langtry et al. gamma-theta correlation-based transition model [1, 2, 3] in the EllipSys2D code. First, the second-order accuracy of the code is verified using a grid refinement study for laminar, turbulent and transitional computations. Based on this, the error in the computations is estimated to be approximately one percent in the attached region. Following the verification of the implemented model, the model is applied to four airfoils, NACA64-018, NACA64-218, NACA64-418 and NACA64-618, and the results are compared to measurements [4] and computations using the Xfoil code by Drela et al. [5]. In the linear pre-stall region good agreement is observed both for lift and drag, while differences from both measurements and Xfoil computations are observed in stalled conditions. (au)
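
    The grid refinement step mentioned above is usually summarised by an observed order of accuracy. The sketch below shows the standard three-grid estimate; the refinement ratio and the lift values are placeholders, not results from the cited computations.

    import numpy as np

    def observed_order(f_coarse, f_medium, f_fine, r=2.0):
        """Observed order of accuracy from three systematically refined grids (constant ratio r)."""
        return np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)

    # lift coefficients on coarse, medium and fine grids - illustrative numbers only
    p = observed_order(1.0520, 1.0455, 1.0439)
    print(f"observed order of accuracy: {p:.2f}")   # close to 2 for a second-order code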

  19. Screening for cognitive impairment in older individuals. Validation study of a computer-based test.

    Science.gov (United States)

    Green, R C; Green, J; Harrison, J M; Kutner, M H

    1994-08-01

    This study examined the validity of a computer-based cognitive test that was recently designed to screen the elderly for cognitive impairment. Criterion-related validity was examined by comparing test scores of impaired patients and normal control subjects. Construct-related validity was computed through correlations between computer-based subtests and related conventional neuropsychological subtests. University center for memory disorders. Fifty-two patients with mild cognitive impairment by strict clinical criteria and 50 unimpaired, age- and education-matched control subjects. Control subjects were rigorously screened by neurological, neuropsychological, imaging, and electrophysiological criteria to identify and exclude individuals with occult abnormalities. Using a cut-off total score of 126, this computer-based instrument had a sensitivity of 0.83 and a specificity of 0.96. Using a prevalence estimate of 10%, predictive values, positive and negative, were 0.70 and 0.96, respectively. Computer-based subtests correlated significantly with conventional neuropsychological tests measuring similar cognitive domains. Thirteen (17.8%) of 73 volunteers with normal medical histories were excluded from the control group, with unsuspected abnormalities on standard neuropsychological tests, electroencephalograms, or magnetic resonance imaging scans. Computer-based testing is a valid screening methodology for the detection of mild cognitive impairment in the elderly, although this particular test has important limitations. Broader applications of computer-based testing will require extensive population-based validation. Future studies should recognize that normal control subjects without a history of disease who are typically used in validation studies may have a high incidence of unsuspected abnormalities on neurodiagnostic studies.
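
    The predictive values quoted above follow from sensitivity, specificity and the assumed prevalence via Bayes' rule; a short sketch of that standard calculation is given below. The inputs are taken from the abstract, but the printed values may differ slightly from those reported, depending on rounding of the underlying counts.

    def predictive_values(sensitivity, specificity, prevalence):
        """Positive and negative predictive values via Bayes' rule."""
        ppv = (sensitivity * prevalence) / (
            sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
        npv = (specificity * (1 - prevalence)) / (
            (1 - sensitivity) * prevalence + specificity * (1 - prevalence))
        return ppv, npv

    ppv, npv = predictive_values(sensitivity=0.83, specificity=0.96, prevalence=0.10)
    print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")   # PPV comes out near 0.70 with these inputs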

  20. Validation of ASTEC v1.0 computer code against FPT2 test

    International Nuclear Information System (INIS)

    Mladenov, I.; Tusheva, P.; Kalchev, B.; Dimov, D.; Ivanov, I.

    2005-01-01

    The aim of this work is to investigate the sensitivity of the ASTEC v1.0 computer code, using various nodalization schemes of the model, and to validate the code against the PHEBUS FPT2 experiment. The code is used for severe accident analysis. This aim corresponds to the main technical objective of the experiment, which is to contribute to the validation of models and computer codes used for calculating the source term in case of a severe accident in a light water reactor. The scope of the FPT2 objectives is large, covering separately the bundle, the experimental circuit and the containment. Additional objectives are to characterize aerosol sizing and deposition processes, as well as potential fission product poisoning effects on hydrogen recombiner coupons exposed to containment atmospheric conditions representative of an LWR severe accident. The analyses of the performed calculations show good agreement with the reference case calculations and, in turn, with the experimental data. Some local differences in the calculated thermal behavior appear during the oxidation phase and the heat-up phase. There is very good agreement regarding the release of volatile and semi-volatile fission products from the fuel pellets. Important for the analysis of the process is the final axial distribution of relocated fuel mass obtained at the end of the calculation.

  1. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    Science.gov (United States)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued that corrects some prior limitations and improves control of propagated errors, along with established code verification processes. Code validation will use new and improved low Earth orbit (LEO) environmental models together with a recently improved International Space Station (ISS) shield model to validate computational models and procedures against measured data aboard the ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  2. Development and validation of a mass casualty conceptual model.

    Science.gov (United States)

    Culley, Joan M; Effken, Judith A

    2010-03-01

    To develop and validate a conceptual model that provides a framework for the development and evaluation of information systems for mass casualty events. The model was designed based on extant literature and existing theoretical models. A purposeful sample of 18 experts validated the model. Open-ended questions, as well as a 7-point Likert scale, were used to measure expert consensus on the importance of each construct and its relationship in the model and the usefulness of the model to future research. Computer-mediated applications were used to facilitate a modified Delphi technique through which a panel of experts provided validation for the conceptual model. Rounds of questions continued until consensus was reached, as measured by an interquartile range (no more than 1 scale point for each item); stability (change in the distribution of responses less than 15% between rounds); and percent agreement (70% or greater) for indicator questions. Two rounds of the Delphi process were needed to satisfy the criteria for consensus or stability related to the constructs, relationships, and indicators in the model. The panel reached consensus or sufficient stability to retain all 10 constructs, 9 relationships, and 39 of 44 indicators. Experts viewed the model as useful (mean of 5.3 on a 7-point scale). Validation of the model provides the first step in understanding the context in which mass casualty events take place and identifying variables that impact outcomes of care. This study provides a foundation for understanding the complexity of mass casualty care, the roles that nurses play in mass casualty events, and factors that must be considered in designing and evaluating information-communication systems to support effective triage under these conditions.
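
    The consensus criteria quoted above (interquartile range, stability between rounds, and percent agreement) are straightforward to compute; the sketch below is one plausible Python implementation. The ratings are invented, and the choice of a histogram distance for the stability measure is an assumption, since the abstract does not specify how the change in distribution was quantified.

        import numpy as np

        def consensus_metrics(round1, round2, agree_threshold=6):
            """Illustrative consensus checks for one 7-point Likert item across
            two Delphi rounds. Thresholds follow the abstract: IQR <= 1 point,
            distribution shift < 15%, agreement >= 70%."""
            r1 = np.asarray(round1, dtype=float)
            r2 = np.asarray(round2, dtype=float)

            iqr = np.percentile(r2, 75) - np.percentile(r2, 25)

            # Stability: change in the response distribution between rounds,
            # measured here as half the L1 distance between normalised histograms.
            bins = np.arange(1, 9)                     # Likert categories 1..7
            h1 = np.histogram(r1, bins=bins)[0] / len(r1)
            h2 = np.histogram(r2, bins=bins)[0] / len(r2)
            shift = 0.5 * np.abs(h1 - h2).sum()

            # Percent agreement: share of experts rating at or above the threshold.
            agreement = np.mean(r2 >= agree_threshold)

            return {"iqr_ok": iqr <= 1.0, "stable": shift < 0.15, "agree_ok": agreement >= 0.70}

        # Hypothetical ratings from 18 experts for a single indicator, two rounds.
        print(consensus_metrics([5, 6, 7, 6, 5, 7, 6, 6, 5, 7, 6, 6, 7, 6, 5, 6, 7, 6],
                                [6, 6, 7, 6, 6, 7, 6, 6, 6, 7, 6, 6, 7, 6, 6, 6, 7, 6]))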

  3. On the usage of ultrasound computational models for decision making under ambiguity

    Science.gov (United States)

    Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron

    2018-04-01

    Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.
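
    The two comparison metrics named above, maximum amplitude and power spectral density, can be computed with standard signal-processing tools. The sketch below is one plausible implementation; the waveforms, sampling rate and the choice of a relative L2 spectral difference are assumptions for illustration, not the metrics actually coded at PNNL.

        import numpy as np
        from scipy.signal import welch

        def compare_waveforms(measured, simulated, fs):
            """Maximum-amplitude and power-spectral-density differences between
            a measured and a simulated ultrasonic waveform."""
            amp_diff = abs(measured.max() - simulated.max()) / abs(measured.max())

            f, psd_meas = welch(measured, fs=fs, nperseg=256)
            _, psd_sim = welch(simulated, fs=fs, nperseg=256)
            psd_diff = np.linalg.norm(psd_meas - psd_sim) / np.linalg.norm(psd_meas)
            return amp_diff, psd_diff

        # Hypothetical 5 MHz A-scans sampled at 100 MHz: a Gaussian-windowed tone
        # burst and a slightly weaker, slightly delayed simulated counterpart.
        fs = 100e6
        t = np.arange(2048) / fs
        measured = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 5e-6) ** 2) / (1e-6) ** 2)
        simulated = 0.95 * np.sin(2 * np.pi * 5e6 * (t - 0.1e-6)) * np.exp(-((t - 5.1e-6) ** 2) / (1e-6) ** 2)

        print(compare_waveforms(measured, simulated, fs))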

  4. Structural biomechanics of the craniomaxillofacial skeleton under maximal masticatory loading: Inferences and critical analysis based on a validated computational model.

    Science.gov (United States)

    Pakdel, Amir R; Whyne, Cari M; Fialkov, Jeffrey A

    2017-06-01

    The trend towards optimizing stabilization of the craniomaxillofacial skeleton (CMFS) with the minimum amount of fixation required to achieve union, and away from maximizing rigidity, requires a quantitative understanding of craniomaxillofacial biomechanics. This study uses computational modeling to quantify the structural biomechanics of the CMFS under maximal physiologic masticatory loading. Using an experimentally validated subject-specific finite element (FE) model of the CMFS, the patterns of stress and strain distribution as a result of physiological masticatory loading were calculated. The trajectories of the stresses were plotted to delineate compressive and tensile regimes over the entire CMFS volume. The lateral maxilla was found to be the primary vertical buttress under maximal bite force loading, with much smaller involvement of the naso-maxillary buttress. There was no evidence that the pterygo-maxillary region is a buttressing structure, counter to classical buttress theory. The stresses at the zygomatic sutures suggest that two-point fixation of zygomatic complex fractures may be sufficient for fixation under bite force loading. The current experimentally validated biomechanical FE model of the CMFS is a practical tool for in silico optimization of current practice techniques and may be used as a foundation for the development of design criteria for future technologies for the treatment of CMFS injury and disease.

  5. A combined sensitivity analysis and kriging surrogate modeling for early validation of health indicators

    International Nuclear Information System (INIS)

    Lamoureux, Benjamin; Mechbal, Nazih; Massé, Jean-Rémi

    2014-01-01

    To increase the dependability of complex systems, one solution is to assess their state of health continuously through the monitoring of variables sensitive to potential degradation modes. When computed in an operating environment, these variables, known as health indicators, are subject to many uncertainties. Hence, the stochastic nature of health assessment combined with the lack of data in design stages makes it difficult to evaluate the efficiency of a health indicator before the system enters into service. This paper introduces a method for early validation of health indicators during the design stages of a system development process. This method uses physics-based modeling and uncertainties propagation to create simulated stochastic data. However, because of the large number of parameters defining the model and its computation duration, the necessary runtime for uncertainties propagation is prohibitive. Thus, kriging is used to obtain low computation time estimations of the model outputs. Moreover, sensitivity analysis techniques are performed upstream to determine the hierarchization of the model parameters and to reduce the dimension of the input space. The validation is based on three types of numerical key performance indicators corresponding to the detection, identification and prognostic processes. After having introduced and formalized the framework of uncertain systems modeling and the different performance metrics, the issues of sensitivity analysis and surrogate modeling are addressed. The method is subsequently applied to the validation of a set of health indicators for the monitoring of an aircraft engine’s pumping unit
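
    To make the workflow concrete, the sketch below shows, under heavy simplification, how a kriging (Gaussian-process) surrogate can stand in for an expensive physics-based model during uncertainty propagation. The toy model, design size and input ranges are invented; the actual pumping-unit model, sensitivity analysis and performance indicators of the paper are not reproduced here.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        # Stand-in for an expensive physics-based health-indicator model, reduced
        # (e.g. by a prior sensitivity analysis) to two influential inputs.
        def expensive_model(x):
            return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

        rng = np.random.default_rng(0)
        X_train = rng.uniform(0.0, 1.0, size=(40, 2))     # small design of experiments
        y_train = expensive_model(X_train)

        # Kriging surrogate fitted to the design points.
        kernel = ConstantKernel() * RBF(length_scale=[0.2, 0.2])
        surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

        # Cheap uncertainty propagation: many Monte Carlo samples pushed through
        # the surrogate instead of the expensive model.
        X_mc = rng.uniform(0.0, 1.0, size=(10_000, 2))
        y_mc = surrogate.predict(X_mc)
        print(f"indicator mean = {y_mc.mean():.3f}, spread = {y_mc.std():.3f}")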

  6. Development validation and use of computer codes for inelastic analysis

    International Nuclear Information System (INIS)

    Jobson, D.A.

    1983-01-01

    A finite element scheme is a system which provides routines to carry out the operations that are common to all finite element programs. The list of items that can be provided as standard by a finite element scheme is surprisingly large, and the list provided by the UNCLE finite element scheme is unusually comprehensive. This presentation covers the following: construction of the program, setting up a finite element mesh, generation of coordinates, and incorporation of boundary and load conditions. Program validation was carried out through creep calculations performed using the CAUSE code. Program use is illustrated by calculating a typical inelastic analysis problem, which includes a computer model of the PFR intermediate heat exchanger.

  7. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the
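
    As an illustration of the manufactured-solutions idea mentioned above, the sketch below derives the forcing term for a 1D steady diffusion problem symbolically; the chosen solution and equation are arbitrary examples, not taken from the paper.

        import sympy as sp

        # Method of manufactured solutions for -d/dx( k du/dx ) = s(x):
        # pick a smooth "manufactured" u(x), derive the source s(x) analytically,
        # then feed s(x) and the exact boundary values to the code under test and
        # check that the numerical error decays at the expected rate under grid
        # refinement.
        x, k = sp.symbols("x k", positive=True)
        u_manufactured = sp.sin(sp.pi * x) + x**2

        source = sp.simplify(-sp.diff(k * sp.diff(u_manufactured, x), x))
        print("manufactured source term s(x) =", source)
        # -> k*(pi**2*sin(pi*x) - 2), which becomes the forcing term of the
        #    verification run, with the exact error known everywhere.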

  8. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  9. Computational Modeling of Micrometastatic Breast Cancer Radiation Dose Response

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Daniel L.; Debeb, Bisrat G. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Morgan Welch Inflammatory Breast Cancer Research Program and Clinic, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Thames, Howard D. [Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Woodward, Wendy A., E-mail: wwoodward@mdanderson.org [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Morgan Welch Inflammatory Breast Cancer Research Program and Clinic, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2016-09-01

    Purpose: Prophylactic cranial irradiation (PCI) involves giving radiation to the entire brain with the goals of reducing the incidence of brain metastasis and improving overall survival. Experimentally, we have demonstrated that PCI prevents brain metastases in a breast cancer mouse model. We developed a computational model to expand on and aid in the interpretation of our experimental results. Methods and Materials: MATLAB was used to develop a computational model of brain metastasis and PCI in mice. Model input parameters were optimized such that the model output would match the experimental number of metastases per mouse from the unirradiated group. An independent in vivo limiting dilution experiment was performed to validate the model. The effect of whole brain irradiation at different measurement points after tumor cells were injected was evaluated in terms of the incidence, number of metastases, and tumor burden and was then compared with the corresponding experimental data. Results: In the optimized model, the correlation between the number of metastases per mouse and the experimental fits was >95%. Our attempt to validate the model with a limiting dilution assay produced 99.9% correlation with respect to the incidence of metastases. The model accurately predicted the effect of whole-brain irradiation given 3 weeks after cell injection but substantially underestimated its effect when delivered 5 days after cell injection. The model further demonstrated that delaying whole-brain irradiation until the development of gross disease introduces a dose threshold that must be reached before a reduction in incidence can be realized. Conclusions: Our computational model of mouse brain metastasis and PCI correlated strongly with our experiments with unirradiated mice. The results further suggest that early treatment of subclinical disease is more effective than irradiating established disease.

  10. Preliminary experimentally-validated forced and mixed convection computational simulations of the Rotatable Buoyancy Tunnel

    International Nuclear Information System (INIS)

    Clifford, Corey E.; Kimber, Mark L.

    2015-01-01

    Although computational fluid dynamics (CFD) has not been directly utilized to perform safety analyses of nuclear reactors in the United States, several vendors are considering adopting commercial numerical packages for current and future projects. To ensure the accuracy of these computational models, it is imperative to validate the assumptions and approximations built into commercial CFD codes against physical data from flows analogous to those in modern nuclear reactors. To this end, researchers at Utah State University (USU) have constructed the Rotatable Buoyancy Tunnel (RoBuT) test facility, which is designed to provide flow and thermal validation data for CFD simulations of forced and mixed convection scenarios. In order to evaluate the ability of current CFD codes to capture the complex physics associated with these types of flows, a computational model of the RoBuT test facility is created using the ANSYS Fluent commercial CFD code. The numerical RoBuT model is analyzed at identical conditions to several experimental trials undertaken at USU. Each experiment is reconstructed numerically and evaluated with the second-order Reynolds stress model (RSM). Two different thermal boundary conditions at the heated surface of the RoBuT test section are investigated: constant temperature (isothermal) and constant surface heat flux (isoflux). Additionally, the fluid velocity at the inlet of the test section is varied in an effort to modify the relative importance of natural convection heat transfer from the heated wall of the RoBuT. Mean velocity, both in the streamwise and transverse directions, as well as components of the Reynolds stress tensor at three points downstream of the RoBuT test section inlet are compared to results obtained from experimental trials. Early computational results obtained from this research initiative are in good agreement with experimental data obtained from the RoBuT facility and both the experimental data and numerical method can be used

  11. A proposed framework for computational fluid dynamics code calibration/validation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1993-01-01

    The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terms such as "calibrated code," "validated code," and "validation experiment" are discussed, along with their shortcomings and criticisms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance.

  12. Geochemical databases. Part 1. Pmatch: a program to manage thermochemical data. Part 2. The experimental validation of geochemical computer models

    International Nuclear Information System (INIS)

    Pearson, F.J. Jr.; Avis, J.D.; Nilsson, K.; Skytte Jensen, B.

    1993-01-01

    This work was carried out under a cost-sharing contract with the European Atomic Energy Community in the framework of its programme on Management and Storage of Radioactive Wastes. Part 1: PMATCH, A Program to Manage Thermochemical Data, describes the development and use of a computer program by means of which new thermodynamic data from the literature may be referenced to a common frame and thereby become internally consistent with an existing database. The report presents the relevant thermodynamic expressions and discusses their use in the program. Where there are insufficient thermodynamic data to describe a species' behaviour under all conceivable conditions, the resulting problems are thoroughly discussed and the available data are handled by approximating expressions. Part 2: The Experimental Validation of Geochemical Computer Models presents the results of experimental investigations of the equilibria established in aqueous suspensions of mixtures of carbonate minerals (calcium, magnesium, manganese and europium carbonates), compared with theoretical calculations made by means of the geochemical JENSEN program. The study revealed that the geochemical computer program worked well and that its database was of sufficient validity. However, experimental difficulties could hardly be avoided when, as here, a gaseous component took part in the equilibria. Whereas the magnesium and calcium carbonates did not demonstrate mutual solid solubility, mixing manganese and calcium carbonates produced abnormal effects, resulting in a diminished solubility of both manganese and calcium. With tracer amounts of europium added to a suspension of calcite in sodium carbonate solutions, long-term experiments revealed a transition after 1-2 months, whereby the tracer became more strongly adsorbed onto calcite. The transition is interpreted as the nucleation and formation of a surface phase incorporating the species NaEu(CO₃)₂.

  13. N2A: a computational tool for modeling from neurons to algorithms

    Directory of Open Access Journals (Sweden)

    Fredrick eRothganger

    2014-01-01

    The exponential increase in available neural data has combined with the exponential growth in computing (Moore’s law) to create new opportunities to understand neural systems at large scale and high detail. The ability to produce large and sophisticated simulations has introduced unique challenges to neuroscientists. Computational models in neuroscience are increasingly broad efforts, often involving the collaboration of experts in different domains. Furthermore, the size and detail of models have grown to levels for which understanding the implications of variability and assumptions is no longer trivial. Here, we introduce the model design platform N2A which aims to facilitate the design and validation of biologically realistic models. N2A uses a hierarchical representation of neural information to enable the integration of models from different users. N2A streamlines computational validation of a model by natively implementing standard tools in sensitivity analysis and uncertainty quantification. The part-relationship representation allows both network-level analysis and dynamical simulations. We will demonstrate how N2A can be used in a range of examples, including a simple Hodgkin-Huxley cable model, basic parameter sensitivity of an 80/20 network, and the expression of the structural plasticity of a growing dendrite and stem cell proliferation and differentiation.

  14. An original piecewise model for computing energy expenditure from accelerometer and heart rate signals.

    Science.gov (United States)

    Romero-Ugalde, Hector M; Garnotel, M; Doron, M; Jallon, P; Charpentier, G; Franc, S; Huneker, E; Simon, C; Bonnet, S

    2017-07-28

    Activity energy expenditure (EE) plays an important role in healthcare; therefore, accurate EE measures are required. Currently available reference EE acquisition methods, such as doubly labeled water and indirect calorimetry, are complex, expensive, uncomfortable, and/or difficult to apply in real time. To overcome these drawbacks, the goal of this paper is to propose a model for computing EE in real time (minute-by-minute) from heart rate and accelerometer signals. The proposed model, which consists of an original branched model, uses heart rate signals for computing EE during moderate to vigorous physical activities and a linear combination of heart rate and counts per minute for computing EE during light to moderate physical activities. Model parameters were estimated from a given data set composed of 53 subjects performing 25 different physical activities (light-, moderate- and vigorous-intensity), and validated using leave-one-subject-out. A different database (semi-controlled in-city circuit) was used in order to validate the versatility of the proposed model. Comparisons are made against linear and nonlinear models, which are also used for computing EE from accelerometer and/or HR signals. The proposed piecewise model leads to more accurate EE estimations ([Formula: see text], [Formula: see text] and [Formula: see text] J kg⁻¹ min⁻¹ and [Formula: see text], [Formula: see text], and [Formula: see text] J kg⁻¹ min⁻¹ on each validation database). This original approach, which is more comfortable and less expensive than the reference methods, allows accurate EE estimations in real time (minute-by-minute) during a large variety of physical activities. Therefore, this model may be used in applications such as computing the time that a given subject spent on light-intensity physical activities and on moderate to vigorous physical activities (binary classification accuracy of 0.8155).
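
    A minimal sketch of such a branched estimator is given below to make the structure concrete: heart rate alone drives the moderate-to-vigorous branch, and a linear combination of heart rate and counts per minute drives the light-to-moderate branch. All coefficients and the branching threshold are hypothetical placeholders, not the fitted values of the paper.

        def estimate_ee(hr_bpm, counts_per_min, hr_rest=70.0):
            """Illustrative branched energy-expenditure estimate in J kg-1 min-1."""
            hr_above_rest = max(hr_bpm - hr_rest, 0.0)

            if hr_above_rest > 40.0:
                # Moderate-to-vigorous branch: heart rate only.
                return 90.0 + 6.0 * hr_above_rest
            # Light-to-moderate branch: heart rate plus accelerometer counts.
            return 80.0 + 2.5 * hr_above_rest + 0.02 * counts_per_min

        # Minute-by-minute estimation over a stream of (heart rate, counts) samples.
        minutes = [(72, 150), (95, 2200), (130, 6500), (150, 8000)]
        print([round(estimate_ee(hr, cpm), 1) for hr, cpm in minutes])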

  15. Asymptotic optimality and efficient computation of the leave-subject-out cross-validation

    KAUST Repository

    Xu, Ganggang

    2012-12-01

    Although the leave-subject-out cross-validation (CV) has been widely used in practice for tuning parameter selection for various nonparametric and semiparametric models of longitudinal data, its theoretical property is unknown and solving the associated optimization problem is computationally expensive, especially when there are multiple tuning parameters. In this paper, by focusing on the penalized spline method, we show that the leave-subject-out CV is optimal in the sense that it is asymptotically equivalent to the empirical squared error loss function minimization. An efficient Newton-type algorithm is developed to compute the penalty parameters that optimize the CV criterion. Simulated and real data are used to demonstrate the effectiveness of the leave-subject-out CV in selecting both the penalty parameters and the working correlation matrix. © 2012 Institute of Mathematical Statistics.
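
    To show what leave-subject-out CV looks like operationally, the sketch below holds out whole subjects in turn and selects a penalty parameter by grid search; ridge regression is used as a stand-in for the penalized spline estimator of the paper, and the Newton-type algorithm it develops is not reproduced.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import LeaveOneGroupOut

        # Hypothetical longitudinal data: repeated measurements grouped by subject.
        rng = np.random.default_rng(1)
        n_subjects, n_obs = 20, 8
        subjects = np.repeat(np.arange(n_subjects), n_obs)
        X = rng.normal(size=(n_subjects * n_obs, 5))
        y = X @ np.array([1.0, -0.5, 0.0, 0.3, 0.0]) + rng.normal(0.0, 0.5, size=len(subjects))

        def leave_subject_out_score(alpha):
            """Mean squared prediction error with whole subjects held out in turn."""
            errors = []
            for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
                model = Ridge(alpha=alpha).fit(X[train], y[train])
                errors.append(np.mean((y[test] - model.predict(X[test])) ** 2))
            return np.mean(errors)

        # Select the penalty parameter that minimises the leave-subject-out criterion.
        alphas = [0.01, 0.1, 1.0, 10.0]
        print("selected penalty:", min(alphas, key=leave_subject_out_score))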

  16. Asymptotic optimality and efficient computation of the leave-subject-out cross-validation

    KAUST Repository

    Xu, Ganggang; Huang, Jianhua Z.

    2012-01-01

    Although the leave-subject-out cross-validation (CV) has been widely used in practice for tuning parameter selection for various nonparametric and semiparametric models of longitudinal data, its theoretical property is unknown and solving the associated optimization problem is computationally expensive, especially when there are multiple tuning parameters. In this paper, by focusing on the penalized spline method, we show that the leave-subject-out CV is optimal in the sense that it is asymptotically equivalent to the empirical squared error loss function minimization. An efficient Newton-type algorithm is developed to compute the penalty parameters that optimize the CV criterion. Simulated and real data are used to demonstrate the effectiveness of the leave-subject-out CV in selecting both the penalty parameters and the working correlation matrix. © 2012 Institute of Mathematical Statistics.

  17. Theory and Validation for the Collision Module

    DEFF Research Database (Denmark)

    Simonsen, Bo Cerup

    1999-01-01

    This report describes basic modelling principles, the theoretical background and validation examples for the Collision Module for the computer program DAMAGE.

  18. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.

  19. A novel patient-specific model to compute coronary fractional flow reserve.

    Science.gov (United States)

    Kwon, Soon-Sung; Chung, Eui-Chul; Park, Jin-Seo; Kim, Gook-Tae; Kim, Jun-Woo; Kim, Keun-Hong; Shin, Eun-Seok; Shim, Eun Bo

    2014-09-01

    The fractional flow reserve (FFR) is a widely used clinical index to evaluate the functional severity of coronary stenosis. A computer simulation method based on patients' computed tomography (CT) data is a plausible non-invasive approach for computing the FFR. This method can provide a detailed solution for the stenosed coronary hemodynamics by coupling computational fluid dynamics (CFD) with the lumped parameter model (LPM) of the cardiovascular system. In this work, we have implemented a simple computational method to compute the FFR. As this method uses only coronary arteries for the CFD model and includes only the LPM of the coronary vascular system, it provides simpler boundary conditions for the coronary geometry and is computationally more efficient than existing approaches. To test the efficacy of this method, we simulated a three-dimensional straight vessel using CFD coupled with the LPM. The computed results were compared with those of the LPM. To validate this method in terms of clinically realistic geometry, a patient-specific model of stenosed coronary arteries was constructed from CT images, and the computed FFR was compared with clinically measured results. We evaluated the effect of a model aorta on the computed FFR and compared this with a model without the aorta. Computationally, the model without the aorta was more efficient than that with the aorta, reducing the CPU time required for computing a cardiac cycle to 43.4%.
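
    For readers unfamiliar with the index, the fractional flow reserve is conventionally defined, under maximal hyperemia, as

        \mathrm{FFR} = \frac{\bar{P}_{d}}{\bar{P}_{a}},

    where \bar{P}_{d} is the mean coronary pressure distal to the stenosis and \bar{P}_{a} the mean aortic (proximal) pressure; values of roughly 0.75-0.80 or below are commonly taken to indicate a functionally significant lesion. (This is the standard clinical definition, not a statement from the abstract itself.)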

  20. Validation of a model to investigate the effects of modifying cardiovascular disease (CVD) risk factors on the burden of CVD: the rotterdam ischemic heart disease and stroke computer simulation (RISC) model.

    Science.gov (United States)

    van Kempen, Bob J H; Ferket, Bart S; Hofman, Albert; Steyerberg, Ewout W; Colkesen, Ersen B; Boekholdt, S Matthijs; Wareham, Nicholas J; Khaw, Kay-Tee; Hunink, M G Myriam

    2012-12-06

    We developed a Monte Carlo Markov model designed to investigate the effects of modifying cardiovascular disease (CVD) risk factors on the burden of CVD. Internal, predictive, and external validity of the model have not yet been established. The Rotterdam Ischemic Heart Disease and Stroke Computer Simulation (RISC) model was developed using data covering 5 years of follow-up from the Rotterdam Study. To prove 1) internal and 2) predictive validity, the incidences of coronary heart disease (CHD), stroke, CVD death, and non-CVD death simulated by the model over a 13-year period were compared with those recorded for 3,478 participants in the Rotterdam Study with at least 13 years of follow-up. 3) External validity was verified using 10 years of follow-up data from the European Prospective Investigation of Cancer (EPIC)-Norfolk study of 25,492 participants, for whom CVD and non-CVD mortality was compared. At year 5, the observed incidences (with simulated incidences in brackets) of CHD, stroke, and CVD and non-CVD mortality for the 3,478 Rotterdam Study participants were 5.30% (4.68%), 3.60% (3.23%), 4.70% (4.80%), and 7.50% (7.96%), respectively. At year 13, these percentages were 10.60% (10.91%), 9.90% (9.13%), 14.20% (15.12%), and 24.30% (23.42%). After recalibrating the model for the EPIC-Norfolk population, the 10-year observed (simulated) incidences of CVD and non-CVD mortality were 3.70% (4.95%) and 6.50% (6.29%). All observed incidences fell well within the 95% credibility intervals of the simulated incidences. We have confirmed the internal, predictive, and external validity of the RISC model. These findings provide a basis for analyzing the effects of modifying cardiovascular disease risk factors on the burden of CVD with the RISC model.

  1. Validation of a model to investigate the effects of modifying cardiovascular disease (CVD) risk factors on the burden of CVD: the rotterdam ischemic heart disease and stroke computer simulation (RISC) model

    Directory of Open Access Journals (Sweden)

    van Kempen Bob JH

    2012-12-01

    Background: We developed a Monte Carlo Markov model designed to investigate the effects of modifying cardiovascular disease (CVD) risk factors on the burden of CVD. Internal, predictive, and external validity of the model have not yet been established. Methods: The Rotterdam Ischemic Heart Disease and Stroke Computer Simulation (RISC) model was developed using data covering 5 years of follow-up from the Rotterdam Study. To prove 1) internal and 2) predictive validity, the incidences of coronary heart disease (CHD), stroke, CVD death, and non-CVD death simulated by the model over a 13-year period were compared with those recorded for 3,478 participants in the Rotterdam Study with at least 13 years of follow-up. 3) External validity was verified using 10 years of follow-up data from the European Prospective Investigation of Cancer (EPIC)-Norfolk study of 25,492 participants, for whom CVD and non-CVD mortality was compared. Results: At year 5, the observed incidences (with simulated incidences in brackets) of CHD, stroke, and CVD and non-CVD mortality for the 3,478 Rotterdam Study participants were 5.30% (4.68%), 3.60% (3.23%), 4.70% (4.80%), and 7.50% (7.96%), respectively. At year 13, these percentages were 10.60% (10.91%), 9.90% (9.13%), 14.20% (15.12%), and 24.30% (23.42%). After recalibrating the model for the EPIC-Norfolk population, the 10-year observed (simulated) incidences of CVD and non-CVD mortality were 3.70% (4.95%) and 6.50% (6.29%). All observed incidences fell well within the 95% credibility intervals of the simulated incidences. Conclusions: We have confirmed the internal, predictive, and external validity of the RISC model. These findings provide a basis for analyzing the effects of modifying cardiovascular disease risk factors on the burden of CVD with the RISC model.

  2. Test-driven verification/validation of model transformations

    Institute of Scientific and Technical Information of China (English)

    László LENGYEL; Hassan CHARAF

    2015-01-01

    Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformation verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the actual open issues in the field of verification/validation of model transformations.

  3. Validation of DYSTOOL for unsteady aerodynamic modeling of 2D airfoils

    Science.gov (United States)

    González, A.; Gomez-Iradi, S.; Munduate, X.

    2014-06-01

    From the point of view of wind turbine modeling, an important group of tools is based on blade element momentum (BEM) theory using 2D aerodynamic calculations on the blade elements. Due to the importance of this sectional computation of the blades, the National Renewable Wind Energy Center of Spain (CENER) developed DYSTOOL, an aerodynamic code for 2D airfoil modeling based on the Beddoes-Leishman model. The main focus here is related to the model parameters, whose values depend on the airfoil or the operating conditions. In this work, the values of the parameters are adjusted using available experimental or CFD data. The present document is mainly related to the validation of the results of DYSTOOL for 2D airfoils. The results of the computations have been compared with unsteady experimental data of the S809 and NACA0015 profiles. Some of the cases have also been modeled using the CFD code WMB (Wind Multi Block), within the framework of a collaboration with ACCIONA Windpower. The validation has been performed using pitch oscillations with different reduced frequencies, Reynolds numbers, amplitudes and mean angles of attack. The results have shown good agreement using this parameter-adjustment methodology. DYSTOOL has proven to be a promising tool for 2D airfoil unsteady aerodynamic modeling.

  4. Validation of DYSTOOL for unsteady aerodynamic modeling of 2D airfoils

    International Nuclear Information System (INIS)

    González, A; Gomez-Iradi, S; Munduate, X

    2014-01-01

    From the point of view of wind turbine modeling, an important group of tools is based on blade element momentum (BEM) theory using 2D aerodynamic calculations on the blade elements. Due to the importance of this sectional computation of the blades, the National Renewable Wind Energy Center of Spain (CENER) developed DYSTOOL, an aerodynamic code for 2D airfoil modeling based on the Beddoes-Leishman model. The main focus here is related to the model parameters, whose values depend on the airfoil or the operating conditions. In this work, the values of the parameters are adjusted using available experimental or CFD data. The present document is mainly related to the validation of the results of DYSTOOL for 2D airfoils. The results of the computations have been compared with unsteady experimental data of the S809 and NACA0015 profiles. Some of the cases have also been modeled using the CFD code WMB (Wind Multi Block), within the framework of a collaboration with ACCIONA Windpower. The validation has been performed using pitch oscillations with different reduced frequencies, Reynolds numbers, amplitudes and mean angles of attack. The results have shown good agreement using this parameter-adjustment methodology. DYSTOOL has proven to be a promising tool for 2D airfoil unsteady aerodynamic modeling.

  5. Towards a realistic approach to validation of reactive transport models for performance assessment

    International Nuclear Information System (INIS)

    Siegel, M.D.

    1993-01-01

    Performance assessment calculations are based on geochemical models that assume that interactions among radionuclides, rocks and groundwaters under natural conditions can be estimated or bounded by data obtained from laboratory-scale studies. The data include radionuclide distribution coefficients, measured in saturated batch systems of powdered rocks, and retardation factors measured in short-term column experiments. Traditional approaches to model validation cannot be applied in a straightforward manner to the simple reactive transport models that use these data. An approach to model validation in support of performance assessment is described in this paper. It is based on a recognition of different levels of model validity and is compatible with the requirements of current regulations for high-level waste disposal. Activities that are being carried out in support of this approach include (1) laboratory and numerical experiments to test the validity of important assumptions inherent in current performance assessment methodologies, (2) integrated transport experiments, and (3) development of a robust coupled reaction/transport code for sensitivity analyses using massively parallel computers.
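
    As background (a standard relation, not stated in the abstract), the distribution coefficient measured in batch experiments enters simple reactive transport models through the retardation factor

        R = 1 + \frac{\rho_{b}}{\theta}\, K_{d},

    where \rho_{b} is the bulk density of the porous medium, \theta its porosity (or volumetric water content), and K_{d} the sorption distribution coefficient. The validation question raised above is essentially whether such laboratory-derived coefficients remain meaningful when extrapolated to field conditions.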

  6. Sci—Thur AM: YIS - 09: Validation of a General Empirically-Based Beam Model for kV X-ray Sources

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Y. [CancerCare Manitoba (Canada); University of Calgary (Canada); Sommerville, M.; Johnstone, C.D. [San Diego State University (United States); Gräfe, J.; Nygren, I.; Jacso, F. [Tom Baker Cancer Centre (Canada); Khan, R.; Villareal-Barajas, J.E. [University of Calgary (Canada); Tom Baker Cancer Centre (Canada); Tambasco, M. [University of Calgary (Canada); San Diego State University (United States)

    2014-08-15

    Purpose: To present an empirically-based beam model for computing dose deposited by kilovoltage (kV) x-rays and validate it for radiographic, CT, CBCT, superficial, and orthovoltage kV sources. Method and Materials: We modeled a wide variety of imaging (radiographic, CT, CBCT) and therapeutic (superficial, orthovoltage) kV x-ray sources. The model characterizes spatial variations of the fluence and spectrum independently. The spectrum is derived by matching measured values of the half value layer (HVL) and nominal peak potential (kVp) to computationally-derived spectra while the fluence is derived from in-air relative dose measurements. This model relies only on empirical values and requires no knowledge of proprietary source specifications or other theoretical aspects of the kV x-ray source. To validate the model, we compared measured doses to values computed using our previously validated in-house kV dose computation software, kVDoseCalc. The dose was measured in homogeneous and anthropomorphic phantoms using ionization chambers and LiF thermoluminescent detectors (TLDs), respectively. Results: The maximum difference between measured and computed dose measurements was within 2.6%, 3.6%, 2.0%, 4.8%, and 4.0% for the modeled radiographic, CT, CBCT, superficial, and the orthovoltage sources, respectively. In the anthropomorphic phantom, the computed CBCT dose generally agreed with TLD measurements, with an average difference and standard deviation ranging from 2.4 ± 6.0% to 5.7 ± 10.3% depending on the imaging technique. Most (42/62) measured TLD doses were within 10% of computed values. Conclusions: The proposed model can be used to accurately characterize a wide variety of kV x-ray sources using only empirical values.
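
    The half value layer referred to above is normally obtained by interpolating a measured transmission curve; a minimal sketch of that calculation is given below, with invented filter thicknesses and air-kerma readings (the paper's own measurement procedure is not described in the abstract).

        import numpy as np

        def half_value_layer(thickness_mm, air_kerma):
            """Estimate the HVL by log-linear interpolation of a measured
            transmission curve (air kerma versus added filter thickness)."""
            thickness_mm = np.asarray(thickness_mm, dtype=float)
            transmission = np.asarray(air_kerma, dtype=float) / float(air_kerma[0])
            # ln(transmission) decreases with thickness, so reverse both arrays to
            # give np.interp an increasing abscissa, then find the ln(0.5) crossing.
            return float(np.interp(np.log(0.5), np.log(transmission)[::-1], thickness_mm[::-1]))

        # Hypothetical transmission measurements with added aluminium filtration.
        thickness = [0.0, 2.0, 4.0, 6.0, 8.0]       # mm Al
        kerma = [100.0, 71.0, 52.0, 39.0, 30.0]     # relative air-kerma readings
        print(f"HVL = {half_value_layer(thickness, kerma):.2f} mm Al")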

  7. International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics

    CERN Document Server

    DEVELOPMENTS IN RELIABLE COMPUTING

    1999-01-01

    The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biannually under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine highlighted plenary lectures and over 70 contributed talks were given. These figures indicate a large participation, which was partly caused by the attraction of the organizing country, Hungary, but the effective support system also contributed to the success. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB and by the József Attila University. Due to this funding, it was possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries. I...

  8. Lattice Boltzmann model capable of mesoscopic vorticity computation

    Science.gov (United States)

    Peng, Cheng; Guo, Zhaoli; Wang, Lian-Ping

    2017-11-01

    It is well known that standard lattice Boltzmann (LB) models allow the strain-rate components to be computed mesoscopically (i.e., through the local particle distributions) and as such possess a second-order accuracy in strain rate. This is one of the appealing features of the lattice Boltzmann method (LBM) which is of only second-order accuracy in hydrodynamic velocity itself. However, no known LB model can provide the same quality for vorticity and pressure gradients. In this paper, we design a multiple-relaxation time LB model on a three-dimensional 27-discrete-velocity (D3Q27) lattice. A detailed Chapman-Enskog analysis is presented to illustrate all the necessary constraints in reproducing the isothermal Navier-Stokes equations. The remaining degrees of freedom are carefully analyzed to derive a model that accommodates mesoscopic computation of all the velocity and pressure gradients from the nonequilibrium moments. This way of vorticity calculation naturally ensures a second-order accuracy, which is also proven through an asymptotic analysis. We thus show, with enough degrees of freedom and appropriate modifications, the mesoscopic vorticity computation can be achieved in LBM. The resulting model is then validated in simulations of a three-dimensional decaying Taylor-Green flow, a lid-driven cavity flow, and a uniform flow passing a fixed sphere. Furthermore, it is shown that the mesoscopic vorticity computation can be realized even with single relaxation parameter.

  9. Validation of numerical model for cook stove using Reynolds averaged Navier-Stokes based solver

    Science.gov (United States)

    Islam, Md. Moinul; Hasan, Md. Abdullah Al; Rahman, Md. Mominur; Rahaman, Md. Mashiur

    2017-12-01

    Biomass-fired cook stoves have, for many years, been the main cooking appliance for the rural people of developing countries. Several studies have been carried out to find efficient stove designs. In the present study, a numerical model of an improved household cook stove is developed to analyze the heat transfer and flow behavior of the gas during operation. The numerical model is validated against the experimental results. The model is computed using a non-premixed combustion model. The Reynolds-averaged Navier-Stokes (RANS) equations, together with the κ-ε model, govern the turbulent flow within the computational domain. The computational results are in good agreement with the experiment. The developed numerical model can be used to predict the effect of different biomasses on the efficiency of the cook stove.

  10. Consistency, Verification, and Validation of Turbulence Models for Reynolds-Averaged Navier-Stokes Applications

    Science.gov (United States)

    Rumsey, Christopher L.

    2009-01-01

    In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.

  11. Polarographic validation of chemical speciation models

    International Nuclear Information System (INIS)

    Duffield, J.R.; Jarratt, J.A.

    2001-01-01

    It is well established that the chemical speciation of an element in a given matrix, or system of matrices, is of fundamental importance in controlling the transport behaviour of the element. Therefore, to accurately understand and predict the transport of elements and compounds in the environment it is a requirement that both the identities and concentrations of trace element physico-chemical forms can be ascertained. These twin requirements present the analytical scientist with considerable challenges given the labile equilibria, the range of time scales (from nanoseconds to years) and the range of concentrations (ultra-trace to macro) that may be involved. As a result of this analytical variability, chemical equilibrium modelling has become recognised as an important predictive tool in chemical speciation analysis. However, this technique requires firm underpinning by the use of complementary experimental techniques for the validation of the predictions made. The work reported here has been undertaken with the primary aim of investigating possible methodologies that can be used for the validation of chemical speciation models. However, in approaching this aim, direct chemical speciation analyses have been made in their own right. Results will be reported and analysed for the iron(II)/iron(III)-citrate proton system (pH 2 to 10; total [Fe] = 3 mmol dm⁻³; total [citrate³⁻] = 10 mmol dm⁻³) in which equilibrium constants have been determined using glass electrode potentiometry, speciation is predicted using the PHREEQE computer code, and validation of predictions is achieved by determination of iron complexation and redox state with associated concentrations. (authors)

  12. Challenges of forest landscape modeling - simulating large landscapes and validating results

    Science.gov (United States)

    Hong S. He; Jian Yang; Stephen R. Shifley; Frank R. Thompson

    2011-01-01

    Over the last 20 years, we have seen a rapid development in the field of forest landscape modeling, fueled by both technological and theoretical advances. Two fundamental challenges have persisted since the inception of FLMs: (1) balancing realistic simulation of ecological processes at broad spatial and temporal scales with computing capacity, and (2) validating...

  13. Approaches to Validation of Models for Low Gravity Fluid Behavior

    Science.gov (United States)

    Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad

    2005-01-01

    This paper details the authors' experiences with the validation of computer models to predict low gravity fluid behavior. It reviews the literature on low gravity fluid behavior as a starting point for developing a baseline set of test cases. It examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues seem to be that: most of the data are described by empirical correlations rather than fundamental relations; detailed measurements of the flow field have not been made; free surface shapes are observed, but through thick plastic cylinders, and are therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days, but the zero-gravity time available has been only seconds.

  14. Integrated computation model of lithium-ion battery subject to nail penetration

    International Nuclear Information System (INIS)

    Liu, Binghe; Yin, Sha; Xu, Jun

    2016-01-01

    Highlights: • A coupled model to predict the battery penetration process is established. • A penetration test is designed and used to validate the computational model. • Governing factors of the penetration-induced short circuit are discussed. • Critical battery safety design guidance is suggested. - Abstract: The nail penetration of lithium-ion batteries (LIBs) has become a standard battery safety evaluation method to mimic the potential penetration of a foreign object into the LIB, which can lead to an internal short circuit with catastrophic consequences, such as thermal runaway, fire, and explosion. To provide a safe, time-efficient, and cost-effective method for studying the nail penetration problem, an integrated computational method that considers the mechanical, electrochemical, and thermal behaviors of the jellyroll was developed using a coupled 3D mechanical model, a 1D battery model, and a short circuit model. The integrated model, along with the sub-models, was validated and shown to agree reasonably well with experimental test data. In addition, a comprehensive quantitative analysis of governing factors, e.g., shapes, sizes, and displacements of nails, states of charge, and penetration speeds, was conducted. The proposed computational framework for LIB nail penetration is first introduced. This framework can provide an accurate prediction of the time history profile of battery voltage, temperature, and mechanical behavior. The factors that affect the behavior of the jellyroll under nail penetration are discussed systematically. Results provide a solid foundation for future in-depth studies on LIB nail penetration mechanisms and safety design.

  15. Contribution to the physical validation of computer programs for reactor cores flows

    International Nuclear Information System (INIS)

    Bourgeois, Pierre

    1998-01-01

    A κ-ε turbulence model was implemented in the FLICA computer code, which is devoted to the thermal-hydraulic analysis of nuclear reactor core flows. Foreseen applications concern single-phase flows in rod bundles. First-moment closure principles are recalled. Low-Reynolds-number wall effects are accounted for by a two-layer approach; a method for computing the distance to the wall was developed for this purpose. Two two-layer κ-ε models are proposed and studied: the classical isotropic version, based on the Boussinesq hypothesis, and an original anisotropic version which assumes a non-linear relation between the Reynolds stresses and the mean deformation rate. The second permits the treatment of anisotropy, which is encountered in non-circular ducts in general, and in rod bundles in particular. The turbulence solver is linearized implicit and based on a finite volume method: a VF9 scheme for the viscous part, an upwind scheme for the convection of passive scalars, and a centered scheme for the source terms. Several numerical simulations on 2D and 3D configurations were conducted (standard validation tests and an industrial application). (author)

  16. Validation of a Computational Fluid Dynamics (CFD) Code for Supersonic Axisymmetric Base Flow

    Science.gov (United States)

    Tucker, P. Kevin

    1993-01-01

    The ability to accurately and efficiently calculate the flow structure in the base region of bodies of revolution in supersonic flight is a significant step in CFD code validation for applications ranging from base heating for rockets to drag for projectiles. The FDNS code is used to compute such a flow and the results are compared to benchmark-quality experimental data. Flowfield calculations are presented for a cylindrical afterbody at M = 2.46 and angle of attack α = 0. Grid-independent solutions are compared to mean velocity profiles in the separated wake area and downstream of the reattachment point. Additionally, quantities such as turbulent kinetic energy and shear layer growth rates are compared to the data. Finally, the computed base pressures are compared to the measured values. An effort is made to elucidate the role of turbulence models in the flowfield predictions. The level of turbulent eddy viscosity, and its origin, are used to contrast the various turbulence models and compare the results to the experimental data.

  17. Implementation and validation of the condensation model for containment hydrogen distribution studies

    International Nuclear Information System (INIS)

    Ravva, Srinivasa Rao; Iyer, Kannan N.; Gupta, S.K.; Gaikwad, Avinash J.

    2014-01-01

    Highlights: • A condensation model based on diffusion was implemented in FLUENT. • Validation of the condensation model for H2 distribution studies was performed. • Multi-component diffusion is used in the present work. • An appropriate grid and turbulence model were identified. - Abstract: This paper aims at the implementation details of a condensation model in the CFD code FLUENT and its validation so that it can be used in performing containment hydrogen distribution studies. In such studies, computational fluid dynamics simulations are necessary for obtaining accurate predictions. While steam condensation plays an important role, commercial CFD codes such as FLUENT do not have an in-built condensation model. Therefore, a condensation model was developed and implemented in the FLUENT code through user defined functions (UDFs) for the sink terms in the mass, momentum, energy and species balance equations, together with associated turbulence quantities, viz., kinetic energy and dissipation rate. The implemented model was validated against the ISP-47 test of the TOSQAN facility using the standard wall functions and enhanced wall treatment approaches. The most suitable grid size and turbulence model for low-density gas (He) distribution studies are brought out in this paper.
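
    The abstract does not give the sink-term formulation, so purely as an illustration, a diffusion-layer (film) condensation flux of the kind often used for wall condensation is sketched below; the function name, the crude mass-transfer coefficient and the suction factor are assumptions, not the authors' UDF.

        # Illustrative diffusion-driven wall condensation flux for a near-wall cell:
        # m'' = rho * h_m * (Y_bulk - Y_wall) / (1 - Y_wall), a film-model form in which
        # the (1 - Y_wall) factor accounts for the suction effect of condensation.
        def condensation_mass_flux(rho, D_steam, delta, Y_bulk, Y_wall):
            """rho: mixture density, D_steam: steam diffusivity, delta: wall distance
            of the first cell, Y_bulk/Y_wall: steam mass fractions."""
            h_m = D_steam / delta  # crude mass-transfer coefficient
            return rho * h_m * (Y_bulk - Y_wall) / (1.0 - Y_wall)

        # The per-cell sink (kg/s) fed to the mass and species equations would then be
        # flux * wall_face_area, with matching sinks for momentum and latent-heat energy.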

  18. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol

  19. Understanding Student Teachers’ Behavioural Intention to Use Technology: Technology Acceptance Model (TAM) Validation and Testing

    Directory of Open Access Journals (Sweden)

    Kung-Teck, Wong

    2013-01-01

    This study sets out to validate and test the Technology Acceptance Model (TAM) in the context of Malaysian student teachers’ integration of technology in teaching and learning. To establish factorial validity, data collected from 302 respondents were tested against the TAM using confirmatory factor analysis (CFA), and structural equation modelling (SEM) was used for model comparison and hypotheses testing. The goodness-of-fit test of the analysis shows partial support for the applicability of the TAM in a Malaysian context. Overall, the TAM accounted for 37.3% of the variance in intention to use technology among student teachers, and of the five hypotheses formulated, four are supported. Perceived usefulness is a significant influence on attitude towards computer use and behavioural intention. Perceived ease of use significantly influences perceived usefulness, and finally, behavioural intention is found to be influenced by attitude towards computer use. The findings of this research contribute to the literature by validating the TAM in the Malaysian context and provide several prominent implications for the research and practice of technology integration development.

  20. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The variety of uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  1. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    Requirements validation is a critical task in any engineering project. The confrontation of stakeholders with static requirements models is not enough, since stakeholders with non-computer science education are not able to discover all the inter-dependencies between the elicited requirements. Eve...... requirements, where the system to be built must explicitly support the interaction between people within a pervasive cooperative workflow execution. A case study from a real project is used to illustrate the proposed approach....

  2. Unit testing, model validation, and biological simulation.

    Science.gov (United States)

    Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
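
    To make the notion of a model validation test concrete, here is a minimal sketch in the style of a pytest test; the quantity, the reference value and the function names are hypothetical and are not taken from the OpenWorm code base.

        # Hypothetical model-validation test: the simulated resting membrane potential
        # must agree with a published reference value within an agreed tolerance.
        EXPECTED_RESTING_POTENTIAL_MV = -70.0   # placeholder literature value
        TOLERANCE_MV = 5.0

        def simulate_resting_potential():
            # stand-in for a call into the real simulation code
            return -68.2

        def test_resting_potential_matches_experiment():
            simulated = simulate_resting_potential()
            assert abs(simulated - EXPECTED_RESTING_POTENTIAL_MV) <= TOLERANCE_MV

    Run under a test runner such as pytest, a failing comparison immediately flags any model change that breaks agreement with the data.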

  3. Computational Modeling | Bioenergy | NREL

    Science.gov (United States)

    NREL uses computational modeling to support its bioenergy research. Plant cell walls are the source of biofuels and biomaterials, and NREL's modeling investigates their properties; quantum mechanical models are used to study chemical and electronic properties and processes in order to reduce conversion barriers.

  4. Provenance for Runtime Workflow Steering and Validation in Computational Seismology

    Science.gov (United States)

    Spinuso, A.; Krischer, L.; Krause, A.; Filgueira, R.; Magnoni, F.; Muraleedharan, V.; David, M.

    2014-12-01

    Provenance systems may be offered by modern workflow engines to collect metadata about the data transformations at runtime. If combined with effective visualisation and monitoring interfaces, these provenance recordings can speed up the validation process of an experiment, suggesting interactive or automated interventions with immediate effects on the lifecycle of a workflow run. For instance, in the field of computational seismology, if we consider research applications performing long lasting cross correlation analysis and high resolution simulations, the immediate notification of logical errors and the rapid access to intermediate results, can produce reactions which foster a more efficient progress of the research. These applications are often executed in secured and sophisticated HPC and HTC infrastructures, highlighting the need for a comprehensive framework that facilitates the extraction of fine grained provenance and the development of provenance aware components, leveraging the scalability characteristics of the adopted workflow engines, whose enactment can be mapped to different technologies (MPI, Storm clusters, etc). This work looks at the adoption of W3C-PROV concepts and data model within a user driven processing and validation framework for seismic data, supporting also computational and data management steering. Validation needs to balance automation with user intervention, considering the scientist as part of the archiving process. Therefore, the provenance data is enriched with community-specific metadata vocabularies and control messages, making an experiment reproducible and its description consistent with the community understandings. Moreover, it can contain user defined terms and annotations. The current implementation of the system is supported by the EU-Funded VERCE (http://verce.eu). It provides, as well as the provenance generation mechanisms, a prototypal browser-based user interface and a web API built on top of a NoSQL storage

  5. Multilaboratory particle image velocimetry analysis of the FDA benchmark nozzle model to support validation of computational fluid dynamics simulations.

    Science.gov (United States)

    Hariharan, Prasanna; Giarra, Matthew; Reddy, Varun; Day, Steven W; Manning, Keefe B; Deutsch, Steven; Stewart, Sandy F C; Myers, Matthew R; Berman, Michael R; Burgreen, Greg W; Paterson, Eric G; Malinauskas, Richard A

    2011-04-01

    This study is part of a FDA-sponsored project to evaluate the use and limitations of computational fluid dynamics (CFD) in assessing blood flow parameters related to medical device safety. In an interlaboratory study, fluid velocities and pressures were measured in a nozzle model to provide experimental validation for a companion round-robin CFD study. The simple benchmark nozzle model, which mimicked the flow fields in several medical devices, consisted of a gradual flow constriction, a narrow throat region, and a sudden expansion region where a fluid jet exited the center of the nozzle with recirculation zones near the model walls. Measurements of mean velocity and turbulent flow quantities were made in the benchmark device at three independent laboratories using particle image velocimetry (PIV). Flow measurements were performed over a range of nozzle throat Reynolds numbers (Re(throat)) from 500 to 6500, covering the laminar, transitional, and turbulent flow regimes. A standard operating procedure was developed for performing experiments under controlled temperature and flow conditions and for minimizing systematic errors during PIV image acquisition and processing. For laminar (Re(throat)=500) and turbulent flow conditions (Re(throat)≥3500), the velocities measured by the three laboratories were similar with an interlaboratory uncertainty of ∼10% at most of the locations. However, for the transitional flow case (Re(throat)=2000), the uncertainty in the size and the velocity of the jet at the nozzle exit increased to ∼60% and was very sensitive to the flow conditions. An error analysis showed that by minimizing the variability in the experimental parameters such as flow rate and fluid viscosity to less than 5% and by matching the inlet turbulence level between the laboratories, the uncertainties in the velocities of the transitional flow case could be reduced to ∼15%. The experimental procedure and flow results from this interlaboratory study (available

  6. Sensitivity analysis of a validated subject-specific finite element model of the human craniofacial skeleton.

    Science.gov (United States)

    Szwedowski, T D; Fialkov, J; Whyne, C M

    2011-01-01

    Developing a more complete understanding of the mechanical response of the craniofacial skeleton (CFS) to physiological loads is fundamental to improving treatment for traumatic injuries, reconstruction due to neoplasia, and deformities. Characterization of the biomechanics of the CFS is challenging due to its highly complex structure and heterogeneity, motivating the utilization of experimentally validated computational models. As such, the objective of this study was to develop, experimentally validate, and parametrically analyse a patient-specific finite element (FE) model of the CFS to elucidate a better understanding of the factors that are of intrinsic importance to the skeletal structural behaviour of the human CFS. An FE model of a cadaveric craniofacial skeleton was created from subject-specific computed tomography data. The model was validated based on bone strain measurements taken under simulated physiological-like loading through the masseter and temporalis muscles (which are responsible for the majority of craniofacial physiologic loading due to mastication). The baseline subject-specific model using locally defined cortical bone thicknesses produced the strongest correlation to the experimental data (r2 = 0.73). Large effects on strain patterns arising from small parametric changes in cortical thickness suggest that the very thin bony structures present in the CFS are crucial to characterizing the local load distribution in the CFS accurately.

  7. Computer-aided and predictive models for design of controlled release of pesticides

    DEFF Research Database (Denmark)

    Suné, Nuria Muro; Gani, Rafiqul

    2004-01-01

    In the field of pesticide controlled release technology, a computer based model that can predict the delivery of the Active Ingredient (AI) from fabricated units is important for purposes of product design and marketing. A model for the release of an AI from a microcapsule device is presented...... in this paper, together with a specific case study application to highlight its scope and significance. The paper also addresses the need for predictive models and proposes a computer aided modelling framework for achieving it through the development and introduction of reliable and predictive constitutive...... models. A group-contribution based model for one of the constitutive variables (AI solubility in polymers) is presented together with examples of application and validation....

  8. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  9. Design of experiments in medical physics: Application to the AAA beam model validation.

    Science.gov (United States)

    Dufreneix, S; Legrand, C; Di Bartolo, C; Bremaud, M; Mesgouez, J; Tiplica, T; Autret, D

    2017-09-01

    The purpose of this study is to evaluate the usefulness of the design of experiments in the analysis of multiparametric problems related to quality assurance in radiotherapy. The main motivation is to use this statistical method to optimize the quality assurance processes in the validation of beam models. Considering the Varian Eclipse system, eight parameters with several levels were selected: energy, MLC, depth, X, Y1 and Y2 jaw dimensions, wedge and wedge jaw. A Taguchi table was used to define 72 validation tests. Measurements were conducted in water using a CC04 on a TrueBeam STx, a TrueBeam Tx, a Trilogy and a 2300IX accelerator matched by the vendor. Dose was computed using the AAA algorithm. The same raw data was used for all accelerators during the beam modelling. The mean difference between computed and measured doses was 0.1±0.5% for all beams and all accelerators with a maximum difference of 2.4% (under the 3% tolerance level). For all beams, the measured doses were within 0.6% for all accelerators. The energy was found to be an influencing parameter but the deviations observed were smaller than 1% and not considered clinically significant. Designs of experiment can help define the optimal measurement set to validate a beam model. The proposed method can be used to identify the prognostic factors of dose accuracy. The beam models were validated for the 4 accelerators which were found dosimetrically equivalent even though the accelerator characteristics differ. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
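
    As an illustration of how such a validation table can be reduced, the short sketch below tabulates computed-versus-measured deviations and flags any case outside the 3% tolerance; the case names and dose values are invented placeholders, not data from the study.

        # Illustrative reduction of a design-of-experiments validation table
        # (hypothetical data): flag any computed-vs-measured deviation above 3%.
        import statistics

        TOLERANCE_PCT = 3.0
        results = [
            # (case id, measured dose [Gy], computed dose [Gy])
            ("6MV_10x10_d10", 0.662, 0.658),
            ("6MV_wedge_d5",  0.715, 0.728),
            ("18MV_5x20_d20", 0.431, 0.433),
        ]

        deviations = [100.0 * (calc - meas) / meas for _, meas, calc in results]
        failures = [cid for (cid, _, _), dev in zip(results, deviations)
                    if abs(dev) > TOLERANCE_PCT]

        print(f"mean deviation: {statistics.mean(deviations):+.2f}%")
        print("out of tolerance:", failures or "none")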

  10. Computational design and experimental validation of new thermal barrier systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin [Louisiana State Univ., Baton Rouge, LA (United States)

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high temperature material design method which can be used to assist Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the development of the computational simulation method. We have applied the ab initio density functional theory (DFT) method and molecular dynamics simulations to screen top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validations, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  11. Depth-Averaged Non-Hydrostatic Hydrodynamic Model Using a New Multithreading Parallel Computing Method

    Directory of Open Access Journals (Sweden)

    Ling Kang

    2017-03-01

    Compared to the hydrostatic hydrodynamic model, the non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations. The model’s low computational efficiency severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block decomposition computation is utilized: the original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique. Two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread lock technique are used to achieve synchronous communication between sub-threads. The validity of the model was verified by solitary wave propagation experiments over a flat bottom and a slope, followed by two sinusoidal wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency and save 20%–40% of the computation time compared to serial computing. The parallel acceleration rate and acceleration efficiency are approximately 1.45 and 72%, respectively. The parallel computing method makes a contribution to the popularization of non-hydrostatic models.
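
    As a rough illustration of the two-subdomain synchronization described above, the sketch below uses a barrier plus a lock in Python; the update logic is a placeholder and this is a simplified stand-in, not the authors' implementation.

        # Two sub-threads advance their subdomains, then exchange virtual-boundary
        # values behind a barrier before starting the next time step (illustrative).
        import threading

        N_STEPS = 100
        barrier = threading.Barrier(2)
        halo = {"left": None, "right": None}   # virtual-boundary storage
        lock = threading.Lock()

        def advance_subdomain(name, neighbour):
            for step in range(N_STEPS):
                boundary_value = step            # stand-in for the real block update
                with lock:
                    halo[name] = boundary_value
                barrier.wait()                   # both blocks finished the step
                with lock:
                    _ = halo[neighbour]          # read the neighbour's boundary values
                barrier.wait()                   # both blocks finished the exchange

        threads = [threading.Thread(target=advance_subdomain, args=("left", "right")),
                   threading.Thread(target=advance_subdomain, args=("right", "left"))]
        for t in threads: t.start()
        for t in threads: t.join()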

  12. Reactor core modeling practice: Operational requirements, model characteristics, and model validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1997-01-01

    The physical models implemented in power plant simulators have greatly increased in performance and complexity in recent years. This process has been enabled by the ever increasing computing power available at affordable prices. This paper describes this process from several angles: First the operational requirements which are more critical from the point of view of model performance, both for normal and off-normal operating conditions; A second section discusses core model characteristics in the light of the solutions implemented by Thomson Training and Simulation (TT and S) in several full-scope simulators recently built and delivered for Dutch, German, and French nuclear power plants; finally we consider the model validation procedures, which are of course an integral part of model development, and which are becoming more and more severe as performance expectations increase. As a conclusion, it may be asserted that in the core modeling field, as in other areas, the general improvement in the quality of simulation codes has resulted in a fairly rapid convergence towards mainstream engineering-grade calculations. This is remarkable performance in view of the stringent real-time requirements which the simulation codes must satisfy as well as the extremely wide range of operating conditions that they are called upon to cover with good accuracy. (author)

  13. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Model Driven Engineering (MDE) is an emerging approach to software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling permits giving a syntactic structure to source and target models. However, semantic requirements also have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM based transformations. Adopting a logic programming based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and post-conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  14. Validation of models for the analysis of the transient behavior of metallic fast reactor fuel

    International Nuclear Information System (INIS)

    Kramer, J.M.; Hughes, T.H.; Gruber, E.E.

    1989-01-01

    The Integral Fast Reactor (IFR) concept being developed at Argonne National Laboratory has prompted a renewed interest in U-Pu-Zr metal alloys as a fuel for sodium-cooled fast reactors. Part of the attractiveness of the IFR concept is the improvement in reactor safety margins through inherent features of a metal-fueled LMR core. In order to demonstrate these safety margins it is necessary to have computer codes available to analyze the detailed response of metallic fuel to a wide range of accident initiators. Two of the codes that play a key role in assessing this response are the STARS fission gas behavior code and the FPIN2 fuel pin mechanics code. Verification and validation are two important components in the development of models and computer codes. Verification demonstrates through comparison of calculations with analytical solutions that the methodology and algorithms correctly solve the equations that govern the phenomena being modeled. Validation, on the other hand, demonstrates through comparison with data that the phenomena are being modeled correctly. Both components are necessary in order to have the confidence to extrapolate the calculations to reactor accident conditions. This paper presents the results of recent progress in the validation of models for the analysis of the behavior of metallic fast reactor fuel. 9 refs., 7 figs

  15. Computer Modeling of Radiation Portal Monitors for Homeland Security Applications

    International Nuclear Information System (INIS)

    Pagh, Richard T.; Kouzes, Richard T.; McConn, Ronald J.; Robinson, Sean M.; Schweppe, John E.; Siciliano, Edward R.

    2005-01-01

    Radiation Portal Monitors (RPMs) are currently being used at our nation's borders to detect potential nuclear threats. At the Pacific Northwest National Laboratory (PNNL), realistic computer models of RPMs are being developed to simulate the screening of vehicles and cargo. Detailed models of the detection equipment, vehicles, cargo containers, cargos, and radioactive sources are being used to determine the optimal configuration of detectors. These models can also be used to support work to optimize alarming algorithms so that they maximize sensitivity for items of interest while minimizing nuisance alarms triggered by legitimate radioactive material in the commerce stream. Proposed next-generation equipment is also being modeled to quantify performance and capability improvements to detect potential nuclear threats. A discussion of the methodology used to perform computer modeling for RPMs will be provided. In addition, the efforts to validate models used to perform these scenario analyses will be described. Finally, areas where improved modeling capability is needed will be discussed as a guide to future development efforts

  16. Simple computational modeling for human extracorporeal irradiation using the BNCT facility of the RA-3 Reactor

    International Nuclear Information System (INIS)

    Farias, Ruben; Gonzalez, S.J.; Bellino, A.; Sztenjberg, M.; Pinto, J.; Thorp, Silvia I.; Gadan, M.; Pozzi, Emiliano; Schwint, Amanda E.; Heber, Elisa M.; Trivillin, V.A.; Zarza, Leandro G.; Estryk, Guillermo; Miller, M.; Bortolussi, S.; Soto, M.S.; Nigg, D.W.

    2009-01-01

    We present a simple computational model of the RA-3 reactor developed using the Monte Carlo transport code MCNP. The model parameters are adjusted in order to reproduce experimentally measured points in air, and the source validation is performed in an acrylic phantom. Performance analysis is carried out using computational models of animal extracorporeal irradiation in liver and lung. Analysis is also performed inside a neutron-shielded receptacle used for the irradiation of rats with a model of hepatic metastases. The computational model reproduces the experimental behavior in all the analyzed cases with a maximum difference of 10 percent. (author)

  17. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    Science.gov (United States)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.
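
    For reference, the mixture-averaged thermal diffusion (Soret) correction of the Chapman-Cowling type is commonly written, in LaTeX notation, as a species drift velocity of the form

        V_k^{T} = -\frac{D_k\,\theta_k}{X_k}\,\frac{\nabla T}{T},

    where D_k is the mixture-averaged diffusion coefficient, \theta_k the thermal diffusion ratio and X_k the mole fraction of species k; the exact notation used in the paper may differ.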

  18. Arterial waveguide model for shear wave elastography: implementation and in vitro validation

    Science.gov (United States)

    Vaziri Astaneh, Ali; Urban, Matthew W.; Aquino, Wilkins; Greenleaf, James F.; Guddati, Murthy N.

    2017-07-01

    Arterial stiffness is found to be an early indicator of many cardiovascular diseases. Among various techniques, shear wave elastography has emerged as a promising tool for estimating local arterial stiffness through the observed dispersion of guided waves. In this paper, we develop efficient models for the computational simulation of guided wave dispersion in arterial walls. The models are capable of considering fluid-loaded tubes, immersed in fluid or embedded in a solid, which are encountered in in vitro/ex vivo, and in vivo experiments. The proposed methods are based on judiciously combining Fourier transformation and finite element discretization, leading to a significant reduction in computational cost while fully capturing complex 3D wave propagation. The developed methods are implemented in open-source code, and verified by comparing them with significantly more expensive, fully 3D finite element models. We also validate the models using the shear wave elastography of tissue-mimicking phantoms. The computational efficiency of the developed methods indicates the possibility of being able to estimate arterial stiffness in real time, which would be beneficial in clinical settings.

  19. The European computer model for optronic system performance prediction (ECOMOS)

    Science.gov (United States)

    Keßler, Stefan; Bijl, Piet; Labarre, Luc; Repasi, Endre; Wittenstein, Wolfgang; Bürsing, Helge

    2017-10-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defence and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE. In this paper, the central idea of ECOMOS is exposed. The overall software structure and the underlying models are shown and elucidated. The status of the project development is given as well as a short discussion of validation tests and an outlook on the future potential of simulation for sensor assessment.

  20. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  1. Non-Linear Slosh Damping Model Development and Validation

    Science.gov (United States)

    Yang, H. Q.; West, Jeff

    2015-01-01

    Propellant tank slosh dynamics are typically represented by a mechanical model of a spring, mass and damper. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially-filled smooth wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is a potential source of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only in the linear regime where the slosh amplitude is small. With the increase of slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and to quantify the dependence of slosh damping on slosh amplitude. Accurately predicting the extremely low damping value of a smooth wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool: one must resolve thin boundary layers near the wall and limit numerical damping to a minimum. This computational study demonstrates that with proper grid resolution, CFD can indeed accurately predict the low damping physics of smooth walls in the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreement. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of the slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship. This discovery can
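
    The amplitude dependence described above can be summarised by a piecewise relation of the form below (illustrative notation; the fitted coefficients are not given in the abstract):

        \zeta(A) = \begin{cases} \zeta_0, & A \le A_{cr} \\ \zeta_0 + k\,(A - A_{cr}), & A > A_{cr} \end{cases}

    where \zeta_0 is the constant small-amplitude damping ratio, A the slosh amplitude and A_{cr} the critical amplitude beyond which damping grows linearly with amplitude.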

  2. Mesh influence on the fire computer modeling in nuclear power plants

    Directory of Open Access Journals (Sweden)

    D. Lázaro

    2018-04-01

    Fire computer models allow the consequences of real fire scenarios to be studied. Their use in nuclear power plants has increased with the new regulations that apply risk-informed, performance-based methods to the analysis and design of fire safety solutions. The selection of the cell side factor is very important in these kinds of models: the mesh must establish a compromise between the geometry adjustment, the resolution of the equations and the computation times. This paper studies the impact of several cell sizes, using the fire computer model FDS, to evaluate their relative effect on the final simulation results. To do so, we have employed several scenarios of interest for nuclear power plants. Conclusions offer relevant data for users and show some cell sizes that can be selected to guarantee the quality of the simulations and reduce the uncertainty of the results.
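
    For context (this rule of thumb is standard FDS practice rather than a result of this paper), cell-size selection is often judged against the characteristic fire diameter, written in LaTeX notation as

        D^* = \left( \frac{\dot{Q}}{\rho_\infty c_p T_\infty \sqrt{g}} \right)^{2/5},

    with the cell size \delta x typically chosen so that D^*/\delta x falls roughly between 4 and 16; finer ratios resolve the plume better at a higher computational cost.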

  3. A New Perspective for the Calibration of Computational Predictor Models.

    Energy Technology Data Exchange (ETDEWEB)

    Crespo, Luis Guillermo

    2014-11-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
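
    A minimal sketch of the IPM idea for a linear-in-parameters model follows: choose coefficients and the smallest half-width such that every observation lies inside the predicted interval. The data are synthetic and the formulation is far simpler than the one in the report.

        # Interval predictor model sketch: minimize the half-width w subject to all
        # observations lying in [a*x + b - w, a*x + b + w].  Variables: a, b, w.
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 20)
        y = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, x.size)   # synthetic observations

        c = [0.0, 0.0, 1.0]                                # objective: minimize w
        ones = np.ones_like(x)
        A_ub = np.vstack([np.column_stack([ x,  ones, -ones]),    # a*x + b - w <= y
                          np.column_stack([-x, -ones, -ones])])   # -(a*x + b) - w <= -y
        b_ub = np.concatenate([y, -y])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None), (None, None), (0.0, None)])
        a, b, w = res.x
        print(f"interval model: y in a*x + b +/- w with a={a:.2f}, b={b:.2f}, w={w:.2f}")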

  4. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.

  5. Numerical validation of selected computer programs in nonlinear analysis of steel frame exposed to fire

    Science.gov (United States)

    Maślak, Mariusz; Pazdanowski, Michał; Woźniczka, Piotr

    2018-01-01

    Validation of fire resistance for the same steel frame bearing structure is performed here using three different numerical models, i.e. a bar one prepared in the SAFIR environment, and two 3D models developed within the framework of Autodesk Simulation Mechanical (ASM) and an alternative one developed in the environment of the Abaqus code. The results of the computer simulations performed are compared with the experimental results obtained previously, in a laboratory fire test, on a structure having the same characteristics and subjected to the same heating regimen. Comparison of the experimental and numerically determined displacement evolution paths for selected nodes of the considered frame during the simulated fire exposure constitutes the basic criterion applied to evaluate the validity of the numerical results obtained. The experimental and numerically determined estimates of critical temperature specific to the considered frame and related to the limit state of bearing capacity in fire have been verified as well.

  6. Validation of vibration-dissociation coupling models in hypersonic non-equilibrium separated flows

    Science.gov (United States)

    Shoev, G.; Oblapenko, G.; Kunova, O.; Mekhonoshina, M.; Kustova, E.

    2018-03-01

    The validation of recently developed models of vibration-dissociation coupling is discussed in application to numerical solutions of the Navier-Stokes equations in a two-temperature approximation for a binary N2/N flow. Vibrational-translational relaxation rates are computed using the Landau-Teller formula generalized for strongly non-equilibrium flows obtained in the framework of the Chapman-Enskog method. Dissociation rates are calculated using the modified Treanor-Marrone model, taking into account the dependence of the model parameter on the vibrational state. The solutions are compared to those obtained using the traditional Landau-Teller and Treanor-Marrone models, and it is shown that for high-enthalpy flows, the traditional and recently developed models can give significantly different results. The computed heat flux and pressure on the surface of a double cone are in good agreement with experimental data available in the literature on low-enthalpy flow with strong thermal non-equilibrium. The computed heat flux on a double wedge qualitatively agrees with available data for high-enthalpy non-equilibrium flows. Different contributions to the heat flux calculated using rigorous kinetic theory methods are evaluated. The quantitative discrepancy between numerical and experimental data is discussed.
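
    For reference, the classical Landau-Teller relaxation that the generalized rates extend is usually written, in LaTeX notation, as

        \frac{d e_v}{d t} = \frac{e_v^{eq}(T) - e_v}{\tau_{VT}},

    where e_v is the specific vibrational energy, e_v^{eq}(T) its equilibrium value at the translational temperature T, and \tau_{VT} the vibrational-translational relaxation time.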

  7. Intercomparison and validation of computer codes for thermalhydraulic safety analysis of heavy water reactors

    International Nuclear Information System (INIS)

    2004-08-01

    Activities within the frame of the IAEA's Technical Working Group on Advanced Technologies for HWRs (TWG-HWR) are conducted in a project within the IAEA's subprogramme on nuclear power reactor technology development. The objective of the activities on HWRs is to foster, within the frame of the TWG-HWR, information exchange and co-operative research on technology development for current and future HWRs, with an emphasis on safety, economics and fuel resource sustainability. One of the activities recommended by the TWG-HWR was an international standard problem exercise entitled: Intercomparison and validation of computer codes for thermalhydraulics safety analyses. Intercomparison and validation of computer codes used in different countries for thermalhydraulics safety analyses will enhance the confidence in the predictions made by these codes. However, the intercomparison and validation exercise needs a set of reliable experimental data. The RD-14M Large-Loss Of Coolant Accident (LOCA) test B9401 simulating HWR LOCA behaviour that was conducted by Atomic Energy of Canada Ltd (AECL) was selected for this validation project. This report provides a comparison of the results obtained from six participating countries, utilizing four different computer codes. General conclusions are reached and recommendations made

  8. Development of computer models for fuel element behaviour in water reactors

    International Nuclear Information System (INIS)

    Gittus, J.H.

    1987-03-01

    Description of fuel behaviour during normal operation, transients and accident conditions has always represented a most challenging and important problem. Reliable predictions constitute a basic demand for safety-based calculations, for design purposes and for fuel performance. Therefore, computer codes based on deterministic and probabilistic models were developed. A fully comprehensive description of the phenomena is precluded by the great number of individual processes, involving physical, chemical, thermohydraulic and mechanical parameters, to be considered in a wide range of situations. In the case of fast thermal transients, predictive capability is limited by the kinetics of evolution of the system and its eventual dynamic behaviour. Evidently, probabilistic approaches are also limited by the sparsity and limited breadth of the empirical data base. Code predictions have to be evaluated against power reactor data and results from simulation experiments and, if possible, should include cross-validation of different codes and validation of sub-models. Progress on this subject is reviewed in this report, which completes the co-ordinated research programme on 'Development of Computer Models for Fuel Element Behaviour in Water Reactors' (D-COM), initiated under the auspices of the IAEA in 1981.

  9. Computational modeling of local hemodynamics phenomena: methods, tools and clinical applications

    International Nuclear Information System (INIS)

    Ponzini, R.; Rizzo, G.; Vergara, C.; Veneziani, A.; Morbiducci, U.; Montevecchi, F.M.; Redaelli, A.

    2009-01-01

    Local hemodynamics plays a key role in the onset of vessel wall pathophysiology, with peculiar blood flow structures (i.e. spatial velocity profiles, vortices, re-circulating zones, helical patterns and so on) characterizing the behavior of specific vascular districts. Thanks to advances in computer science, mathematical modeling and hardware performance, the study of local hemodynamics can today also make use of a virtual environment to perform hypothesis testing, product development, protocol design and methods validation that just a couple of decades ago would not have been thinkable. Computational fluid dynamics (CFD) appears to be more than a complementary partner to in vitro modeling and a possible substitute for animal models, furnishing a privileged environment for cheap, fast and reproducible data generation.

  10. Converting differential-equation models of biological systems to membrane computing.

    Science.gov (United States)

    Muniyandi, Ravie Chandren; Zin, Abdullah Mohd; Sanders, J W

    2013-12-01

    This paper presents a method to convert the deterministic, continuous representation of a biological system by ordinary differential equations into a non-deterministic, discrete membrane computation. The dynamics of the membrane computation is governed by rewrite rules operating at certain rates. This has the advantage of applying accurately to small systems, and of expressing rates of change that are determined locally, by region, but not necessarily globally. Such spatial information augments the standard differentiable approach to provide a more realistic model. A biological case study of the ligand-receptor network of the protein TGF-β is used to validate the effectiveness of the conversion method. It demonstrates the sense in which the behaviours and properties of the system are better preserved in the membrane computing model, suggesting that the proposed conversion method may prove useful for biological systems in particular. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
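
    As a loose illustration of the kind of conversion involved (not the paper's algorithm, which additionally localizes rules in membrane regions), a deterministic mass-action rate constant for a rule A + B -> C can be rescaled to a stochastic rule rate and applied with Gillespie-style sampling:

        # Convert a deterministic bimolecular rate constant into a stochastic rewrite-rule
        # rate and fire the rule A + B -> C stochastically (all values are placeholders).
        import random

        N_A = 6.022e23
        V = 1e-15                      # compartment volume in litres (hypothetical)
        k_det = 1e6                    # deterministic rate constant, 1/(M*s) (hypothetical)
        c_stoch = k_det / (N_A * V)    # stochastic rate constant for the bimolecular rule

        state = {"A": 100, "B": 80, "C": 0}
        t, t_end = 0.0, 1.0
        while t < t_end and state["A"] > 0 and state["B"] > 0:
            propensity = c_stoch * state["A"] * state["B"]
            t += random.expovariate(propensity)           # time to the next rule firing
            state["A"] -= 1; state["B"] -= 1; state["C"] += 1
        print(state)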

  11. Black liquor combustion validated recovery boiler modeling, five-year report

    Energy Technology Data Exchange (ETDEWEB)

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1996-08-01

    The objective of this project was to develop a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The project originated in October 1990 and was scheduled to run for four years. At that time, there was considerable emphasis on developing accurate predictions of the physical carryover of macroscopic particles of partially burnt black liquor and smelt droplets out of the furnace, since this was seen as the main cause of boiler plugging. This placed a major emphasis on gas flow patterns within the furnace and on the mass loss rates and swelling and shrinking rates of burning black liquor drops. As work proceeded on developing the recovery boiler furnace model, it became apparent that some recovery boilers encounter serious plugging problems even when physical carryover was minimal. After the original four-year period was completed, the project was extended to address this issue. The objective of the extended project was to improve the utility of the models by including the black liquor chemistry relevant to air emissions predictions and aerosol formation, and by developing the knowledge base and computational tools to relate furnace model outputs to fouling and plugging of the convective sections of the boilers. The work done to date includes CFD model development and validation, acquisition of information on black liquor combustion fundamentals and development of improved burning models, char bed model development, and model application and simplification.

  12. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at LHC has developed a baseline Computing Model addressing the needs of a computing system capable to operate in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on the achievement of the maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community

  13. Computational fluid dynamics modeling of mixed convection flows in buildings enclosures

    Energy Technology Data Exchange (ETDEWEB)

    Kayne, Alexander; Agarwal, Ramesh K. [Department of Mechanical Engineering and Materials Science, Washington University, St. Louis, MO 63130 (United States)

    2013-07-01

    In recent years Computational Fluid Dynamics (CFD) simulations are increasingly used to model the air circulation and temperature environment inside the rooms of residential and office buildings to gain insight into the relative energy consumptions of various HVAC systems for cooling/heating for climate control and thermal comfort. This requires accurate simulation of turbulent flow and heat transfer for various types of ventilation systems using the Reynolds-Averaged Navier-Stokes (RANS) equations of fluid dynamics. Large Eddy Simulation (LES) or Direct Numerical Simulation (DNS) of Navier-Stokes equations is computationally intensive and expensive for simulations of this kind. As a result, vast majority of CFD simulations employ RANS equations in conjunction with a turbulence model. In order to assess the modeling requirements (mesh, numerical algorithm, turbulence model etc.) for accurate simulations, it is critical to validate the calculations against the experimental data. For this purpose, we use three well known benchmark validation cases, one for natural convection in 2D closed vertical cavity, second for forced convection in a 2D rectangular cavity and the third for mixed convection in a 2D square cavity. The simulations are performed on a number of meshes of different density using a number of turbulence models. It is found that k-epsilon two-equation turbulence model with a second-order algorithm on a reasonable mesh gives the best results. This information is then used to determine the modeling requirements (mesh, numerical algorithm, turbulence model etc.) for flows in 3D enclosures with different ventilation systems. In particular two cases are considered for which the experimental data is available. These cases are (1) air flow and heat transfer in a naturally ventilated room and (2) airflow and temperature distribution in an atrium. Good agreement with the experimental data and computations of other investigators is obtained.

  14. A Test of the Validity of Inviscid Wall-Modeled LES

    Science.gov (United States)

    Redman, Andrew; Craft, Kyle; Aikens, Kurt

    2015-11-01

    Computational expense is one of the main deterrents to more widespread use of large eddy simulations (LES). As such, it is important to reduce computational costs whenever possible. In this vein, it may be reasonable to assume that high Reynolds number flows with turbulent boundary layers are inviscid when using a wall model. This assumption relies on the grid being too coarse to resolve either the viscous length scales in the outer flow or those near walls. We are not aware of other studies that have suggested or examined the validity of this approach. The inviscid wall-modeled LES assumption is tested here for supersonic flow over a flat plate on three different grids. Inviscid and viscous results are compared to those of another wall-modeled LES as well as experimental data - the results appear promising. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively, with the current LES application. Recommendations are presented as are future areas of research. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  15. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali; Vishwanath, Venkatram; Kumaran, Kalyan

    2017-01-01

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.

  16. High-resolution subject-specific mitral valve imaging and modeling: experimental and computational methods.

    Science.gov (United States)

    Toma, Milan; Bloodworth, Charles H; Einstein, Daniel R; Pierce, Eric L; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S

    2016-12-01

    The diversity of mitral valve (MV) geometries and multitude of surgical options for correction of MV diseases necessitates the use of computational modeling. Numerical simulations of the MV would allow surgeons and engineers to evaluate repairs, devices, procedures, and concepts before performing them and before moving on to more costly testing modalities. Constructing, tuning, and validating these models rely upon extensive in vitro characterization of valve structure, function, and response to change due to diseases. Micro-computed tomography (µCT) allows for unmatched spatial resolution for soft tissue imaging. However, it is still technically challenging to obtain an accurate geometry of the diastolic MV. We discuss here the development of a novel technique for treating MV specimens with glutaraldehyde fixative in order to minimize geometric distortions in preparation for µCT scanning. The technique provides a resulting MV geometry which is significantly more detailed in chordal structure, accurate in leaflet shape, and closer to its physiological diastolic geometry. In this paper, computational fluid-structure interaction (FSI) simulations are used to show the importance of more detailed subject-specific MV geometry with 3D chordal structure to simulate a proper closure validated against µCT images of the closed valve. Two computational models, before and after use of the aforementioned technique, are used to simulate closure of the MV.

  17. A broad view of model validation

    International Nuclear Information System (INIS)

    Tsang, C.F.

    1989-10-01

    The safety assessment of a nuclear waste repository requires the use of models. Such models need to be validated to ensure, as much as possible, that they are a good representation of the actual processes occurring in the real system. In this paper we attempt to take a broad view by reviewing step by step the modeling process and bringing out the need to validate every step of this process. This model validation includes not only comparison of modeling results with data from selected experiments, but also evaluation of procedures for the construction of conceptual models and calculational models as well as methodologies for studying data and parameter correlation. The need for advancing basic scientific knowledge in related fields, for multiple assessment groups, and for presenting our modeling efforts in open literature to public scrutiny is also emphasized. 16 refs

  18. Cross validation for the classical model of structured expert judgment

    International Nuclear Information System (INIS)

    Colson, Abigail R.; Cooke, Roger M.

    2017-01-01

    We update the 2008 TU Delft structured expert judgment database with data from 33 professionally contracted Classical Model studies conducted between 2006 and March 2015 to evaluate its performance relative to other expert aggregation models. We briefly review alternative mathematical aggregation schemes, including harmonic weighting, before focusing on linear pooling of expert judgments with equal weights and performance-based weights. Performance weighting outperforms equal weighting in all but 1 of the 33 studies in-sample. True out-of-sample validation is rarely possible for Classical Model studies, and cross validation techniques that split calibration questions into a training and test set are used instead. Performance weighting incurs an “out-of-sample penalty” and its statistical accuracy out-of-sample is lower than that of equal weighting. However, as a function of training set size, the statistical accuracy of performance-based combinations reaches 75% of the equal weight value when the training set includes 80% of calibration variables. At this point the training set is sufficiently powerful to resolve differences in individual expert performance. The information of performance-based combinations is double that of equal weighting when the training set is at least 50% of the set of calibration variables. Previous out-of-sample validation work used a Total Out-of-Sample Validity Index based on all splits of the calibration questions into training and test subsets, which is expensive to compute and includes small training sets of dubious value. As an alternative, we propose an Out-of-Sample Validity Index based on averaging the product of statistical accuracy and information over all training sets sized at 80% of the calibration set. Performance weighting outperforms equal weighting on this Out-of-Sample Validity Index in 26 of the 33 post-2006 studies; the probability of 26 or more successes on 33 trials if there were no difference between performance
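    The cross-validation scheme described above can be emulated with a simple resampling loop: repeatedly split the calibration questions into training and test subsets, derive the performance-based weights on the training set, and average a score (statistical accuracy times information in the Classical Model) over the test sets. The sketch below shows only that bookkeeping with a placeholder scoring function; it is not an implementation of Cooke's method.

```python
import numpy as np

def out_of_sample_index(questions, score_fn, train_frac=0.8, n_splits=200, seed=0):
    """Average an out-of-sample score over random train/test splits of the
    calibration questions. `score_fn(train, test)` stands in for a routine
    that builds performance-based weights on `train` and scores them on `test`."""
    rng = np.random.default_rng(seed)
    questions = list(questions)
    n_train = int(round(train_frac * len(questions)))
    scores = []
    for _ in range(n_splits):
        perm = rng.permutation(len(questions))
        train = [questions[i] for i in perm[:n_train]]
        test = [questions[i] for i in perm[n_train:]]
        scores.append(score_fn(train, test))
    return float(np.mean(scores))

# Toy usage: a dummy score; a real study would return statistical accuracy x information.
dummy_score = lambda train, test: float(len(test)) / (len(train) + len(test))
print(out_of_sample_index(range(16), dummy_score))
```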

  19. Verification, Validation and Credibility Assessment of a Computational Model of the Advanced Resistive Exercise Device (ARED)

    Science.gov (United States)

    Werner, C. R.; Humphreys, B. T.; Mulugeta, L.

    2014-01-01

    The Advanced Resistive Exercise Device (ARED) is the resistive exercise device used by astronauts on the International Space Station (ISS) to mitigate bone loss and muscle atrophy due to extended exposure to microgravity (micro-g). The Digital Astronaut Project (DAP) has developed a multi-body dynamics model of the ARED, together with biomechanics models, for use in spaceflight exercise physiology research and operations. In an effort to advance the maturity and credibility of the ARED model, the DAP performed a verification, validation and credibility (VV&C) assessment of the model in accordance with NASA-STD-7009 'Standards for Models and Simulations'.

  20. Computer code validation study of PWR core design system, CASMO-3/MASTER-α

    International Nuclear Information System (INIS)

    Lee, K. H.; Kim, M. H.; Woo, S. W.

    1999-01-01

    In this paper, the feasibility of the CASMO-3/MASTER-α nuclear design system was investigated for a commercial PWR core. The validation calculation was performed as follows. First, the accuracy of cross-section generation from the table set using the linear feedback model was estimated. Second, the results of CASMO-3/MASTER-α were compared with those of CASMO-3/NESTLE 5.02 for a few benchmark problems: microscopic cross sections computed from the table set were almost the same as those from CASMO-3, and there were only small differences between the calculated results of the two code systems. Third, the CASMO-3/MASTER-α calculation was repeated for the Younggwang Unit-3, Cycle-1 core, and the results were compared with the nuclear design report (NDR) and with the uncertainty analysis results of KAERI. The uncertainty analysis results were found to be reliable, because the results agreed with each other. It was concluded that the use of the nuclear design system CASMO-3/MASTER-α was validated for commercial PWR cores.

  1. Hybrid automata models of cardiac ventricular electrophysiology for real-time computational applications.

    Science.gov (United States)

    Andalam, Sidharta; Ramanna, Harshavardhan; Malik, Avinash; Roop, Parthasarathi; Patel, Nitish; Trew, Mark L

    2016-08-01

    Virtual heart models have been proposed for closed loop validation of safety-critical embedded medical devices, such as pacemakers. These models must react in real-time to off-the-shelf medical devices. Real-time performance can be obtained by implementing models in computer hardware, and methods of compiling classes of Hybrid Automata (HA) onto FPGA have been developed. Models of ventricular cardiac cell electrophysiology have been described using HA which capture the complex nonlinear behavior of biological systems. However, many models that have been used for closed-loop validation of pacemakers are highly abstract and do not capture important characteristics of the dynamic rate response. We developed a new HA model of cardiac cells which captures dynamic behavior and we implemented the model in hardware. This potentially enables modeling the heart with over 1 million dynamic cells, making the approach ideal for closed loop testing of medical devices.

  2. Validation of A Global Hydrological Model

    Science.gov (United States)

    Doell, P.; Lehner, B.; Kaspar, F.; Vassolo, S.

    Freshwater availability has been recognized as a global issue, and its consistent quantification not only in individual river basins but also at the global scale is required to support the sustainable use of water. The Global Hydrology Model WGHM, which is a submodel of the global water use and availability model WaterGAP 2, computes surface runoff, groundwater recharge and river discharge at a spatial resolution of 0.5°. WGHM is based on the best global data sets currently available, including a newly developed drainage direction map and a data set of wetlands, lakes and reservoirs. It calculates both natural and actual discharge by simulating the reduction of river discharge by human water consumption (as computed by the water use submodel of WaterGAP 2). WGHM is calibrated against observed discharge at 724 gauging stations (representing about 50% of the global land area) by adjusting a parameter of the soil water balance. It not only computes the long-term average water resources but also water availability indicators that take into account the interannual and seasonal variability of runoff and discharge. The reliability of the model results is assessed by comparing observed and simulated discharges at the calibration stations and at selected other stations. We conclude that reliable results can be obtained for basins of more than 20,000 km². In particular, the 90% reliable monthly discharge is simulated well. However, there is a tendency for semi-arid and arid basins to be modeled less satisfactorily than humid ones, which is partially due to neglecting river channel losses and evaporation of runoff from small ephemeral ponds in the model. Also, the hydrology of highly developed basins with large artificial storages, basin transfers and irrigation schemes cannot be simulated well. The seasonality of discharge in snow-dominated basins is overestimated by WGHM, and if a snow-dominated basin is uncalibrated, discharge is likely to be underestimated.

  3. Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.R.

    1998-11-01

    As computing assumes a more central role in managing the nuclear stockpile, the consequences of an erroneous computer simulation could be severe. Computational failures are common in other endeavors and have caused project failures, significant economic loss, and loss of life. This report examines the causes of software failure and proposes steps to mitigate them. A formal verification and validation program for scientific software is recommended and described.

  4. A paradigm for modeling and computation of gas dynamics

    Science.gov (United States)

    Xu, Kun; Liu, Chang

    2017-02-01

    In the continuum flow regime, the Navier-Stokes (NS) equations are usually used for the description of gas dynamics. On the other hand, the Boltzmann equation is applied for rarefied flow. These two equations are based on distinguishable modeling scales for the flow physics. Fortunately, due to the scale separation, i.e., the hydrodynamic and kinetic scales, both the Navier-Stokes equations and the Boltzmann equation are applicable in their respective domains. However, in real science and engineering applications, there may not be such a distinctive scale separation. For example, around a hypersonic flying vehicle, the flow physics in different regions may correspond to different regimes, where the local Knudsen number can change by several orders of magnitude. With such a variation of flow physics, a governing equation that varies continuously from the kinetic Boltzmann modeling to the hydrodynamic Navier-Stokes dynamics should theoretically be used for its efficient description. However, due to the difficulty of directly modeling flow physics at scales between the kinetic and hydrodynamic ones, there is basically no reliable theory or valid governing equation to cover the whole transition regime, except by always resolving the flow physics down to the mean free path scale, as in the direct Boltzmann solver and the Direct Simulation Monte Carlo (DSMC) method. In fact, the exact scale at which the NS equations remain valid is an unresolved problem, especially in small Reynolds number cases. Computational fluid dynamics (CFD) is usually based on the numerical solution of partial differential equations (PDEs), and it targets the recovery of the exact solution of the PDEs as the mesh size and time step converge to zero. This methodology can hardly be applied to solve the multiple-scale problem efficiently because there is no such complete PDE for flow physics across a continuous variation of scales. For the non-equilibrium flow study, the direct

  5. Medical image computing and computer-assisted intervention - MICCAI 2005. Proceedings; Pt. 1

    International Nuclear Information System (INIS)

    Duncan, J.S.; Gerig, G.

    2005-01-01

    The two-volume set LNCS 3749 and LNCS 3750 constitutes the refereed proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2005, held in Palm Springs, CA, USA, in October 2005. Based on rigorous peer reviews the program committee selected 237 carefully revised full papers from 632 submissions for presentation in two volumes. The first volume includes all the contributions related to image analysis and validation, vascular image segmentation, image registration, diffusion tensor image analysis, image segmentation and analysis, clinical applications - validation, imaging systems - visualization, computer assisted diagnosis, cellular and molecular image analysis, physically-based modeling, robotics and intervention, medical image computing for clinical applications, and biological imaging - simulation and modeling. The second volume collects the papers related to robotics, image-guided surgery and interventions, image registration, medical image computing, structural and functional brain analysis, model-based image analysis, image-guided intervention: simulation, modeling and display, and image segmentation and analysis. (orig.)

  6. Medical image computing and computer-assisted intervention - MICCAI 2005. Proceedings; Pt. 2

    International Nuclear Information System (INIS)

    Duncan, J.S.; Yale Univ., New Haven, CT; Gerig, G.

    2005-01-01

    The two-volume set LNCS 3749 and LNCS 3750 constitutes the refereed proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2005, held in Palm Springs, CA, USA, in October 2005. Based on rigorous peer reviews the program committee selected 237 carefully revised full papers from 632 submissions for presentation in two volumes. The first volume includes all the contributions related to image analysis and validation, vascular image segmentation, image registration, diffusion tensor image analysis, image segmentation and analysis, clinical applications - validation, imaging systems - visualization, computer assisted diagnosis, cellular and molecular image analysis, physically-based modeling, robotics and intervention, medical image computing for clinical applications, and biological imaging - simulation and modeling. The second volume collects the papers related to robotics, image-guided surgery and interventions, image registration, medical image computing, structural and functional brain analysis, model-based image analysis, image-guided intervention: simulation, modeling and display, and image segmentation and analysis. (orig.)

  7. Medical image computing and computer-assisted intervention - MICCAI 2005. Proceedings; Pt. 1

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, J.S. [Yale Univ., New Haven, CT (United States). Dept. of Biomedical Engineering and Diagnostic Radiology]; Gerig, G. (eds.) [North Carolina Univ., Chapel Hill (United States). Dept. of Computer Science]

    2005-07-01

    The two-volume set LNCS 3749 and LNCS 3750 constitutes the refereed proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2005, held in Palm Springs, CA, USA, in October 2005. Based on rigorous peer reviews the program committee selected 237 carefully revised full papers from 632 submissions for presentation in two volumes. The first volume includes all the contributions related to image analysis and validation, vascular image segmentation, image registration, diffusion tensor image analysis, image segmentation and analysis, clinical applications - validation, imaging systems - visualization, computer assisted diagnosis, cellular and molecular image analysis, physically-based modeling, robotics and intervention, medical image computing for clinical applications, and biological imaging - simulation and modeling. The second volume collects the papers related to robotics, image-guided surgery and interventions, image registration, medical image computing, structural and functional brain analysis, model-based image analysis, image-guided intervention: simulation, modeling and display, and image segmentation and analysis. (orig.)

  8. Medical image computing and computer-assisted intervention - MICCAI 2005. Proceedings; Pt. 2

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, J.S. [Yale Univ., New Haven, CT (United States). Dept. of Biomedical Engineering; Yale Univ., New Haven, CT (United States). Dept. of Diagnostic Radiology]; Gerig, G. (eds.) [North Carolina Univ., Chapel Hill, NC (United States). Dept. of Computer Science]

    2005-07-01

    The two-volume set LNCS 3749 and LNCS 3750 constitutes the refereed proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2005, held in Palm Springs, CA, USA, in October 2005. Based on rigorous peer reviews the program committee selected 237 carefully revised full papers from 632 submissions for presentation in two volumes. The first volume includes all the contributions related to image analysis and validation, vascular image segmentation, image registration, diffusion tensor image analysis, image segmentation and analysis, clinical applications - validation, imaging systems - visualization, computer assisted diagnosis, cellular and molecular image analysis, physically-based modeling, robotics and intervention, medical image computing for clinical applications, and biological imaging - simulation and modeling. The second volume collects the papers related to robotics, image-guided surgery and interventions, image registration, medical image computing, structural and functional brain analysis, model-based image analysis, image-guided intervention: simulation, modeling and display, and image segmentation and analysis. (orig.)

  9. Statistical validation of engineering and scientific models : bounds, calibration, and extrapolation.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Hills, Richard Guy (New Mexico State University, Las Cruces, NM)

    2005-04-01

    Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.

  10. Computational models of neuromodulation.

    Science.gov (United States)

    Fellous, J M; Linster, C

    1998-05-15

    Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulations on their computational and functional roles rather than on anatomical or chemical criteria. We review the main framework in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.

  11. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    In this paper, a new model validation procedure for a logistic regression model is presented. First, we present a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for the assessment of the performance of a given model by using an example taken from a management study.
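    As a generic illustration of such a procedure (not the specific measures proposed in the paper), one can hold out part of the data, refit the logistic model, and report quantitative performance measures such as discrimination (AUC) and the Brier score; the data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

# Synthetic example data standing in for the management-study data set.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.0, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
p_test = model.predict_proba(X_test)[:, 1]

print("AUC   :", round(roc_auc_score(y_test, p_test), 3))     # discrimination
print("Brier :", round(brier_score_loss(y_test, p_test), 3))  # accuracy of predicted probabilities
```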

  12. Prediction and Validation of Heat Release Direct Injection Diesel Engine Using Multi-Zone Model

    Science.gov (United States)

    Anang Nugroho, Bagus; Sugiarto, Bambang; Prawoto; Shalahuddin, Lukman

    2014-04-01

    The objective of this study is to develop a simulation model capable of predicting the heat release of diesel combustion accurately with efficient computation time. A multi-zone packet model has been applied to solve the combustion phenomena inside the diesel cylinder. The model formulations are presented first, and the numerical results are then validated on a single-cylinder direct injection diesel engine at various engine speeds and injection timings. The model was found to be promising for fulfilling the objective above.

  13. Development and test validation of a computational scheme for high-fidelity fluence estimations of the Swiss BWRs

    International Nuclear Information System (INIS)

    Vasiliev, A.; Wieselquist, W.; Ferroukhi, H.; Canepa, S.; Heldt, J.; Ledergerber, G.

    2011-01-01

    One of the current objectives within reactor analysis related projects at the Paul Scherrer Institut is the establishment of a comprehensive computational methodology for fast neutron fluence (FNF) estimations of reactor pressure vessels (RPV) and internals for both PWRs and BWRs. In the recent past, such an integral calculational methodology based on the CASMO-4/SIMULATE-3/MCNPX system of codes was developed for PWRs and validated against RPV scraping tests. Based on the very satisfactory validation results, the methodology was recently applied for predictive FNF evaluations of a Swiss PWR to support the national nuclear safety inspectorate in the framework of life-time estimations. Today, focus at PSI is on developing a corresponding advanced methodology for high-fidelity FNF estimations of BWR reactors. In this paper, the preliminary steps undertaken in that direction are presented. To start, the concepts of the PWR computational scheme and its transfer/adaptation to BWRs are outlined. Then, the modelling of a Swiss BWR characterized by very heterogeneous core designs is presented along with preliminary sensitivity studies carried out to assess the level of detail required for the complex core region. Finally, a first validation test case is presented on the basis of two dosimeter monitors irradiated during two recent cycles of the given BWR reactor. The achieved computational results show a satisfactory agreement with the measured dosimeter data and thereby illustrate the feasibility of applying the PSI FNF computational scheme also to BWRs. Further sensitivity/optimization studies are nevertheless necessary in order to consolidate the scheme and to continuously increase the fidelity and reliability of the BWR FNF estimations. (author)

  14. Validation of MCNP and ORIGEN-S 3-D computational model for reactivity predictions during BR2 operation

    International Nuclear Information System (INIS)

    Kalcheva, S.; Koonen, E.; Ponsard, B.

    2005-01-01

    The Belgian Material Test Reactor (MTR) BR2 is a strongly heterogeneous, high-flux engineering test reactor at SCK-CEN (Centre d'Etude de l'energie Nucleaire) in Mol, operating at a thermal power of 60 to 100 MW. It deploys highly enriched uranium, water-cooled concentric-plate fuel elements, positioned inside a beryllium reflector with a complex hyperboloid arrangement of test holes. The objective of this paper is the validation of an MCNP and ORIGEN-S 3D model for reactivity predictions of the entire BR2 core during reactor operation. We employ the Monte Carlo code MCNP-4C for evaluating the effective multiplication factor k_eff and the 3D space-dependent specific power distribution. The 1D code ORIGEN-S is used for calculation of isotopic fuel depletion versus burn-up and for preparation of a database (DB) with depleted fuel compositions. The approach taken is to evaluate the 3D power distribution at each time step and, along with the DB, to evaluate the 3D isotopic fuel depletion at the next step and to deduce the corresponding shim rod positions of the reactor operation. The capabilities of both codes are fully exploited without constraints on the number of involved isotope depletion chains or increase of the computational time. The reactor has a complex operation, with important shutdowns between cycles, and its reactivity is strongly influenced by poisons, mainly ³He and ⁶Li from the beryllium reflector, and by the burnable absorbers ¹⁴⁹Sm and ¹⁰B in the fresh UAlx fuel. Our computational predictions for the shim rod positions at various restarts are within 0.5 $ (β_eff = 0.0072). (author)

  15. Verification and Validation of a Three-Dimensional Generalized Composite Material Model

    Science.gov (United States)

    Hoffarth, Canio; Harrington, Joseph; Rajan, Subramaniam D.; Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Blankenhorn, Gunther

    2015-01-01

    A general purpose orthotropic elasto-plastic computational constitutive material model has been developed to improve predictions of the response of composites subjected to high velocity impact. The three-dimensional orthotropic elasto-plastic composite material model is being implemented initially for solid elements in LS-DYNA as MAT213. In order to accurately represent the response of a composite, experimental stress-strain curves are utilized as input, allowing for a more general material model that can be used on a variety of composite applications. The theoretical details are discussed in a companion paper. This paper documents the implementation, verification and qualitative validation of the material model using the T800-F3900 fiber/resin composite material

  16. Rater reliability and concurrent validity of the Keyboard Personal Computer Style instrument (K-PeCS).

    Science.gov (United States)

    Baker, Nancy A; Cook, James R; Redfern, Mark S

    2009-01-01

    This paper describes the inter-rater and intra-rater reliability, and the concurrent validity of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficients (ICC)=.90; intra-rater: ICC=.92]. Most individual items on the K-PeCS had from good to excellent reliability, although six items fell below ICC=.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.
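    The intra-class correlation coefficients reported above can be computed from a subjects-by-raters score matrix with a standard two-way random-effects ANOVA decomposition. The sketch below implements ICC(2,1) in the Shrout-Fleiss sense for illustration; the ratings are made up and are not the K-PeCS data.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater,
    for an (n_subjects x n_raters) score matrix (Shrout & Fleiss)."""
    y = np.asarray(scores, dtype=float)
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)                                    # per subject
    col_means = y.mean(axis=0)                                    # per rater
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)          # between-subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)          # between-raters MS
    sse = np.sum((y - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                               # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ratings: 6 keyboard users scored by 3 raters on one item.
ratings = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [1, 2, 1], [4, 4, 5]]
print(round(icc_2_1(ratings), 2))
```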

  17. Electromagnetic Physics Models for Parallel Computing Architectures

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Apostolakis, J; Aurora, A; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S Y; Lima, G; Duhem, L

    2016-01-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well. (paper)

  18. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

    One widely used definition of validation is the quantitative testing of the performance of a model through the comparison of model predictions to independent sets of observations from the system being simulated. The ability to show that the model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include the establishment of validity for the postulates embodied in the model and the test of assumptions used to select postulates for the model. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  19. Advanced computational modeling for in vitro nanomaterial dosimetry.

    Science.gov (United States)

    DeLoid, Glen M; Cohen, Joel M; Pyrgiotakis, Georgios; Pirela, Sandra V; Pal, Anoop; Liu, Jiying; Srebric, Jelena; Demokritou, Philip

    2015-10-24

    Accurate and meaningful dose metrics are a basic requirement for in vitro screening to assess potential health risks of engineered nanomaterials (ENMs). Correctly and consistently quantifying what cells "see" during an in vitro exposure requires standardized preparation of stable ENM suspensions, accurate characterization of agglomerate sizes and effective densities, and predictive modeling of mass transport. Earlier transport models provided a marked improvement over administered concentration or total mass, but included assumptions that could produce sizable inaccuracies, most notably that all particles at the bottom of the well are adsorbed or taken up by cells, which would drive transport downward, resulting in overestimation of deposition. Here we present the development, validation and results of two robust computational transport models. Both three-dimensional computational fluid dynamics (CFD) and a newly-developed one-dimensional Distorted Grid (DG) model were used to estimate delivered dose metrics for industry-relevant metal oxide ENMs suspended in culture media. Both models allow simultaneous modeling of full size distributions for polydisperse ENM suspensions, and provide deposition metrics as well as concentration metrics over the extent of the well. The DG model also emulates the biokinetics at the particle-cell interface using a Langmuir isotherm, governed by a user-defined dissociation constant, K(D), and allows modeling of ENM dissolution over time. Dose metrics predicted by the two models were in remarkably close agreement. The DG model was also validated by quantitative analysis of flash-frozen, cryosectioned columns of ENM suspensions. Results of simulations based on agglomerate size distributions differed substantially from those obtained using mean sizes. The effect of cellular adsorption on delivered dose was negligible for K(D) values consistent with non-specific binding (> 1 nM), whereas smaller values (≤ 1 nM) typical of specific high

  20. Validation of Bayesian analysis of compartmental kinetic models in medical imaging.

    Science.gov (United States)

    Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M

    2016-10-01

    Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed-form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and covariance of those estimates are determined using the classical non-linear least squares approach. Posteriors obtained using methods proposed in this work are accurate as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard non-linear least-square methods fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). The results of this work validate the new methods using computer simulations of FDG kinetics. Results show that in situations where the classical approach fails in accurate estimation of uncertainty, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended for different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
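    The posterior sampling described above can be illustrated with a self-contained random-walk Metropolis sketch for a simple one-compartment uptake model with Gaussian noise; the model, prior, noise level, and data below are hypothetical and much simpler than the FDG kinetics in the paper.

```python
import numpy as np

def model(t, a, k):
    """One-compartment uptake curve (illustrative stand-in for a kinetic model)."""
    return a * (1.0 - np.exp(-k * t))

def log_posterior(theta, t, y, sigma=0.05):
    a, k = theta
    if a <= 0 or k <= 0:                        # flat prior restricted to positive parameters
        return -np.inf
    resid = y - model(t, a, k)
    return -0.5 * np.sum((resid / sigma) ** 2)  # Gaussian likelihood, known sigma

rng = np.random.default_rng(1)
t = np.linspace(0.5, 60.0, 20)                                   # minutes
y = model(t, 1.0, 0.1) + rng.normal(0.0, 0.05, t.size)           # synthetic "measurements"

theta = np.array([0.5, 0.05])                                    # initial guess
logp = log_posterior(theta, t, y)
samples = []
for _ in range(20000):                                           # random-walk Metropolis
    proposal = theta + rng.normal(0.0, [0.02, 0.005])
    logp_prop = log_posterior(proposal, t, y)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = proposal, logp_prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])                               # discard burn-in
print("posterior mean:", samples.mean(axis=0))
print("posterior std :", samples.std(axis=0))
```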

  1. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    Science.gov (United States)

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  2. Verification and validation of an actuator disc model

    DEFF Research Database (Denmark)

    Réthoré, Pierre-Elouan; Laan, van der, Paul Maarten; Troldborg, Niels

    2014-01-01

    ... take any kind of shape discretization, determine the intersectional elements with the computational grid, and use the size of these elements to redistribute the forces proportionally. This method can potentially reduce the need for mesh refinement in the region surrounding the rotor and, therefore, also reduce the computational cost of large wind farm wake simulations. The special case of the actuator disc is successfully validated with an analytical solution for heavily loaded turbines and with a full-rotor computation in computational fluid dynamics. Copyright © 2013 John Wiley & Sons, Ltd.
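    The force-redistribution idea summarized above can be reduced to a one-line weighting: each grid cell receives a share of the total actuator force proportional to the size of its intersection with the actuator shape. A minimal sketch with made-up cell data follows; it illustrates the principle only, not the published implementation.

```python
import numpy as np

def redistribute_force(total_force, intersection_sizes):
    """Split a total actuator force over grid cells in proportion to the size
    (area or volume) of each cell's intersection with the actuator shape."""
    sizes = np.asarray(intersection_sizes, dtype=float)
    return total_force * sizes / sizes.sum()

# Hypothetical example: five cells partially covered by an actuator disc.
cell_sizes = [0.2, 1.0, 1.0, 0.7, 0.1]                 # intersection volumes [m^3]
cell_forces = redistribute_force(-1.5e4, cell_sizes)   # axial thrust [N]
print(cell_forces, cell_forces.sum())
```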

  3. New-generation Monte Carlo shell model for the K computer era

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Yoshida, Tooru; Otsuka, Takaharu; Tsunoda, Yusuke; Utsuno, Yutaka; Mizusaki, Takahiro; Honma, Michio

    2012-01-01

    We present a newly enhanced version of the Monte Carlo shell-model (MCSM) method by incorporating the conjugate gradient method and energy-variance extrapolation. This new method enables us to perform large-scale shell-model calculations that the direct diagonalization method cannot reach. This new-generation framework of the MCSM provides us with a powerful tool to perform very advanced large-scale shell-model calculations on current massively parallel computers such as the K computer. We discuss the validity of this method in ab initio calculations of light nuclei, and propose a new method to describe the intrinsic wave function in terms of the shell-model picture. We also apply this new MCSM to the study of neutron-rich Cr and Ni isotopes using conventional shell-model calculations with an inert ⁴⁰Ca core and discuss how the magicity of N = 28, 40, 50 remains or is broken. (author)

  4. Validation of a Monte Carlo model used for simulating tube current modulation in computed tomography over a wide range of phantom conditions/challenges

    Energy Technology Data Exchange (ETDEWEB)

    Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F. [Departments of Biomedical Physics and Radiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States); DeMarco, John J. [Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)

    2014-11-01

    Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions to result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built using the platform of MCNPX, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scan, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing range from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms including a rectangular homogeneous water equivalent phantom, an elliptical shaped phantom with three sections (where each section was a homogeneous, but different material), and a heterogeneous, complex geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain

  5. Computational modelling of thermo-mechanical and transport properties of carbon nanotubes

    International Nuclear Information System (INIS)

    Rafii-Tabar, H.

    2004-01-01

    Over the recent years, numerical modelling and computer-based simulation of the properties of carbon nanotubes have become the focal points of research in computational nano-science and its associated fields of computational condensed matter physics and materials modelling. Modelling of the mechanical, thermal and transport properties of nanotubes via numerical simulations forms the central part of this research, concerned with the nano-scale mechanics and nano-scale thermodynamics of nanotubes, and nano-scale adsorption, storage and flow properties in nanotubes. A review of these properties, obtained via computational modelling studies, is presented here. We first introduce the physics of carbon nanotubes, and then present the computational simulation tools that are appropriate for conducting a modelling study at the nano-scales. These include the molecular dynamics (MD), the Monte Carlo (MC), and the ab initio MD simulation methods. A complete range of inter-atomic potentials, of two-body and many-body varieties, that underlie all the modelling studies considered in this review is also given. Mechanical models from continuum-based elasticity theory that have been extensively employed to compute the energetics of nanotubes, or to interpret the results from atomistic modelling, are presented and discussed. These include models based on the continuum theory of curved plates, shells, vibrating rods and bending beams. The validity of these continuum-based models has also been examined and the conditions under which they are applicable to nanotube modelling have been listed. Pertinent concepts from continuum theories of stress analysis are included, and the relevant methods for conducting the computation of the stress tensor, elastic constants and elastic moduli at the atomic level are also given. We then survey a comprehensive range of modelling studies concerned with the adsorption and storage of gases, and flow of fluids, in carbon nanotubes of various types. This

  6. Computational modelling of thermo-mechanical and transport properties of carbon nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Rafii-Tabar, H

    2004-02-01

    Over the recent years, numerical modelling and computer-based simulation of the properties of carbon nanotubes have become the focal points of research in computational nano-science and its associated fields of computational condensed matter physics and materials modelling. Modelling of the mechanical, thermal and transport properties of nanotubes via numerical simulations forms the central part of this research, concerned with the nano-scale mechanics and nano-scale thermodynamics of nanotubes, and nano-scale adsorption, storage and flow properties in nanotubes. A review of these properties, obtained via computational modelling studies, is presented here. We first introduce the physics of carbon nanotubes, and then present the computational simulation tools that are appropriate for conducting a modelling study at the nano-scales. These include the molecular dynamics (MD), the Monte Carlo (MC), and the ab initio MD simulation methods. A complete range of inter-atomic potentials, of two-body and many-body varieties, that underlie all the modelling studies considered in this review is also given. Mechanical models from continuum-based elasticity theory that have been extensively employed to compute the energetics of nanotubes, or to interpret the results from atomistic modelling, are presented and discussed. These include models based on the continuum theory of curved plates, shells, vibrating rods and bending beams. The validity of these continuum-based models has also been examined and the conditions under which they are applicable to nanotube modelling have been listed. Pertinent concepts from continuum theories of stress analysis are included, and the relevant methods for conducting the computation of the stress tensor, elastic constants and elastic moduli at the atomic level are also given. We then survey a comprehensive range of modelling studies concerned with the adsorption and storage of gases, and flow of fluids, in carbon nanotubes of various types. This

  7. Using Computer Simulations for Promoting Model-based Reasoning. Epistemological and Educational Dimensions

    Science.gov (United States)

    Develaki, Maria

    2017-11-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.

  8. Design of an intermediate-scale experiment to validate unsaturated- zone transport models

    International Nuclear Information System (INIS)

    Siegel, M.D.; Hopkins, P.L.; Glass, R.J.; Ward, D.B.

    1991-01-01

    An intermediate-scale experiment is being carried out to evaluate instrumentation and models that might be used for transport-model validation for the Yucca Mountain Site Characterization Project. The experimental test bed is a 6-m high x 3-m diameter caisson filled with quartz sand with a sorbing layer at an intermediate depth. The experiment involves the detection and prediction of the migration of fluid and tracers through an unsaturated porous medium. Pre-test design requires estimation of physical properties of the porous medium such as the relative permeability, saturation/pressure relations, porosity, and saturated hydraulic conductivity, as well as geochemical properties such as surface complexation constants and empirical K_d's. The pre-test characterization data will be used as input to several computer codes to predict the fluid flow and tracer migration. These include a coupled chemical-reaction/transport model, a stochastic model, and a deterministic model using retardation factors. The calculations will be completed prior to elution of the tracers, providing a basis for validation by comparing the predictions to observed moisture and tracer behavior.

  9. A Computational Fluid Dynamic Model for a Novel Flash Ironmaking Process

    Science.gov (United States)

    Perez-Fontes, Silvia E.; Sohn, Hong Yong; Olivas-Martinez, Miguel

    A computational fluid dynamic model for a novel flash ironmaking process based on the direct gaseous reduction of iron oxide concentrates is presented. The model solves the three-dimensional governing equations including both gas-phase and gas-solid reaction kinetics. The turbulence-chemistry interaction in the gas-phase is modeled by the eddy dissipation concept incorporating chemical kinetics. The particle cloud model is used to track the particle phase in a Lagrangian framework. A nucleation and growth kinetics rate expression is adopted to calculate the reduction rate of magnetite concentrate particles. Benchmark experiments reported in the literature for a nonreacting swirling gas jet and a nonpremixed hydrogen jet flame were simulated for validation. The model predictions showed good agreement with measurements in terms of gas velocity, gas temperature and species concentrations. The relevance of the computational model for the analysis of a bench reactor operation and the design of an industrial-pilot plant is discussed.

  10. Selection, calibration, and validation of models of tumor growth.

    Science.gov (United States)

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous, macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine if the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory

  11. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
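    The validation recipe described in these two records, repeated cross-validation of a penalized (LASSO-type) logistic model together with a permutation test of its performance, can be sketched generically with standard scikit-learn tools; the data below are synthetic, not the xerostomia cohort, and the penalty settings are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (RepeatedStratifiedKFold, cross_val_score,
                                     permutation_test_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                                        # e.g. dose-volume and clinical predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)  # complication yes/no

lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)

# Repeated cross-validation: the spread of the AUC shows the (in)stability of the model.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=1)
auc = cross_val_score(lasso_logit, X, y, cv=cv, scoring="roc_auc")
print(f"cross-validated AUC = {auc.mean():.3f} +/- {auc.std():.3f}")

# Permutation test: is the observed performance better than chance?
score, _, p_value = permutation_test_score(
    lasso_logit, X, y, cv=5, scoring="roc_auc", n_permutations=200, random_state=2)
print(f"AUC = {score:.3f}, permutation p-value = {p_value:.3f}")
```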

  13. Validation of the standalone implementation of the dynamic wake meandering model for power production

    DEFF Research Database (Denmark)

    Keck, Rolf-Erik Henrik Jussi

    2015-01-01

    This paper presents validation for using the standalone implementation of the dynamic wake meandering (DWM) model to conduct numerical simulations of power production of rows of wind turbines. The standalone DWM model is an alternative formulation of the conventional DWM model that does not require...... information exchange with an aeroelastic code. As a consequence, the standalone DWM model has significantly shorter computational times and lower demands on the user environment. The drawback of the standalone DWM model is that it does not have the capability to predict turbine loads. Instead, it should...

  14. Plasticity: modeling & computation

    National Research Council Canada - National Science Library

    Borja, Ronaldo Israel

    2013-01-01

    .... "Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...

  15. Development and validation of the computer technology literacy self-assessment scale for Taiwanese elementary school students.

    Science.gov (United States)

    Chang, Chiung-Sui

    2008-01-01

    The purpose of this study was to describe the development and validation of an instrument to identify various dimensions of the computer technology literacy self-assessment scale (CTLS) for elementary school students. The instrument included five CTLS dimensions (subscales): technology operation skills, computer usage concepts, attitudes toward computer technology, learning with technology, and Internet operation skills. Participants were 1,539 elementary school students in Taiwan. Data analysis indicated that the instrument developed in the study had satisfactory validity and reliability. Correlation analysis supported the legitimacy of using multiple dimensions to represent students' computer technology literacy. Significant differences were found between male and female students, and between grades, on some CTLS dimensions. Suggestions are made for use of the instrument to examine the complicated interplay between students' computer behaviors and their computer technology literacy.

  16. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of detailed simulation models is presented. This kind of validation is always tied to an experimental case. Empirical validation has a residual character, because its conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures in the simulation model, and it can also be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated: sensitivity analysis, which can be performed with DSA (differential sensitivity analysis) and with MCSA (Monte-Carlo sensitivity analysis); the search for the optimal domains of the input parameters, for which a procedure based on Monte-Carlo methods and cluster techniques has been developed; and residual analysis, carried out in the time domain and in the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings is presented, studying the behavior of building components in a Test Cell of LECE at CIEMAT, Spain. (Author) 17 refs
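
    As a minimal sketch of the Monte-Carlo sensitivity step described above (sample the input parameters, evaluate the model, and rank the inputs by their correlation with the output), the following Python fragment uses an invented stand-in for a detailed thermal model; the parameter names, ranges and coefficients are illustrative assumptions.

        # Sketch: Monte-Carlo sensitivity analysis (MCSA) of a toy building-thermal
        # model: sample the inputs, evaluate the model, rank inputs by correlation
        # with the output. Ranges, coefficients and names are invented.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 2000
        params = {
            "u_wall":       rng.uniform(0.2, 2.0, n),   # wall U-value [W/m2K], assumed range
            "solar_gain":   rng.uniform(0.0, 1.0, n),   # normalized solar gain
            "infiltration": rng.uniform(0.1, 1.0, n),   # air changes per hour
        }

        # Stand-in for a detailed simulation model (illustrative only).
        output = (20.0 + 5.0 * params["solar_gain"] - 3.0 * params["u_wall"]
                  - 1.5 * params["infiltration"] + rng.normal(0.0, 0.1, n))

        # Rank inputs by |correlation| with the output - a simple MCSA measure.
        for name, values in sorted(params.items(),
                                   key=lambda kv: -abs(np.corrcoef(kv[1], output)[0, 1])):
            r = np.corrcoef(values, output)[0, 1]
            print(f"{name:>12s}: r = {r:+.3f}")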

  17. Review of the status of validation of the computer codes used in the severe accident source term reassessment study (BMI-2104)

    International Nuclear Information System (INIS)

    Kress, T.S.

    1985-04-01

    The determination of severe accident source terms must, by necessity it seems, rely heavily on the use of complex computer codes. Source term acceptability, therefore, rests on the assessed validity of such codes. Consequently, one element of NRC's recent efforts to reassess LWR severe accident source terms is to provide a review of the status of validation of the computer codes used in the reassessment. The results of this review are the subject of this document. The separate review documents compiled in this report were used as a resource, along with the results of the BMI-2104 study by BCL and the QUEST study by SNL, to arrive at a more-or-less independent appraisal of the status of source term modeling at this time

  18. Construction and validation of detailed kinetic models for the combustion of gasoline surrogates; Construction et validation de modeles cinetiques detailles pour la combustion de melanges modeles des essences

    Energy Technology Data Exchange (ETDEWEB)

    Touchard, S.

    2005-10-15

    The irreversible depletion of oil resources, the control of CO2 emissions and the application of increasingly strict standards on pollutant emissions lead researchers worldwide to work on reducing pollutant formation and improving engine efficiency, especially through homogeneous-charge combustion of lean mixtures. The numerical simulation of fuel blend oxidation is an essential tool for studying the influence of fuel formulation and engine conditions on auto-ignition and on pollutant emissions. Automatic generation helps to obtain detailed kinetic models, especially at low temperature, where the number of reactions quickly exceeds a thousand. The main purpose of this study is the generation and validation of detailed kinetic models for the oxidation of gasoline blends using the EXGAS software. This work involved improving the computation rules for thermodynamic and kinetic data, which were validated by numerical simulation using the CHEMKIN II software. A large part of this work concerned the understanding of the low-temperature oxidation chemistry of C5 and larger alkenes. Low- and high-temperature mechanisms were proposed and validated for 1-pentene, 1-hexene, the binary mixtures 1-hexene/iso-octane, 1-hexene/toluene and iso-octane/toluene, and the ternary mixture 1-hexene/toluene/iso-octane. Simulations were also performed for propene, 1-butene and iso-octane with earlier models including the modifications proposed in this PhD work. Although the generated models allow the auto-ignition delays of the studied molecules and blends to be simulated with good agreement, some uncertainties remain for certain reaction paths leading to the formation of cyclic products in the case of alkene oxidation at low temperature. It would also be interesting to extend this work to combustion models of gasoline blends at low temperature. (author)

  19. Off-take Model of the SPACE Code and Its Validation

    International Nuclear Information System (INIS)

    Oh, Myung Taek; Park, Chan Eok; Sohn, Jong Joo

    2011-01-01

    Liquid entrainment and vapor pull-through models of horizontal pipe have been implemented in the SPACE code. The model of SPACE accounts for the phase separation phenomena and computes the flux of mass and energy through an off-take attached to a horizontal pipe when stratified conditions occur in the horizontal pipe. This model is referred to as the off-take model. The importance of predicting the fluid conditions through an off-take in a small-break LOCA has been well known. In this case, the occurrence of the stratification can affect the break node void fraction and thus the break flow discharged from the primary system. In order to validate the off-take model newly developed for the SPACE code, a simulation of the HDU experiments has been performed. The main feature of the off-take model and its application results will be presented in this paper

  20. Validation of coastal oceanographic models at Laxemar-Simpevarp. Site descriptive modelling SDM-Site Laxemar

    International Nuclear Information System (INIS)

    Engqvist, Anders; Andrejev, Oleg

    2008-12-01

    validation can be summarized in three points: (i) The Baltic CR-model reproduces the measured salinity and the temperature profiles of the three peripheral stations acceptably well, while the correlation levels of the velocities are on an acceptable level for only one component, the other being close to zero; (ii) For the interior station Si24, the FR-model reproduces the salinity and the temperature profiles with a yet improved level of correlation compared with the CR-model; (iii) The bottom current velocity measured at Djupesund corresponds to an internal strait within the CDB-model and yields a correlation level of nearly 50% for salinity and about 95% for temperature. The conclusion is that the present validation of velocity components of the peripheral stations between the CR- and FR-domains has mainly confirmed what was found in the corresponding validation study of the Forsmark area, namely that this represents a challenge that demands considerably more measuring effort than has been possible to muster presently in order to average out sub-grid eddies that the model cannot resolve. This applies even though the levels of the correlation analysis are considerably higher than was found for the parallel study of the waters off the Forsmark coast. This together with supporting current velocity transects in the vicinity of the measurement stations can be explained by a more horizontally homogeneous flow field. For the inner station (Si24) that was computed by the FR-model, the correlation levels are considerably improved. Also for the station (Si25) pertaining to the CDB-model good correlation levels are reproduced. All temperature profiles are also acceptably well captured by the models, but this is judged to be more an effect of the seasonal variation than an expression of the virtue of the actual models. As for the Forsmark validation program, the salinity dynamics of the interior FR-domain is the strong point of the model, but in the present study high levels of

  1. Validation of coastal oceanographic models at Laxemar-Simpevarp. Site descriptive modelling SDM-Site Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Engqvist, Anders (A och I Engqvist Konsult HB, Vaxholm (SE)); Andrejev, Oleg (Finnish Inst. of Marine Research, Helsinki (FI))

    2008-12-15

    validation can be summarized in three points: (i) The Baltic CR-model reproduces the measured salinity and the temperature profiles of the three peripheral stations acceptably well, while the correlation levels of the velocities are on an acceptable level for only one component, the other being close to zero; (ii) For the interior station Si24, the FR-model reproduces the salinity and the temperature profiles with a yet improved level of correlation compared with the CR-model; (iii) The bottom current velocity measured at Djupesund corresponds to an internal strait within the CDB-model and yields a correlation level of nearly 50% for salinity and about 95% for temperature. The conclusion is that the present validation of velocity components of the peripheral stations between the CR- and FR-domains has mainly confirmed what was found in the corresponding validation study of the Forsmark area, namely that this represents a challenge that demands considerably more measuring effort than has been possible to muster presently in order to average out sub-grid eddies that the model cannot resolve. This applies even though the levels of the correlation analysis are considerably higher than was found for the parallel study of the waters off the Forsmark coast. This together with supporting current velocity transects in the vicinity of the measurement stations can be explained by a more horizontally homogeneous flow field. For the inner station (Si24) that was computed by the FR-model, the correlation levels are considerably improved. Also for the station (Si25) pertaining to the CDB-model good correlation levels are reproduced. All temperature profiles are also acceptably well captured by the models, but this is judged to be more an effect of the seasonal variation than an expression of the virtue of the actual models. As for the Forsmark validation program, the salinity dynamics of the interior FR-domain is the strong point of the model, but in the present study high levels of

  2. Monte Carlo simulations of adult and pediatric computed tomography exams: Validation studies of organ doses with physical phantoms

    International Nuclear Information System (INIS)

    Long, Daniel J.; Lee, Choonsik; Tien, Christopher; Fisher, Ryan; Hoerner, Matthew R.; Hintenlang, David; Bolch, Wesley E.

    2013-01-01

    Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code, MCNPX version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans, and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms. Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT

  3. A proposed best practice model validation framework for banks

    Directory of Open Access Journals (Sweden)

    Pieter J. (Riaan) de Jongh

    2017-06-01

    Full Text Available Background: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. Setting: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists. Aim: Assessing the available literature for the best validation practices. Methods: This comprehensive literature study provided a background to the complexities of effective model management and focussed on model validation as a component of model risk management. Results: We propose a coherent ‘best practice’ framework for model validation. Scorecard tools are also presented to evaluate if the proposed best practice model validation framework has been adequately assembled and implemented. Conclusion: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.

  4. A multibody motorcycle model with rigid-ring tyres: formulation and validation

    Science.gov (United States)

    Leonelli, Luca; Mancinelli, Nicolò

    2015-06-01

    The aim of this paper is the development and validation of a three-dimensional multibody motorcycle model including a rigid-ring tyre model, taking into account both the slopes and elevation of the road surface. In order to achieve accurate assessment of ride and handling performances of a road racing motorcycle, a tyre model capable of reproducing the dynamic response to actual road excitation is required. While a number of vehicle models with such feature are available for car application, the extension to the motorcycle modelling has not been addressed yet. To do so, a novel parametrisation for the general motorcycle kinematics is proposed, using a mixed reference point and relative coordinates approach. The resulting description, developed in terms of dependent coordinates, makes it possible to include the rigid-ring kinematics as well as road elevation and slopes, without affecting computational efficiency. The equations of motion for the whole multibody system are derived symbolically and the constraint equations arising from the dependent coordinate formulation are handled using the position and velocity vector projection technique. The resulting system of equations is integrated in time domain using a standard ordinary differential equation (ODE) algorithm. Finally, the model is validated with respect to experimentally measured data in both time and frequency domains.

  5. A Model for the Acceptance of Cloud Computing Technology Using DEMATEL Technique and System Dynamics Approach

    Directory of Open Access Journals (Sweden)

    seyyed mohammad zargar

    2018-03-01

    Full Text Available Cloud computing is a new method to provide computing resources and increase computing power in organizations. Despite its many benefits, this method has not been universally adopted because of obstacles, including security issues, that remain a concern for IT managers in organizations. In this paper, a general definition of cloud computing is presented. In addition, having reviewed previous studies, the researchers identified variables that affect technology acceptance and, especially, the acceptance of cloud computing technology. Then, using the DEMATEL technique, the influence exerted and received by each variable was determined. The researchers also designed a model to show the dynamics existing in cloud computing technology adoption using a system dynamics approach. The validity of the model was confirmed through standard system dynamics evaluation methods using the VENSIM software. Finally, based on different conditions of the proposed model, a variety of scenarios was designed, and the implementation of these scenarios was simulated within the proposed model. The results showed that any increase in data security, government support and user training can lead to an increase in the adoption and use of cloud computing technology.
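
    The DEMATEL step mentioned above reduces to a small matrix computation: normalize the direct-influence matrix, form the total-relation matrix, and read off each factor's prominence and cause/effect role. The sketch below illustrates this with an invented 4x4 direct-influence matrix; the factor names and scores are assumptions, not the study's data.

        # Sketch: core DEMATEL computation (normalize the direct-influence matrix,
        # compute the total-relation matrix, then prominence R+C and relation R-C).
        # The 4x4 direct-influence matrix below is invented for illustration.
        import numpy as np

        factors = ["security", "gov_support", "training", "adoption"]
        A = np.array([[0, 1, 2, 3],
                      [2, 0, 1, 3],
                      [1, 2, 0, 2],
                      [1, 1, 1, 0]], dtype=float)

        # Normalize by the largest row/column sum, as in standard DEMATEL.
        s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
        D = A / s

        # Total-relation matrix: T = D (I - D)^-1
        T = D @ np.linalg.inv(np.eye(len(A)) - D)

        R, C = T.sum(axis=1), T.sum(axis=0)
        for name, prominence, relation in zip(factors, R + C, R - C):
            kind = "cause" if relation > 0 else "effect"
            print(f"{name:>12s}: prominence={prominence:.2f}, relation={relation:+.2f} ({kind})")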

  6. Measuring Students' Writing Ability on a Computer-Analytic Developmental Scale: An Exploratory Validity Study

    Science.gov (United States)

    Burdick, Hal; Swartz, Carl W.; Stenner, A. Jackson; Fitzgerald, Jill; Burdick, Don; Hanlon, Sean T.

    2013-01-01

    The purpose of the study was to explore the validity of a novel computer-analytic developmental scale, the Writing Ability Developmental Scale. On the whole, collective results supported the validity of the scale. It was sensitive to writing ability differences across grades and sensitive to within-grade variability as compared to human-rated…

  7. Direct modeling for computational fluid dynamics

    Science.gov (United States)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct

  8. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    Models are playing important roles in design and analysis of chemicals based products and the processes that manufacture them. Computer-aided methods and tools have the potential to reduce the number of experiments, which can be expensive and time consuming, and there is a benefit of working...... development and application. The proposed work is a part of the project for development of methods and tools that will allow systematic generation, analysis and solution of models for various objectives. It will use the computer-aided modeling framework that is based on a modeling methodology, which combines....... In this contribution, the concept of template-based modeling is presented and application is highlighted for the specific case of catalytic membrane fixed bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse...

  9. System capacity and economic modeling computer tool for satellite mobile communications systems

    Science.gov (United States)

    Wiedeman, Robert A.; Wen, Doong; Mccracken, Albert G.

    1988-01-01

    A unique computer modeling tool that combines an engineering tool with a financial analysis program is described. The resulting combination yields a flexible economic model that can predict the cost effectiveness of various mobile systems. Cost modeling is necessary in order to ascertain if a given system with a finite satellite resource is capable of supporting itself financially and to determine what services can be supported. Personal computer techniques using Lotus 123 are used for the model in order to provide as universal an application as possible such that the model can be used and modified to fit many situations and conditions. The output of the engineering portion of the model consists of a channel capacity analysis and link calculations for several qualities of service using up to 16 types of earth terminal configurations. The outputs of the financial model are a revenue analysis, an income statement, and a cost model validation section.

  10. MASADA: A MODELING AND SIMULATION AUTOMATED DATA ANALYSIS FRAMEWORK FOR CONTINUOUS DATA-INTENSIVE VALIDATION OF SIMULATION MODELS

    CERN Document Server

    Foguelman, Daniel Jacob; The ATLAS collaboration

    2016-01-01

    Complex networked computer systems are usually subjected to upgrades and enhancements on a continuous basis. Modeling and simulation of such systems helps with guiding their engineering processes, in particular when testing candidate design alternatives directly on the real system is not an option. Models are built and simulation exercises are run guided by specific research and/or design questions. A vast amount of operational conditions for the real system need to be assumed in order to focus on the relevant questions at hand. A typical boundary condition for computer systems is the exogenously imposed workload. Meanwhile, in typical projects huge amounts of monitoring information are logged and stored with the purpose of studying the system’s performance in search for improvements. Also research questions change as systems’ operational conditions vary throughout its lifetime. This context poses many challenges to determine the validity of simulation models. As the behavioral empirical base of the sys...

  11. MASADA: A Modeling and Simulation Automated Data Analysis framework for continuous data-intensive validation of simulation models

    CERN Document Server

    Foguelman, Daniel Jacob; The ATLAS collaboration

    2016-01-01

    Complex networked computer systems are usually subjected to upgrades and enhancements on a continuous basis. Modeling and simulation of such systems helps with guiding their engineering processes, in particular when testing candidate design alternatives directly on the real system is not an option. Models are built and simulation exercises are run guided by specific research and/or design questions. A vast amount of operational conditions for the real system need to be assumed in order to focus on the relevant questions at hand. A typical boundary condition for computer systems is the exogenously imposed workload. Meanwhile, in typical projects huge amounts of monitoring information are logged and stored with the purpose of studying the system’s performance in search for improvements. Also research questions change as systems’ operational conditions vary throughout its lifetime. This context poses many challenges to determine the validity of simulation models. As the behavioral empirical base of the sys...

  12. Statistical Validation of Engineering and Scientific Models: Background

    International Nuclear Information System (INIS)

    Hills, Richard G.; Trucano, Timothy G.

    1999-01-01

    A tutorial is presented discussing the basic issues associated with propagation of uncertainty analysis and statistical validation of engineering and scientific models. The propagation of uncertainty tutorial illustrates the use of the sensitivity method and the Monte Carlo method to evaluate the uncertainty in predictions for linear and nonlinear models. Four example applications are presented: a linear model, a model for the behavior of a damped spring-mass system, a transient thermal conduction model, and a nonlinear transient convective-diffusive model based on Burger's equation. Correlated and uncorrelated model input parameters are considered. The model validation tutorial builds on the material presented in the propagation of uncertainty tutorial and uses the damped spring-mass system as the example application. The validation tutorial illustrates several concepts associated with the application of statistical inference to test model predictions against experimental observations. Several validation methods are presented, including error-band-based, multivariate, sum-of-squares-of-residuals, and optimization methods. After completion of the tutorial, a survey of statistical model validation literature is presented and recommendations for future work are made
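
    As a minimal sketch of the Monte Carlo propagation illustrated by the tutorial's damped spring-mass example, the following Python fragment samples uncertain mass, damping and stiffness values and propagates them through the underdamped free-response solution; the parameter distributions are assumed for illustration.

        # Sketch: Monte Carlo propagation of parameter uncertainty through a damped
        # spring-mass model (free response from an initial displacement).
        # Parameter distributions are assumed for illustration.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000
        t = 2.0                                  # prediction time [s]
        x0 = 0.1                                 # initial displacement [m], zero initial velocity

        m = rng.normal(1.0, 0.05, n)             # mass [kg]
        c = rng.normal(0.3, 0.03, n)             # damping coefficient [N s/m]
        k = rng.normal(10.0, 0.5, n)             # stiffness [N/m]

        w = np.sqrt(k / m)                       # natural frequency [rad/s]
        z = c / (2.0 * np.sqrt(k * m))           # damping ratio (underdamped here)
        wd = w * np.sqrt(1.0 - z**2)             # damped natural frequency

        # Underdamped free response with zero initial velocity.
        x_t = x0 * np.exp(-z * w * t) * (np.cos(wd * t)
                                         + z / np.sqrt(1.0 - z**2) * np.sin(wd * t))

        print(f"x(t={t}s): mean = {x_t.mean():.4f} m, std = {x_t.std():.4f} m")
        lo, hi = np.percentile(x_t, [2.5, 97.5])
        print(f"95% interval: [{lo:.4f}, {hi:.4f}] m")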

  13. Computer Profiling Based Model for Investigation

    OpenAIRE

    Neeraj Choudhary; Nikhil Kumar Singh; Parmalik Singh

    2011-01-01

    Computer profiling is used for computer forensic analysis; this work proposes and elaborates on a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgments as to the probable usage and evidentiary value of a comp...

  14. Novel prediction model of renal function after nephrectomy from automated renal volumetry with preoperative multidetector computed tomography (MDCT).

    Science.gov (United States)

    Isotani, Shuji; Shimoyama, Hirofumi; Yokota, Isao; Noma, Yasuhiro; Kitamura, Kousuke; China, Toshiyuki; Saito, Keisuke; Hisasue, Shin-ichi; Ide, Hisamitsu; Muto, Satoru; Yamaguchi, Raizo; Ukimura, Osamu; Gill, Inderbir S; Horie, Shigeo

    2015-10-01

    The predictive model of postoperative renal function may impact the planning of nephrectomy. The aims were to develop a novel predictive model that combines clinical indices with computer volumetry of the preserved renal cortex volume (RCV) on multidetector computed tomography (MDCT), and to prospectively validate the performance of the model. In total, 60 patients undergoing radical nephrectomy from 2011 to 2013 participated, comprising a development cohort of 39 patients and an external validation cohort of 21 patients. RCV was calculated by voxel count using software (Vincent, FUJIFILM). Renal function before and after radical nephrectomy was assessed via the estimated glomerular filtration rate (eGFR). Factors affecting postoperative eGFR were examined by regression analysis to develop the novel model for predicting postoperative eGFR with a backward elimination method. The predictive model was externally validated and its performance was compared with that of previously reported models. The postoperative eGFR value was significantly associated with age, preoperative eGFR, preserved renal parenchymal volume (RPV), preserved RCV, % of RPV alteration, and % of RCV alteration. The predictive model combining computer volumetry and clinical indices might yield an important tool for predicting postoperative renal function.

  15. Enhancing hit identification in Mycobacterium tuberculosis drug discovery using validated dual-event Bayesian models.

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    Full Text Available High-throughput screening (HTS) in whole cells is widely pursued to find compounds active against Mycobacterium tuberculosis (Mtb) for further development towards new tuberculosis (TB) drugs. Hit rates from these screens, usually conducted at 10 to 25 µM concentrations, typically range from less than 1% to the low single digits. New approaches to increase the efficiency of hit identification are urgently needed to learn from past screening data. The pharmaceutical industry has for many years taken advantage of computational approaches to optimize compound libraries for in vitro testing, a practice not fully embraced by academic laboratories in the search for new TB drugs. Adapting these proven approaches, we have recently built and validated Bayesian machine learning models for predicting compounds with activity against Mtb based on publicly available large-scale HTS data from the Tuberculosis Antimicrobial Acquisition Coordinating Facility. We now demonstrate the largest prospective validation to date, in which we computationally screened 82,403 molecules with these Bayesian models, assayed a total of 550 molecules in vitro, and identified 124 actives against Mtb. Individual hit rates for the different datasets varied from 15-28%. We have identified several FDA-approved and late-stage clinical candidate kinase inhibitors with activity against Mtb which may represent starting points for further optimization. The computational models developed herein and the commercially available molecules derived from them are now available to any group pursuing Mtb drug discovery.
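
    As a rough, non-authoritative illustration of this kind of Bayesian prioritization (not the authors' fingerprint-based models), the following sketch trains a Bernoulli naive Bayes classifier on synthetic binary 'fingerprint' bits and measures the enrichment of actives in the top-ranked fraction of a virtual library; the data and activity rule are invented.

        # Sketch: Bayesian-style prioritization of compounds for screening.
        # Synthetic binary "fingerprint" bits stand in for real molecular descriptors.
        import numpy as np
        from sklearn.naive_bayes import BernoulliNB
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)
        n_compounds, n_bits = 5000, 128
        X = rng.integers(0, 2, size=(n_compounds, n_bits))
        # Toy ground truth: activity requires a few "pharmacophore" bits together.
        y = (X[:, 0] & X[:, 3] & X[:, 7]).astype(int)

        X_train, X_screen, y_train, y_screen = train_test_split(
            X, y, test_size=0.8, random_state=0)

        clf = BernoulliNB().fit(X_train, y_train)
        p_active = clf.predict_proba(X_screen)[:, 1]

        # Prioritize the top-scoring 1% of the virtual library for in vitro testing.
        top = np.argsort(p_active)[::-1][: len(p_active) // 100]
        print(f"hit rate in top 1%: {y_screen[top].mean():.2%} "
              f"vs baseline {y_screen.mean():.2%}")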

  16. Soft computing techniques toward modeling the water supplies of Cyprus.

    Science.gov (United States)

    Iliadis, L; Maris, F; Tachos, S

    2011-10-01

    This research effort aims in the application of soft computing techniques toward water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-Regression Support Vector Machines (ε-RSVM) and fuzzy weighted ε-RSVMR models have been developed that accept five input parameters. At the same time, reliable artificial neural networks have been developed to perform the same job. The 5-fold cross validation approach has been employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, the fuzzy weighted Support Vector Regression (SVR) combined with the fuzzy partition has been employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
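
    A minimal sketch of the ε-SVR with 5-fold cross-validation described above, using scikit-learn on synthetic data; the input features, target and hyperparameters are illustrative assumptions rather than the watershed data of the study.

        # Sketch: epsilon-SVR for water-supply estimation with 5-fold cross-validation.
        # Synthetic data and hyperparameters are illustrative assumptions only.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score, KFold

        rng = np.random.default_rng(2)
        n = 300
        X = rng.uniform(size=(n, 5))                       # e.g. rainfall, temperature, ... (assumed inputs)
        y = 50 + 30 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, n)   # toy "water supply" target

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
        cv = KFold(n_splits=5, shuffle=True, random_state=0)

        scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
        print(f"5-fold R^2: {scores.mean():.3f} +/- {scores.std():.3f}")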

  17. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty
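
    To make the distinction concrete, the following sketch calibrates a single model parameter against one data set and then validates the calibrated model against independent data; the exponential-decay model and all data are invented for illustration.

        # Sketch: calibration (fit a parameter to one data set) versus validation
        # (compare the calibrated model with held-out data). Data are synthetic.
        import numpy as np
        from scipy.optimize import least_squares

        def model(theta, x):
            """Toy exponential-decay model with a single parameter theta."""
            return np.exp(-theta * x)

        rng = np.random.default_rng(3)
        x_cal, x_val = np.linspace(0, 2, 20), np.linspace(0, 3, 15)
        theta_true = 1.3
        y_cal = model(theta_true, x_cal) + rng.normal(0, 0.02, x_cal.size)
        y_val = model(theta_true, x_val) + rng.normal(0, 0.02, x_val.size)

        # Calibration: adjust theta to maximize agreement with the calibration data.
        fit = least_squares(lambda th: model(th, x_cal) - y_cal, x0=[1.0])
        theta_hat = fit.x[0]

        # Validation: quantify predictive agreement with independent data.
        rmse_val = np.sqrt(np.mean((model(theta_hat, x_val) - y_val) ** 2))
        print(f"calibrated theta = {theta_hat:.3f}, validation RMSE = {rmse_val:.4f}")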

  18. Individualized prediction of perineural invasion in colorectal cancer: development and validation of a radiomics prediction model.

    Science.gov (United States)

    Huang, Yanqi; He, Lan; Dong, Di; Yang, Caiyun; Liang, Cuishan; Chen, Xin; Ma, Zelan; Huang, Xiaomei; Yao, Su; Liang, Changhong; Tian, Jie; Liu, Zaiyi

    2018-02-01

    To develop and validate a radiomics prediction model for individualized prediction of perineural invasion (PNI) in colorectal cancer (CRC). After computed tomography (CT) radiomics features extraction, a radiomics signature was constructed in derivation cohort (346 CRC patients). A prediction model was developed to integrate the radiomics signature and clinical candidate predictors [age, sex, tumor location, and carcinoembryonic antigen (CEA) level]. Apparent prediction performance was assessed. After internal validation, independent temporal validation (separate from the cohort used to build the model) was then conducted in 217 CRC patients. The final model was converted to an easy-to-use nomogram. The developed radiomics nomogram that integrated the radiomics signature and CEA level showed good calibration and discrimination performance [Harrell's concordance index (c-index): 0.817; 95% confidence interval (95% CI): 0.811-0.823]. Application of the nomogram in validation cohort gave a comparable calibration and discrimination (c-index: 0.803; 95% CI: 0.794-0.812). Integrating the radiomics signature and CEA level into a radiomics prediction model enables easy and effective risk assessment of PNI in CRC. This stratification of patients according to their PNI status may provide a basis for individualized auxiliary treatment.
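
    For a binary endpoint such as PNI, Harrell's concordance index coincides with the area under the ROC curve. The following sketch computes it both by direct pairwise counting and with scikit-learn, on invented risk scores; it is an illustration of the metric, not the study's nomogram.

        # Sketch: Harrell's concordance index for a binary endpoint (equivalent to
        # ROC AUC), computed by counting concordant risk-score pairs. Data invented.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        y = rng.integers(0, 2, 200)                       # 1 = perineural invasion present
        risk = 0.6 * y + rng.normal(0, 0.5, 200)          # toy "nomogram" risk score

        def c_index(y_true, score):
            pos, neg = score[y_true == 1], score[y_true == 0]
            diff = pos[:, None] - neg[None, :]
            return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

        print(f"c-index (pairwise): {c_index(y, risk):.3f}")
        print(f"ROC AUC (sklearn) : {roc_auc_score(y, risk):.3f}")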

  19. Modeling and validation of heat and mass transfer in individual coffee beans during the coffee roasting process using computational fluid dynamics (CFD).

    Science.gov (United States)

    Alonso-Torres, Beatriz; Hernández-Pérez, José Alfredo; Sierra-Espinoza, Fernando; Schenker, Stefan; Yeretzian, Chahan

    2013-01-01

    Heat and mass transfer in individual coffee beans during roasting were simulated using computational fluid dynamics (CFD). Numerical equations for heat and mass transfer inside the coffee bean were solved using the finite volume technique in the commercial CFD code Fluent; the software was complemented with specific user-defined functions (UDFs). To experimentally validate the numerical model, a single coffee bean was placed in a cylindrical glass tube and roasted by a hot air flow, using the identical geometrical 3D configuration and hot air flow conditions as the ones used for numerical simulations. Temperature and humidity calculations obtained with the model were compared with experimental data. The model predicts the actual process quite accurately and represents a useful approach to monitor the coffee roasting process in real time. It provides valuable information on time-resolved process variables that are otherwise difficult to obtain experimentally, but critical to a better understanding of the coffee roasting process at the individual bean level. This includes variables such as time-resolved 3D profiles of bean temperature and moisture content, and temperature profiles of the roasting air in the vicinity of the coffee bean.

  20. Computational experience with a three-dimensional rotary engine combustion model

    Science.gov (United States)

    Raju, M. S.; Willis, E. A.

    1990-04-01

    A new computer code was developed to analyze the chemically reactive flow and spray combustion processes occurring inside a stratified-charge rotary engine. Mathematical and numerical details of the new code were recently described by the present authors. The results are presented of limited, initial computational trials as a first step in a long-term assessment/validation process. The engine configuration studied was chosen to approximate existing rotary engine flow visualization and hot firing test rigs. Typical results include: (1) pressure and temperature histories, (2) torque generated by the nonuniform pressure distribution within the chamber, (3) energy release rates, and (4) various flow-related phenomena. These are discussed and compared with other predictions reported in the literature. The adequacy or need for improvement in the spray/combustion models and the need for incorporating an appropriate turbulence model are also discussed.

  1. Independent verification and validation testing of the FLASH computer code, Version 3.0

    International Nuclear Information System (INIS)

    Martian, P.; Chung, J.N.

    1992-06-01

    Independent testing of the FLASH computer code, Version 3.0, was conducted to determine if the code is ready for use in hydrological and environmental studies at various Department of Energy sites. This report describes the technical basis, approach, and results of this testing. Verification tests and validation tests were used to determine the operational status of the FLASH computer code. These tests were specifically designed to test the correctness of the FORTRAN coding, the computational accuracy, and the suitability for simulating actual hydrologic conditions. The testing was performed using a structured evaluation protocol which consisted of blind testing, independent applications, and graduated difficulty of test cases. Both quantitative and qualitative testing was performed by evaluating relative root mean square values and graphical comparisons of the numerical, analytical, and experimental data. Four verification tests were used to check the computational accuracy and the correctness of the FORTRAN coding, and three validation tests were used to check the suitability for simulating actual conditions. These test cases ranged in complexity from simple 1-D saturated flow to 2-D variably saturated problems. The verification tests showed excellent quantitative agreement between the FLASH results and analytical solutions. The validation tests showed good qualitative agreement with the experimental data. Based on the results of this testing, it was concluded that the FLASH code is a versatile and powerful two-dimensional analysis tool for fluid flow. In conclusion, all aspects of the code that were tested, except for the unit gradient bottom boundary condition, were found to be fully operational and ready for use in hydrological and environmental studies
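
    The quantitative part of this testing rests on a relative root-mean-square comparison between numerical and reference solutions; a minimal sketch of that metric follows, with a placeholder one-dimensional field standing in for the FLASH output.

        # Sketch: relative root-mean-square (RMS) difference between a numerical
        # solution and an analytical reference, as used in code verification.
        import numpy as np

        def relative_rms(numerical, analytical):
            """Relative RMS error of a numerical field against an analytical one."""
            numerical, analytical = np.asarray(numerical), np.asarray(analytical)
            return (np.sqrt(np.mean((numerical - analytical) ** 2))
                    / np.sqrt(np.mean(analytical ** 2)))

        # Placeholder 1-D example: analytic pressure head h(x) and a "computed" field.
        x = np.linspace(0.0, 1.0, 50)
        h_analytical = 1.0 - x                                       # linear head drop (illustrative)
        h_numerical = h_analytical + 0.01 * np.sin(8 * np.pi * x)    # mock numerical error

        print(f"relative RMS = {relative_rms(h_numerical, h_analytical):.4f}")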

  2. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  3. Models of optical quantum computing

    Directory of Open Access Journals (Sweden)

    Krovi Hari

    2017-03-01

    Full Text Available I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  4. Cloud Computing and Validated Learning for Accelerating Innovation in IoT

    Science.gov (United States)

    Suciu, George; Todoran, Gyorgy; Vulpe, Alexandru; Suciu, Victor; Bulca, Cristina; Cheveresan, Romulus

    2015-01-01

    Innovation in Internet of Things (IoT) requires more than just creation of technology and use of cloud computing or big data platforms. It requires accelerated commercialization or aptly called go-to-market processes. To successfully accelerate, companies need a new type of product development, the so-called validated learning process.…

  5. A Parallel Computational Model for Multichannel Phase Unwrapping Problem

    Science.gov (United States)

    Imperatore, Pasquale; Pepe, Antonio; Lanari, Riccardo

    2015-05-01

    In this paper, a parallel model for the solution of the computationally intensive multichannel phase unwrapping (MCh-PhU) problem is proposed. Firstly, the Extended Minimum Cost Flow (EMCF) algorithm for solving MCh-PhU problem is revised within the rigorous mathematical framework of the discrete calculus ; thus permitting to capture its topological structure in terms of meaningful discrete differential operators. Secondly, emphasis is placed on those methodological and practical aspects, which lead to a parallel reformulation of the EMCF algorithm. Thus, a novel dual-level parallel computational model, in which the parallelism is hierarchically implemented at two different (i.e., process and thread) levels, is presented. The validity of our approach has been demonstrated through a series of experiments that have revealed a significant speedup. Therefore, the attained high-performance prototype is suitable for the solution of large-scale phase unwrapping problems in reasonable time frames, with a significant impact on the systematic exploitation of the existing, and rapidly growing, large archives of SAR data.

  6. Calibration and Validation of the Dynamic Wake Meandering Model for Implementation in an Aeroelastic Code

    DEFF Research Database (Denmark)

    Aagaard Madsen, Helge; Larsen, Gunner Chr.; Larsen, Torben J.

    2010-01-01

    in an aeroelastic model. Calibration and validation of the different parts of the model is carried out by comparisons with actuator disk and actuator line (ACL) computations as well as with inflow measurements on a full-scale 2 MW turbine. It is shown that the load generating part of the increased turbulence....... Finally, added turbulence characteristics are compared with correlation results from literature. ©2010 American Society of Mechanical Engineers...

  7. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness: validation is carried out at a single scale and depends on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by validating a reactor model.

  8. Roll-up of validation results to a target application.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy

    2013-09-01

    Suites of experiments are performed over a validation hierarchy to test computational simulation models for complex applications. Experiments within the hierarchy can be performed at different conditions and configurations than those for an intended application, with each experiment testing only part of the physics relevant for the application. The purpose of the present work is to develop a methodology to roll up validation results to an application, and to assess the impact the validation hierarchy design has on the roll-up results. The roll-up is accomplished through the development of a meta-model that relates validation measurements throughout a hierarchy to the desired response quantities for the target application. The meta-model is developed using the computational simulation models for the experiments and the application. The meta-model approach is applied to a series of example transport problems that represent complete and incomplete coverage of the physics of the target application by the validation experiments.

  9. Validation of a RANS transition model using a high-order weighted compact nonlinear scheme

    Science.gov (United States)

    Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang

    2013-04-01

    A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational costs. The model employs local variables only, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations in order to minimize discretization errors and thereby mitigate the confusion between numerical errors and transition model errors. The high-order package is compared with a second-order TVD method in simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid-convergence properties than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.

  10. Preliminary Phase Field Computational Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Ke [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suter, Jonathan D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McCloy, John S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnson, Bradley R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using the monocrystalline Fe (i.e., ferrite) film as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
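
    The phase-field magnetic models described above are built on numerical solution of the Landau-Lifshitz-Gilbert equation. As a minimal, single-macrospin illustration (not the report's code), the following integrates the normalized LLG equation with an explicit Euler step and renormalization; all parameter values are assumptions in reduced units.

        # Sketch: explicit-Euler integration of the Landau-Lifshitz-Gilbert (LLG)
        # equation for a single macrospin in a constant applied field.
        # Reduced units and parameter values are illustrative assumptions.
        import numpy as np

        gamma, alpha = 1.0, 0.1            # reduced gyromagnetic ratio, Gilbert damping
        H = np.array([0.0, 0.0, 1.0])      # applied field along z (reduced units)
        m = np.array([1.0, 0.0, 0.0])      # initial magnetization along x
        dt, n_steps = 1e-3, 20000

        for _ in range(n_steps):
            mxH = np.cross(m, H)
            dmdt = -gamma / (1 + alpha**2) * (mxH + alpha * np.cross(m, mxH))
            m = m + dt * dmdt
            m /= np.linalg.norm(m)         # renormalize |m| = 1 after each step

        print("final magnetization:", np.round(m, 4))   # expected to relax toward +z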

  11. Validating EHR clinical models using ontology patterns.

    Science.gov (United States)

    Martínez-Costa, Catalina; Schulz, Stefan

    2017-12-01

    Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We have demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation. Copyright © 2017 Elsevier Inc. All rights reserved.
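
    A minimal sketch of SHACL validation of an RDF instance against a shape, using the rdflib/pySHACL stack rather than the CIMI artefacts themselves; the tiny shape and data below are invented for illustration, and the expected outcome is a reported violation for the missing property.

        # Sketch: validating a small RDF "clinical model" instance against a SHACL
        # shape with pySHACL. The shape and data are invented, not CIMI artefacts.
        from rdflib import Graph
        from pyshacl import validate

        shapes_ttl = """
        @prefix sh: <http://www.w3.org/ns/shacl#> .
        @prefix ex: <http://example.org/> .
        @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

        ex:BloodPressureShape a sh:NodeShape ;
            sh:targetClass ex:BloodPressureObservation ;
            sh:property [ sh:path ex:systolic ;
                          sh:datatype xsd:decimal ;
                          sh:minCount 1 ; sh:maxCount 1 ] .
        """

        data_ttl = """
        @prefix ex: <http://example.org/> .
        ex:obs1 a ex:BloodPressureObservation .   # missing ex:systolic -> violation
        """

        conforms, report_graph, report_text = validate(
            Graph().parse(data=data_ttl, format="turtle"),
            shacl_graph=Graph().parse(data=shapes_ttl, format="turtle"))
        print("conforms:", conforms)
        print(report_text)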

  12. Computer-Based Linguistic Analysis.

    Science.gov (United States)

    Wright, James R.

    Noam Chomsky's transformational-generative grammar model may effectively be translated into an equivalent computer model. Phrase-structure rules and transformations are tested as to their validity and ordering by the computer via the process of random lexical substitution. Errors appearing in the grammar are detected and rectified, and formal…

  13. Solution Validation for a Double Façade Prototype

    Directory of Open Access Journals (Sweden)

    Pau Fonseca i Casas

    2017-12-01

    Full Text Available A solution validation involves comparing the data obtained from the system implemented following the model recommendations with the model results. This paper presents a solution validation performed with the aim of certifying that a set of computer-optimized designs for a double façade are consistent with reality. To validate the results obtained through simulation models based on dynamic thermal calculation and Computational Fluid Dynamics techniques, a comparison with the data obtained by monitoring a real implemented prototype has been carried out. The new validated model can be used to describe the system's thermal behavior in different climatic zones without having to build a new prototype. The good performance of the proposed double façade solution is confirmed, since the validation shows a considerable energy saving while preserving and even improving interior comfort. This work presents all the processes of the solution validation, describes some of the problems faced, and represents an example of a kind of validation that is often not considered in a simulation project.

  14. Computational engineering

    CERN Document Server

    2014-01-01

    The book presents state-of-the-art works in computational engineering. Focus is on mathematical modeling, numerical simulation, experimental validation and visualization in engineering sciences. In particular, the following topics are presented: constitutive models and their implementation into finite element codes, numerical models in nonlinear elasto-dynamics including seismic excitations, multiphase models in structural engineering and multiscale models of materials systems, sensitivity and reliability analysis of engineering structures, the application of scientific computing in urban water management and hydraulic engineering, and the application of genetic algorithms for the registration of laser scanner point clouds.

  15. Investigation of a two-phase nozzle flow and validation of several computer codes by the experimental data

    International Nuclear Information System (INIS)

    Kedziur, F.

    1980-03-01

    Stationary experiments with a convergent nozzle are performed in order to validate advanced two-phase computer codes, which find application in the blowdown phase of a loss-of-coolant accident (LOCA). The steam/water flow covers a broad variety of initial conditions: the pressure varies between 2 and 13 MPa and the void fraction between 0 (subcooled) and about 80%, and a great number of subcritical as well as critical experiments with different flow patterns is investigated. Additional air/water experiments serve to separate phase transition effects. The transient acceleration of the fluid in the LOCA case is simulated by a local acceleration in the experiments. The layout of the nozzle and the applied measurement technique allow for separate testing of physical models and the determination of empirical model parameters, respectively: in the four codes DUESE, DRIX-20, RELAP4/MOD6 and STRUYA the models - if they exist - for slip between the phases, thermodynamic non-equilibrium, pipe friction and critical mass flow rate are validated and criticised in comparison with the experimental data, and the corresponding model parameters are determined. The parameters are essentially a function of the void fraction. (orig.) [de

  16. Fast Running Urban Dispersion Model for Radiological Dispersal Device (RDD) Releases: Model Description and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Gowardhan, Akshay [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Neuscamman, Stephanie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Donetti, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Walker, Hoyt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Belles, Rich [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Eme, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Homann, Steven [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Simpson, Matthew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC); Nasstrom, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). National Atmospheric Release Advisory Center (NARAC)

    2017-05-24

    Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds-Averaged Navier-Stokes (RANS) mode with a run time of several minutes, or in a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).

  17. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Directory of Open Access Journals (Sweden)

    Shankarjee Krishnamoorthi

    Full Text Available We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.

  18. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Science.gov (United States)

    Krishnamoorthi, Shankarjee; Perotti, Luigi E; Borgstrom, Nils P; Ajijola, Olujimi A; Frid, Anna; Ponnaluri, Aditya V; Weiss, James N; Qu, Zhilin; Klug, William S; Ennis, Daniel B; Garfinkel, Alan

    2014-01-01

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.

  19. Validation of the What Matters Index: A brief, patient-reported index that guides care for chronic conditions and can substitute for computer-generated risk models.

    Science.gov (United States)

    Wasson, John H; Ho, Lynn; Soloway, Laura; Moore, L Gordon

    2018-01-01

    Current health care delivery relies on complex, computer-generated risk models constructed from insurance claims and medical record data. However, these models produce inaccurate predictions of risk levels for individual patients, do not explicitly guide care, and undermine health management investments in many patients at lesser risk. Therefore, this study prospectively validates a concise patient-reported risk assessment that addresses these inadequacies of computer-generated risk models. Five measures with well-documented impacts on the use of health services are summed to create a "What Matters Index." These measures are: 1) insufficient confidence to self-manage health problems, 2) pain, 3) bothersome emotions, 4) polypharmacy, and 5) adverse medication effects. We compare the sensitivity and predictive values of this index with two representative risk models in a population of 8619 Medicaid recipients. The patient-reported "What Matters Index" and the conventional risk models are found to exhibit similar sensitivities and predictive values for subsequent hospital or emergency room use. The "What Matters Index" is also reliable: akin to its performance during development, for patients with index scores of 1, 2, and ≥3, the odds ratios (with 95% confidence intervals) for subsequent hospitalization within 1 year, relative to patients with a score of 0, are 1.3 (1.1-1.6), 2.0 (1.6-2.4), and 3.4 (2.9-4.0), respectively; for emergency room use, the corresponding odds ratios are 1.3 (1.1-1.4), 1.9 (1.6-2.1), and 2.9 (2.6-3.3). Similar findings were replicated among smaller populations of 1061 mostly older patients from nine private practices and 4428 Medicaid patients without chronic conditions. In contrast to complex computer-generated risk models, the brief patient-reported "What Matters Index" immediately and unambiguously identifies fundamental, remediable needs for each patient and more sensibly directs the delivery of services to patient categories based on
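
    Since the index is simply the sum of the five patient-reported items named in the abstract, it can be sketched in a few lines; the function name, the boolean encoding of each item, and the example patient below are illustrative assumptions.

```python
def what_matters_index(confidence_low, pain, bothersome_emotions,
                       polypharmacy, adverse_med_effects):
    """Sum the five patient-reported items described in the abstract.

    Each argument is a boolean indicating whether the item applies to the
    patient; the index is simply the number of items present.
    """
    items = [confidence_low, pain, bothersome_emotions,
             polypharmacy, adverse_med_effects]
    return sum(bool(item) for item in items)

# Example: a patient reporting pain and polypharmacy scores 2;
# patients scoring >= 3 formed the highest-risk group in the study.
score = what_matters_index(False, True, False, True, False)
print(score)  # 2
```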

  20. Overhead Crane Computer Model

    Science.gov (United States)

    Enin, S. S.; Omelchenko, E. Y.; Fomin, N. V.; Beliy, A. V.

    2018-03-01

    The paper describes a computer model of an overhead crane system. The modeled overhead crane consists of hoisting, trolley and crane mechanisms as well as a two-axis payload system. The differential equations of motion for these mechanisms, derived via Lagrange equations of the second kind, allow the overhead crane computer model to be built. The computer model was implemented in Matlab. Transients of position, linear speed and motor torque were simulated for the trolley and crane mechanisms, and transients of payload sway with respect to the vertical axis were obtained. The paper presents the trajectory of the trolley mechanism during simultaneous operation with the crane mechanism, as well as the two-axis trajectory of the payload. The computer model of the overhead crane is a useful tool for studying positioning control and anti-sway control systems.
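
    A minimal planar sketch of this kind of model: a trolley driving a pendulum payload, with the equations of motion obtained from Lagrange's equations of the second kind and integrated numerically. The paper's Matlab model also covers hoisting and the crane (bridge) axis; the parameters and force profile below are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the paper)
M, m, l, g = 20.0, 5.0, 2.0, 9.81   # trolley mass, payload mass, cable length, gravity

def crane_rhs(t, y):
    """State y = [x, x_dot, theta, theta_dot]; F(t) is the trolley driving force."""
    x, v, th, w = y
    F = 40.0 if t < 2.0 else 0.0          # simple bang-off force profile
    # Mass matrix and right-hand side from Lagrange's equations (2nd kind)
    A = np.array([[M + m,       m * l * np.cos(th)],
                  [np.cos(th),  l                 ]])
    b = np.array([F + m * l * w**2 * np.sin(th),
                  -g * np.sin(th)])
    acc = np.linalg.solve(A, b)           # [x_ddot, theta_ddot]
    return [v, acc[0], w, acc[1]]

sol = solve_ivp(crane_rhs, (0.0, 10.0), [0.0, 0.0, 0.0, 0.0], max_step=0.01)
print("final trolley position:", sol.y[0, -1])
print("max payload sway (rad):", np.abs(sol.y[2]).max())
```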

  1. Computer model of the MFTF-B neutral beam Accel dc power supply

    International Nuclear Information System (INIS)

    Wilson, J.H.

    1983-01-01

    Using the SCEPTRE circuit modeling code, a computer model was developed for the MFTF Neutral Beam Power Supply System (NBPSS) Accel dc Power Supply (ADCPS). The ADCPS provides 90 kV, 88 A, to the Accel Modulator. Because of the complex behavior of the power supply, use of the computer model is necessary to adequately understand the power supply's behavior over a wide range of load conditions and faults. The model developed includes all the circuit components and parameters, and some of the stray values. The model has been well validated for transients with times on the order of milliseconds, and with one exception, for steady-state operation. When using a circuit modeling code for a system with a wide range of time constants, it can become impossible to obtain good solutions for all time ranges at once. The present model concentrates on the millisecond-range transients because the compensating capacitor bank tends to isolate the power supply from the load for faster transients. Attempts to include stray circuit elements with time constants in the microsecond and shorter range have had little success because of huge increases in computing time that result. The model has been successfully extended to include the accel modulator

  2. Tracer travel time and model validation

    International Nuclear Information System (INIS)

    Tsang, Chin-Fu.

    1988-01-01

    The performance assessment of a nuclear waste repository demands much more than the safety evaluation of civil constructions such as dams, or the resource evaluation of a petroleum or geothermal reservoir. It involves the estimation of low-probability (low-concentration) radionuclide transport extrapolated thousands of years into the future. Thus, models used to make these estimates need to be carefully validated. A number of recent efforts have been devoted to the study of this problem. Some general comments on model validation were given by Tsang. The present paper discusses some issues of validation with regard to radionuclide transport. 5 refs

  3. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  4. A novel cost based model for energy consumption in cloud computing.

    Science.gov (United States)

    Horri, A; Dastghaibyfard, Gh

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers also need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. The proposed model takes cache interference costs into account; these costs depend on the size of the data. The model was implemented in the CloudSim simulator, and the related simulation results indicate that the energy consumption may be considerable and that it can vary with parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment.

  5. ENEL overall PWR plant models and neutronic integrated computing systems

    International Nuclear Information System (INIS)

    Pedroni, G.; Pollachini, L.; Vimercati, G.; Cori, R.; Pretolani, F.; Spelta, S.

    1987-01-01

    To support the design activity of the Italian nuclear energy program for the construction of pressurized water reactors, the Italian Electricity Board (ENEL) needs to verify the design as a whole (that is, the nuclear steam supply system and balance of plant) both in steady-state operation and in transient. The ENEL has therefore developed two computer models to analyze both operational and incidental transients. The models, named STRIP and SFINCS, perform the analysis of the nuclear as well as the conventional part of the plant (the control system being properly taken into account). The STRIP model has been developed by means of the French (Electricite de France) modular code SICLE, while SFINCS is based on the Italian (ENEL) modular code LEGO. STRIP validation was performed with respect to Fessenheim French power plant experimental data. Two significant transients were chosen: load step and total load rejection. SFINCS validation was performed with respect to Saint-Laurent French power plant experimental data and also by comparing the SFINCS-STRIP responses

  6. Computer Games as Virtual Environments for Safety-Critical Software Validation

    Directory of Open Access Journals (Sweden)

    Štefan Korečko

    2017-01-01

    Full Text Available Computer games have become an inseparable part of everyday life in modern society, and the time people spend playing them every day is increasing. This trend has prompted noticeable research activity focused on utilizing the time spent playing in a meaningful way, for example to help solve scientific problems or tasks related to computer systems development. In this paper we present one contribution to this activity: a software system consisting of a modified version of the Open Rails train simulator and an application called TS2JavaConn, which allows separately developed software controllers to be used with the simulator. The system is intended for the validation of controllers developed by formal methods. The paper describes the overall architecture of the system and the operation of its components. It also compares the system with other approaches to the purposeful utilization of computer games, specifies suitable formal methods and illustrates its intended use with an example.

  7. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This proposal will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop novel molecular dynamics methods to improve the efficiency of simulations of novel thermal barrier coating (TBC) materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new TBC systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  8. Independent validation testing of the FLAME computer code, Version 1.0

    International Nuclear Information System (INIS)

    Martian, P.; Chung, J.N.

    1992-07-01

    Independent testing of the FLAME computer code, Version 1.0, was conducted to determine if the code is ready for use in hydrological and environmental studies at Department of Energy sites. This report describes the technical basis, approach, and results of this testing. Validation tests (i.e., tests which compare field data to the computer-generated solutions) were used to determine the operational status of the FLAME computer code and were done on a qualitative basis through graphical comparisons of the experimental and numerical data. These tests were specifically designed to check: (1) correctness of the FORTRAN coding, (2) computational accuracy, and (3) suitability for simulating actual hydrologic conditions. This testing was performed using a structured evaluation protocol which consisted of: (1) independent applications, and (2) graduated difficulty of test cases. Three tests were performed, ranging in complexity from simple one-dimensional steady-state flow field problems under near-saturated conditions to two-dimensional transient flow problems with very dry initial conditions

  9. Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches

    Science.gov (United States)

    Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia

    2017-10-01

    With increasing levels of complexity and automation in automotive engineering, the simulation of safety-relevant Advanced Driver Assistance Systems (ADAS) places increasing accuracy demands on the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to cope with tyre temperatures and the resulting changes in tyre characteristics has risen significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To evaluate the range of application of the presented approaches with respect to further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, the focus is on computational time and the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements from a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 on an industrial flat test bench are used. Finally, the simulation results are compared with the measurement data.

  10. An agent-based computational model of the spread of tuberculosis

    International Nuclear Information System (INIS)

    De Espíndola, Aquino L; Bauch, Chris T; Troca Cabella, Brenno C; Martinez, Alexandre Souto

    2011-01-01

    In this work we propose an alternative model of the spread of tuberculosis (TB) and the emergence of drug resistance due to treatment with antibiotics. We implement the simulations using an agent-based computational approach in which the spatial structure is taken into account. The spread of tuberculosis occurs according to probabilities defined by the interactions among individuals. The model was validated by reproducing results already known from the literature, in which different treatment regimes yield the emergence of drug resistance. The different patterns of TB spread can be visualized at any time of the system's evolution. The implementation details as well as some results of this alternative approach are discussed.
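
    A minimal lattice sketch of an agent-based spread of this kind, in which infection is passed between neighbouring individuals with a fixed probability; the rates, the neighbourhood rule and the grid size are invented, and the antibiotic treatment and drug-resistance dynamics of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p_infect, p_recover, steps = 100, 0.05, 0.02, 200

# 0 = susceptible, 1 = infected; start with a handful of infected agents
grid = np.zeros((N, N), dtype=int)
grid[rng.integers(0, N, 10), rng.integers(0, N, 10)] = 1

for _ in range(steps):
    infected = grid == 1
    # Count infected neighbours (von Neumann neighbourhood, toroidal boundary)
    neighbours = (np.roll(infected, 1, 0) + np.roll(infected, -1, 0) +
                  np.roll(infected, 1, 1) + np.roll(infected, -1, 1))
    # A susceptible agent escapes infection only if every exposure fails
    p_escape = (1.0 - p_infect) ** neighbours
    new_infections = (~infected) & (rng.random((N, N)) > p_escape)
    recoveries = infected & (rng.random((N, N)) < p_recover)
    grid[new_infections] = 1
    grid[recoveries] = 0

print("infected fraction after", steps, "steps:", (grid == 1).mean())
```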

  11. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.

  12. Validation of transport models using additive flux minimization technique

    International Nuclear Information System (INIS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-01-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile
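
    A toy one-dimensional illustration of the additive-flux idea, assuming a slab with a constant source and an analytic steady-state profile: the profile predicted with a fixed "model" diffusivity is matched to a synthetic "experimental" profile by optimising an additional diffusivity. This stands in for what FACETS::Core and DAKOTA do at much larger scale; the geometry and all numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# 1D slab, constant particle source S, zero flux at x = 0, fixed density at the edge.
x = np.linspace(0.0, 1.0, 50)
S, n_edge = 1.0, 0.5

def density_profile(D_total):
    """Analytic steady-state profile for a constant total diffusivity."""
    return n_edge + S * (1.0 - x**2) / (2.0 * D_total)

# Synthetic "experimental" profile generated with an unknown true diffusivity
n_exp = density_profile(0.8) + np.random.default_rng(1).normal(0, 0.005, x.size)

D_model = 0.3  # diffusivity predicted by the transport model under test

def misfit(D_add):
    return np.sum((density_profile(D_model + D_add) - n_exp) ** 2)

result = minimize_scalar(misfit, bounds=(0.0, 2.0), method="bounded")
print("additional diffusivity needed:", result.x)   # close to 0.5 = 0.8 - 0.3
```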

  13. Computationally Modeling Interpersonal Trust

    Directory of Open Access Journals (Sweden)

    Jin Joo eLee

    2013-12-01

    Full Text Available We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind’s readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to incorporate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
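
    A compact sketch of the sequence-likelihood computation behind such hidden Markov models: given initial, transition and emission probabilities for a "trusting" and a "distrusting" model, the scaled forward algorithm scores an observed sequence of discretised nonverbal cues under each. The cue alphabet and all probabilities below are invented, not the paper's learned parameters.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-output HMM."""
    alpha = start * emit[:, obs[0]]
    scale = alpha.sum()
    log_like = np.log(scale)
    alpha = alpha / scale
    for symbol in obs[1:]:
        alpha = (alpha @ trans) * emit[:, symbol]
        scale = alpha.sum()
        log_like += np.log(scale)
        alpha = alpha / scale
    return log_like

# Invented cue alphabet: 0 = lean forward, 1 = face touch, 2 = arms crossed, 3 = lean back
trusting = dict(start=np.array([0.7, 0.3]),
                trans=np.array([[0.8, 0.2],
                                [0.3, 0.7]]),
                emit=np.array([[0.5, 0.2, 0.2, 0.1],
                               [0.2, 0.3, 0.2, 0.3]]))
distrusting = dict(start=np.array([0.4, 0.6]),
                   trans=np.array([[0.6, 0.4],
                                   [0.2, 0.8]]),
                   emit=np.array([[0.3, 0.2, 0.3, 0.2],
                                  [0.1, 0.3, 0.3, 0.3]]))

observed = [0, 1, 2, 2, 3, 2]   # one interaction, discretised into cue symbols
for name, params in (("trusting", trusting), ("distrusting", distrusting)):
    print(name, forward_log_likelihood(observed, **params))
# The model with the higher log-likelihood gives the predicted trust level.
```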

  14. An approach to computing discrete adjoints for MPI-parallelized models applied to Ice Sheet System Model 4.11

    Directory of Open Access Journals (Sweden)

    E. Larour

    2016-11-01

    Full Text Available Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (from radar, gravity, and altimetry observations mainly). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model (ISSM), written in C++ and designed for parallel execution with MPI. We present the adaptations required in the way the software is designed and written, but also generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of the ISSM. We present a comprehensive approach to (1) carry out type changing through the ISSM, hence facilitating operator overloading, (2) bind to external solvers such as MUMPS and GSL-LU, and (3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the northeastern Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential to enable a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica.

  15. Online self-report questionnaire on computer work-related exposure (OSCWE): validity and internal consistency.

    Science.gov (United States)

    Mekhora, Keerin; Jalayondeja, Wattana; Jalayondeja, Chutima; Bhuanantanondh, Petcharatana; Dusadiisariyavong, Asadang; Upiriyasakul, Rujiret; Anuraktam, Khajornyod

    2014-07-01

    To develop an online, self-report questionnaire on computer work-related exposure (OSCWE) and to determine the internal consistency, face and content validity of the questionnaire. The online, self-report questionnaire was developed to determine the risk factors related to musculoskeletal disorders in computer users. It comprised five domains: personal, work-related, work environment, physical health and psychosocial factors. The questionnaire's content was validated by an occupational medical doctor and three physical therapy lecturers involved in ergonomic teaching. Twenty-five lay people examined the feasibility of computer administration and the user-friendliness of the language. The item correlations in each domain were analyzed using internal consistency (Cronbach's alpha; alpha). The content of the questionnaire was considered congruent with the testing purposes. Eight hundred and thirty-five computer users at the PTT Exploration and Production Public Company Limited registered for the online self-report questionnaire. The internal consistency of the five domains was: personal (alpha = 0.58), work-related (alpha = 0.348), work environment (alpha = 0.72), physical health (alpha = 0.68) and psychosocial factors (alpha = 0.93). The findings suggested that the OSCWE had acceptable internal consistency for the work environment and psychosocial factors. The OSCWE is available for use in population-based survey research among computer office workers.
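
    Cronbach's alpha for one domain can be reproduced directly from the item responses; a small sketch with fabricated Likert data (the study's actual item sets are not given in the record).

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items of one domain."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Fabricated 5-point Likert responses for a 4-item domain (6 respondents)
responses = [[4, 4, 5, 4],
             [2, 3, 2, 2],
             [5, 4, 4, 5],
             [3, 3, 3, 2],
             [1, 2, 1, 2],
             [4, 5, 4, 4]]
print(round(cronbach_alpha(responses), 2))
```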

  16. Validity of questionnaire self-reports on computer, mouse and keyboard usage during a four-week period

    DEFF Research Database (Denmark)

    Mikkelsen, S.; Vilstrup, Imogen; Lassen, C. F.

    2007-01-01

    OBJECTIVE: To examine the validity and potential biases in self-reports of computer, mouse and keyboard usage times, compared with objective recordings. METHODS: A study population of 1211 people was asked in a questionnaire to estimate the average time they had worked with computer, mouse and keyboard during the past four working weeks. During the same period, a software program recorded these activities objectively. The study was part of a one-year follow-up study from 2000-1 of musculoskeletal outcomes among Danish computer workers. RESULTS: Self-reports on computer, mouse and keyboard usage times were positively associated with objectively measured activity, but the validity was low. Self-reports explained only between a quarter and a third of the variance of objectively measured activity, and were even lower for one measure (keyboard time). Self-reports overestimated usage times...

  17. Advanced computational modelling for drying processes – A review

    International Nuclear Information System (INIS)

    Defraeye, Thijs

    2014-01-01

    determination, model validation, more complete multiphysics models and more energy-oriented and integrated “nexus” modelling of the dehydration process. Development of more user-friendly, specialised software is paramount to bridge the current gap between modelling in research and industry by making it more attractive. These advanced computational methods show promising perspectives to aid developing next-generation sustainable and green drying technology, tailored to the new requirements for the future society, and are expected to play an increasingly important role in drying technology R and D

  18. Linear Unlearning for Cross-Validation

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Larsen, Jan

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. In this paper we suggest linear unlearning of examples as an approach to approximative cross-validation. Further, we discuss...... time series prediction benchmark demonstrate the potential of the linear unlearning technique...
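
    For ordinary linear regression, leave-one-out predictions can be obtained without any retraining via the hat matrix; this is the classical analogue of the "unlearning" idea the record applies to neural networks, shown here as a sketch on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 3
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, p))])   # design matrix with intercept
beta_true = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

# Full-sample least-squares fit
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat

# Leverages h_ii from the hat matrix H = X (X'X)^-1 X'
H = X @ np.linalg.solve(X.T @ X, X.T)
leverage = np.diag(H)

# Leave-one-out residuals without refitting n separate models
loo_residuals = residuals / (1.0 - leverage)
press = np.sum(loo_residuals ** 2)   # PRESS = leave-one-out sum of squared errors
print("LOO estimate of generalization error:", press / n)
```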

  19. Assessment model validity document FARF31

    International Nuclear Information System (INIS)

    Elert, Mark; Gylling Bjoern; Lindgren, Maria

    2004-08-01

    The prime goal of model validation is to build confidence in the model concept and in the model being fit for its intended purpose. In other words: Does the model predict transport in fractured rock adequately to be used in repository performance assessments? Are the results reasonable for the type of modelling tasks the model is designed for? Commonly, in performance assessments a large number of realisations of flow and transport is made to cover the associated uncertainties. Thus, the flow and transport including radioactive chain decay are preferably calculated in the same model framework. A rather sophisticated concept is necessary to be able to model flow and radionuclide transport in the near field and far field of a deep repository, also including radioactive chain decay. In order to avoid excessively long computational times there is a need for well-based simplifications. For this reason, the far field code FARF31 is made relatively simple, and calculates transport by using averaged entities to represent the most important processes. FARF31 has been shown to be suitable for the performance assessments within the SKB studies, e.g. SR 97. Among the advantages are that it is a fast, simple and robust code, which enables handling of many realisations with a wide spread in parameters in combination with chain decay of radionuclides. Being a component in the model chain PROPER, it is easy to assign statistical distributions to the input parameters. Due to the formulation of the advection-dispersion equation in FARF31 it is possible to perform the groundwater flow calculations separately. The basis for the modelling is a stream tube, i.e. a volume of rock including fractures with flowing water, with the walls of the imaginary stream tube defined by streamlines. The transport within the stream tube is described using a dual porosity continuum approach, where it is assumed that rock can be divided into two distinct domains with different types of porosity

  20. Interactive computer modeling of combustion chemistry and coalescence-dispersion modeling of turbulent combustion

    Science.gov (United States)

    Pratt, D. T.

    1984-01-01

    An interactive computer code for simulation of a high-intensity turbulent combustor as a single point inhomogeneous stirred reactor was developed from an existing batch processing computer code CDPSR. The interactive CDPSR code was used as a guide for interpretation and direction of DOE-sponsored companion experiments utilizing Xenon tracer with optical laser diagnostic techniques to experimentally determine the appropriate mixing frequency, and for validation of CDPSR as a mixing-chemistry model for a laboratory jet-stirred reactor. The coalescence-dispersion model for finite rate mixing was incorporated into an existing interactive code AVCO-MARK I, to enable simulation of a combustor as a modular array of stirred flow and plug flow elements, each having a prescribed finite mixing frequency, or axial distribution of mixing frequency, as appropriate. The speed and reliability of the batch kinetics integrator code CREKID were further increased by rewriting it in vectorized form for execution on a vector or parallel processor, and by incorporating numerical techniques which enhance execution speed by permitting specification of a very low accuracy tolerance.

  1. Novel approach for dam break flow modeling using computational intelligence

    Science.gov (United States)

    Seyedashraf, Omid; Mehrabi, Mohammad; Akhtari, Ali Akbar

    2018-04-01

    A new methodology based on the computational intelligence (CI) system is proposed and tested for modeling the classic 1D dam-break flow problem. The reason to seek a new solution lies in the shortcomings of the existing analytical and numerical models. These include the difficulty of using the exact solutions and the unwanted fluctuations which arise in the numerical results. In this research, the application of the radial-basis-function (RBF) and multi-layer-perceptron (MLP) systems is detailed for the solution of twenty-nine dam-break scenarios. The models are developed using seven variables, i.e. the length of the channel, the depths of the up- and downstream sections, time, and distance as the inputs. Moreover, the depths and velocities of each computational node in the flow domain are considered as the model outputs. The models are validated against the analytical solution and the Lax-Wendroff and MacCormack FDM schemes. The findings indicate that the employed CI models are able to replicate the overall shape of the shock and rarefaction waves. Furthermore, the MLP system outperforms RBF and the tested numerical schemes. A new monolithic equation is proposed based on the best-fitting model, which can be used as an efficient alternative to the existing piecewise analytic equations.
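
    A sketch of the general idea: train a multi-layer perceptron on sampled dam-break solutions and then query it at arbitrary (x, t). To stay self-contained, the classic Ritter dry-bed solution is used here as the data source, whereas the paper trains on wet-bed scenarios with the up- and downstream depths and channel length as extra inputs; the library, network size and sampling ranges are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

g, h0 = 9.81, 1.0
c0 = np.sqrt(g * h0)

def ritter_depth(x, t):
    """Classic Ritter dry-bed dam-break depth; dam initially at x = 0."""
    xi = x / t
    return np.where(xi < -c0, h0,
           np.where(xi > 2 * c0, 0.0, (2 * c0 - xi) ** 2 / (9 * g)))

rng = np.random.default_rng(0)
x = rng.uniform(-10.0, 10.0, 5000)
t = rng.uniform(0.5, 3.0, 5000)
X = np.column_stack([x, t])
y = ritter_depth(x, t)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out samples:", model.score(X_test, y_test))
```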

  2. Development and Validation of a Polarimetric-MCScene 3D Atmospheric Radiation Model

    Energy Technology Data Exchange (ETDEWEB)

    Berk, Alexander [Spectral Sciences, Inc., Burlington, MA (United States); Hawes, Frederick [Spectral Sciences, Inc., Burlington, MA (United States); Fox, Marsha [Spectral Sciences, Inc., Burlington, MA (United States)

    2016-03-15

    Polarimetric measurements can substantially enhance the ability of both spectrally resolved and single band imagery to detect the proliferation of weapons of mass destruction, providing data for locating and identifying facilities, materials, and processes of undeclared and proliferant nuclear weapons programs worldwide. Unfortunately, models do not exist that efficiently and accurately predict spectral polarized signatures for the materials of interest embedded in complex 3D environments. Having such a model would enable one to test hypotheses and optimize both the enhancement of scene contrast and the signal processing for spectral signature extraction. The Phase I set the groundwork for development of fully validated polarimetric spectral signature and scene simulation models. This has been accomplished 1. by (a) identifying and downloading state-of-the-art surface and atmospheric polarimetric data sources, (b) implementing tools for generating custom polarimetric data, and (c) identifying and requesting US Government funded field measurement data for use in validation; 2. by formulating an approach for upgrading the radiometric spectral signature model MODTRAN to generate polarimetric intensities through (a) ingestion of the polarimetric data, (b) polarimetric vectorization of existing MODTRAN modules, and (c) integration of a newly developed algorithm for computing polarimetric multiple scattering contributions; 3. by generating an initial polarimetric model that demonstrates calculation of polarimetric solar and lunar single scatter intensities arising from the interaction of incoming irradiances with molecules and aerosols; 4. by developing a design and implementation plan to (a) automate polarimetric scene construction and (b) efficiently sample polarimetric scattering and reflection events, for use in a to be developed polarimetric version of the existing first-principles synthetic scene simulation model, MCScene; and 5. by planning a validation field

  3. A methodology for the design of experiments in computational intelligence with multiple regression models.

    Science.gov (United States)

    Fernandez-Lozano, Carlos; Gestal, Marcos; Munteanu, Cristian R; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithms. Furthermore, our results with three real complex datasets report different best models than with the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as from other fields, such as for bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.

  4. A methodology for the design of experiments in computational intelligence with multiple regression models

    Directory of Open Access Journals (Sweden)

    Carlos Fernandez-Lozano

    2016-12-01

    Full Text Available The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithms. Furthermore, our results with three real complex datasets report different best models than with the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as from other fields, such as for bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.
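
    A small sketch of the paired comparison this methodology calls for (implemented by the authors as the R package RRegrs): two regression models are scored on identical cross-validation folds and the per-fold differences are tested for significance. The dataset, the two models and the choice of the Wilcoxon signed-rank test are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score
from scipy.stats import wilcoxon

X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)
folds = KFold(n_splits=10, shuffle=True, random_state=0)

# Score both models on identical folds so the comparison is paired
scores_a = cross_val_score(Ridge(alpha=1.0), X, y, cv=folds, scoring="r2")
scores_b = cross_val_score(RandomForestRegressor(random_state=0), X, y,
                           cv=folds, scoring="r2")

stat, p_value = wilcoxon(scores_a, scores_b)
print("mean R^2:", scores_a.mean(), scores_b.mean())
print("paired Wilcoxon p-value:", p_value)
```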

  5. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  6. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
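
    A condensed sketch of the Monte Carlo procedure outlined above: systematic and random uncertainties are sampled for two quantities of interest, the covariance matrix of the comparison error is estimated from the samples, and a chi-square threshold approximates the 95% constant-probability contour against which an observed model-versus-experiment error is judged. All uncertainty magnitudes below are invented.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n_samples, n_quantities = 20000, 2

# Assumed 1-sigma systematic (shared) and random (independent) uncertainties
sys_sigma = np.array([0.8, 0.5])      # e.g. a calibration bias affecting both quantities
rand_sigma = np.array([0.4, 0.6])     # repeatability of each measurement

# Each Monte Carlo sample is one realisation of the comparison error due to
# measurement/model uncertainty alone (zero-mean by construction).
systematic = rng.normal(size=(n_samples, 1)) * sys_sigma    # fully correlated part
random_part = rng.normal(size=(n_samples, n_quantities)) * rand_sigma
samples = systematic + random_part

cov = np.cov(samples, rowvar=False)   # covariance matrix of the comparison error

# Observed comparison error: experiment minus model prediction
observed_error = np.array([1.1, -0.4])
d2 = observed_error @ np.linalg.solve(cov, observed_error)  # squared Mahalanobis distance
threshold = chi2.ppf(0.95, df=n_quantities)
print("inside approximate 95% noise-level contour:", d2 <= threshold)
```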

  7. Learning-based computing techniques in geoid modeling for precise height transformation

    Science.gov (United States)

    Erol, B.; Erol, S.

    2013-03-01

    Precise determination of the local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical leveling technique is too laborious. A geoid model can be accurately obtained by employing properly distributed benchmarks having GNSS and leveling observations and an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study attempts an evaluation of learning-based computing algorithms: artificial neural networks (ANNs), the adaptive network-based fuzzy inference system (ANFIS) and especially the wavelet neural networks (WNNs) approach in geoid surface approximation. These algorithms were developed in parallel with advances in computer technologies and have recently been used for solving complex nonlinear problems in many applications. However, they are rather new in dealing with the precise modeling of the Earth's gravity field. In the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed considering the validation results of the geoid models at the observation points. In conclusion, the ANFIS and WNN methods revealed higher prediction accuracies compared to the ANN and MPRE methods. Besides the prediction capabilities, these methods were also compared and discussed from the practical point of view in the conclusions.
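
    The simplest of the compared approaches, multivariable polynomial regression (MPRE), can be sketched in a few lines: fit a low-order surface to geoid undulations at benchmark points and validate on held-out benchmarks. The coordinates, undulations and polynomial order below are synthetic illustrations, not Istanbul network data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic benchmarks: planar coordinates (km) and geoid undulation N (m)
x, y = rng.uniform(0, 50, 200), rng.uniform(0, 50, 200)
N = 36.0 + 0.02 * x - 0.015 * y + 1e-4 * x * y + rng.normal(0, 0.02, 200)

def design(x, y):
    """Second-order bivariate polynomial surface (a common MPRE choice)."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

# Fit on 150 benchmarks, validate on the remaining 50
train, test = slice(0, 150), slice(150, None)
coeffs, *_ = np.linalg.lstsq(design(x[train], y[train]), N[train], rcond=None)

residuals = design(x[test], y[test]) @ coeffs - N[test]
print("validation RMSE (m):", np.sqrt(np.mean(residuals**2)))
```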

  8. Aeroelastic modelling without the need for excessive computing power

    Energy Technology Data Exchange (ETDEWEB)

    Infield, D. [Loughborough Univ., Centre for Renewable Energy Systems Technology, Dept. of Electronic and Electrical Engineering, Loughborough (United Kingdom)

    1996-09-01

    The aeroelastic model presented here was developed specifically to represent a wind turbine manufactured by Northern Power Systems, which features a passive pitch control mechanism. It was considered that this particular turbine, which also has low-solidity flexible blades and is free yawing, would provide a stringent test of modelling approaches. It was believed that blade element aerodynamic modelling would not be adequate to properly describe the combination of yawed flow, dynamic inflow and unsteady aerodynamics; consequently a wake modelling approach was adopted. In order to keep computation time limited, a highly simplified, semi-free wake approach (developed in previous work) was used. A similarly simple structural model was adopted, with up to six degrees of freedom in total. In order to take account of blade (flapwise) flexibility, a simple finite element sub-model is used. Good quality data from the turbine have recently been collected and it is hoped to undertake model validation in the near future. (au)

  9. Patient-Specific Computational Modeling

    CERN Document Server

    Peña, Estefanía

    2012-01-01

    This book addresses patient-specific modeling. It integrates computational modeling, experimental procedures, clinical image segmentation and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine and bioengineering. Specific areas of interest include cardiovascular problems, ocular and muscular systems, and soft-tissue modeling. Patient-specific modeling has been the subject of serious research over the last seven years; interest in the area is continually growing, and it is expected to develop further in the near future.

  10. A low earth orbit dynamic model for the proton anisotropy validation

    Science.gov (United States)

    Badavi, Francis F.

    2011-11-01

    Ionizing radiation measurements at low earth orbit (LEO) form the ideal tool for the experimental validation of radiation environmental models, nuclear transport code algorithms and nuclear reaction cross sections. Indeed, prior measurements on the space transportation system (STS; shuttle) have provided vital information impacting both the environmental models and the nuclear transport code development by requiring dynamic models of the LEO environment. Previous studies using computer aided design (CAD) models of the international space station (ISS) have demonstrated that the dosimetric prediction for a spacecraft at LEO requires the description of an environmental model with accurate anisotropic as well as dynamic behavior. This paper describes such a model for the trapped proton. The described model is a component of a suite of codes collectively named GEORAD (GEOmagnetic RADiation) which computes cutoff rigidity, trapped proton and trapped electron environments. The web version of GEORAD is named OLTARIS (On-line Tool for the Assessment of Radiation in Space). The GEORAD suite is applicable to radiation environment prediction at LEO, medium earth orbit (MEO) and geosynchronous earth orbit (GEO) during quiet solar periods. GEORAD is aimed at the study of the long-term effects of the trapped environment and therefore does not account for any short-term external field contribution due to solar activity. Since the paper concentrates on LEO protons only, it presents the validation of the trapped proton model within GEORAD against reported measurements from the compact environment anomaly sensor (CEASE) science instrument package, flown onboard the tri-service experiment-5 (TSX-5) satellite during the period June 2000 to July 2006. The spin-stabilized satellite was flown in a 410 × 1710 km, 69° inclination elliptical orbit, allowing it to be exposed to a broad range of the LEO regime. The paper puts particular emphasis on the validation of the

  11. Model of nuclear reactor type VVER-1000/V-320 built by computer code ATHLET-CD

    International Nuclear Information System (INIS)

    Georgiev, Yoto; Filipov, Kalin; Velev, Vladimir

    2014-01-01

    A model of a nuclear reactor of type VVER-1000/V-320 developed for the computer code ATHLET-CD2.1A is presented. Validation of the model has been performed; the analysis of a station blackout scenario with LOCA on the fourth cold leg is shown. After the calculation was completed, the results were checked by comparison with the results from the computer codes ATHLET-2.1A, ASTEC-2.1 and RELAP5mod3.2

  12. Condensation of steam in horizontal pipes: model development and validation

    International Nuclear Information System (INIS)

    Szijarto, R.

    2015-01-01

    This thesis, submitted to the Swiss Federal Institute of Technology ETH in Zurich, presents the development and validation of a model for the condensation of steam in horizontal pipes. Condensation models were introduced and developed particularly for application in the emergency cooling system of a Gen-III+ boiling water reactor. Such an emergency cooling system consists of slightly inclined horizontal pipes, which are immersed in a cold water tank. The pipes are connected to the reactor pressure vessel. They are responsible for a fast depressurization of the reactor core in the case of an accident. Condensation in horizontal pipes was investigated with both one-dimensional system codes (RELAP5) and three-dimensional computational fluid dynamics software (ANSYS FLUENT). The performance of the RELAP5 code was not sufficient for transient condensation processes. Therefore, a mechanistic model was developed and implemented. Four models were tested on the LAOKOON facility, which analysed direct contact condensation in a horizontal duct

  13. Regorafenib effects on human colon carcinoma xenografts monitored by dynamic contrast-enhanced computed tomography with immunohistochemical validation.

    Directory of Open Access Journals (Sweden)

    Clemens C Cyran

    To investigate dynamic contrast-enhanced computed tomography for monitoring the effects of regorafenib on experimental colon carcinomas in rats by quantitative assessments of tumor microcirculation parameters with immunohistochemical validation. Colon carcinoma xenografts (HT-29) implanted subcutaneously in female athymic rats (n = 15) were imaged at baseline and after a one-week treatment with regorafenib by dynamic contrast-enhanced computed tomography (128-slice dual-source computed tomography). The therapy group (n = 7) received regorafenib daily (10 mg/kg bodyweight). Quantitative parameters of tumor microcirculation (plasma flow, mL/100 mL/min), endothelial permeability (PS, mL/100 mL/min), and tumor vascularity (plasma volume, %) were calculated using a 2-compartment uptake model. Dynamic contrast-enhanced computed tomography parameters were validated with immunohistochemical assessments of tumor microvascular density (CD-31), tumor cell apoptosis (TUNEL), and proliferation (Ki-67). Regorafenib suppressed tumor vascularity (15.7±5.3 to 5.5±3.5%; p<0.05) and tumor perfusion (12.8±2.3 to 8.8±2.9 mL/100 mL/min; p = 0.063). Significantly lower microvascular density was observed in the therapy group (CD-31; 48±10 vs. 113±25, p<0.05). In regorafenib-treated tumors, significantly more apoptotic cells (TUNEL; 11844±2927 vs. 5097±3463, p<0.05) were observed. Dynamic contrast-enhanced computed tomography tumor perfusion and tumor vascularity correlated significantly (p<0.05) with microvascular density (CD-31; r = 0.84 and 0.66) and inversely with apoptosis (TUNEL; r = -0.66 and -0.71). Regorafenib significantly suppressed tumor vascularity (plasma volume) quantified by dynamic contrast-enhanced computed tomography in experimental colon carcinomas in rats with good-to-moderate correlations to an immunohistochemical gold standard. Tumor response biomarkers assessed by dynamic contrast-enhanced computed tomography may be a promising future

  14. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    Science.gov (United States)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

    Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy regarding the resulting textures.

  15. Development and validation of the computer program TNHXY

    International Nuclear Information System (INIS)

    Xolocostli M, V.; Valle G, E. del; Alonso V, G.

    2003-01-01

    This work describes the development and validation of the computer program TNHXY (Neutron Transport with Nodal Hybrid schemes in X Y geometry), which solves the discrete-ordinates neutron transport equations using a discontinuous Bi-Linear (DBiL) nodal hybrid method. One of the immediate applications of TNHXY is in the analysis of nuclear fuel assemblies, in particular those of BWRs. Its validation was carried out by reproducing some results for test or benchmark problems that some authors have solved using other numerical techniques. This ensures that the program will provide results with similar accuracy for other problems of the same type. To accomplish this, two benchmark problems have been solved. The first problem consists of a BWR fuel assembly in a 7x7 array, without and with a control rod. The results obtained with TNHXY are consistent with those reported for the TWOTRAN code. The second benchmark problem is a Mixed Oxide (MOX) fuel assembly in a 10x10 array. This last problem is known as the WPPR benchmark problem of the NEA Data Bank and the results are compared with those obtained with commercial codes like HELIOS, MCNP-4B and CPM-3. (Author)

  16. HRP's Healthcare Spin-Offs Through Computational Modeling and Simulation Practice Methodologies

    Science.gov (United States)

    Mulugeta, Lealem; Walton, Marlei; Nelson, Emily; Peng, Grace; Morrison, Tina; Erdemir, Ahmet; Myers, Jerry

    2014-01-01

    Spaceflight missions expose astronauts to novel operational and environmental conditions that pose health risks that are currently not well understood, and perhaps unanticipated. Furthermore, given the limited number of humans that have flown in long duration missions and beyond low Earth orbit, the amount of research and clinical data necessary to predict and mitigate these health and performance risks is limited. Consequently, NASA's Human Research Program (HRP) conducts research and develops advanced methods and tools to predict, assess, and mitigate potential hazards to the health of astronauts. In this light, NASA has explored the possibility of leveraging computational modeling since the 1970s as a means to elucidate the physiologic risks of spaceflight and develop countermeasures. Since that time, substantial progress has been realized in this arena through a number of HRP funded activities such as the Digital Astronaut Project (DAP) and the Integrated Medical Model (IMM). Much of this success can be attributed to HRP's endeavor to establish rigorous verification, validation, and credibility (VV&C) processes that ensure computational models and simulations (M&S) are sufficiently credible to address issues within their intended scope. This presentation summarizes HRP's activities in credibility of modeling and simulation, in particular through its outreach to the community of modeling and simulation practitioners. METHODS: The HRP requires that all M&S that can have a moderate to high impact on crew health or mission success be vetted in accordance with the NASA Standard for Models and Simulations, NASA-STD-7009 (7009) [5]. As this standard mostly focuses on engineering systems, the IMM and DAP have invested substantial efforts to adapt the processes established in this standard for their application to biological M&S, which is more prevalent in human health and performance (HHP) and space biomedical research and operations [6,7]. These methods have also generated

  17. IMPROVED COMPUTATIONAL NEUTRONICS METHODS AND VALIDATION PROTOCOLS FOR THE ADVANCED TEST REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg; Joseph W. Nielsen; Benjamin M. Chase; Ronnie K. Murray; Kevin A. Steuhm

    2012-04-01

    The Idaho National Laboratory (INL) is in the process of modernizing the various reactor physics modeling and simulation tools used to support operation and safety assurance of the Advanced Test Reactor (ATR). Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009 was successfully completed during 2011. This demonstration supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR fuel cycle management process beginning in 2012. On the experimental side of the project, new hardware was fabricated, measurement protocols were finalized, and the first four of six planned physics code validation experiments based on neutron activation spectrometry were conducted at the ATRC facility. Data analysis for the first three experiments, focused on characterization of the neutron spectrum in one of the ATR flux traps, has been completed. The six experiments will ultimately form the basis for a flexible, easily-repeatable ATR physics code validation protocol that is consistent with applicable ASTM standards.

  18. Blast Load Simulator Experiments for Computational Model Validation: Report 2

    Science.gov (United States)

    2017-02-01

    These experiments provide data for validating computational simulations of explosive events and their effects; the codes are continuously improving, but still require validation against experimental data. Reported results include ninety-five percent confidence intervals on measured peak pressure.

  19. Modeling Remote I/O versus Staging Tradeoff in Multi-Data Center Computing

    International Nuclear Information System (INIS)

    Suslu, Ibrahim H

    2014-01-01

    In multi-data center computing, data to be processed is not always local to the computation. This is a major challenge especially for data-intensive Cloud computing applications, since a large amount of data would need to be either moved to the local sites (staging) or accessed remotely over the network (remote I/O). Cloud application developers generally choose between staging and remote I/O intuitively, without making any scientific comparison specific to their application data access patterns, since there is no generic model available that they can use. In this paper, we propose a generic model for Cloud application developers to help them choose the most appropriate data access mechanism for their specific application workloads. We define the parameters that potentially affect the end-to-end performance of multi-data center Cloud applications which need to access large datasets over the network. To test and validate our models, we implemented a series of synthetic benchmark applications to simulate the most common data access patterns encountered in Cloud applications. We show that our model provides promising results in different settings with different parameters, such as network bandwidth, server and client capabilities, and data access ratio
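    A trade-off of this kind can be sketched in a few lines: stage the whole dataset once and read it locally on every pass, or read only the accessed fraction remotely on every pass. The sketch below is illustrative only and is not the model proposed in the paper; the parameters (bandwidth, local I/O rate, access ratio, number of passes) are assumptions based on the factors listed in the abstract.

```python
# Illustrative sketch (not the paper's actual model): compare total data access
# time for staging a dataset locally versus reading it remotely over the network.

def staging_time(dataset_gb, bandwidth_gbps, local_io_gbps, passes):
    """Move the whole dataset once, then read it locally for every pass."""
    transfer = dataset_gb * 8 / bandwidth_gbps            # seconds to stage
    local_reads = passes * dataset_gb * 8 / local_io_gbps
    return transfer + local_reads

def remote_io_time(dataset_gb, bandwidth_gbps, access_ratio, passes, latency_s=0.0):
    """Read only the accessed fraction of the data over the network on every pass."""
    per_pass = access_ratio * dataset_gb * 8 / bandwidth_gbps + latency_s
    return passes * per_pass

if __name__ == "__main__":
    # Hypothetical workload: 500 GB dataset, 10 Gb/s WAN, 40 Gb/s local disk,
    # 3 analysis passes touching 20% of the data each time.
    s = staging_time(500, 10, 40, passes=3)
    r = remote_io_time(500, 10, access_ratio=0.2, passes=3)
    print(f"staging:   {s:8.1f} s")
    print(f"remote IO: {r:8.1f} s")
    print("prefer", "staging" if s < r else "remote I/O")
```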

  20. Modeling of requirement specification for safety critical real time computer system using formal mathematical specifications

    International Nuclear Information System (INIS)

    Sankar, Bindu; Sasidhar Rao, B.; Ilango Sambasivam, S.; Swaminathan, P.

    2002-01-01

    Full text: Real time computer systems are increasingly used for safety critical supervision and control of nuclear reactors. Typical application areas are supervision of the reactor core against coolant flow blockage, supervision of clad hot spots, supervision of undesirable power excursions, power control and control logic for fuel handling systems. The most frequent cause of fault in safety critical real time computer systems is traced to fuzziness in the requirement specification. To ensure the specified safety, it is necessary to model the requirement specification of safety critical real time computer systems using formal mathematical methods. Modeling eliminates the fuzziness in the requirement specification and also helps to prepare the verification and validation schemes. Test data can be easily designed from the model of the requirement specification. Z and B are popular languages used for modeling the requirement specification. A typical safety critical real time computer system for supervising the reactor core of the prototype fast breeder reactor (PFBR) against flow blockage is taken as a case study. Modeling techniques and the actual model are explained in detail. The advantages of modeling for ensuring safety are summarized

  1. The PROMIS model to highlight the importance of the foetus to the validation of a pregnant woman model

    OpenAIRE

    AURIAULT, Florent; THOLLON, Lionel; PERES, Jérémie; DELOTTE, J; KAYVANTASH, K; BRUNET, Christian; BEHR, Michel

    2013-01-01

    Road accidents account for between 50% and 75% of trauma cases during pregnancy. This type of trauma can result in premature birth or even foetal loss. To analyse and understand the injury mechanisms in pregnant women involved in a car accident, several studies have proposed computational or physical tools to simulate accidents. Specific dummy and numerical models have been proposed and validated using experimental data from post-mortem human surrogate (PMHS) scaled with the equal-stress...

  2. A measurement-based X-ray source model characterization for CT dosimetry computations.

    Science.gov (United States)

    Sommerville, Mitchell; Poirier, Yannick; Tambasco, Mauro

    2015-11-08

    The purpose of this study was to show that the nominal peak tube voltage potential (kVp) and measured half-value layer (HVL) can be used to generate energy spectra and fluence profiles for characterizing a computed tomography (CT) X-ray source, and to validate the source model and an in-house kV X-ray dose computation algorithm (kVDoseCalc) for computing machine- and patient-specific CT dose. Spatial variation of the X-ray source spectra of a Philips Brilliance and a GE Optima Big Bore CT scanner were found by measuring the HVL along the direction of the internal bow-tie filter axes. Third-party software, Spektr, and the nominal kVp settings were used to generate the energy spectra. Beam fluence was calculated by dividing the integral product of the spectra and the in-air NIST mass-energy attenuation coefficients by in-air dose measurements along the filter axis. The authors found the optimal number of photons to seed in kVDoseCalc to achieve dose convergence. The Philips Brilliance beams were modeled for 90, 120, and 140 kVp tube settings. The GE Optima beams were modeled for 80, 100, 120, and 140 kVp tube settings. Relative doses measured using a Capintec Farmer-type ionization chamber (0.65 cc) placed in a cylindrical polymethyl methacrylate (PMMA) phantom and irradiated by the Philips Brilliance, were compared to those computed with kVDoseCalc. Relative doses in an anthropomorphic thorax phantom (E2E SBRT Phantom) irradiated by the GE Optima were measured using a (0.015 cc) PTW Freiburg ionization chamber and compared to computations from kVDoseCalc. The number of photons required to reduce the average statistical uncertainty in dose to measurement over all 12 PMMA phantom positions was found to be 1.44%, 1.47%, and 1.41% for 90, 120, and 140 kVp, respectively. The maximum percent difference between calculation and measurement for all energies, measurement positions, and phantoms was less than 3.50%. Thirty-five out of a total of 36 simulation conditions were
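    The fluence-normalization step described above can be illustrated with the standard relation between in-air dose and photon fluence, namely dose per unit fluence equal to the spectrum-weighted sum of E·(μ_en/ρ). The sketch below is a generic illustration of that relation, not the kVDoseCalc implementation, and all spectrum values, coefficients and the measured dose are placeholders.

```python
# Minimal sketch of relating a measured in-air dose to beam fluence via the
# photon spectrum and mass-energy absorption coefficients. All numbers are
# placeholders, not values from the study.
import numpy as np

# Relative photon spectrum (normalized weights) on an energy grid in MeV.
energies_mev = np.array([0.03, 0.05, 0.07, 0.09, 0.11])
spectrum = np.array([0.10, 0.30, 0.35, 0.20, 0.05])
spectrum = spectrum / spectrum.sum()

# (mu_en/rho) for air in cm^2/g at the grid energies (placeholder values;
# in practice these would be interpolated from tabulated NIST data).
mu_en_over_rho = np.array([0.15, 0.041, 0.027, 0.024, 0.024])

# Dose to air per unit fluence: spectrum-weighted sum of E * (mu_en/rho),
# converted from MeV/g per (photon/cm^2) to Gy per (photon/cm^2).
MEV_PER_G_TO_GY = 1.602e-10
dose_per_fluence = np.sum(spectrum * energies_mev * mu_en_over_rho) * MEV_PER_G_TO_GY

measured_dose_gy = 2.0e-2                        # hypothetical in-air dose measurement
fluence = measured_dose_gy / dose_per_fluence    # photons per cm^2
print(f"estimated fluence: {fluence:.3e} photons/cm^2")
```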

  3. Validating computationally predicted TMS stimulation areas using direct electrical stimulation in patients with brain tumors near precentral regions.

    Science.gov (United States)

    Opitz, Alexander; Zafar, Noman; Bockermann, Volker; Rohde, Veit; Paulus, Walter

    2014-01-01

    The spatial extent of transcranial magnetic stimulation (TMS) is of paramount interest for all studies employing this method. It is generally assumed that the induced electric field is the crucial parameter to determine which cortical regions are excited. While it is difficult to directly measure the electric field, one usually relies on computational models to estimate the electric field distribution. Direct electrical stimulation (DES) is a local brain stimulation method generally considered the gold standard to map structure-function relationships in the brain. Its application is typically limited to patients undergoing brain surgery. In this study we compare the computationally predicted stimulation area in TMS with the DES area in six patients with tumors near precentral regions. We combine a motor evoked potential (MEP) mapping experiment for both TMS and DES with realistic individual finite element method (FEM) simulations of the electric field distribution during TMS and DES. On average, stimulation areas in TMS and DES show an overlap of up to 80%, thus validating our computational physiology approach to estimate TMS excitation volumes. Our results can help in understanding the spatial spread of TMS effects and in optimizing stimulation protocols to more specifically target certain cortical regions based on computational modeling.

  4. A model to predict element redistribution in unsaturated soil: Its simplification and validation

    International Nuclear Information System (INIS)

    Sheppard, M.I.; Stephens, M.E.; Davis, P.A.; Wojciechowski, L.

    1991-01-01

    A research model has been developed to predict the long-term fate of contaminants entering unsaturated soil at the surface through irrigation or atmospheric deposition, and/or at the water table through groundwater. The model, called SCEMR1 (Soil Chemical Exchange and Migration of Radionuclides, Version 1), uses Darcy's law to model water movement, and the soil solid/liquid partition coefficient, Kd, to model chemical exchange. SCEMR1 has been validated extensively on controlled field experiments with several soils, aeration statuses and the effects of plants. These validation results show that the model is robust and performs well. Sensitivity analyses identified soil Kd, annual effective precipitation, soil type and soil depth as the four most important model parameters. SCEMR1 consumes too much computer time for incorporation into a probabilistic assessment code. Therefore, we have used SCEMR1 output to derive a simple assessment model. The assessment model reflects the complexity of its parent code, and provides a more realistic description of contaminant transport in soils than would a compartment model. Comparison of the performance of the SCEMR1 research model, the simple SCEMR1 assessment model and the TERRA compartment model on a four-year soil-core experiment shows that the SCEMR1 assessment model generally provides conservative soil concentrations. (15 refs., 3 figs.)
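    The role of the partition coefficient in such a model can be illustrated with the standard linear-sorption retardation factor combined with a Darcy pore-water velocity. This is a generic sketch of those two relations, not SCEMR1 itself, and all soil parameters are placeholder values.

```python
# Generic sketch of Darcy flow plus Kd-based retardation, the two ingredients
# the abstract attributes to SCEMR1. Parameter values are placeholders.

def darcy_velocity(hydraulic_conductivity_m_per_yr, hydraulic_gradient, porosity):
    """Average pore-water velocity from Darcy's law (m/yr)."""
    return hydraulic_conductivity_m_per_yr * hydraulic_gradient / porosity

def retardation_factor(kd_l_per_kg, bulk_density_kg_per_l, porosity):
    """Standard linear-sorption retardation factor R = 1 + (rho_b/theta) * Kd."""
    return 1.0 + (bulk_density_kg_per_l / porosity) * kd_l_per_kg

if __name__ == "__main__":
    v_water = darcy_velocity(hydraulic_conductivity_m_per_yr=10.0,
                             hydraulic_gradient=0.01, porosity=0.4)
    R = retardation_factor(kd_l_per_kg=5.0, bulk_density_kg_per_l=1.5, porosity=0.4)
    v_contaminant = v_water / R
    print(f"pore-water velocity : {v_water:.3f} m/yr")
    print(f"retardation factor  : {R:.1f}")
    print(f"contaminant velocity: {v_contaminant:.4f} m/yr")
```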

  5. A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC

    Science.gov (United States)

    Ajiro, Takashi; Tsuchida, Kensei

    A concurrent model of computation and a language based on the model for bit-level operation are useful for developing asynchronous and concurrent programs compositionally, which frequently use bit-level operations. Some examples are programs for video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized and oriented to bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes that provide serial bit-level operations and FIFO buffers connected to them. It can express bit-level computation naturally and develop compositionally. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. This model enables precise and formal expression of the process of computation, and a notion of primitive program elements for controlling and operating can be expressed synthetically. Specifically, the model is based on a notion of uniform primitive processes, called primitives, that have three terminals and four ordered rules at most, as well as on bidirectional communication using vehicles called carriers. A new notion is that a carrier moving between two terminals can briefly express some kinds of computation such as synchronization and bidirectional communication. The model's properties make it most applicable to bit-level computation compositionally, since the uniform computation elements are enough to develop components that have practical functionality. Through future application of the model, our research may enable further research on a base model of fine-grain parallel computer architecture, since the model is suitable for expressing massive concurrency by a network of primitives.

  6. Blast Load Simulator Experiments for Computational Model Validation Report 3

    Science.gov (United States)

    2017-07-01

    These experiments extend the blast load simulator data set for validating computational simulations of explosive events and their effects; the codes are continuously improving, but still require validation against experimental data. The report also discusses the effect of the contact surface on the measurements and gauge locations where a clearly defined initial peak is not present.

  7. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  8. Validation of computer codes used in the safety analysis of Canadian research reactors

    International Nuclear Information System (INIS)

    Bishop, W.E.; Lee, A.G.

    1998-01-01

    AECL has embarked on a validation program for the suite of computer codes that it uses in performing the safety analyses for its research reactors. Current focus is on codes used for the analysis of the two MAPLE reactors under construction at Chalk River but the program will be extended to include additional codes that will be used for the Irradiation Research Facility. The program structure is similar to that used for the validation of codes used in the safety analyses for CANDU power reactors. (author)

  9. A computer model of the MFTF-B neutral beam accel dc power supply

    International Nuclear Information System (INIS)

    Wilson, J.H.

    1983-01-01

    Using the SCEPTRE circuit modeling code, a computer model was developed for the MFTF Neutral Beam Power Supply System (NBPSS) Accel DC Power Supply (ADCPS). The ADCPS provides 90 kV, 88 A, to the Accel Modulator. Because of the complex behavior of the power supply, use of the computer model is necessary to adequately understand the power supply's behavior over a wide range of load conditions and faults. The model developed includes all the circuit components and parameters, and some of the stray values. The model has been well validated for transients with times on the order of milliseconds, and with one exception, for steady-state operation. When using a circuit modeling code for a system with a wide range of time constants, it can become impossible to obtain good solutions for all time ranges at once. The present model concentrates on the millisecond-range transients because the compensating capacitor bank tends to isolate the power supply from the load for faster transients. Attempts to include stray circuit elements with time constants in the microsecond and shorter range have had little success because of the huge increases in computing time that result. The model has been successfully extended to include the accel modulator

  10. International Conference on Computational Intelligence, Cyber Security, and Computational Models

    CERN Document Server

    Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn

    2016-01-01

    This book aims at promoting high-quality research by researchers and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security, and Computational Models ICC3 2015, organized by PSG College of Technology, Coimbatore, India, during December 17–19, 2015. The book is enriched with innovations in broad areas of research such as computational modeling, computational intelligence and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained a lot of attention in recent years. The book encompasses theory and applications, providing design, analysis and modeling of the aforementioned key areas.

  11. Modeling of the behavior of radon and its decay products in dwelling, and experimental validation of the model

    International Nuclear Information System (INIS)

    Gouronnec, A.M.; Robe, M.C.; Montassier, N.; Boulaud, D.

    1993-01-01

    A model of the type described by Jacobi is adapted to indoor air to describe the behavior of radon and its decay products within a dwelling, and is then adapted to a system of several stories. To start the validation of the model, computed data are compared with field measurements. The first observations indicate that the model is consistent with the available data. It is, however, important to develop an exhaustive set of experimental data and to obtain as faithful as possible a representation of the mean situation; this especially concerns the ventilation rate of the enclosure and the rate of attachment to airborne particles. Further work should also be done to model deposition on surfaces. (orig.). (6 refs., 4 tabs.)
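    A Jacobi-type room model of this kind reduces to a linear balance between entry, radioactive decay, ventilation, attachment and deposition. The sketch below integrates a heavily simplified version with a single short-lived progeny; all rate constants and the radon entry rate are illustrative assumptions, not values from the paper.

```python
# Heavily simplified Jacobi-type activity balance for radon and one decay
# product (unattached and attached) in a single ventilated room.
# Rate constants are illustrative assumptions.
import numpy as np

lam_rn = 7.55e-3      # Rn-222 decay constant (1/h)
lam_p = 0.227         # decay constant of the first short-lived progeny (1/h)
lam_v = 0.5           # ventilation rate (1/h), assumed
X = 50.0              # attachment rate to aerosols (1/h), assumed
q_dep_u, q_dep_a = 20.0, 0.2   # deposition rates, unattached/attached (1/h), assumed
S_rn = 10.0           # radon entry rate (Bq m^-3 h^-1), assumed

def derivatives(y):
    rn, p_unatt, p_att = y
    d_rn = S_rn - (lam_rn + lam_v) * rn
    d_u = lam_p * rn - (lam_p + lam_v + X + q_dep_u) * p_unatt
    d_a = X * p_unatt - (lam_p + lam_v + q_dep_a) * p_att
    return np.array([d_rn, d_u, d_a])

# Simple forward-Euler integration to an approximate steady state.
y = np.zeros(3)
dt = 0.01
for _ in range(int(200 / dt)):
    y = y + dt * derivatives(y)

print("steady-state activity concentrations (Bq/m^3):")
print(f"  radon             : {y[0]:.2f}")
print(f"  unattached progeny: {y[1]:.3f}")
print(f"  attached progeny  : {y[2]:.3f}")
```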

  12. Definition, modeling and simulation of a grid computing system for high throughput computing

    CERN Document Server

    Caron, E; Tsaregorodtsev, A Yu

    2006-01-01

    In this paper, we study and compare grid and global computing systems and outline the benefits of having a hybrid system called dirac. To evaluate the dirac scheduling for high throughput computing, a new model is presented and a simulator was developed for many clusters of heterogeneous nodes belonging to a local network. These clusters are assumed to be connected to each other through a global network and each cluster is managed via a local scheduler which is shared by many users. We validate our simulator by comparing the experimental and analytical results of an M/M/4 queuing system. Next, we do the comparison with a real batch system and we obtain an average error of 10.5% for the response time and 12% for the makespan. We conclude that the simulator is realistic and well describes the behaviour of a large-scale system. Thus we can study the scheduling of our system called dirac in a high throughput context. We justify our decentralized, adaptive and opportunistic approach in comparison to a centralize...
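    The analytical side of the M/M/4 validation step can be reproduced with the Erlang C formula for the mean response time of an M/M/c queue. The sketch below is a generic reference calculation of the kind a simulator can be checked against; it is not the dirac simulator, and the arrival and service rates are placeholders.

```python
# Analytical mean response time of an M/M/c queue (Erlang C), usable as a
# reference when validating a discrete-event simulator. Rates are placeholders.
from math import factorial

def erlang_c(c, offered_load):
    """Probability that an arriving job has to wait (Erlang C formula).
    offered_load = lambda / mu, in Erlangs."""
    summation = sum(offered_load**k / factorial(k) for k in range(c))
    top = offered_load**c / (factorial(c) * (1 - offered_load / c))
    return top / (summation + top)

def mmc_mean_response_time(lam, mu, c):
    """Mean time in system W = Wq + 1/mu for an M/M/c queue."""
    offered_load = lam / mu
    assert offered_load < c, "queue is unstable"
    wq = erlang_c(c, offered_load) / (c * mu - lam)
    return wq + 1.0 / mu

if __name__ == "__main__":
    lam, mu, servers = 3.2, 1.0, 4   # hypothetical rates (jobs/s, jobs/s per server)
    w = mmc_mean_response_time(lam, mu, servers)
    print(f"M/M/{servers} mean response time: {w:.3f} s")
```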

  13. Validation and uncertainty analysis of the Athlet thermal-hydraulic computer code

    International Nuclear Information System (INIS)

    Glaeser, H.

    1995-01-01

    The computer code ATHLET is being developed by GRS as an advanced best-estimate code for the simulation of breaks and transients in Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs), including beyond design basis accidents. A systematic validation of ATHLET is based on a well balanced set of integral and separate effects tests emphasizing the German combined Emergency Core Cooling (ECC) injection system. When using best estimate codes for predictions of reactor plant states during assumed accidents, quantification of the uncertainty in these calculations is highly desirable. A method for uncertainty and sensitivity evaluation has been developed by GRS where the computational effort is independent of the number of uncertain parameters. (author)
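    Uncertainty methods of this kind are commonly based on order statistics, where Wilks' formula gives the number of code runs needed for a chosen probability content and confidence level, independently of the number of uncertain input parameters. The sketch below evaluates the first-order, one-sided form of that formula; it illustrates the general idea and is not the specific GRS implementation.

```python
# First-order, one-sided Wilks' formula: the smallest number of code runs n such
# that the maximum of n runs bounds the p-quantile of the output with confidence
# beta, independently of how many uncertain inputs the code has.
import math

def wilks_runs(p=0.95, beta=0.95):
    """Smallest n with 1 - p**n >= beta."""
    return math.ceil(math.log(1.0 - beta) / math.log(p))

if __name__ == "__main__":
    for p, beta in [(0.95, 0.95), (0.99, 0.95), (0.95, 0.99)]:
        print(f"p={p:.2f}, beta={beta:.2f} -> {wilks_runs(p, beta)} runs")
```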

  14. Building confidence and credibility amid growing model and computing complexity

    Science.gov (United States)

    Evans, K. J.; Mahajan, S.; Veneziani, C.; Kennedy, J. H.

    2017-12-01

    As global Earth system models are developed to answer an ever-wider range of science questions, software products that provide robust verification, validation, and evaluation must evolve in tandem. Measuring the degree to which these new models capture past behavior, predict the future, and provide the certainty of predictions is becoming ever more difficult, for reasons that are generally well known yet still challenging to address. Two specific and divergent needs for analysis of the Accelerated Climate Model for Energy (ACME) model - but with a similar software philosophy - are presented to show how a model developer-based focus can address analysis needs during expansive model changes to provide greater fidelity and execute on multi-petascale computing facilities. A-PRIME is a python script-based quick-look overview of a fully-coupled global model configuration to determine quickly if it captures specific behavior before significant computer time and expense is invested. EVE is an ensemble-based software framework that focuses on verification of performance-based ACME model development, such as compiler or machine settings, to determine the equivalence of relevant climate statistics. The challenges and solutions for analysis of multi-petabyte output data are highlighted from the aspect of the scientist using the software, with the aim of fostering discussion and further input from the community about improving developer confidence and community credibility.

  15. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  16. A stylized computational model of the head for the reference Japanese male

    International Nuclear Information System (INIS)

    Yamauchi, M.; Ishikawa, M.; Hoshi, M.

    2005-01-01

    observed absorbed dose values (Gy) at all six points were calculated as the percentage difference between MCNP4C simulation and the TLDs. In our computational model, the average values of all the percentage differences were 6.0±4.0% (tissue substitute materials) and 7.6±6.6% (ICRU Report 46), respectively. In Cristy's model, the corresponding values were 20.4±3.8% (tissue substitute materials) and 21.0±4.1% (ICRU Report 46), respectively. Considering the margin of error in the radiation sensitivity of the TLDs, this study validates our computational model as a test object for radiation dosimetry studies

  17. Validation of ASTEC core degradation and containment models

    International Nuclear Information System (INIS)

    Kruse, Philipp; Brähler, Thimo; Koch, Marco K.

    2014-01-01

    In a German-funded project, Ruhr-Universitaet Bochum performed validation of the in-vessel and containment models of the integral code ASTEC V2, jointly developed by IRSN (France) and GRS (Germany). In this paper selected results of this validation are presented. In the in-vessel part, the main point of interest was the validation of the code capability concerning cladding oxidation and hydrogen generation. The ASTEC calculations of the QUENCH experiments QUENCH-03 and QUENCH-11 show satisfactory results, despite some necessary adjustments in the input deck. Furthermore, the oxidation models based on the Cathcart–Pawel and Urbanic–Heidrick correlations are not suitable for higher temperatures, while the ASTEC model BEST-FIT, based on the Prater–Courtright approach at high temperature, gives sufficiently reliable results. One part of the containment model validation was the assessment of three hydrogen combustion models of ASTEC against the experiment BMC Ix9. The simulation results of these models differ from each other and therefore the quality of the simulations depends on the characteristics of each model. Accordingly, the CPA FRONT model, requiring the simplest input parameters, provides the best agreement with the experimental data

  18. Development of a computational model applied to a unitary 144 CM2 proton exchange membrane fuel cell

    International Nuclear Information System (INIS)

    Robalinho, Eric

    2009-01-01

    This work presents the development of a numerical computer model and methodology to study and design polymer exchange membrane (PEM) fuel cells. For the validation of experimental results, a sequence of routines, appropriate for fitting the data obtained in the laboratory, is described. In the computational implementation, a new strategy was created for coupling two 3-dimensional models to satisfy the requirements of the comprehensive model of the fuel cell, including its various geometries and materials, as well as the various physical and chemical processes simulated. To assess how well the numerical model represents the real cell, numerical studies were carried out. Comparisons with values obtained in the literature, characterization of variables through laboratory experiments and estimates from models already tested in the literature were also performed. Regarding the experimental part, a prototype of a unit fuel cell of 144 cm2 geometric area was designed, produced and operated in the laboratory with the purpose of validating the proposed numerical computer model, with positive results. The results of simulations for the proposed 2D and 3D geometries are presented in the form of polarization curves, highlighting the catalytic layer model based on the geometry of agglomerates. Parametric and sensitivity studies are presented to illustrate the change in performance of the fuel cell studied. The final model is robust and useful as a tool for design and optimization of PEM type fuel cells in a wide range of operating conditions. (author)

  19. ASTEC V2 severe accident integral code: Fission product modelling and validation

    International Nuclear Information System (INIS)

    Cantrel, L.; Cousin, F.; Bosland, L.; Chevalier-Jabet, K.; Marchetto, C.

    2014-01-01

    One main goal of the severe accident integral code ASTEC V2, jointly developed for more than 15 years by IRSN and GRS, is to simulate the overall behaviour of fission products (FP) in a damaged nuclear facility. ASTEC applications are source term determinations, level 2 Probabilistic Safety Assessment (PSA2) studies including the determination of uncertainties, accident management studies and physical analyses of FP experiments to improve the understanding of the phenomenology. ASTEC is a modular code and models of a part of the phenomenology are implemented in each module: the release of FPs and structural materials from degraded fuel in the ELSA module; the transport through the reactor coolant system, approximated as a sequence of control volumes, in the SOPHAEROS module; and the radiochemistry inside the containment nuclear building in the IODE module. Three other modules, CPA, ISODOP and DOSE, allow computing, respectively, the deposition rate of aerosols inside the containment, the activities of the isotopes as a function of time, and the gaseous dose rate which is needed to model radiochemistry in the gaseous phase. In ELSA, release models are semi-mechanistic and have been validated against a wide range of experimental data, notably the VERCORS experiments. For SOPHAEROS, the models can be divided into two parts: vapour phase phenomena and aerosol phase phenomena. For IODE, iodine and ruthenium chemistry are modelled based on a semi-mechanistic approach; these FPs can form some volatile species and are particularly important in terms of potential radiological consequences. The models in these 3 modules are based on a wide experimental database, resulting for a large part from international programmes, and they are considered to be at the state of the art of R and D knowledge. This paper illustrates some FP modelling capabilities of ASTEC and computed values are compared to some experimental results, which are part of the validation matrix

  20. Validation of fracture flow models in the Stripa project

    International Nuclear Information System (INIS)

    Herbert, A.; Dershowitz, W.; Long, J.; Hodgkinson, D.

    1991-01-01

    One of the objectives of Phase III of the Stripa Project is to develop and evaluate approaches for the prediction of groundwater flow and nuclide transport in a specific unexplored volume of the Stripa granite and make a comparison with data from field measurements. During the first stage of the project, a prediction of inflow to the D-holes, an array of six parallel closely spaced 100 m boreholes, was made based on data from six other boreholes. These data included fracture geometry, stress, single borehole geophysical logging, crosshole and reflection radar and seismic tomograms, head monitoring and single hole packer test measurements. Maps of fracture traces on the drift walls have also been made. The D-holes are located along a future Validation Drift which will be excavated. The water inflow to the D-holes has been measured in an experiment called the Simulated Drift Experiment. The paper reviews the Simulated Drift Experiment validation exercise. Following a discussion of the approach to validation, the characterization data and their preliminary interpretation are summarised and commented upon. It has proved feasible to carry through all the complex and interconnected tasks associated with the gathering and interpretation of characterization data, the development and application of complex models, and the comparison with measured inflows. This exercise has provided detailed feed-back to the experimental and theoretical work required for measurements and predictions of flow into the Validation Drift. Computer codes used: CHANGE, FRACMAN, MAFIC, NAPSAC and TRINET. 2 figs., 2 tabs., 19 refs

  1. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    International Nuclear Information System (INIS)

    Kim, Sangroh; Yoshizumi, Terry T; Yin Fangfang; Chetty, Indrin J

    2013-01-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan—scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the
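    The table-movement bookkeeping described above amounts to advancing the isocenter by pitch times collimation per full gantry rotation. The sketch below only illustrates that geometry with placeholder scan parameters; it is not the modified DOSXYZnrc source code.

```python
# Minimal sketch of spiral-CT source geometry: for each projection angle the
# isocenter z-coordinate advances so that the couch moves pitch * collimation
# per full gantry rotation. Illustrative only; not the DOSXYZnrc implementation.

def spiral_positions(n_projections, rotations, pitch, collimation_mm,
                     start_angle_deg=0.0, clockwise=True, z0_mm=0.0):
    """Yield (gantry_angle_deg, isocenter_z_mm) for a spiral scan."""
    z_per_rotation = pitch * collimation_mm
    sign = -1.0 if clockwise else 1.0
    for i in range(n_projections):
        frac = rotations * i / (n_projections - 1)          # rotations completed
        angle = (start_angle_deg + sign * 360.0 * frac) % 360.0
        z = z0_mm + z_per_rotation * frac                   # table advance
        yield angle, z

if __name__ == "__main__":
    # Hypothetical scan: 2 rotations, pitch 1.0, 10 mm collimation, 9 projections.
    for angle, z in spiral_positions(9, rotations=2, pitch=1.0, collimation_mm=10.0):
        print(f"gantry {angle:6.1f} deg  ->  isocenter z = {z:5.2f} mm")
```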

  2. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    Science.gov (United States)

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan-scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral

  3. Cross validation in LULOO

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Hansen, Lars Kai

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. Linear unlearning of examples has recently been suggested as an approach to approximative cross-validation. Here we briefly review the linear unlearning scheme, dubbed LULOO, and we illustrate it on a system identification example. Further, we address the possibility of extracting confidence information (error bars) from the LULOO ensemble.
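    For comparison, exact leave-one-out cross-validation with retraining, the expensive procedure that LULOO approximates, can be written down directly. The sketch below does so for a linear least-squares model; it illustrates the replicated-training cost that linear unlearning avoids and is not the LULOO scheme itself.

```python
# Exact leave-one-out cross-validation for a linear least-squares model:
# one retraining per left-out example, which is the cost LULOO approximates away.
import numpy as np

def loo_mse(X, y):
    errors = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)  # retrain without i
        errors.append((X[i] @ w - y[i]) ** 2)                  # test on the left-out point
    return float(np.mean(errors))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.hstack([rng.normal(size=(50, 3)), np.ones((50, 1))])  # inputs + bias column
    y = X @ np.array([1.5, -2.0, 0.5, 0.3]) + 0.1 * rng.normal(size=50)
    print(f"leave-one-out MSE: {loo_mse(X, y):.5f}")
```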

  4. Validation experiments of the chimney model for the operational simulation of hydrogen recombiners

    International Nuclear Information System (INIS)

    Simon, Berno

    2013-01-01

    The calculation program REKO-DIREKT allows the simulation of the operational behavior of a hydrogen recombiner during accidents with hydrogen release. The interest is focused on the interaction between the catalyst insert and the chimney, which significantly influences the natural ventilation and thus the throughput through the recombiner. For validation, experiments were performed with a small-scale recombiner model in the test facility REKO-4. The results show the correlation between the hydrogen concentration at the recombiner entrance, the temperature of the catalyst sheets and the entrance velocity for different chimney heights. The entrance velocity increases with the height of the installed chimney, which significantly influences the natural ventilation. The results allow the generation of a broad database for the validation of the computer code REKO-DIREKT.

  5. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; it estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how it can be used: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
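    The proposed test amounts to checking whether the approximation's deviation from the original model stays within the original model's own stochastic variability, across a range of parameters. The sketch below illustrates that comparison with a toy stochastic model and a trivial deterministic approximation; it is not the vector-borne epidemic models of the paper.

```python
# Generic sketch of the validation idea: compare an analytical approximation's
# error against the stochastic variability of the original model, across a
# range of parameters. The toy models below are placeholders.
import numpy as np

rng = np.random.default_rng(1)

def original_model(rate, n_steps=100, n_replicates=200):
    """Toy stochastic 'original model': mean of Poisson counts per step."""
    return rng.poisson(rate, size=(n_replicates, n_steps)).mean(axis=1)

def approximation(rate):
    """Deterministic approximation of the toy model's mean."""
    return rate

for rate in [0.5, 2.0, 10.0]:
    replicates = original_model(rate)
    bias = abs(approximation(rate) - replicates.mean())
    spread = replicates.std(ddof=1)          # the model's own stochastic variability
    verdict = "within variability" if bias <= spread else "outside variability"
    print(f"rate={rate:5.1f}  bias={bias:.3f}  spread={spread:.3f}  -> {verdict}")
```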

  6. Can We Trust Computational Modeling for Medical Applications?

    Science.gov (United States)

    Mulugeta, Lealem; Walton, Marlei; Nelson, Emily; Myers, Jerry

    2015-01-01

    Operations in extreme environments such as spaceflight pose human health risks that are currently not well understood and potentially unanticipated. In addition, there are limited clinical and research data to inform development and implementation of therapeutics for these unique health risks. In this light, NASA's Human Research Program (HRP) is leveraging biomedical computational models and simulations (M&S) to help inform, predict, assess and mitigate spaceflight health and performance risks, and enhance countermeasure development. To ensure that these M&S can be applied with confidence to the space environment, it is imperative to incorporate a rigorous verification, validation and credibility assessment (VV&C) processes to ensure that the computational tools are sufficiently reliable to answer questions within their intended use domain. In this presentation, we will discuss how NASA's Integrated Medical Model (IMM) and Digital Astronaut Project (DAP) have successfully adapted NASA's Standard for Models and Simulations, NASA-STD-7009 (7009) to achieve this goal. These VV&C methods are also being leveraged by organization such as the Food and Drug Administration (FDA), National Institute of Health (NIH) and the American Society of Mechanical Engineers (ASME) to establish new M&S VV&C standards and guidelines for healthcare applications. Similarly, we hope to provide some insight to the greater aerospace medicine community on how to develop and implement M&S with sufficient confidence to augment medical research and operations.

  7. The COSIMA-experiments, a data base for validation of two-phase flow computer codes

    International Nuclear Information System (INIS)

    Class, G.; Meyder, R.; Stratmanns, E.

    1985-12-01

    The report presents an overview of the large data base generated with COSIMA. The data base is to be used to validate and develop computer codes for two-phase flow. In terms of fuel rod behavior it was found that during blowdown under realistic conditions only small strains are reached. For clad rupture, extremely high rod internal pressure is necessary. Additionally, important results were found on the behavior of a fuel rod simulator and on the effect of thermocouples attached to the cladding outer surface. Post-test calculations, performed with the codes RELAP and DRUFAN, show a good agreement with the experiments. This, however, could be improved if the phase separation models in the codes were updated. (orig./HP) [de

  8. Application of queueing models to multiprogrammed computer systems operating in a time-critical environment

    Science.gov (United States)

    Eckhardt, D. E., Jr.

    1979-01-01

    A model of a central processor (CPU) which services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study which support this application of queueing models are presented.
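    The first-order capacity argument behind such a model can be sketched simply: the periodic time-critical task consumes a fixed duty cycle of the CPU, so background jobs see an M/M/1 queue with a correspondingly reduced effective service rate. The Laplace-transform treatment in the paper is more detailed; the sketch below only captures this capacity argument with placeholder numbers.

```python
# First-order sketch: background jobs on a CPU that is preempted periodically by
# a deterministic time-critical task. The interrupts take a fixed duty cycle, so
# the background work sees an approximately reduced service rate (placeholder values).

def background_response_time(arrival_rate, service_rate, period_s, interrupt_s):
    """Approximate mean response time of background jobs (M/M/1 with reduced rate)."""
    duty_cycle = interrupt_s / period_s            # fraction of CPU taken by interrupts
    effective_rate = service_rate * (1.0 - duty_cycle)
    if arrival_rate >= effective_rate:
        raise ValueError("background queue is unstable at this load")
    return 1.0 / (effective_rate - arrival_rate)   # M/M/1 mean time in system

if __name__ == "__main__":
    # Hypothetical: 20 background jobs/s, 50 jobs/s raw CPU capacity,
    # a 10 ms critical task every 40 ms.
    w = background_response_time(arrival_rate=20.0, service_rate=50.0,
                                 period_s=0.040, interrupt_s=0.010)
    print(f"approximate background response time: {w*1000:.1f} ms")
```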

  9. Modeling Computer Virus and Its Dynamics

    Directory of Open Access Journals (Sweden)

    Mei Peng

    2013-01-01

    Based on the fact that a computer can be infected by infected and exposed computers, and that some of the computers in susceptible and exposed status can gain immunity through antivirus ability, a novel computer virus model is established. The dynamic behaviors of this model are investigated. First, the basic reproduction number R0, which is a threshold for the spread of the computer virus on the internet, is determined. Second, this model has a virus-free equilibrium P0, which means that the infected part of the computers disappears and the virus dies out; P0 is a globally asymptotically stable equilibrium if R0 < 1. If R0 > 1, then this model has only one viral equilibrium P*, which means that the computer virus persists at a constant endemic level, and P* is also globally asymptotically stable. Finally, some numerical examples are given to demonstrate the analytical results.
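    The threshold behaviour described above can be reproduced with a generic SEIR-type sketch with node turnover: for R0 below one the infected fraction dies out, above one it settles at an endemic level. The equations and rate constants below are a generic illustration, not the specific model of the paper.

```python
# Generic SEIR-style computer-virus sketch with node turnover, illustrating the
# R0 threshold: susceptible -> exposed -> infected, with recovery conferring
# immunity. Parameters are illustrative, not those of the paper.

mu = 0.01      # rate of computers joining/leaving the network
beta = 0.5     # infection (contact) rate
sigma = 0.3    # rate at which exposed computers become actively infected
gamma = 0.2    # cure/immunization rate (antivirus)

def r0():
    return (beta / (gamma + mu)) * (sigma / (sigma + mu))

def simulate(t_end=2000.0, dt=0.01):
    s, e, i, r = 0.99, 0.0, 0.01, 0.0
    for _ in range(int(t_end / dt)):
        ds = mu - beta * s * i - mu * s
        de = beta * s * i - (sigma + mu) * e
        di = sigma * e - (gamma + mu) * i
        dr = gamma * i - mu * r
        s, e, i, r = s + dt * ds, e + dt * de, i + dt * di, r + dt * dr
    return s, e, i, r

if __name__ == "__main__":
    print(f"basic reproduction number R0 = {r0():.2f}")
    s, e, i, r = simulate()
    print(f"long-run fractions: S={s:.3f} E={e:.3f} I={i:.3f} R={r:.3f}")
```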

  10. Computational Design of Batteries from Materials to Systems

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Kandler A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Santhanagopalan, Shriram [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Yang, Chuanbo [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Graf, Peter A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Usseglio Viretta, Francois L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Li, Qibo [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Finegan, Donal [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Pesaran, Ahmad A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Yao, Koffi (Pierre) [Argonne National Laboratory; Abraham, Daniel [Argonne National Laboratory; Dees, Dennis [Argonne National Laboratory; Jansen, Andy [Argonne National Laboratory; Mukherjee, Partha [Texas A& M University; Mistry, Aashutosh [Texas A& M University; Verma, Ankit [Texas A& M University; Lamb, Josh [Sandia National Laboratories; Darcy, Eric [NASA

    2017-09-01

    Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.

  11. A discussion on validation of hydrogeological models

    International Nuclear Information System (INIS)

    Carrera, J.; Mousavi, S.F.; Usunoff, E.J.; Sanchez-Vila, X.; Galarza, G.

    1993-01-01

    Groundwater flow and solute transport are often driven by heterogeneities that elude easy identification. It is also difficult to select and describe the physico-chemical processes controlling solute behaviour. As a result, definition of a conceptual model involves numerous assumptions both on the selection of processes and on the representation of their spatial variability. Validating a numerical model by comparing its predictions with actual measurements may not be sufficient for evaluating whether or not it provides a good representation of 'reality'. Predictions will be close to measurements, regardless of model validity, if these are taken from experiments that stress well-calibrated model modes. On the other hand, predictions will be far from measurements when model parameters are very uncertain, even if the model is indeed a very good representation of the real system. Hence, we contend that 'classical' validation of hydrogeological models is not possible. Rather, models should be viewed as theories about the real system. We propose to follow a rigorous modeling approach in which different sources of uncertainty are explicitly recognized. The application of one such approach is illustrated by modeling a laboratory uranium tracer test performed on fresh granite, which was used as Test Case 1b in INTRAVAL. (author)

  12. Verification and validation for waste disposal models

    International Nuclear Information System (INIS)

    1987-07-01

    A set of evaluation criteria has been developed to assess the suitability of current verification and validation techniques for waste disposal methods. A survey of current practices and techniques was undertaken and evaluated using these criteria with the items most relevant to waste disposal models being identified. Recommendations regarding the most suitable verification and validation practices for nuclear waste disposal modelling software have been made

  13. COMPUTATIONAL MODELS FOR SUSTAINABLE DEVELOPMENT

    OpenAIRE

    Monendra Grover; Rajesh Kumar; Tapan Kumar Mondal; S. Rajkumar

    2011-01-01

    Genetic erosion is a serious problem and computational models have been developed to prevent it. The computational modeling in this field not only includes (terrestrial) reserve design, but also decision modeling for related problems such as habitat restoration, marine reserve design, and nonreserve approaches to conservation management. Models have been formulated for evaluating tradeoffs between socioeconomic, biophysical, and spatial criteria in establishing marine reserves. The percolatio...

  14. Verification and validation of predictive computer programs describing the near and far-field chemistry of radioactive waste disposal systems

    International Nuclear Information System (INIS)

    Read, D.; Broyd, T.W.

    1988-01-01

    This paper provides an introduction to CHEMVAL, an international project concerned with establishing the applicability of chemical speciation and coupled transport models to the simulation of realistic waste disposal situations. The project aims to validate computer-based models quantitatively by comparison with laboratory and field experiments. Verification of the various computer programs employed by research organisations within the European Community is ensured through close inter-laboratory collaboration. The compilation and review of thermodynamic data forms an essential aspect of this work and has led to the production of an internally consistent standard CHEMVAL database. The sensitivity of results to variation in fundamental constants is being monitored at each stage of the project and, where feasible, complementary laboratory studies are used to improve the data set. Currently, thirteen organisations from five countries are participating in CHEMVAL which forms part of the Commission of European Communities' MIRAGE 2 programme of research. (orig.)

  15. Ranked retrieval of Computational Biology models.

    Science.gov (United States)

    Henkel, Ron; Endler, Lukas; Peters, Andre; Le Novère, Nicolas; Waltemath, Dagmar

    2010-08-11

    The study of biological systems demands computational support. If targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding on potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques on model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models.
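    As a loose illustration of ranked retrieval over model metadata (not the actual BioModels Database implementation; the model identifiers and annotation strings below are invented), a TF-IDF/cosine-similarity ranking can be sketched as follows.

```python
# Rank hypothetical model records by TF-IDF cosine similarity to a free-text query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

model_ids = ["MODEL001", "MODEL002", "MODEL003"]          # invented identifiers
metadata = [                                              # invented annotation/meta-information text
    "glycolysis yeast kinetic model MIRIAM annotated",
    "calcium oscillation signalling hepatocyte model",
    "glycolysis trypanosoma kinetic pathway",
]

vectoriser = TfidfVectorizer()
doc_matrix = vectoriser.fit_transform(metadata)           # one TF-IDF vector per model record
query_vec = vectoriser.transform(["kinetic glycolysis model"])
scores = cosine_similarity(query_vec, doc_matrix).ravel()

ranking = sorted(zip(scores, model_ids), reverse=True)    # highest relevance first
print(ranking)
```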

  16. Development and validation of Monte Carlo dose computations for contrast-enhanced stereotactic synchrotron radiation therapy

    International Nuclear Information System (INIS)

    Vautrin, M.

    2011-01-01

    Contrast-enhanced stereotactic synchrotron radiation therapy (SSRT) is an innovative technique based on localized dose-enhancement effects obtained by reinforced photoelectric absorption in the tumor. Medium energy monochromatic X-rays (50 - 100 keV) are used for irradiating tumors previously loaded with a high-Z element. Clinical trials of SSRT are being prepared at the European Synchrotron Radiation Facility (ESRF), an iodinated contrast agent will be used. In order to compute the energy deposited in the patient (dose), a dedicated treatment planning system (TPS) has been developed for the clinical trials, based on the ISOgray TPS. This work focuses on the SSRT specific modifications of the TPS, especially to the PENELOPE-based Monte Carlo dose engine. The TPS uses a dedicated Monte Carlo simulation of medium energy polarized photons to compute the deposited energy in the patient. Simulations are performed considering the synchrotron source, the modeled beamline geometry and finally the patient. Specific materials were also implemented in the voxelized geometry of the patient, to consider iodine concentrations in the tumor. The computation process has been optimized and parallelized. Finally a specific computation of absolute doses and associated irradiation times (instead of monitor units) was implemented. The dedicated TPS was validated with depth dose curves, dose profiles and absolute dose measurements performed at the ESRF in a water tank and solid water phantoms with or without bone slabs. (author) [fr

  17. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  18. Validating the passenger traffic model for Copenhagen

    DEFF Research Database (Denmark)

    Overgård, Christian Hansen; VUK, Goran

    2006-01-01

    The paper presents a comprehensive validation procedure for the passenger traffic model for Copenhagen based on external data from the Danish national travel survey and traffic counts. The model was validated for the years 2000 to 2004, with 2004 being of particular interest because the Copenhagen...... matched the observed traffic better than those of the transit assignment model. With respect to the metro forecasts, the model over-predicts metro passenger flows by 10% to 50%. The wide range of findings from the project resulted in two actions. First, a project was started in January 2005 to upgrade...

  19. Computational design of patterned interfaces using reduced order models

    International Nuclear Information System (INIS)

    Vattre, A.J.; Abdolrahim, N.; Kolluri, K.; Demkowicz, M.J.

    2014-01-01

    Patterning is a familiar approach for imparting novel functionalities to free surfaces. We extend the patterning paradigm to interfaces between crystalline solids. Many interfaces have non-uniform internal structures comprised of misfit dislocations, which in turn govern interface properties. We develop and validate a computational strategy for designing interfaces with controlled misfit dislocation patterns by tailoring interface crystallography and composition. Our approach relies on a novel method for predicting the internal structure of interfaces: rather than obtaining it from resource-intensive atomistic simulations, we compute it using an efficient reduced order model based on anisotropic elasticity theory. Moreover, our strategy incorporates interface synthesis as a constraint on the design process. As an illustration, we apply our approach to the design of interfaces with rapid, 1-D point defect diffusion. Patterned interfaces may be integrated into the microstructure of composite materials, markedly improving performance. (authors)

  20. Geant4 Hadronic Cascade Models and CMS Data Analysis : Computational Challenges in the LHC era

    CERN Document Server

    Heikkinen, Aatos

    This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions, up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we es...

  1. Validation of mentorship model for newly qualified professional ...

    African Journals Online (AJOL)

    Newly qualified professional nurses (NQPNs) allocated to community health care services require the use of validated model to practice independently. Validation was done to adapt and assess if the model is understood and could be implemented by NQPNs and mentors employed in community health care services.

  2. Validation and Adaptation of Router and Switch Models

    NARCIS (Netherlands)

    Boltjes, B.; Fernandez Diaz, I.; Kock, B.A.; Langeveld, R.J.G.M.; Schoenmaker, G.

    2003-01-01

    This paper describes validating OPNET models of key devices for the next generation IP-based tactical network of the Royal Netherlands Army (RNLA). The task of TNO-FEL is to provide insight into scalability and performance of future deployed networks. Because validated models of key Cisco equipment

  3. Development of a computer model to predict aortic rupture due to impact loading.

    Science.gov (United States)

    Shah, C S; Yang, K H; Hardy, W; Wang, H K; King, A I

    2001-11-01

    Aortic injuries during blunt thoracic impacts can lead to life threatening hemorrhagic shock and potential exsanguination. Experimental approaches designed to study the mechanism of aortic rupture such as the testing of cadavers is not only expensive and time consuming, but has also been relatively unsuccessful. The objective of this study was to develop a computer model and to use it to predict modes of loading that are most likely to produce aortic ruptures. Previously, a 3D finite element model of the human thorax was developed and validated against data obtained from lateral pendulum tests. The model included a detailed description of the heart, lungs, rib cage, sternum, spine, diaphragm, major blood vessels and intercostal muscles. However, the aorta was modeled as a hollow tube using shell elements with no fluid within, and its material properties were assumed to be linear and isotropic. In this study fluid elements representing blood have been incorporated into the model in order to simulate pressure changes inside the aorta due to impact. The current model was globally validated against experimental data published in the literature for both frontal and lateral pendulum impact tests. Simulations of the validated model for thoracic impacts from a number of directions indicate that the ligamentum arteriosum, subclavian artery, parietal pleura and pressure changes within the aorta are factors that could influence aortic rupture. The model suggests that a right-sided impact to the chest is potentially more hazardous with respect to aortic rupture than any other impact direction simulated in this study. The aortic isthmus was the most likely site of aortic rupture regardless of impact direction. The reader is cautioned that this model could only be validated on a global scale. Validation of the kinematics and dynamics of the aorta at the local level could not be done due to a lack of experimental data. It is hoped that this model will be used to design

  4. Model Selection in Historical Research Using Approximate Bayesian Computation

    Science.gov (United States)

    Rubio-Campillo, Xavier

    2016-01-01

    Formal Models and History Computational models are increasingly being used to study historical dynamics. This new trend, which could be named Model-Based History, makes use of recently published datasets and innovative quantitative methods to improve our understanding of past societies based on their written sources. The extensive use of formal models allows historians to re-evaluate hypotheses formulated decades ago and still subject to debate due to the lack of an adequate quantitative framework. The initiative has the potential to transform the discipline if it solves the challenges posed by the study of historical dynamics. These difficulties are based on the complexities of modelling social interaction, and the methodological issues raised by the evaluation of formal models against data with low sample size, high variance and strong fragmentation. Case Study This work examines an alternate approach to this evaluation based on a Bayesian-inspired model selection method. The validity of the classical Lanchester’s laws of combat is examined against a dataset comprising over a thousand battles spanning 300 years. Four variations of the basic equations are discussed, including the three most common formulations (linear, squared, and logarithmic) and a new variant introducing fatigue. Approximate Bayesian Computation is then used to infer both parameter values and model selection via Bayes Factors. Impact Results indicate decisive evidence favouring the new fatigue model. The interpretation of both parameter estimations and model selection provides new insights into the factors guiding the evolution of warfare. At a methodological level, the case study shows how model selection methods can be used to guide historical research through the comparison between existing hypotheses and empirical evidence. PMID:26730953
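    For readers unfamiliar with the method, the following toy sketch shows rejection-sampling Approximate Bayesian Computation used to compare two competing models; the "combat" models, priors and observed value are invented stand-ins, not the paper's Lanchester variants or battle dataset.

```python
# Rejection ABC: accept parameter draws whose simulated outcome falls within a
# tolerance of the observation; compare models via their acceptance counts.
import numpy as np

rng = np.random.default_rng(0)
observed = 420.0                        # hypothetical observed casualty figure
n_draws, tol = 100_000, 20.0

def model_linear(k, red, blue):         # toy "linear-law" style model
    return k * red * blue / 1_000.0

def model_square(k, red, blue):         # toy "square-law" style model
    return k * blue ** 2 / 1_000.0

accepted = {}
for name, model in [("linear", model_linear), ("square", model_square)]:
    k = rng.uniform(0.0, 2.0, n_draws)              # flat prior on the single parameter
    sim = model(k, red=800.0, blue=600.0)
    accepted[name] = np.count_nonzero(np.abs(sim - observed) <= tol)

# With equal model priors and equal numbers of draws, the ratio of acceptance
# counts approximates the Bayes factor between the two models.
bf = accepted["linear"] / max(accepted["square"], 1)
print(accepted, f"approx. Bayes factor (linear vs square) = {bf:.2f}")
```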

  5. Automatically quantifying the scientific quality and sensationalism of news records mentioning pandemics: validating a maximum entropy machine-learning model.

    Science.gov (United States)

    Hoffman, Steven J; Justicz, Victoria

    2016-07-01

    To develop and validate a method for automatically quantifying the scientific quality and sensationalism of individual news records. After retrieving 163,433 news records mentioning the Severe Acute Respiratory Syndrome (SARS) and H1N1 pandemics, a maximum entropy model for inductive machine learning was used to identify relationships among 500 randomly sampled news records that correlated with systematic human assessments of their scientific quality and sensationalism. These relationships were then computationally applied to automatically classify 10,000 additional randomly sampled news records. The model was validated by randomly sampling 200 records and comparing human assessments of them to the computer assessments. The computer model correctly assessed the relevance of 86% of news records, the quality of 65% of records, and the sensationalism of 73% of records, as compared to human assessments. Overall, the scientific quality of SARS and H1N1 news media coverage had potentially important shortcomings, but coverage was not too sensationalizing. Coverage slightly improved between the two pandemics. Automated methods can evaluate news records faster, cheaper, and possibly better than humans. The specific procedure implemented in this study can at the very least identify subsets of news records that are far more likely to have particular scientific and discursive qualities. Copyright © 2016 Elsevier Inc. All rights reserved.
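    A maximum entropy classifier over text features is equivalent to (multinomial) logistic regression, so the general approach can be sketched with scikit-learn as below; the training snippets and labels are invented, and the actual study used far richer features and hundreds of human-coded records.

```python
# Tiny maximum-entropy (logistic regression) text classifier for "quality" labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [  # invented example snippets, not the SARS/H1N1 corpus
    "study published in peer reviewed journal reports transmission rates",
    "killer virus will wipe out millions experts terrified",
    "health officials report confirmed cases and cite WHO data",
    "shocking outbreak panic spreads as deadly plague looms",
]
labels = ["high_quality", "sensational", "high_quality", "sensational"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["officials cite peer reviewed data on transmission"]))
```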

  6. On the Validation of a Numerical Model for the Analysis of Soil-Structure Interaction Problems

    Directory of Open Access Journals (Sweden)

    Jorge Luis Palomino Tamayo

    Full Text Available Abstract Modeling and simulation of the mechanical response of structures relies on the use of computational models. Therefore, verification and validation procedures are the primary means of assessing accuracy, confidence and credibility in modeling. This paper is concerned with the validation of a three-dimensional numerical model based on the finite element method suitable for the dynamic analysis of soil-structure interaction problems. The soil mass, structure, structure's foundation and the appropriate boundary conditions can be represented altogether in a single model by using a direct approach. The theory of porous media of Biot is used to represent the soil mass as a two-phase material which is considered to be fully saturated with water; meanwhile other parts of the system are treated as one-phase materials. Plasticity of the soil mass is the main source of non-linearity in the problem and therefore an iterative-incremental algorithm based on the Newton-Raphson procedure is used to solve the nonlinear equilibrium equations. For discretization in time, the Generalized Newmark-β method is used. The soil is represented by a plasticity-based, effective-stress constitutive model suitable for liquefaction. Validation of the present numerical model is done by comparing analytical and centrifuge test results of soil and soil-pile systems with those results obtained with the present numerical model. A soil-pile-structure interaction problem is also presented in order to show the potential of the numerical tool.
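    The iterative-incremental solution strategy mentioned above can be illustrated in miniature: a scalar nonlinear equilibrium equation solved with Newton-Raphson iterations inside an incremental loading loop. The internal-force law and load steps below are invented, not the paper's finite element formulation.

```python
# Newton-Raphson on r(u) = f_int(u) - f_ext = 0, applied incrementally.
def f_int(u):                 # fictitious nonlinear (hardening) internal force
    return 100.0 * u + 50.0 * u ** 3

def k_tangent(u):             # its consistent tangent stiffness
    return 100.0 + 150.0 * u ** 2

def newton_raphson(f_ext, u0=0.0, tol=1e-10, max_iter=50):
    u = u0
    for _ in range(max_iter):
        residual = f_int(u) - f_ext
        if abs(residual) < tol:
            return u
        u -= residual / k_tangent(u)   # Newton update
    raise RuntimeError("no convergence")

# Incremental loading: each load step restarts the Newton solve from the
# previously converged state, mirroring the iterative-incremental scheme.
u = 0.0
for f_ext in (25.0, 50.0, 75.0, 100.0):
    u = newton_raphson(f_ext, u0=u)
    print(f"load {f_ext:6.1f} -> displacement {u:.6f}")
```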

  7. Development and validation of a 10-year-old child ligamentous cervical spine finite element model.

    Science.gov (United States)

    Dong, Liqiang; Li, Guangyao; Mao, Haojie; Marek, Stanley; Yang, King H

    2013-12-01

    Although a number of finite element (FE) adult cervical spine models have been developed to understand the injury mechanisms of the neck in automotive related crash scenarios, there have been fewer efforts to develop a child neck model. In this study, a 10-year-old ligamentous cervical spine FE model was developed for application in the improvement of pediatric safety related to motor vehicle crashes. The model geometry was obtained from medical scans and meshed using a multi-block approach. Appropriate properties based on review of literature in conjunction with scaling were assigned to different parts of the model. Child tensile force-deformation data in three segments, Occipital-C2 (C0-C2), C4-C5 and C6-C7, were used to validate the cervical spine model and predict failure forces and displacements. Design of computer experiments was performed to determine failure properties for intervertebral discs and ligaments needed to set up the FE model. The model-predicted ultimate displacements and forces were within the experimental range. The cervical spine FE model was validated in flexion and extension against the child experimental data in three segments, C0-C2, C4-C5 and C6-C7. Other model predictions were found to be consistent with the experimental responses scaled from adult data. The whole cervical spine model was also validated in tension, flexion and extension against the child experimental data. This study provided methods for developing a child ligamentous cervical spine FE model and to predict soft tissue failures in tension.

  8. CSNI Integral Test Facility Matrices for Validation of Best-Estimate Thermal-Hydraulic Computer Codes

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Internationally agreed Integral Test Facility (ITF) matrices for validation of realistic thermal hydraulic system computer codes were established. ITF development is mainly for Pressurised Water Reactors (PWRs) and Boiling Water Reactors (BWRs). A separate activity was for Russian Pressurised Water-cooled and Water-moderated Energy Reactors (WWER). Firstly, the main physical phenomena that occur during considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. In this paper some specific examples from the ITF matrices will also be provided. The matrices will be a guide for code validation, will be a basis for comparisons of code predictions performed with different system codes, and will contribute to the quantification of the uncertainty range of code model predictions. In addition to this objective, the construction of such a matrix is an attempt to record information which has been generated around the world over the last years, so that it is more accessible to present and future workers in that field than would otherwise be the case.

  9. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and more intelligent use of the networking.

  10. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  11. Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads

    Directory of Open Access Journals (Sweden)

    Jae Sang Moon

    2017-12-01

    Full Text Available Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study’s overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.
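    The MMLR step can be illustrated with ordinary least squares on synthetic data; the predictor and response choices below are invented placeholders, not the study's LES-derived quantities.

```python
# Multivariate multiple linear regression: one linear map from flow inputs to
# several wake parameters at once, fitted by least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n),                  # intercept
                     rng.uniform(6, 12, n),       # hub-height wind speed [m/s] (assumed input)
                     rng.uniform(0.05, 0.15, n)]) # ambient turbulence intensity (assumed input)
true_B = np.array([[0.5, 1.2],                    # synthetic coefficients; columns are two
                   [0.04, -0.05],                 # wake parameters (e.g. velocity deficit,
                   [2.0, 6.0]])                   # added turbulence), purely illustrative
Y = X @ true_B + 0.01 * rng.standard_normal((n, 2))

B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)     # the MMLR fit
print(np.round(B_hat, 3))                         # recovers the synthetic coefficients
```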

  12. Computer-assisted teaching of skin flap surgery: validation of a mobile platform software for medical students.

    Science.gov (United States)

    de Sena, David P; Fabricio, Daniela D; Lopes, Maria Helena I; da Silva, Vinicius D

    2013-01-01

    The purpose of this study was to develop and validate a multimedia software application for mobile platforms to assist in the teaching and learning process of design and construction of a skin flap. Traditional training in surgery is based on learning by doing. Initially, the use of cadavers and animal models appeared to be a valid alternative for training. However, many conflicts with these training models prompted progression to synthetic and virtual reality models. Fifty volunteer fifth- and sixth-year medical students completed a pretest and were randomly allocated into two groups of 25 students each. The control group was exposed for 5 minutes to a standard text-based print article, while the test group used multimedia software describing how to fashion a rhomboid flap. Each group then performed a cutaneous flap on a training bench model while being evaluated by three blinded BSPS (Brazilian Society of Plastic Surgery) board-certified surgeons using the OSATS (Objective Structured Assessment of Technical Skill) protocol and answered a post-test. The text-based group was then tested again using the software. The computer-assisted learning (CAL) group had superior performance, as confirmed by checklist scores, and students rated the multimedia method as the best study tool. CAL learners exhibited better subjective and objective performance when fashioning rhomboid flaps as compared to those taught with standard print material. These findings indicate that students preferred to learn using the multimedia method.

  13. Computational morphology of the lung and its virtual imaging

    International Nuclear Information System (INIS)

    Kitaoka, Hiroko

    2002-01-01

    The author proposes an entirely new approach called 'virtual imaging' of an organ based on 'computational morphology'. Computational morphology mathematically describes the design principles of an organ structure in order to generate the organ model via computer, which can be called a virtual organ. Virtual imaging simulates image data using the virtual organ. The virtual organ is divided into cubic voxels, and the CT value or other intensity value for each voxel is calculated according to the tissue properties within the voxel. The validity of the model is examined by comparing virtual images with clinical images. Computational image analysis methods can be developed based on validated models. In this paper, computational anatomy of the lung and its virtual X-ray imaging are introduced

  14. Development of tools and models for computational fracture assessment

    International Nuclear Information System (INIS)

    Talja, H.; Santaoja, K.

    1998-01-01

    The aim of the work presented in this paper has been to develop and test new computational tools and theoretically more sound methods for fracture mechanical analysis. The applicability of the engineering integrity assessment system MASI for evaluation of piping components has been extended. The most important motivation for the theoretical development has been the well-known fundamental limitations in the validity of the J-integral, which limit its applicability in many important practical safety assessment cases. Examples are extensive plastic deformation, multimaterial structures and ascending loading paths (especially warm prestress, WPS). Further, the micromechanical Gurson model has been applied to several reactor pressure vessel materials. Special attention is paid to the transferability of Gurson model parameters from tensile test results to prediction of ductile failure behaviour of cracked structures. (author)

  15. Using computer algebra and SMT-solvers to analyze a mathematical model of cholera propagation

    Science.gov (United States)

    Trujillo Arredondo, Mariana

    2014-06-01

    We analyze a mathematical model for the transmission of cholera. The model is already defined and involves variables such as the pathogen agent, which in this case is the bacterium Vibrio cholerae, and the human population. The human population is divided into three classes: susceptible, infectious and removed. Using Computer Algebra, specifically Maple, we obtain two equilibrium states: the disease-free state and the endemic state. Using Maple it is possible to prove that the disease-free state is locally asymptotically stable if and only if R0 < 1. Using the package Red-Log of the Computer algebra system Reduce and the SMT-Solver Z3Py it is possible to obtain numerical conditions for the model. The formula for the basic reproductive number synthesizes all epidemic parameters in the model. It is also possible to run numerical simulations which are very illustrative of the epidemic patterns that are expected to be observed in real situations. We claim that these kinds of software are very useful in the analysis of epidemic models, given that the symbolic computation provides algebraic formulas for the basic reproductive number and such algebraic formulas are very useful for deriving control measures. On the other hand, computer algebra software is a powerful tool for the stability analysis of epidemic models, given that all steps in the stability analysis can be carried out automatically: finding the equilibrium points, computing the Jacobian, computing the characteristic polynomial of the Jacobian, and applying the Routh-Hurwitz theorem to the characteristic polynomial. Finally, using SMT-solvers it is possible to automatically check satisfiability, validity and quantifier elimination, and these computations are very useful for analysing complicated epidemic models.
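    The automated pipeline described here (equilibria, Jacobian, characteristic polynomial, Routh-Hurwitz) can be reproduced in miniature with an open-source computer algebra system. The sketch below uses SymPy on a simple SIR-type system rather than the paper's cholera model, and Maple, Reduce and Z3Py are not required.

```python
# Symbolic equilibria, Jacobian and characteristic polynomials for a toy SIR model.
import sympy as sp

S, I, lam = sp.symbols("S I lambda")
beta, gamma, mu = sp.symbols("beta gamma mu", positive=True)

dS = mu - beta * S * I - mu * S          # susceptibles (normalised population)
dI = beta * S * I - (gamma + mu) * I     # infectious

equilibria = sp.solve([dS, dI], [S, I], dict=True)   # disease-free and endemic states
J = sp.Matrix([dS, dI]).jacobian([S, I])

for eq in equilibria:
    char_poly = sp.factor(J.subs(eq).charpoly(lam).as_expr())
    print(eq, "->", char_poly)
# Applying Routh-Hurwitz to each characteristic polynomial shows the disease-free
# state is stable for R0 = beta/(gamma + mu) < 1 and the endemic state for R0 > 1.
```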

  16. The IceCube Computing Infrastructure Model

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet the demands on processing and storage requirements of those experiments. We present the hybrid computing model of IceCube, which leverages GRID models with a more flexible direct user model, as an example of a possible solution. In IceCube a central datacenter at UW-Madison serves as Tier-0 with a single Tier-1 datacenter at DESY Zeuthen. We describe the setup of the IceCube computing infrastructure and report on our experience in successfully provisioning the IceCube computing needs.

  17. Validation of the Actuator Line Model for Simulating Flows past Yawed Wind Turbine Rotors

    DEFF Research Database (Denmark)

    Shen, Wen Zhong; Zhu, Wei Jun; Yang, Hua

    2015-01-01

    The Actuator Line/Navier-Stokes model is validated against wind tunnel measurements for flows past the yawed MEXICO rotor and past the yawed NREL Phase VI rotor. The MEXICO rotor is operated at a rotational speed of 424 rpm, a pitch angle of −2.3˚, wind speeds of 10, 15, 24 m/s and yaw angles of 15˚, 30˚ and 45˚. The computed loads as well as the velocity field behind the yawed MEXICO rotor are compared to the detailed pressure and PIV measurements which were carried out in the EU funded MEXICO project. For the NREL Phase VI rotor, computations were carried out at a rotational speed of 90.2 rpm...

  18. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify of the formation technique of representation of modeling methodology at computer science lessons. The necessity of studying computer modeling is that the current trends of strengthening of general education and worldview functions of computer science define the necessity of additional research of the…

  19. Computation of External Quality Factors for RF Structures by Means of Model Order Reduction and a Perturbation Approach

    CERN Document Server

    Flisgen, Thomas; van Rienen, Ursula

    2016-01-01

    External quality factors are significant quantities to describe losses via waveguide ports in radio frequency resonators. The current contribution presents a novel approach to determine external quality factors by means of a two-step procedure: First, a state-space model for the lossless radio frequency structure is generated and its model order is reduced. Subsequently, a perturbation method is applied on the reduced model so that external losses are accounted for. The advantage of this approach results from the fact that the challenges in dealing with lossy systems are shifted to the reduced order model. This significantly saves computational costs. The present paper provides a short overview on existing methods to compute external quality factors. Then, the novel approach is introduced and validated in terms of accuracy and computational time by means of commercial software.
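    A small post-processing illustration: once complex eigenfrequencies are available from the perturbed reduced-order model, each mode's external quality factor follows from Q = Re(omega) / (2 |Im(omega)|). The eigenfrequencies below are invented, and this is not the authors' algorithm itself.

```python
# Convert complex angular eigenfrequencies into external quality factors.
import numpy as np

# hypothetical complex eigenfrequencies [rad/s] from a perturbed reduced-order model
omegas = np.array([2 * np.pi * 1.3e9 + 1j * 4.0e5,
                   2 * np.pi * 1.8e9 + 1j * 2.5e7])

q_ext = omegas.real / (2.0 * np.abs(omegas.imag))   # Q = Re(w) / (2 |Im(w)|)
for w, q in zip(omegas, q_ext):
    print(f"f = {w.real / (2 * np.pi) / 1e9:.3f} GHz  ->  Q_ext ~ {q:.3e}")
```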

  20. Investigations of incorporating source directivity into room acoustics computer models to improve auralizations

    Science.gov (United States)

    Vigeant, Michelle C.

    Room acoustics computer modeling and auralizations are useful tools when designing or modifying acoustically sensitive spaces. In this dissertation, the input parameter of source directivity has been studied in great detail to determine first its effect in room acoustics computer models and secondly how to better incorporate the directional source characteristics into these models to improve auralizations. To increase the accuracy of room acoustics computer models, the source directivity of real sources, such as musical instruments, must be included in the models. The traditional method for incorporating source directivity into room acoustics computer models involves inputting the measured static directivity data taken every 10° in a sphere-shaped pattern around the source. This data can be entered into the room acoustics software to create a directivity balloon, which is used in the ray tracing algorithm to simulate the room impulse response. The first study in this dissertation shows that using directional sources over an omni-directional source in room acoustics computer models produces significant differences both in terms of calculated room acoustics parameters and auralizations. The room acoustics computer model was also validated in terms of accurately incorporating the input source directivity. A recently proposed technique for creating auralizations using a multi-channel source representation has been investigated with numerous subjective studies, applied to both solo instruments and an orchestra. The method of multi-channel auralizations involves obtaining multi-channel anechoic recordings of short melodies from various instruments and creating individual channel auralizations. These auralizations are then combined to create a total multi-channel auralization. Through many subjective studies, this process was shown to be effective in terms of improving the realism and source width of the auralizations in a number of cases, and also modeling different

  1. Validation of a numerical 3-D fluid-structure interaction model for a prosthetic valve based on experimental PIV measurements.

    Science.gov (United States)

    Guivier-Curien, Carine; Deplano, Valérie; Bertrand, Eric

    2009-10-01

    A numerical 3-D fluid-structure interaction (FSI) model of a prosthetic aortic valve was developed, based on a commercial computational fluid dynamics (CFD) software program using an Arbitrary Eulerian Lagrangian (ALE) formulation. To make sure of the validity of this numerical model, an equivalent experimental model accounting for both the geometrical features and the hydrodynamic conditions was also developed. The leaflet and the flow behaviours around the bileaflet valve were investigated numerically and experimentally by performing particle image velocimetry (PIV) measurements. Through quantitative and qualitative comparisons, it was shown that the leaflet behaviour and the velocity fields were similar in both models. The present study allows the validation of a fully coupled 3-D FSI numerical model. The promising numerical tool could be therefore used to investigate clinical issues involving the aortic valve.

  2. Current computational modelling trends in craniomandibular biomechanics and their clinical implications.

    Science.gov (United States)

    Hannam, A G

    2011-03-01

    Computational models of interactions in the craniomandibular apparatus are used with increasing frequency to study biomechanics in normal and abnormal masticatory systems. Methods and assumptions in these models can be difficult to assess by those unfamiliar with current practices in this field; health professionals are often faced with evaluating the appropriateness, validity and significance of models which are perhaps more familiar to the engineering community. This selective review offers a foundation for assessing the strength and implications of a craniomandibular modelling study. It explores different models used in general science and engineering and focuses on current best practices in biomechanics. The problem of validation is considered at some length, because this is not always fully realisable in living subjects. Rigid-body, finite element and combined approaches are discussed, with examples of their application to basic and clinically relevant problems. Some advanced software platforms currently available for modelling craniomandibular systems are mentioned. Recent studies of the face, masticatory muscles, tongue, craniomandibular skeleton, temporomandibular joint, dentition and dental implants are reviewed, and the significance of non-linear and non-isotropic material properties is emphasised. The unique challenges in clinical application are discussed, and the review concludes by posing some questions which one might reasonably expect to find answered in plausible modelling studies of the masticatory apparatus. © 2010 Blackwell Publishing Ltd.

  3. Validation of mathematical models to describe fluid dynamics of a cold riser by gamma ray attenuation

    International Nuclear Information System (INIS)

    Melo, Ana Cristina Bezerra Azedo de

    2004-12-01

    The fluid dynamic behavior of a riser in a cold-type FCC model was investigated by means of the catalyst concentration distribution, measured with gamma attenuation and simulated with a mathematical model. In the riser of the cold model (MEF), 0.032 m in diameter and 2.30 m in length, a fluidized bed of air and FCC catalyst circulates. The MEF is operated under automatic control and is instrumented for measuring fluid dynamic variables. The axial catalyst concentration distribution was measured using an Am-241 gamma source and a NaI detector coupled to a multichannel analyzer with software for data acquisition and evaluation. The MEF was adapted for validation of a fluid dynamic model describing the flow in the riser, for example by introducing an injector for controlling the circulating solid flow. Mathematical models were selected from the literature, analyzed and tested to simulate the fluid dynamics of the riser. A methodology for validating fluid dynamic models was studied and implemented. The stages of the work followed this validation methodology: planning of experiments, study of the equations which describe the fluid dynamics, application of computational solvers, and comparison with experimental data. Operational sequences were carried out keeping the MEF conditions constant while measuring the catalyst concentration and, simultaneously, the fluid dynamic variables, the velocity of the components and the pressure drop in the riser. Simulated and experimental values were then compared and statistically treated, aiming at the precision required to validate the fluid dynamic model. The comparison tests between experimental and simulated data were carried out under validation criteria. The fluid dynamic behavior of the riser was analyzed, and the results and their agreement with the literature were discussed. The adopted model was validated under the MEF operational conditions, for a 3 to 6 m/s gas velocity in the riser and a slip

  4. A conceptual and computational model of moral decision making in human and artificial agents.

    Science.gov (United States)

    Wallach, Wendell; Franklin, Stan; Allen, Colin

    2010-07-01

    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent's selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we

  5. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    The approach aims to obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account

  6. Modeling and Validation of Sodium Plugging for Heat Exchangers in Sodium-cooled Fast Reactor Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ferroni, Paolo [Westinghouse Electric Company LLC, Cranberry Township, PA (United States). Global Technology Development; Tatli, Emre [Westinghouse Electric Company LLC, Cranberry Township, PA (United States); Czerniak, Luke [Westinghouse Electric Company LLC, Cranberry Township, PA (United States); Sienicki, James J. [Argonne National Lab. (ANL), Argonne, IL (United States); Chien, Hual-Te [Argonne National Lab. (ANL), Argonne, IL (United States); Yoichi, Momozaki [Argonne National Lab. (ANL), Argonne, IL (United States); Bakhtiari, Sasan [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-06-29

    The project “Modeling and Validation of Sodium Plugging for Heat Exchangers in Sodium-cooled Fast Reactor Systems” was conducted jointly by Westinghouse Electric Company (Westinghouse) and Argonne National Laboratory (ANL), over the period October 1, 2013- March 31, 2016. The project’s motivation was the need to provide designers of Sodium Fast Reactors (SFRs) with a validated, state-of-the-art computational tool for the prediction of sodium oxide (Na2O) deposition in small-diameter sodium heat exchanger (HX) channels, such as those in the diffusion bonded HXs proposed for SFRs coupled with a supercritical CO2 (sCO2) Brayton cycle power conversion system. In SFRs, Na2O deposition can potentially occur following accidental air ingress in the intermediate heat transport system (IHTS) sodium and simultaneous failure of the IHTS sodium cold trap. In this scenario, oxygen can travel through the IHTS loop and reach the coldest regions, represented by the cold end of the sodium channels of the HXs, where Na2O precipitation may initiate and continue. In addition to deteriorating HX heat transfer and pressure drop performance, Na2O deposition can lead to channel plugging especially when the size of the sodium channels is small, which is the case for diffusion bonded HXs whose sodium channel hydraulic diameter is generally below 5 mm. Sodium oxide melts at a high temperature well above the sodium melting temperature such that removal of a solid plug such as through dissolution by pure sodium could take a lengthy time. The Sodium Plugging Phenomena Loop (SPPL) was developed at ANL, prior to this project, for investigating Na2O deposition phenomena within sodium channels that are prototypical of the diffusion bonded HX channels envisioned for SFR-sCO2 systems. In this project, a Computational Fluid Dynamic (CFD) model capable of simulating the thermal-hydraulics of the SPPL test

  7. A computational model of in vitro angiogenesis based on extracellular matrix fibre orientation.

    Science.gov (United States)

    Edgar, Lowell T; Sibole, Scott C; Underwood, Clayton J; Guilkey, James E; Weiss, Jeffrey A

    2013-01-01

    Recent interest in the process of vascularisation within the biomedical community has motivated numerous new research efforts focusing on the process of angiogenesis. Although the role of chemical factors during angiogenesis has been well documented, the role of mechanical factors, such as the interaction between angiogenic vessels and the extracellular matrix, remains poorly understood. In vitro methods for studying angiogenesis exist; however, measurements available using such techniques often suffer from limited spatial and temporal resolutions. For this reason, computational models have been extensively employed to investigate various aspects of angiogenesis. This paper outlines the formulation and validation of a simple and robust computational model developed to accurately simulate angiogenesis based on length, branching and orientation morphometrics collected from vascularised tissue constructs. Microvessels were represented as a series of connected line segments. The morphology of the vessels was determined by a linear combination of the collagen fibre orientation, the vessel density gradient and a random walk component. Excellent agreement was observed between computational and experimental morphometric data over time. Computational predictions of microvessel orientation within an anisotropic matrix correlated well with experimental data. The accuracy of this modelling approach makes it a valuable platform for investigating the role of mechanical interactions during angiogenesis.
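    The growth rule summarised above, with segment direction given by a weighted combination of fibre orientation, vessel density gradient and a random walk, can be sketched schematically as follows; the weights, fields and segment length are invented placeholders rather than calibrated values from the study.

```python
# Schematic growth of a single microvessel as connected line segments.
import numpy as np

rng = np.random.default_rng(42)

def unit(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def next_direction(fibre_dir, density_grad, w_fibre=0.5, w_grad=0.3, w_rand=0.2):
    random_step = unit(rng.standard_normal(2))
    combo = (w_fibre * unit(fibre_dir)
             - w_grad * unit(density_grad)      # bias growth away from crowded regions
             + w_rand * random_step)            # random-walk component
    return unit(combo)

tip, segments = np.array([0.0, 0.0]), []
for _ in range(20):
    # toy stand-ins for the local fibre orientation and vessel-density gradient
    d = next_direction(fibre_dir=np.array([1.0, 0.2]), density_grad=tip)
    new_tip = tip + 0.05 * d                    # 0.05 = segment length (arbitrary units)
    segments.append((tip, new_tip))
    tip = new_tip
print(f"grew {len(segments)} segments, tip at {tip}")
```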

  8. Smooth particle hydrodynamic modeling and validation for impact bird substitution

    Science.gov (United States)

    Babu, Arun; Prasad, Ganesh

    2018-04-01

    Bird strike events occur incidentally and can at times be fatal for airframe structures. Federal Aviation Regulations (FAR) and similar standards mandate that aircraft be designed to withstand various levels of bird strike damage. The subject of this paper is the numerical modeling of a soft-body geometry that realistically substitutes for an actual bird in simulations of bird strikes on target structures. Such a numerical model, reproducing actual bird behavior throughout the impact, is much desired in order to exploit state-of-the-art computational facilities for simulating bird strike events. The validity of simulations depicting bird strikes depends largely on the correctness of the bird model. In an impact, a complex and coupled set of dynamic interactions exists between the target and the impactor. To simplify the problem, the impactor response is decoupled from that of the target, which can be done by modeling the target as non-compliant. The bird is assumed to behave as a fluid during impact, since the stresses generated in the bird body are significantly higher than its yield stress; hydrodynamic theory is therefore well suited to describing the problem. The impactor flows steadily over the target for most of the event. The impact starts with an initial shock, passes through a radial release-shock regime, and subsequently a steady flow is established in the bird body; this phase continues until the whole length of the bird body has been turned around. The initial shock pressure and the steady-state pressure are suitable variables for comparing and validating the bird model. Spatial discretization of the bird is done using the Smoothed Particle Hydrodynamics (SPH) approach. This discrete particle representation offers significant advantages over other contemporary approaches. Thermodynamic state variable relations are established using a polynomial equation of state (EOS). ANSYS AUTODYN is used to perform the explicit dynamic simulation of the impact event. Validation of the shock and steady
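    The two validation quantities named in the abstract can be estimated from elementary hydrodynamics: momentum balance across the initial shock gives P_shock = rho0 * Us * up, and Bernoulli gives the steady stagnation pressure P_steady = 0.5 * rho0 * up**2. The material constants and impact velocity below are only indicative assumptions for a water-like bird substitute, not values from the paper.

```python
# Rough order-of-magnitude estimates of the shock and steady impact pressures.
rho0 = 950.0          # kg/m^3, assumed density of a porous gelatine-like substitute
c0, k = 1480.0, 2.0   # assumed linear shock Hugoniot fit: Us = c0 + k * up
u_p = 150.0           # m/s, assumed impact velocity

u_s = c0 + k * u_p                     # shock velocity in the bird material
p_shock = rho0 * u_s * u_p             # initial (Hugoniot) shock pressure [Pa]
p_steady = 0.5 * rho0 * u_p ** 2       # steady stagnation pressure [Pa]
print(f"shock pressure  ~ {p_shock / 1e6:.1f} MPa")
print(f"steady pressure ~ {p_steady / 1e6:.2f} MPa")
```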

  9. Development and validation of computer codes for analysis of PHWR containment behaviour

    International Nuclear Information System (INIS)

    Markandeya, S.G.; Haware, S.K.; Ghosh, A.K.; Venkat Raj, V.

    1997-01-01

    In order to ensure that the design intent of the containment of Indian Pressurised Heavy Water Reactors (IPHWRs) is met, both analytical and experimental studies are being pursued at BARC. As a part of analytical studies, computer codes for predicting the behaviour of containment under various accident scenarios are developed/adapted. These include codes for predicting 1) pressure, temperature transients in the containment following either Loss of Coolant Accident (LOCA) or Main Steam Line Break (MSLB), 2) hydrogen behaviour in respect of its distribution, combustion and the performance of proposed mitigation systems, and 3) behaviour of fission product aerosols in the piping circuits of the primary heat transport system and in the containment. All these codes have undergone thorough validation using data obtained from in-house test facilities or from international sources. Participation in the International Standard Problem (ISP) exercises has also helped in validation of the codes. The present paper briefly describes some of these codes and the various exercises performed for their validation. (author)

  10. Validation of ecological state space models using the Laplace approximation

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Albertsen, Christoffer Moesgaard; Berg, Casper Willestofte

    2017-01-01

    Many statistical models in ecology follow the state space paradigm. For such models, the important step of model validation rarely receives as much attention as estimation or hypothesis testing, perhaps due to lack of available algorithms and software. Model validation is often based on a naive...... for estimation in general mixed effects models. Implementing one-step predictions in the R package Template Model Builder, we demonstrate that it is possible to perform model validation with little effort, even if the ecological model is multivariate, has non-linear dynamics, and whether observations...... useful directions in which the model could be improved....

  11. Sierra toolkit computational mesh conceptual model

    International Nuclear Information System (INIS)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  12. Computational fluid dynamics simulations and validations of results

    CSIR Research Space (South Africa)

    Sitek, MA

    2013-09-01

    Full Text Available Wind flow influence on a high-rise building is analyzed. The research covers full-scale tests, wind-tunnel experiments and numerical simulations. In the present paper computational model used in simulations is described and the results, which were...

  13. Large Eddy Simulation Modeling of Flashback and Flame Stabilization in Hydrogen-Rich Gas Turbines Using a Hierarchical Validation Approach

    Energy Technology Data Exchange (ETDEWEB)

    Clemens, Noel [Univ. of Texas, Austin, TX (United States)

    2015-09-30

    This project was a combined computational and experimental effort to improve predictive capability for boundary layer flashback of premixed swirl flames relevant to gas-turbine power plants operating with high-hydrogen-content fuels. During the course of this project, significant progress in modeling was made on four major fronts: 1) use of direct numerical simulation of turbulent flames to understand the coupling between the flame and the turbulent boundary layer; 2) improved modeling capability for flame propagation in stratified pre-mixtures; 3) improved portability of computer codes using the OpenFOAM platform to facilitate transfer to industry and other researchers; and 4) application of LES to flashback in swirl combustors, and a detailed assessment of its capabilities and limitations for predictive purposes. A major component of the project was an experimental program that focused on developing a rich experimental database of boundary layer flashback in swirl flames. Both methane and high-hydrogen fuels, including effects of elevated pressure (1 to 5 atm), were explored. For this project, a new model swirl combustor was developed. Kilohertz-rate stereoscopic PIV and chemiluminescence imaging were used to investigate the flame propagation dynamics. In addition to the planar measurements, a technique capable of detecting the instantaneous, time-resolved 3D flame front topography was developed and applied successfully to investigate the flow-flame interaction. The UT measurements and legacy data were used in a hierarchical validation approach where flows with increasingly complex physics were used for validation. First component models were validated with DNS and literature data in simplified configurations, and this was followed by validation with the UT 1-atm flashback cases, and then the UT high-pressure flashback cases. The new models and portable code represent a major improvement over what was available before this project was initiated.

  14. Advanced training simulator models. Implementation and validation

    International Nuclear Information System (INIS)

    Borkowsky, Jeffrey; Judd, Jerry; Belblidia, Lotfi; O'farrell, David; Andersen, Peter

    2008-01-01

Modern training simulators are required to replicate plant data for both thermal-hydraulic and neutronic response. Replication is required such that reactivity manipulation on the simulator properly trains the operator for reactivity manipulation at the plant. This paper discusses advanced models which perform this function in real-time using the coupled code system THOR/S3R. This code system models all fluid systems in detail using an advanced two-phase thermal-hydraulic model. The nuclear core is modeled using an advanced, three-dimensional nodal method and also by using cycle-specific nuclear data. These models are configured to run interactively from a graphical instructor station or hardware operation panels. The simulator models are theoretically rigorous and are expected to replicate the physics of the plant. However, to verify replication, the models must be independently assessed. Plant data is the preferred validation method, but plant data is often not available for many important training scenarios. In the absence of data, validation may be obtained by slower-than-real-time transient analysis. This analysis can be performed by coupling a safety analysis code and a core design code. Such a coupling exists between the codes RELAP5 and SIMULATE-3K (S3K). RELAP5/S3K is used to validate the real-time model for several postulated plant events. (author)

  15. GSTARS computer models and their applications, part I: theoretical development

    Science.gov (United States)

    Yang, C.T.; Simoes, F.J.M.

    2008-01-01

GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  16. An integrated computational validation approach for potential novel miRNA prediction

    Directory of Open Access Journals (Sweden)

    Pooja Viswam

    2017-12-01

Full Text Available MicroRNAs (miRNAs) are short, non-coding RNAs between 17 and 24 bp in length that regulate gene expression by targeting mRNA molecules. The regulatory functions of miRNAs are largely associated with disease phenotypes such as cancer and with processes such as cell signaling, cell division, growth and other aspects of metabolism. Novel miRNAs are defined as sequences which do not have any similarity with existing known sequences and are devoid of any experimental evidence. In recent decades, the advent of next-generation sequencing has allowed us to capture small RNA molecules from cells and to develop methods to estimate their expression levels. Several computational algorithms are available to predict novel miRNAs from deep sequencing data. In this work, we integrated three novel miRNA prediction programs, miRDeep, miRanalyzer and miRPRo, to compare and validate their prediction efficiency. The dicer cleavage sites, alignment density, seed conservation, minimum free energy, AU-GC percentage, secondary loop scores, false discovery rates and confidence scores will be considered for comparison and evaluation. The efficiency of identifying isomiRs and base-pair mismatches in a strand-specific manner will also be considered for the computational validation. Further, the criteria and parameters for identifying the best possible novel miRNAs with minimal false-positive rates were deduced.
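A minimal sketch of the integration step, assuming each tool's candidate calls have already been parsed into simple records with an identifier, a minimum free energy and a confidence score (the field names, values and thresholds below are illustrative placeholders, not the authors' actual pipeline), is to filter each tool's output and then intersect the surviving candidates:

```python
# Sketch: consensus of novel miRNA candidates from three prediction tools.
# In practice each list would be parsed from the tool's own output format.
def passes_filters(rec, mfe_cutoff=-20.0, conf_cutoff=0.8):
    """Keep candidates with a low minimum free energy and a high confidence score."""
    return rec["mfe"] <= mfe_cutoff and rec["confidence"] >= conf_cutoff

mirdeep = [{"id": "cand-001", "mfe": -27.3, "confidence": 0.92},
           {"id": "cand-002", "mfe": -18.1, "confidence": 0.95},
           {"id": "cand-003", "mfe": -24.6, "confidence": 0.81}]
miranalyzer = [{"id": "cand-001", "mfe": -26.8, "confidence": 0.88},
               {"id": "cand-003", "mfe": -23.9, "confidence": 0.86},
               {"id": "cand-004", "mfe": -25.2, "confidence": 0.90}]
mirpro = [{"id": "cand-001", "mfe": -27.0, "confidence": 0.93},
          {"id": "cand-003", "mfe": -22.5, "confidence": 0.79}]

sets = [{r["id"] for r in tool if passes_filters(r)}
        for tool in (mirdeep, miranalyzer, mirpro)]

consensus_all = set.intersection(*sets)                       # called by all three tools
consensus_two = {c for s1 in sets for s2 in sets if s1 is not s2
                 for c in s1 & s2} - consensus_all            # called by exactly two tools
print("supported by all three tools:", sorted(consensus_all))
print("supported by exactly two tools:", sorted(consensus_two))
```

Candidates called by all three programs form the strongest consensus tier; those called by only two would still be screened against the seed-conservation and secondary-structure criteria listed above.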

  17. M3 version 3.0: Verification and validation; Hydrochemical model of ground water at repository site

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, Javier B. (Dept. of Earth Sciences, Univ. of Zaragoza, Zaragoza (Spain)); Laaksoharju, Marcus (Geopoint AB, Sollentuna (Sweden)); Skaarman, Erik (Abscondo, Bromma (Sweden)); Gurban, Ioana (3D-Terra (Canada))

    2009-01-15

    Hydrochemical evaluation is a complex type of work that is carried out by specialists. The outcome of this work is generally presented as qualitative models and process descriptions of a site. To support and help to quantify the processes in an objective way, a multivariate mathematical tool entitled M3 (Multivariate Mixing and Mass balance calculations) has been constructed. The computer code can be used to trace the origin of the groundwater, and to calculate the mixing proportions and mass balances from groundwater data. The M3 code is a groundwater response model, which means that changes in the groundwater chemistry in terms of sources and sinks are traced in relation to an ideal mixing model. The complexity of the measured groundwater data determines the configuration of the ideal mixing model. Deviations from the ideal mixing model are interpreted as being due to reactions. Assumptions concerning important mineral phases altering the groundwater or uncertainties associated with thermodynamic constants do not affect the modelling because the calculations are solely based on the measured groundwater composition. M3 uses the opposite approach to that of many standard hydrochemical models. In M3, mixing is evaluated and calculated first. The constituents that cannot be described by mixing are described by reactions. The M3 model consists of three steps: the first is a standard principal component analysis, followed by mixing and finally mass balance calculations. The measured groundwater composition can be described in terms of mixing proportions (%), while the sinks and sources of an element associated with reactions are reported in mg/L. This report contains a set of verification and validation exercises with the intention of building confidence in the use of the M3 methodology. At the same time, clear answers are given to questions related to the accuracy and the precision of the results, including the inherent uncertainties and the errors that can be made
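The three-step M3 sequence described above (principal component analysis, then mixing, then mass balance) can be illustrated with a toy calculation. In the sketch below, chosen reference (end-member) waters and one measured sample are projected onto the first two principal components, mixing proportions are obtained by non-negative least squares with a soft sum-to-one constraint, and the element-wise deviation from the ideal mixture is read as a source or sink term. All compositions, tracers and reference waters are invented for illustration and are not taken from the M3 report.

```python
# Sketch: PCA + mixing-proportion + mass-balance steps in the spirit of M3.
import numpy as np
from scipy.optimize import nnls

elements = ["Cl", "Na", "Ca", "HCO3"]             # illustrative tracers (mg/L)
# Rows: reference (end-member) waters, e.g. dilute meteoric, marine, deep brine.
end_members = np.array([[5.0,     10.0,    20.0, 300.0],
                        [19000., 10500.,  400.0, 140.0],
                        [45000.,  8000., 19000.,  10.0]])
sample = np.array([6500., 3700., 2900., 180.0])   # one measured groundwater

# Step 1: principal components of the combined data (centred, via SVD).
X = np.vstack([end_members, sample])
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                            # coordinates in the PC1-PC2 plane

# Step 2: mixing proportions by non-negative least squares in PC space,
# with an extra row of ones as a soft sum-to-one constraint, then renormalised.
A = np.vstack([scores[:-1].T, np.ones(len(end_members))])
b = np.append(scores[-1], 1.0)
props, _ = nnls(A, b)
props = props / props.sum()

# Step 3: mass balance; the deviation between the measured sample and the
# ideal mixture is attributed to reactions (sources > 0, sinks < 0), in mg/L.
ideal_mixture = props @ end_members
deviation = sample - ideal_mixture
for el, mix_val, dev in zip(elements, ideal_mixture, deviation):
    print(f"{el}: ideal mixing {mix_val:8.1f} mg/L, reaction term {dev:+8.1f} mg/L")
print("mixing proportions:", np.round(props, 3))
```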

  18. Validation of Storm Water Management Model Storm Control Measures Modules

    Science.gov (United States)

    Simon, M. A.; Platz, M. C.

    2017-12-01

    EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970's, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs, and these were performed and reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
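The mathematical comparison of outflow hydrographs and total outflow volumes mentioned above is typically summarised with goodness-of-fit statistics such as the Nash-Sutcliffe efficiency and the percent volume error. The helper below is a generic sketch of that comparison on time-aligned outflow series; it is not part of SWMM or PEST++, and the example data are made up.

```python
# Sketch: goodness-of-fit statistics for an observed vs. simulated hydrograph.
import numpy as np

def hydrograph_fit(observed, simulated, dt_seconds):
    """Return Nash-Sutcliffe efficiency and percent error in total outflow volume."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    # NSE: 1 is a perfect fit, 0 means the model is no better than the observed mean.
    nse = 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)
    # Total outflow volumes by rectangular integration of the flow rates.
    vol_obs = np.sum(observed) * dt_seconds
    vol_sim = np.sum(simulated) * dt_seconds
    volume_error_pct = 100.0 * (vol_sim - vol_obs) / vol_obs
    return nse, volume_error_pct

# Example with made-up 5-minute outflow data (m^3/s) for a single storm event.
obs = [0.00, 0.02, 0.10, 0.25, 0.18, 0.09, 0.04, 0.01]
sim = [0.00, 0.03, 0.12, 0.22, 0.17, 0.10, 0.05, 0.02]
nse, dv = hydrograph_fit(obs, sim, dt_seconds=300)
print(f"NSE = {nse:.3f}, volume error = {dv:+.1f}%")
```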

  19. Mathematical Modeling and Computational Thinking

    Science.gov (United States)

    Sanford, John F.; Naidu, Jaideep T.

    2017-01-01

The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable aid in learning logical thinking but of less help when learning problem-solving skills. The paper is the third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…

  20. ELASTO-KINEMATIC COMPUTATIONAL MODEL OF SUSPENSION WITH FLEXIBLE SUPPORTING ELEMENTS

    Directory of Open Access Journals (Sweden)

    Tomáš Vrána

    2016-04-01

Full Text Available This paper analyzes the impact of the flexibility of individual supporting elements of an independent suspension on its elasto-kinematic characteristics. The toe and camber angle are the geometric parameters of the suspension whose variation under the action of vertical, longitudinal and transverse forces affects the stability of the vehicle. To study these dependencies, a computational multibody system (MBS) model of the axle suspension is created in the HyperWorks system. Finite-Element-Method (FEM) models reflecting the flexibility of the main supporting elements are implemented. These are the subframe, the longitudinal arms, the transverse arms and the knuckle. Flexible models are developed using Component Mode Synthesis (CMS) by Craig-Bampton. The model further comprises force elements, such as helical springs, shock absorbers with a wheel stop and the anti-roll bar. Rubber-metal bushings are modeled flexibly, using nonlinear deformation characteristics. Simulation results are validated by experimental measurements of the geometric parameters of the real suspension.

  1. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    Science.gov (United States)

    Sidick, Erkin

    2012-01-01

This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike-polynomials and a power spectral density model, such re-sampling does not introduce any aliasing and interpolation errors as are introduced by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly-varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among the different methods such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to
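The central idea, re-sampling through an analytical basis rather than pixel interpolation, can be sketched with a handful of low-order Zernike terms: fit the measured map on its native grid by least squares, then evaluate the fitted coefficients on the target grid. The described software also carries the high-spatial-frequency content with a power spectral density model, which is omitted here; the (unnormalised) basis, grids and noise level below are assumptions chosen purely for illustration.

```python
# Sketch: noise-free re-sampling of a surface map via a low-order Zernike fit.
import numpy as np

def zernike_basis(x, y):
    """A few unnormalised low-order Zernike terms on the unit disk
    (piston, tilts, defocus, astigmatism, coma); columns form the design matrix."""
    r2 = x**2 + y**2
    return np.column_stack([
        np.ones_like(x),                      # piston
        x, y,                                 # tilts
        2 * r2 - 1,                           # defocus
        x**2 - y**2, 2 * x * y,               # astigmatism
        (3 * r2 - 2) * x, (3 * r2 - 2) * y,   # coma
    ])

def resample_surface(x_meas, y_meas, z_meas, x_new, y_new):
    """Fit measured heights by least squares, then evaluate on the new grid."""
    coeffs, *_ = np.linalg.lstsq(zernike_basis(x_meas, y_meas), z_meas, rcond=None)
    return zernike_basis(x_new, y_new) @ coeffs

# Example: synthetic noisy measurement on a coarse grid, evaluated on a finer one.
rng = np.random.default_rng(0)
xc, yc = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
mask = xc**2 + yc**2 <= 1.0
x_m, y_m = xc[mask], yc[mask]
z_true = 0.3 * (2 * (x_m**2 + y_m**2) - 1) + 0.1 * x_m
z_meas = z_true + 1e-3 * rng.standard_normal(z_true.shape)   # measurement noise
xf, yf = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
mask_f = xf**2 + yf**2 <= 1.0
z_up = resample_surface(x_m, y_m, z_meas, xf[mask_f], yf[mask_f])
print("up-sampled points:", z_up.size)
```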

  2. Modeling an Excitable Biosynthetic Tissue with Inherent Variability for Paired Computational-Experimental Studies.

    Directory of Open Access Journals (Sweden)

    Tanmay A Gokhale

    2017-01-01

Full Text Available To understand how excitable tissues give rise to arrhythmias, it is crucial to understand the electrical dynamics of cells in the context of their environment. Multicellular monolayer cultures have proven useful for investigating arrhythmias and other conduction anomalies, and because of their relatively simple structure, these constructs lend themselves to paired computational studies that often help elucidate mechanisms of the observed behavior. However, tissue cultures of cardiomyocyte monolayers currently require the use of neonatal cells with ionic properties that change rapidly during development and have thus been poorly characterized and modeled to date. Recently, Kirkton and Bursac demonstrated the ability to create biosynthetic excitable tissues from genetically engineered and immortalized HEK293 cells with well-characterized electrical properties and the ability to propagate action potentials. In this study, we developed and validated a computational model of these excitable HEK293 cells (called "Ex293" cells) using existing electrophysiological data and a genetic search algorithm. In order to reproduce not only the mean but also the variability of experimental observations, we examined what sources of variation were required in the computational model. Random cell-to-cell and inter-monolayer variation in both ionic conductances and tissue conductivity was necessary to explain the experimentally observed variability in action potential shape and macroscopic conduction, and the spatial organization of cell-to-cell conductance variation was found to not impact macroscopic behavior; the resulting model accurately reproduces both normal and drug-modified conduction behavior. The development of a computational Ex293 cell and tissue model provides a novel framework to perform paired computational-experimental studies to study normal and abnormal conduction in multidimensional excitable tissue, and the methodology of modeling

  3. The development and validation of a numerical integration method for non-linear viscoelastic modeling

    Science.gov (United States)

    Ramo, Nicole L.; Puttlitz, Christian M.

    2018-01-01

    Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
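The computational advantage described, carrying only a history state variable from the preceding time step instead of the full load history, is easiest to see in the linear (single-term Prony series) special case. The sketch below implements that standard recursive update; the published formulation generalises it with strain-dependent coefficients, which are not reproduced here, and the material constants are placeholders.

```python
# Sketch: recursive stress update for a single-term Prony series,
#   sigma(t) = E_inf * eps(t) + integral of E_1 * exp(-(t-s)/tau) * d(eps)/ds ds.
# Only the history variable h from the previous step is needed at each update.
import math

def simulate_stress(strain, dt, e_inf=1.0, e_1=0.5, tau=0.2):
    """March through a strain history and return the stress history."""
    decay = math.exp(-dt / tau)
    # Exact update assuming the strain varies linearly within each time step.
    gain = e_1 * tau / dt * (1.0 - decay)
    stress, h = [], 0.0
    for i, eps in enumerate(strain):
        d_eps = eps - strain[i - 1] if i > 0 else eps
        h = decay * h + gain * d_eps          # history carried from the last step only
        stress.append(e_inf * eps + h)
    return stress

# Example: stress relaxation after a step to 2% strain held for 1 second.
dt = 0.01
strain_history = [0.02] * 100
sigma = simulate_stress(strain_history, dt)
print(f"peak stress {sigma[0]:.4f}, relaxed stress {sigma[-1]:.4f}")
```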

  4. Computational fluid dynamics modeling of two-phase flow in a BWR fuel assembly

    International Nuclear Information System (INIS)

    Andrey Ioilev; Maskhud Samigulin; Vasily Ustinenko; Simon Lo; Adrian Tentner

    2005-01-01

Full text of publication follows: The goal of this project is to develop an advanced Computational Fluid Dynamics (CFD) computer code (CFD-BWR) that allows the detailed analysis of the two-phase flow and heat transfer phenomena in a Boiling Water Reactor (BWR) fuel bundle under various operating conditions. This code will include more fundamental physical models than the current generation of sub-channel codes and advanced numerical algorithms for improved computational accuracy, robustness, and speed. It is highly desirable to understand the detailed two-phase flow phenomena inside a BWR fuel bundle. These phenomena include coolant phase changes and multiple flow regimes which directly influence the coolant interaction with the fuel assembly and, ultimately, the reactor performance. Traditionally, the best tools for the analysis of two-phase flow phenomena inside the BWR fuel assembly have been the sub-channel codes. However, the resolution of these codes is still too coarse for analyzing the detailed intra-assembly flow patterns, such as flow around a spacer element. Recent progress in Computational Fluid Dynamics (CFD), coupled with the rapidly increasing computational power of massively parallel computers, shows promising potential for the fine-mesh, detailed simulation of fuel assembly two-phase flow phenomena. However, the phenomenological models available in the commercial CFD programs are not as advanced as those currently being used in the sub-channel codes in the nuclear industry. In particular, there are no models currently available which are able to reliably predict the nature of the flow regimes, and use the appropriate sub-models for those flow regimes. The CFD-BWR code is being developed as a customized module built on the foundation of the commercial CFD Code STAR-CD which provides general two-phase flow modeling capabilities. The paper describes the model development strategy which has been adopted by the development team for the

  5. Formulation and Validation of an Efficient Computational Model for a Dilute, Settling Suspension Undergoing Rotational Mixing

    Energy Technology Data Exchange (ETDEWEB)

    Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran; Crawford, Nathan C.; Fischer, Paul F.

    2017-04-11

    Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.
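A one-dimensional caricature of the solids transport in such a mixture model, a volume-fraction field advected by hindered settling and smoothed by particle diffusion, can be written as a short explicit finite-difference loop. The settling closure, diffusivity and packing limit below are generic placeholders rather than the fitted models from this work, and there is no coupling to the impeller-driven flow field.

```python
# Sketch: 1-D hindered settling plus diffusion of a solids volume fraction phi(z, t),
#   d(phi)/dt = d(phi * v_s(phi))/dz + d/dz( D * d(phi)/dz ),
# with z measured upward from the vessel bottom and v_s the downward settling speed.
import numpy as np

nz, height, dt, t_end = 200, 0.1, 0.02, 600.0   # cells, domain height (m), step (s), end (s)
dz = height / nz
phi = np.full(nz, 0.05)                          # start from a uniform 5% suspension
phi_max = 0.6                                    # assumed maximum packing fraction

def settling_velocity(phi, v0=2e-5, n=4.0):
    """Hindered-settling closure (Richardson-Zaki form, placeholder constants)."""
    return v0 * (1.0 - phi) ** n

D = 1e-7                                         # placeholder particle diffusivity (m^2/s)

for _ in range(int(t_end / dt)):
    # Downward flux across interior faces, upwinded from the cell above and
    # throttled as the receiving cell below approaches maximum packing.
    face_flux = np.zeros(nz + 1)
    block = np.clip(1.0 - phi[:-1] / phi_max, 0.0, 1.0)
    face_flux[1:nz] = phi[1:] * settling_velocity(phi[1:]) * block
    settle = (face_flux[1:] - face_flux[:-1]) / dz          # gain from above minus loss below
    # Central-difference diffusion with zero-flux walls.
    phi_pad = np.concatenate(([phi[0]], phi, [phi[-1]]))
    diffuse = D * (phi_pad[2:] - 2.0 * phi_pad[1:-1] + phi_pad[:-2]) / dz**2
    phi = phi + dt * (settle + diffuse)

bed_height = dz * np.count_nonzero(phi > 0.3)    # crude settled-bed height estimate
print(f"approximate settled-bed height after {t_end:.0f} s: {bed_height * 1000:.1f} mm")
```

Even this crude sketch reproduces the qualitative behaviour the simulations are compared against experimentally: a clear layer growing downward from the top and a dense bed accumulating at the bottom.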

  6. LHCb computing model

    CERN Document Server

    Frank, M; Pacheco, Andreu

    1998-01-01

This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo-simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may either be located at the physicist's home lab, national computer centres (Regional Centres) or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu...

  7. Texas Panhandle soil-crop-beef food chain for uranium: a dynamic model validated by experimental data

    International Nuclear Information System (INIS)

    Wenzel, W.J.; Wallwork-Barber, K.M.; Rodgers, J.C.; Gallegos, A.F.

    1982-01-01

Long-term simulations of uranium transport in the soil-crop-beef food chain were performed using the BIOTRAN model. Means of the experimental data from an extensive Pantex beef cattle study are presented. Experimental data were used to validate the computer model. Measurements of uranium in air, soil, water, range grasses, feed, and cattle tissues are compared to simulated uranium output values in these matrices when the BIOTRAN model was set at the measured soil and air values. The simulations agreed well with experimental data even though metabolic details for ruminants and uranium chemical form in the environment remain to be studied

  8. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  9. Increasing the Reliability of Circulation Model Validation: Quantifying Drifter Slip to See how Currents are Actually Moving

    Science.gov (United States)

    Anderson, T.

    2016-02-01

Ocean circulation forecasts can help answer questions regarding larval dispersal, passive movement of injured sea animals, oil spill mitigation, and search and rescue efforts. Circulation forecasts are often validated with GPS-tracked drifter paths, but how accurately do these drifters actually move with ocean currents? Drifters are not only moved by water, but are also forced by wind and waves acting on the exposed buoy and transmitter; this imperfect movement is referred to as drifter slip. The quantification and further understanding of drifter slip will allow scientists to differentiate between drifter imperfections and actual computer model error when comparing trajectory forecasts with actual drifter tracks. This will avoid falsely attributing all discrepancies between a trajectory forecast and an actual drifter track to computer model error. During multiple deployments of drifters in Nantucket Sound and using observed wind and wave data, we attempt to quantify the slip of drifters developed by the Northeast Fisheries Science Center's (NEFSC) Student Drifters Program. While similar studies have been conducted previously, very few have directly attached current meters to drifters to quantify drifter slip. Furthermore, none have quantified slip of NEFSC drifters relative to the oceanographic-standard "CODE" drifter. The NEFSC drifter archive has over 1000 drifter tracks primarily off the New England coast. With a better understanding of NEFSC drifter slip, modelers can reliably use these tracks for model validation.
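Drifter slip is commonly reported as the difference between the velocity inferred from successive GPS fixes and the near-surface current measured alongside the drifter. The sketch below shows that bookkeeping with a flat-earth finite-difference velocity estimate; the fix interval, positions and current-meter values are invented, and this is not the NEFSC processing chain.

```python
# Sketch: estimate drifter slip as (drifter velocity from GPS fixes) - (measured current).
import numpy as np

def gps_velocity(lat_deg, lon_deg, t_seconds):
    """East/north drifter velocity (m/s) from successive GPS fixes (local flat-earth)."""
    lat = np.radians(np.asarray(lat_deg))
    lon = np.radians(np.asarray(lon_deg))
    t = np.asarray(t_seconds, dtype=float)
    r_earth = 6.371e6
    east = r_earth * np.cos(lat[:-1]) * np.diff(lon) / np.diff(t)
    north = r_earth * np.diff(lat) / np.diff(t)
    return east, north

# Made-up 10-minute fixes and co-located current-meter velocities (m/s).
lats = [41.500, 41.5005, 41.5011, 41.5016]
lons = [-70.300, -70.2994, -70.2987, -70.2981]
times = [0.0, 600.0, 1200.0, 1800.0]
u_curr = np.array([0.07, 0.08, 0.08])       # eastward current at the drogue depth
v_curr = np.array([0.09, 0.09, 0.10])       # northward current

u_drift, v_drift = gps_velocity(lats, lons, times)
slip = np.hypot(u_drift - u_curr, v_drift - v_curr)
print("slip magnitude per interval (m/s):", np.round(slip, 3))
print(f"mean slip: {slip.mean():.3f} m/s")
```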

  10. On the Predictability of Computer simulations: Advances in Verification and Validation

    KAUST Repository

    Prudhomme, Serge

    2014-01-06

We will present recent advances on the topics of Verification and Validation in order to assess the reliability and predictability of computer simulations. The first part of the talk will focus on goal-oriented error estimation for nonlinear boundary-value problems and nonlinear quantities of interest, in which case the error representation consists of two contributions: 1) a first contribution, involving the residual and the solution of the linearized adjoint problem, which quantifies the discretization or modeling error; and 2) a second contribution, combining higher-order terms that describe the linearization error. The linearization error contribution is in general neglected with respect to the discretization or modeling error. However, when nonlinear effects are significant, it is unclear whether ignoring linearization effects may produce poor convergence of the adaptive process. The objective will be to show how both contributions can be estimated and employed in an adaptive scheme that simultaneously controls the two errors in a balanced manner. In the second part of the talk, we will present a novel approach for the calibration of model parameters. The proposed inverse problem not only involves the minimization of the misfit between experimental observables and their theoretical estimates, but also an objective function that takes into account some design goals on specific design scenarios. The method can be viewed as a regularization approach to the inverse problem, one, however, that best respects some design goals for which mathematical models are intended. The inverse problem is solved by a Bayesian method to account for uncertainties in the data. We will show that it shares the same structure as the deterministic problem that one would obtain by multi-objective optimization theory. The method is illustrated on an example of heat transfer in a two-dimensional fin. The proposed approach has the main benefit that it increases the confidence in predictive
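For readers unfamiliar with the two contributions referred to above, a generic goal-oriented error identity (written here in assumed notation, not the talk's exact formulation) splits the error in a quantity of interest Q into a residual term weighted by the linearized adjoint solution and a higher-order linearization remainder:

```latex
% Generic goal-oriented error representation (assumed notation):
%   u    exact solution,   u_h  approximate solution,
%   R(u_h; .)  weak residual,   z  linearized adjoint (dual) solution for Q.
\[
  Q(u) - Q(u_h)
  \;=\; \underbrace{\mathcal{R}(u_h;\, z)}_{\text{discretization / modeling error}}
  \;+\; \underbrace{\mathcal{E}_{\mathrm{lin}}(u, u_h;\, z)}_{\text{linearization (higher-order) remainder}}
\]
```

The balanced adaptive scheme described above estimates both terms and refines wherever either one dominates.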

  11. Prospective Validation of a High Dimensional Shape Model for Organ Motion in Intact Cervical Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Casey W.; Green, Garrett; Noticewala, Sonal S.; Li, Nan; Shen, Hanjie [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California (United States); Vaida, Florin [Division of Biostatistics and Bioinformatics, Department of Family Medicine and Public Health, University of California, San Diego, La Jolla, California (United States); Mell, Loren K., E-mail: lmell@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California (United States)

    2016-11-15

    Purpose: Validated models are needed to justify strategies to define planning target volumes (PTVs) for intact cervical cancer used in clinical practice. Our objective was to independently validate a previously published shape model, using data collected prospectively from clinical trials. Methods and Materials: We analyzed 42 patients with intact cervical cancer treated with daily fractionated pelvic intensity modulated radiation therapy and concurrent chemotherapy in one of 2 prospective clinical trials. We collected online cone beam computed tomography (CBCT) scans before each fraction. Clinical target volume (CTV) structures from the planning computed tomography scan were cast onto each CBCT scan after rigid registration and manually redrawn to account for organ motion and deformation. We applied the 95% isodose cloud from the planning computed tomography scan to each CBCT scan and computed any CTV outside the 95% isodose cloud. The primary aim was to determine the proportion of CTVs that were encompassed within the 95% isodose volume. A 1-sample t test was used to test the hypothesis that the probability of complete coverage was different from 95%. We used mixed-effects logistic regression to assess effects of time and patient variability. Results: The 95% isodose line completely encompassed 92.3% of all CTVs (95% confidence interval, 88.3%-96.4%), not significantly different from the 95% probability anticipated a priori (P=.19). The overall proportion of missed CTVs was small: the grand mean of covered CTVs was 99.9%, and 95.2% of misses were located in the anterior body of the uterus. Time did not affect coverage probability (P=.71). Conclusions: With the clinical implementation of a previously proposed PTV definition strategy based on a shape model for intact cervical cancer, the probability of CTV coverage was high and the volume of CTV missed was low. This PTV expansion strategy is acceptable for clinical trials and practice; however, we recommend daily

  12. A meta-model for computer executable dynamic clinical safety checklists.

    Science.gov (United States)

    Nan, Shan; Van Gorp, Pieter; Lu, Xudong; Kaymak, Uzay; Korsten, Hendrikus; Vdovjak, Richard; Duan, Huilong

    2017-12-12

A safety checklist is a type of cognitive tool that reinforces the short-term memory of medical workers, with the purpose of reducing medical errors caused by items being overlooked or forgotten. To facilitate the daily use of safety checklists, computerized systems embedded in the clinical workflow and adapted to patient context are increasingly developed. However, the current hard-coded approach of implementing checklists in these systems increases the cognitive effort of clinical experts and the coding effort for informaticists. This is due to the lack of a formal representation format that is both understandable by clinical experts and executable by computer programs. We developed a dynamic checklist meta-model with a three-step approach. Dynamic checklist modeling requirements were extracted by performing a domain analysis. Then, existing modeling approaches and tools were investigated with the purpose of reusing these languages. Finally, the meta-model was developed by eliciting domain concepts and their hierarchies. The feasibility of using the meta-model was validated by two case studies. The meta-model was mapped to specific modeling languages according to the requirements of hospitals. Using the proposed meta-model, a comprehensive coronary artery bypass graft peri-operative checklist set and a percutaneous coronary intervention peri-operative checklist set have been developed in a Dutch hospital and a Chinese hospital, respectively. The result shows that it is feasible to use the meta-model to facilitate the modeling and execution of dynamic checklists. We proposed a novel meta-model for the dynamic checklist with the purpose of facilitating the creation of dynamic checklists. The meta-model is a framework for reusing existing modeling languages and tools to model dynamic checklists. The feasibility of using the meta-model is validated by implementing a use case in the system.

  13. Validation of DNA-based identification software by computation of pedigree likelihood ratios.

    Science.gov (United States)

    Slooten, K

    2011-08-01

    Disaster victim identification (DVI) can be aided by DNA-evidence, by comparing the DNA-profiles of unidentified individuals with those of surviving relatives. The DNA-evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models etc. In this article we describe a series of test cases designed to check if software designed to calculate such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, among which inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details). Commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
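A concrete feel for the numbers involved can be had from the simplest possible case: a single-locus likelihood ratio for an alleged parent-child duo, ignoring the mutation models and allelic dropout that the test cases above explicitly cover. The numerator is the probability of the child's genotype given Mendelian transmission from the alleged parent; the denominator assumes an unrelated individual under Hardy-Weinberg proportions. The allele frequencies and genotypes below are illustrative only.

```python
# Sketch: single-locus likelihood ratio for "alleged parent vs. unrelated",
# with no mutation model, no dropout, and Hardy-Weinberg population frequencies.
def transmission_prob(parent, allele):
    """Probability that the parent transmits the given allele."""
    return parent.count(allele) / 2.0

def duo_lr(child, alleged_parent, freqs):
    a, b = child
    if a == b:                                   # child homozygous a/a
        p_related = transmission_prob(alleged_parent, a) * freqs[a]
        p_unrelated = freqs[a] ** 2
    else:                                        # child heterozygous a/b
        p_related = (transmission_prob(alleged_parent, a) * freqs[b]
                     + transmission_prob(alleged_parent, b) * freqs[a])
        p_unrelated = 2.0 * freqs[a] * freqs[b]
    return p_related / p_unrelated

# Illustrative allele frequencies for one STR locus.
freqs = {"12": 0.18, "14": 0.22, "15": 0.08}
child = ("12", "14")
alleged_parent = ("12", "15")
print(f"single-locus LR = {duo_lr(child, alleged_parent, freqs):.2f}")
# Independent loci multiply: the overall LR is the product of per-locus LRs.
```

Software such as the package described above extends exactly this calculation to arbitrary pedigrees, mutation models and dropout, which is why algebraically derived test cases are needed for validation.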

  14. Neutronic computational modeling of the ASTRA critical facility using MCNPX

    International Nuclear Information System (INIS)

    Rodriguez, L. P.; Garcia, C. R.; Milian, D.; Milian, E. E.; Brayner, C.

    2015-01-01

    The Pebble Bed Very High Temperature Reactor is considered as a prominent candidate among Generation IV nuclear energy systems. Nevertheless the Pebble Bed Very High Temperature Reactor faces an important challenge due to the insufficient validation of computer codes currently available for use in its design and safety analysis. In this paper a detailed IAEA computational benchmark announced by IAEA-TECDOC-1694 in the framework of the Coordinated Research Project 'Evaluation of High Temperature Gas Cooled Reactor (HTGR) Performance' was solved in support of the Generation IV computer codes validation effort using MCNPX ver. 2.6e computational code. In the IAEA-TECDOC-1694 were summarized a set of four calculational benchmark problems performed at the ASTRA critical facility. Benchmark problems include criticality experiments, control rod worth measurements and reactivity measurements. The ASTRA Critical Facility at the Kurchatov Institute in Moscow was used to simulate the neutronic behavior of nuclear pebble bed reactors. (Author)

  15. Computer simulation study of the nematic-vapour interface in the Gay-Berne model

    Science.gov (United States)

    Rull, Luis F.; Romero-Enrique, José Manuel

    2017-06-01

We present computer simulations of the vapour-nematic interface of the Gay-Berne model. We considered situations which correspond to either prolate or oblate molecules. We determine the anchoring of the nematic phase and correlate it with the intermolecular potential parameters. On the other hand, we evaluate the surface tension associated with this interface. We find a corresponding-states law for the dependence of the surface tension on temperature, valid for both prolate and oblate molecules.

  16. Trust Models in Ubiquitous Computing

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Krukow, Karl; Sassone, Vladimiro

    2008-01-01

We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  17. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output and propose sensitivity analyses using the notion of `modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
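The likelihood-scaling idea mentioned above, down-weighting a highly autocorrelated velocity trace by an effective sample size rather than modelling its autocorrelation, can be sketched with a toy random-walk Metropolis sampler. The forward model, noise level and effective sample size below are placeholders, not the ramp-compression simulation or the tantalum equation-of-state parameters.

```python
# Sketch: Metropolis sampling with the log-likelihood scaled by an effective sample
# size, a crude way to treat n correlated observations as n_eff independent ones.
import numpy as np

rng = np.random.default_rng(1)

def forward_model(theta, t):
    """Placeholder forward model producing a functional (time-series) output."""
    return theta[0] * (1.0 - np.exp(-t / theta[1]))

# Synthetic, autocorrelated "measured" velocity trace.
t = np.linspace(0.0, 1.0, 400)
truth = np.array([2.0, 0.3])
noise = np.convolve(rng.standard_normal(t.size), np.ones(20) / 20, mode="same")
data = forward_model(truth, t) + 0.05 * noise

n_eff = 25.0                                    # assumed effective sample size (n = 400)
sigma = 0.05

def log_post(theta):
    if np.any(theta <= 0):
        return -np.inf                           # flat prior on positive parameters
    resid = data - forward_model(theta, t)
    loglike = -0.5 * np.sum((resid / sigma) ** 2)
    return (n_eff / t.size) * loglike            # scale the likelihood by n_eff / n

theta = np.array([1.5, 0.5])
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[1000:])               # discard burn-in
print("posterior means:", samples.mean(axis=0).round(3))
```

The scaling widens the posterior relative to naively treating all 400 points as independent, which is the qualitative effect the article's procedure is after.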

  18. Computer-assisted teaching of skin flap surgery: validation of a mobile platform software for medical students.

    Directory of Open Access Journals (Sweden)

    David P de Sena

Full Text Available The purpose of this study was to develop and validate a multimedia software application for mobile platforms to assist in the teaching and learning process of design and construction of a skin flap. Traditional training in surgery is based on learning by doing. Initially, the use of cadavers and animal models appeared to be a valid alternative for training. However, many conflicts with these training models prompted progression to synthetic and virtual reality models. Fifty volunteer fifth- and sixth-year medical students completed a pretest and were randomly allocated into two groups of 25 students each. The control group was exposed for 5 minutes to a standard text-based print article, while the test group used multimedia software describing how to fashion a rhomboid flap. Each group then performed a cutaneous flap on a training bench model while being evaluated by three blinded BSPS (Brazilian Society of Plastic Surgery) board-certified surgeons using the OSATS (Objective Structured Assessment of Technical Skill) protocol and answered a post-test. The text-based group was then tested again using the software. The computer-assisted learning (CAL) group had superior performance as confirmed by checklist scores (p<0.002), overall global assessment (p = 0.017) and post-test results (p<0.001). All participants ranked the multimedia method as the best study tool. CAL learners exhibited better subjective and objective performance when fashioning rhomboid flaps as compared to those taught with standard print material. These findings indicate that students preferred to learn using the multimedia method.

  19. Introducing Seismic Tomography with Computational Modeling

    Science.gov (United States)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and to focus on the learning of seismological concepts and processes while taking advantage of basic scientific computation methods and tools.
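The inverse-theory side of such exercises boils down to a small linear system: each travel time is the sum of ray path lengths through grid cells multiplied by the cell slownesses, and the slowness field is recovered by least squares. The toy below does this for straight rays through a 2x2 grid of cells; it is a classroom-scale illustration in the same spirit, not one of the Modellus activities themselves.

```python
# Sketch: tiny straight-ray travel-time tomography on a 2x2 grid of 1 km cells.
#   t_i = sum_j L_ij * s_j,  where L_ij is the length of ray i in cell j and
#   s_j is the slowness (1/velocity) of cell j; solve for s by least squares.
import numpy as np

rt2 = np.sqrt(2.0)
# Cell order: [top-left, top-right, bottom-left, bottom-right], lengths in km.
L = np.array([[1.0, 1.0, 0.0, 0.0],     # horizontal ray through the top row
              [0.0, 0.0, 1.0, 1.0],     # horizontal ray through the bottom row
              [1.0, 0.0, 1.0, 0.0],     # vertical ray through the left column
              [0.0, 1.0, 0.0, 1.0],     # vertical ray through the right column
              [0.0, rt2, rt2, 0.0]])    # corner-to-corner diagonal ray (breaks degeneracy)

true_velocity = np.array([4.0, 5.0, 5.0, 6.0])        # km/s, a faster bottom-right cell
t_obs = L @ (1.0 / true_velocity)                     # synthetic travel times (s)

s_est, *_ = np.linalg.lstsq(L, t_obs, rcond=None)     # recover slowness by least squares
print("estimated velocities (km/s):", np.round(1.0 / s_est, 2))
```

Dropping the diagonal ray makes the system rank-deficient, which is a convenient way to let students discover the non-uniqueness that real tomographic inversions must regularise away.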

  20. Computer models for economic and silvicultural decisions

    Science.gov (United States)

    Rosalie J. Ingram

    1989-01-01

    Computer systems can help simplify decisionmaking to manage forest ecosystems. We now have computer models to help make forest management decisions by predicting changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.

  1. Anatomical Cystocele Recurrence: Development and Internal Validation of a Prediction Model.

    Science.gov (United States)

    Vergeldt, Tineke F M; van Kuijk, Sander M J; Notten, Kim J B; Kluivers, Kirsten B; Weemhoff, Mirjam

    2016-02-01

To develop a prediction model that estimates the risk of anatomical cystocele recurrence after surgery. The databases of two multicenter prospective cohort studies were combined, and we performed a retrospective secondary analysis of these data. Women undergoing an anterior colporrhaphy without mesh materials and without previous pelvic organ prolapse (POP) surgery filled in a questionnaire, underwent translabial three-dimensional ultrasonography, and underwent staging of POP preoperatively and postoperatively. We developed a prediction model using multivariable logistic regression and internally validated it using standard bootstrapping techniques. The performance of the prediction model was assessed by computing indices of overall performance, discriminative ability, calibration, and its clinical utility by computing test characteristics. Of 287 included women, 149 (51.9%) had anatomical cystocele recurrence. Factors included in the prediction model were assisted delivery, preoperative cystocele stage, number of compartments involved, major levator ani muscle defects, and levator hiatal area during Valsalva. Potential predictors that were excluded after backward elimination because of high P values were age, body mass index, number of vaginal deliveries, and family history of POP. The shrinkage factor resulting from the bootstrap procedure was 0.91. After correction for optimism, Nagelkerke's R² and the Brier score were 0.15 and 0.22, respectively. This indicates satisfactory model fit. The area under the receiver operating characteristic curve of the prediction model was 71.6% (95% confidence interval 65.7-77.5). After correction for optimism, the area under the receiver operating characteristic curve was 69.7%. This prediction model, including history of assisted delivery, preoperative stage, number of compartments, levator defects, and levator hiatus, estimates the risk of anatomical cystocele recurrence.
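The internal-validation procedure referred to above, standard bootstrapping with an optimism-corrected area under the curve, follows a well-known recipe: refit the model on each bootstrap resample, compare its apparent performance on that resample with its performance on the original data, and subtract the average difference (the optimism) from the apparent AUC of the original model. The sketch below shows that recipe with scikit-learn on synthetic data; it uses neither the study's dataset nor its final predictors.

```python
# Sketch: optimism-corrected AUC via bootstrapping (Harrell-style internal validation).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n, n_boot = 287, 200
X = rng.standard_normal((n, 5))                       # five candidate predictors
logit = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.4 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))      # synthetic recurrence outcome

model = LogisticRegression().fit(X, y)
auc_apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimism = []
for _ in range(n_boot):
    idx = rng.integers(0, n, n)                        # bootstrap resample with replacement
    m = LogisticRegression().fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])   # apparent in resample
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])             # tested on original data
    optimism.append(auc_boot - auc_orig)

auc_corrected = auc_apparent - float(np.mean(optimism))
print(f"apparent AUC = {auc_apparent:.3f}, optimism-corrected AUC = {auc_corrected:.3f}")
```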

  2. Scidac-Data: Enabling Data Driven Modeling of Exascale Computing

    Science.gov (United States)

    Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; Tsaris, Aristeidis; Norman, Andrew; Lyon, Adam; Ross, Robert

    2017-10-01

    The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.

  3. Prospective validation of pathologic complete response models in rectal cancer: Transferability and reproducibility.

    Science.gov (United States)

    van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre

    2017-09-01

Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we will perform a prospective validation of three pCR models, including information on whether this validation will target transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it would suggest a large difference in cohort characteristics, meaning we would be validating the transferability of the model rather than its reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohort for one of the three tested models [Area under the Receiver Operating Curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two out of three models had a lower AUC for validation (0.66 and 0.58), while one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training/validation cohort) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
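The cohort differences model can be sketched directly: pool the development and validation cohorts, label each patient by cohort, and fit a classifier to predict that label from the same covariates the prediction models use. An AUC near 0.5 means the cohorts are hard to tell apart (the validation speaks to reproducibility), whereas an AUC well above 0.5 signals systematic case-mix differences (the validation speaks to transferability). The covariates and cohort sizes below are placeholders, not the rectal-cancer datasets.

```python
# Sketch: "cohort differences model" -- can a classifier tell the training cohort
# from the validation cohort using the covariates of the prediction model?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n_train, n_valid = 300, 154

# Placeholder covariates; the validation cohort gets a small shift in the first
# covariate to mimic a case-mix difference between the two cohorts.
X_train = rng.standard_normal((n_train, 4))
X_valid = rng.standard_normal((n_valid, 4)) + np.array([0.6, 0.0, 0.0, 0.0])

X = np.vstack([X_train, X_valid])
cohort = np.concatenate([np.zeros(n_train), np.ones(n_valid)])   # 1 = validation cohort

clf = LogisticRegression(max_iter=1000)
# Cross-validated membership probabilities avoid an optimistic in-sample AUC.
p_valid = cross_val_predict(clf, X, cohort, cv=5, method="predict_proba")[:, 1]
auc = roc_auc_score(cohort, p_valid)
print(f"cohort-differences AUC = {auc:.2f}  "
      "(~0.5: similar cohorts, reproducibility; >>0.5: transferability is being tested)")
```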

  4. Using Modeling and Simulation to Predict Operator Performance and Automation-Induced Complacency With Robotic Automation: A Case Study and Empirical Validation.

    Science.gov (United States)

    Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M

    2015-09-01

    The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.

  5. Quantum vertex model for reversible classical computing.

    Science.gov (United States)

    Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C

    2017-05-12

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.

  6. Development of a computational environment for the General Curvilinear Ocean Model

    International Nuclear Information System (INIS)

    Thomas, Mary P; Castillo, Jose E

    2009-01-01

    The General Curvilinear Ocean Model (GCOM) differs significantly from the traditional approach, where the use of Cartesian coordinates forces the model to simulate terrain as a series of steps. GCOM utilizes a full three-dimensional curvilinear transformation, which has been shown to have greater accuracy than similar models and to achieve results more efficiently. The GCOM model has been validated for several types of water bodies, different coastlines and bottom shapes, including the Alarcon Seamount, Southern California Coastal Region, the Valencia Lake in Venezuela, and more recently the Monterey Bay. In this paper, enhancements to the GCOM model and an overview of the computational environment (GCOM-CE) are presented. Model improvements include migration from F77 to F90; approach to a component design; and initial steps towards parallelization of the model. Through the use of the component design, new models are being incorporated including biogeochemical, pollution, and sediment transport. The computational environment is designed to allow various client interactions via secure Web applications (portal, Web services, and Web 2.0 gadgets). Features include building jobs, managing and interacting with long running jobs; managing input and output files; quick visualization of results; publishing of Web services to be used by other systems such as larger climate models. The CE is based mainly on Python tools including a grid-enabled Pylons Web application Framework for Web services, pyWSRF (python-Web Services-Resource Framework), pyGlobus based web services, SciPy, and Google code tools.

  7. Validation of coastal oceanographic models at Forsmark. Site descriptive modelling SDM-Site Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Engqvist, Anders (A och I Engqvist Konsult HB, Vaxholm (SE)); Andrejev, Oleg (Finnish Inst. of Marine Research, Helsinki (FI))

    2008-01-15

few months; (ii) Both 3D-models miss some rapid up- and down-welling episodes that were clearly registered on all salinity- and temperature meters near the northern interface; (iii) The velocity profiles measured at the interface between the two nested models display a low but mainly positive correlation; (iv) The salinity dynamics at the interior station is fully acceptably simulated with improved correlation coefficients towards the surface; (v) The temperature profiles also generally display a high correlation between measurements and simulated data, certifying that the heat transfer through the surface is acceptably well simulated to render the salinity the dominating factor determining the density, but yet leaving room for further improvements. It seems safe to conclude that the validation of velocity components has confirmed what has been found in many instances previously, namely that this is a challenge that demands considerably more measuring effort than has been possible to muster in this study in order to average out sub-grid eddies that the model grid does not resolve. For the scalar fields temperature is acceptably well captured by the models, but this is judged to be more an effect of the seasonal variation than an expression of the virtue of the actual models. The internal salinity dynamics is the strong point of the model. Its temporal development at the inner station is convincingly well reproduced by this model approach. This means that the overall computed water exchange of the Oeregrundsgrepen can continue to be invested with due confidence.

  8. Validation of coastal oceanographic models at Forsmark. Site descriptive modelling SDM-Site Forsmark

    International Nuclear Information System (INIS)

    Engqvist, Anders; Andrejev, Oleg

    2008-01-01

few months; (ii) Both 3D-models miss some rapid up- and down-welling episodes that were clearly registered on all salinity- and temperature meters near the northern interface; (iii) The velocity profiles measured at the interface between the two nested models display a low but mainly positive correlation; (iv) The salinity dynamics at the interior station is fully acceptably simulated with improved correlation coefficients towards the surface; (v) The temperature profiles also generally display a high correlation between measurements and simulated data, certifying that the heat transfer through the surface is acceptably well simulated to render the salinity the dominating factor determining the density, but yet leaving room for further improvements. It seems safe to conclude that the validation of velocity components has confirmed what has been found in many instances previously, namely that this is a challenge that demands considerably more measuring effort than has been possible to muster in this study in order to average out sub-grid eddies that the model grid does not resolve. For the scalar fields temperature is acceptably well captured by the models, but this is judged to be more an effect of the seasonal variation than an expression of the virtue of the actual models. The internal salinity dynamics is the strong point of the model. Its temporal development at the inner station is convincingly well reproduced by this model approach. This means that the overall computed water exchange of the Oeregrundsgrepen can continue to be invested with due confidence.

  9. A methodology for PSA model validation

    International Nuclear Information System (INIS)

    Unwin, S.D.

    1995-09-01

    This document reports Phase 2 of work undertaken by Science Applications International Corporation (SAIC) in support of the Atomic Energy Control Board's Probabilistic Safety Assessment (PSA) review. A methodology is presented for the systematic review and evaluation of a PSA model. These methods are intended to support consideration of the following question: To within the scope and depth of modeling resolution of a PSA study, is the resultant model a complete and accurate representation of the subject plant? This question was identified as a key PSA validation issue in SAIC's Phase 1 project. The validation methods are based on a model transformation process devised to enhance the transparency of the modeling assumptions. Through conversion to a 'success-oriented' framework, a closer correspondence to plant design and operational specifications is achieved. This can both enhance the scrutability of the model by plant personnel, and provide an alternative perspective on the model that may assist in the identification of deficiencies. The model transformation process is defined and applied to fault trees documented in the Darlington Probabilistic Safety Evaluation. A tentative real-time process is outlined for implementation and documentation of a PSA review based on the proposed methods. (author). 11 refs., 9 tabs.

  10. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    Directory of Open Access Journals (Sweden)

    Daan Nieboer

    Full Text Available External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from the development to the external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: (1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and (2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation populations. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
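    To make the quantities discussed above concrete, the Python sketch below fits a logistic prediction model on a development set, recomputes the c-statistic on an external validation set, and adds a naive outcome-permutation reference distribution. The data, variable names, and permutation scheme are hypothetical assumptions for illustration and do not reproduce the specific permutation test evaluated in this record:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)

        # Hypothetical development and validation cohorts (3 predictors each).
        X_dev = rng.normal(size=(500, 3))
        y_dev = (X_dev @ [1.0, 0.5, -0.5] + rng.normal(size=500) > 0).astype(int)
        X_val = rng.normal(size=(300, 3))
        y_val = (X_val @ [1.0, 0.5, -0.5] + rng.normal(size=300) > 0).astype(int)

        # Develop the prediction model on the development set only.
        model = LogisticRegression().fit(X_dev, y_dev)

        # Discrimination (c-statistic) at external validation.
        lp_val = model.decision_function(X_val)          # linear predictor
        c_val = roc_auc_score(y_val, lp_val)

        # Naive permutation reference: shuffle outcomes to gauge chance-level c.
        c_perm = [roc_auc_score(rng.permutation(y_val), lp_val) for _ in range(1000)]
        print(c_val, np.quantile(c_perm, [0.025, 0.975]))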

  11. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models, by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, based on MATLAB software, to compute local geoid models by the RCR technique, and its application in a study area. The study area comprises the Federal District of Brazil, with ~6000 km², undulating relief, and heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the study area show the local geoid model computed by the GRAVTool package (Figure), using 1377 terrestrial gravity data, SRTM data with 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ± 0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ± 0.073 m) of 21 randomly spaced points where the geoid was computed by the geometrical leveling technique supported by GNSS positioning. The results were also better than those achieved by the Brazilian official regional geoid model (σ = ± 0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).
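    For reference, the remove-compute-restore scheme mentioned above is usually written in the following textbook form (a standard formulation, not quoted from the GRAVTool documentation; symbols are illustrative):

        % Remove: residual gravity anomaly after subtracting the global model (GGM)
        % and the residual-terrain (RTM/DTM) contribution
        \Delta g_{\mathrm{res}} = \Delta g_{\mathrm{obs}} - \Delta g_{\mathrm{GGM}} - \Delta g_{\mathrm{RTM}}
        % Compute: Stokes' integration of the residual anomaly gives N_res
        % Restore: the geoid height is re-assembled from the three wavelength bands
        N = N_{\mathrm{GGM}} + N_{\mathrm{res}} + N_{\mathrm{RTM}}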

  12. Computer Aided Battery Engineering Consortium

    Energy Technology Data Exchange (ETDEWEB)

    Pesaran, Ahmad

    2016-06-07

    A multi-national laboratory collaborative team, including experts from academia and industry, was assembled to enhance the recently developed Computer-Aided Battery Engineering for Electric Drive Vehicles (CAEBAT)-II battery crush modeling tools and to develop microstructure models for electrode design, both computationally efficient. Task 1. The new Multi-Scale Multi-Domain model framework (GH-MSMD) provides 100x to 1,000x computation speed-up in battery electrochemical/thermal simulation while retaining modularity of particles and electrode-, cell-, and pack-level domains. The increased speed enables direct use of the full model in parameter identification. Task 2. Mechanical-electrochemical-thermal (MECT) models for mechanical abuse simulation were coupled, enabling simultaneous modeling of electrochemical reactions during the short circuit, when necessary. The interactions between mechanical failure and battery cell performance were studied, and the flexibility of the model for various battery structures and loading conditions was improved. Model validation is ongoing to compare with test data from Sandia National Laboratories. The ABDT tool was established in ANSYS. Task 3. Microstructural modeling was conducted to enhance next-generation electrode designs. This 3-year project will validate models for a variety of electrodes, complementing Advanced Battery Research programs. Prototype tools have been developed for electrochemical simulation and geometric reconstruction.

  13. Reduced-Order Computational Model for Low-Frequency Dynamics of Automobiles

    Directory of Open Access Journals (Sweden)

    A. Arnoux

    2013-01-01

    Full Text Available A reduced-order model is constructed to predict, for the low-frequency range, the dynamical responses in the stiff parts of an automobile composed of stiff and flexible parts. The vehicle then has many elastic modes in this range due to the presence of many flexible parts and equipment. A non-standard reduced-order model is introduced: the family of elastic modes is not used and is replaced by an adapted vector basis of the admissible space of global displacements. Such a construction requires a decomposition of the domain of the structure into subdomains in order to control the spatial wavelength of the global displacements. The fast marching method is used to carry out the subdomain decomposition. A probabilistic model of uncertainties is introduced. The parameters controlling the level of uncertainties are estimated by solving a statistical inverse problem. The methodology is validated with a large computational model of an automobile.

  14. Mapping the Most Significant Computer Hacking Events to a Temporal Computer Attack Model

    OpenAIRE

    Heerden , Renier ,; Pieterse , Heloise; Irwin , Barry

    2012-01-01

    Part 4: Section 3: ICT for Peace and War; International audience; This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance stages. The...

  15. Validation of computer code TRAFIC used for estimation of charcoal heatup in containment ventilation systems

    International Nuclear Information System (INIS)

    Yadav, D.H.; Datta, D.; Malhotra, P.K.; Ghadge, S.G.; Bajaj, S.S.

    2005-01-01

    Full text of publication follows: Standard Indian PHWRs are provided with a Primary Containment Filtration and Pump-Back System (PCFPB) incorporating charcoal filters in the ventilation circuit to remove radioactive iodine that may be released from the reactor core into the containment during a LOCA+ECCS failure, which is a Design Basis Accident for containment of radioactive release. This system is provided with two identical air circulation loops, each having 2 full capacity fans (1 operating and 1 standby) for a bank of four combined charcoal and High Efficiency Particulate Activity (HEPA) filters, in addition to other filters. While the filtration circuit is designed to operate under forced flow conditions, it is of interest to understand the performance of the charcoal filters in the event of failure of the fans after operating for some time, i.e., when the radio-iodine inventory is at its peak value. It is of interest to check whether the buoyancy-driven natural circulation occurring in the filtration circuit is sufficient to keep the temperature in the charcoal under safe limits. A computer code TRAFIC (Transient Analysis of Filters in Containment) was developed using a conservative one-dimensional model to analyze the system. Suitable parametric studies were carried out to understand the problem and to assess the safety of the existing system. The TRAFIC code has two important components: the first one estimates the heat generation in the charcoal filter based on the 'Source Term', while the other performs thermal-hydraulic computations. In an attempt to validate the code, experimental studies have been carried out. For this purpose, an experimental set-up comprising a scaled-down model of the filtration circuit, with heating coils embedded in the charcoal to simulate the heating effect due to radio-iodine, has been constructed. The present work of validation consists of utilizing the results obtained from experiments conducted for different heat loads, elevations and adsorbent

  16. Software verification, model validation, and hydrogeologic modelling aspects in nuclear waste disposal system simulations. A paradigm shift

    International Nuclear Information System (INIS)

    Sheng, G.M.

    1994-01-01

    This work reviewed the current concept of nuclear waste disposal in stable, terrestrial geologic media with a system of natural and man-made multi-barriers. Various aspects of this concept and supporting research were examined with the emphasis on the Canadian Nuclear Fuel Waste Management Program. Several of the crucial issues and challenges facing the current concept were discussed. These include: The difficulties inherent in a concept that centres around lithologic studies; the unsatisfactory state of software quality assurance in the present computer simulation programs; and the lack of a standardized, comprehensive, and systematic procedure to carry out a rigorous process of model validation and assessment of simulation studies. An outline of such an approach was presented and some of the principles, tools and techniques for software verification were introduced and described. A case study involving an evaluation of the Canadian performance assessment computer program is presented. A new paradigm to nuclear waste disposal was advocated to address the challenges facing the existing concept. The RRC (Regional Recharge Concept) was introduced and its many advantages were described and shown through a modelling exercise. (orig./HP)

  17. Validation study of computer code SPHINCS for sodium fire safety evaluation of fast reactor

    International Nuclear Information System (INIS)

    Yamaguchi, Akira; Tajima, Yuji

    2003-01-01

    A computer code SPHINCS solves coupled phenomena of thermal hydraulics and sodium fire based on a multi-zone model. It deals with an arbitrary number of rooms, each of which is connected mutually by doorways and penetrations. With regard to the combustion phenomena, a flame sheet model and a liquid droplet combustion model are used for pool and spray fires, respectively, with the chemical equilibrium model based on the Gibbs free energy minimization method. The chemical reaction and mass and heat transfer are solved interactively. A specific feature of SPHINCS is detailed representation of thermalhydraulics of a sodium pool and a steel liner, which is placed on the floor to prevent sodium-concrete contact. The authors analyzed a series of pool combustion experiments, in which gas and liner temperatures are measured in detail. It has been found that good agreement is obtained and the SPHINCS code has been validated with regard to pool combustion phenomena. Further research needs are identified for pool spreading modeling considering thermal deformation of steel liner and measurement of pool fluidity property as a mixture of liquid sodium and reaction products. The SPHINCS code is to be used mainly in the safety evaluation of the consequence of a sodium fire accident in a liquid metal cooled fast reactor as well as fire safety analysis in general

  18. Validation of an Improved Computer-Assisted Technique for Mining Free-Text Electronic Medical Records.

    Science.gov (United States)

    Duz, Marco; Marshall, John F; Parkin, Tim

    2017-06-29

    The use of electronic medical records (EMRs) offers opportunity for clinical epidemiological research. With large EMR databases, automated analysis processes are necessary but require thorough validation before they can be routinely used. The aim of this study was to validate a computer-assisted technique using commercially available content analysis software (SimStat-WordStat v.6 (SS/WS), Provalis Research) for mining free-text EMRs. The dataset used for the validation process included life-long EMRs from 335 patients (17,563 rows of data), selected at random from a larger dataset (141,543 patients, ~2.6 million rows of data) and obtained from 10 equine veterinary practices in the United Kingdom. The ability of the computer-assisted technique to detect rows of data (cases) of colic, renal failure, right dorsal colitis, and non-steroidal anti-inflammatory drug (NSAID) use in the population was compared with manual classification. The first step of the computer-assisted analysis process was the definition of inclusion dictionaries to identify cases, including terms identifying a condition of interest. Words in inclusion dictionaries were selected from the list of all words in the dataset obtained in SS/WS. The second step consisted of defining an exclusion dictionary, including combinations of words to remove cases erroneously classified by the inclusion dictionary alone. The third step was the definition of a reinclusion dictionary to reinclude cases that had been erroneously classified by the exclusion dictionary. Finally, cases obtained by the exclusion dictionary were removed from cases obtained by the inclusion dictionary, and cases from the reinclusion dictionary were subsequently reincluded using Rv3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). Manual analysis was performed as a separate process by a single experienced clinician reading through the dataset once and classifying each row of data based on the interpretation of the free
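    The three-step dictionary logic described above reduces to simple set operations once each row of free text has been screened against the inclusion, exclusion, and reinclusion terms. The sketch below is a rough Python illustration with made-up dictionary entries and record text; the real study built its dictionaries in SimStat-WordStat and applied the set logic in R:

        # Hypothetical keyword dictionaries (illustrative only).
        inclusion = {"colic", "colicky"}
        exclusion_phrases = {"no signs of colic", "colic ruled out"}
        reinclusion_phrases = {"ruled out colic initially but now colicky"}

        def classify(rows):
            """Return indices of rows classified as cases by the three-step logic."""
            def hits(text, terms):
                low = text.lower()
                return any(term in low for term in terms)
            included = {i for i, r in enumerate(rows) if hits(r, inclusion)}
            excluded = {i for i, r in enumerate(rows) if hits(r, exclusion_phrases)}
            reincluded = {i for i, r in enumerate(rows) if hits(r, reinclusion_phrases)}
            return (included - excluded) | reincluded

        records = ["Mild colic overnight, improved with flunixin",
                   "Dental exam, no signs of colic"]
        print(sorted(classify(records)))   # -> [0]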

  19. Content Validity of Temporal Bone Models Printed Via Inexpensive Methods and Materials.

    Science.gov (United States)

    Bone, T Michael; Mowry, Sarah E

    2016-09-01

    Computed tomographic (CT) scans of the 3-D printed temporal bone models will be within 15% accuracy of the CT scans of the cadaveric temporal bones. Previous studies have evaluated the face validity of 3-D-printed temporal bone models designed to train otolaryngology residents. The purpose of the study was to determine the content validity of temporal bone models printed using inexpensive printers and materials. Four cadaveric temporal bones were randomly selected and clinical temporal bone CT scans were obtained. Models were generated using previously described methods in acrylonitrile butadiene styrene (ABS) plastic using the Makerbot Replicator 2× and Hyrel printers. Models were radiographically scanned using the same protocol as the cadaveric bones. Four images from each cadaveric CT series and four corresponding images from the model CT series were selected, and voxel values were normalized to black or white. Scan slices were compared using PixelDiff software. Gross anatomic structures were evaluated in the model scans by four board certified otolaryngologists on a 4-point scale. Mean pixel difference between the cadaver and model scans was 14.25 ± 2.30% at the four selected CT slices. Mean cortical bone width difference and mean external auditory canal width difference were 0.58 ± 0.66 mm and 0.55 ± 0.46 mm, respectively. Expert raters felt the mastoid air cells were well represented (2.5 ± 0.5), while middle ear and otic capsule structures were not accurately rendered. The printed models appear suitable for training residents in cortical mastoidectomies, but less effective for middle ear procedures.
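    As a rough illustration of the slice-by-slice comparison described above (the normalization threshold and toy data below are assumptions, not details of the PixelDiff workflow):

        import numpy as np

        def percent_pixel_difference(slice_a, slice_b, threshold=0.5):
            """Binarize two registered CT slices and return the % of differing pixels."""
            a = np.asarray(slice_a, dtype=float) > threshold
            b = np.asarray(slice_b, dtype=float) > threshold
            return 100.0 * np.mean(a != b)

        # Toy example with synthetic 'scans' normalized to [0, 1].
        rng = np.random.default_rng(1)
        cadaver = rng.random((512, 512))
        model = np.clip(cadaver + rng.normal(0.0, 0.05, cadaver.shape), 0.0, 1.0)
        print(f"{percent_pixel_difference(cadaver, model):.1f}% differing pixels")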

  20. Computational prediction and experimental validation of Ciona intestinalis microRNA genes

    Directory of Open Access Journals (Sweden)

    Pasquinelli Amy E

    2007-11-01

    Full Text Available Abstract Background This study reports the first collection of validated microRNA genes in the sea squirt, Ciona intestinalis. MicroRNAs are processed from hairpin precursors to ~22 nucleotide RNAs that base pair to target mRNAs and inhibit expression. As a member of the subphylum Urochordata (Tunicata), whose larval form has a notochord, the sea squirt is situated at the emergence of vertebrates, and therefore may provide information about the evolution of molecular regulators of early development. Results In this study, computational methods were used to predict 14 microRNA gene families in Ciona intestinalis. The microRNA prediction algorithm utilizes configurable microRNA sequence conservation and stem-loop specificity parameters, grouping by miRNA family, and phylogenetic conservation to the related species, Ciona savignyi. The expression of 8 of the 9 attempted putative microRNAs in the adult tissue of Ciona intestinalis was validated by Northern blot analyses. Additionally, a target prediction algorithm was implemented, which identified a high confidence list of 240 potential target genes. Over half of the predicted targets can be grouped into the gene ontology categories of metabolism, transport, regulation of transcription, and cell signaling. Conclusion The computational techniques implemented in this study can be applied to other organisms and serve to increase the understanding of the origins of non-coding RNAs, embryological and cellular developmental pathways, and the mechanisms for microRNA-controlled gene regulatory networks.

  1. Developing a model for validation and prediction of bank customer ...

    African Journals Online (AJOL)

    Credit risk is the most important risk faced by banks. The main approaches a bank can take to reduce credit risk are correct validation using the final status and the validation model parameters. High levels of bank reserves and lost or outstanding facilities indicate the lack of appropriate validation models in the banking network.

  2. Two-phase wall friction model for the trace computer code

    International Nuclear Information System (INIS)

    Wang Weidong

    2005-01-01

    The wall drag model in the TRAC/RELAP5 Advanced Computational Engine computer code (TRACE) has certain known deficiencies. For example, in an annular flow regime, the code predicts an unphysically high liquid velocity compared to the experimental data. To address those deficiencies, a new wall frictional drag package has been developed and implemented in the TRACE code to model the wall drag in this two-phase flow system code. The modeled flow regimes are (1) annular/mist, (2) bubbly/slug, and (3) bubbly/slug with wall nucleation. The new models use void fraction (instead of flow quality) as the correlating variable to minimize calculation oscillations. In addition, the models allow for transitions between the three regimes. The annular/mist regime is subdivided into three separate regimes for pure annular flow, annular flow with entrainment, and film breakdown. For adiabatic two-phase bubbly/slug flows, the vapor phase primarily exists outside of the boundary layer, and the wall shear uses the single-phase liquid velocity for the friction calculation. The vapor-phase wall friction drag is set to zero for bubbly/slug flows. For bubbly/slug flows with wall nucleation, the bubbles are present within the hydrodynamic boundary layer, and the two-phase wall friction drag is significantly higher, with a pronounced mass flux effect. An empirical correlation has been studied and applied to account for nucleate boiling. Verification and validation tests have been performed, and the test results showed a significant code improvement. (authors)

  3. Validation of heat transfer models for gap cooling

    International Nuclear Information System (INIS)

    Okano, Yukimitsu; Nagae, Takashi; Murase, Michio

    2004-01-01

    For severe accident assessment of a light water reactor, models of heat transfer in a narrow annular gap between overheated core debris and a reactor pressure vessel are important for evaluating vessel integrity and accident management. The authors developed and improved the models of heat transfer. However, validation was not sufficient for applicability of the gap heat flux correlation to the debris cooling in the vessel lower head and applicability of the local boiling heat flux correlations to the high-pressure conditions. Therefore, in this paper, we evaluated the validity of the heat transfer models and correlations by analyses for ALPHA and LAVA experiments where molten aluminum oxide (Al2O3) at about 2700 K was poured into the high pressure water pool in a small-scale simulated vessel lower head. In the heating process of the vessel wall, the calculated heating rate and peak temperature agreed well with the measured values, and the validity of the heat transfer models and gap heat flux correlation was confirmed. In the cooling process of the vessel wall, the calculated cooling rate was compared with the measured value, and the validity of the nucleate boiling heat flux correlation was confirmed. The peak temperatures of the vessel wall in ALPHA and LAVA experiments were lower than the temperature at the minimum heat flux point between film boiling and transition boiling, so the minimum heat flux correlation could not be validated. (author)

  4. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    Science.gov (United States)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  5. Application of Computer Simulation Modeling to Medication Administration Process Redesign

    Directory of Open Access Journals (Sweden)

    Nathan Huynh

    2012-01-01

    Full Text Available The medication administration process (MAP is one of the most high-risk processes in health care. MAP workflow redesign can precipitate both unanticipated and unintended consequences that can lead to new medication safety risks and workflow inefficiencies. Thus, it is necessary to have a tool to evaluate the impact of redesign approaches in advance of their clinical implementation. This paper discusses the development of an agent-based MAP computer simulation model that can be used to assess the impact of MAP workflow redesign on MAP performance. The agent-based approach is adopted in order to capture Registered Nurse medication administration performance. The process of designing, developing, validating, and testing such a model is explained. Work is underway to collect MAP data in a hospital setting to provide more complex MAP observations to extend development of the model to better represent the complexity of MAP.

  6. Validation of the solar heating and cooling high speed performance (HISPER) computer code

    Science.gov (United States)

    Wallace, D. B.

    1980-01-01

    Developed to give quick and accurate predictions, HISPER, a simplification of the TRNSYS program, achieves its computational speed by not simulating detailed system operations or performing detailed load computations. In order to validate the HISPER computer code for air systems, the simulation was compared to the actual performance of an operational test site. Solar insolation, ambient temperature, water usage rate, and water main temperatures from the data tapes for an office building in Huntsville, Alabama were used as input. The HISPER program was found to predict the heating loads and solar fraction of the loads with errors of less than ten percent. Good correlation was found on both a seasonal basis and a monthly basis. Several parameters (such as infiltration rate and the outside ambient temperature above which heating is not required) were found to require careful selection for accurate simulation.

  7. Validation of the actuator line/Navier Stokes technique using mexico measurements

    DEFF Research Database (Denmark)

    Shen, Wen Zhong; Zhu, Wei Jun; Sørensen, Jens Nørkær

    2010-01-01

    This paper concerns the contribution of DTU MEK to the international research collaboration project MexNext, within the framework of IEA Annex 29, aimed at validating aerodynamic models and CFD codes using the existing measurements made in the previous EU-funded project MEXICO (Model Experiments in Controlled Conditions). The Actuator Line/Navier-Stokes (AL/NS) technique developed at DTU is validated against the detailed MEXICO measurements. The AL/NS computations were performed without modelling the DNW wind tunnel, at wind speeds of 10 m/s, 15 m/s and 24 m/s. Comparisons of blade loading between computations and measurements show...

  8. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network: switching a linear, series network element and measuring the resultant harmonic increments provides the data used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one. It allows switching a series element that contains a shunt branch, as for example a transmission line. Both methods require that harmonic measurements performed at the two ends of the disconnected element are precisely synchronized.
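    As a pointer to how such switching measurements translate into an impedance estimate, the transfer harmonic impedance at harmonic order h can be written from the measured increments in the standard way; the symbols below are illustrative and are not taken from the paper:

        % Increment-based estimate of the transfer harmonic impedance
        % between nodes k and m at harmonic order h
        Z_{km}(h) \approx \frac{\Delta \bar{U}_k(h)}{\Delta \bar{I}_m(h)}
                 = \frac{\bar{U}_k^{\mathrm{after}}(h) - \bar{U}_k^{\mathrm{before}}(h)}
                        {\bar{I}_m^{\mathrm{after}}(h) - \bar{I}_m^{\mathrm{before}}(h)}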

  9. 78 FR 18353 - Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility...

    Science.gov (United States)

    2013-03-26

    ... SUPPLEMENTARY INFORMATION section for electronic access to the guidance document. Submit electronic comments on... document entitled ``Guidance for Industry: Blood Establishment Computer System Validation in the User's... document to http://www.regulations.gov or written comments to the Division of Dockets Management (see...

  10. Experimental Validation of Flow Force Models for Fast Switching Valves

    DEFF Research Database (Denmark)

    Bender, Niels Christian; Pedersen, Henrik Clemmensen; Nørgård, Christian

    2017-01-01

    This paper comprises a detailed study of the forces acting on a Fast Switching Valve (FSV) plunger. The objective is to investigate to what extent different models are valid to be used for design purposes. These models depend on the geometry of the moving plunger and the properties of the surrounding fluid. Experimental tests are used to compare and validate the different models, where an effort is directed towards capturing the fluid squeeze effect just before material-on-material contact. The test data is compared with simulation data relying solely on analytic formulations. The general dynamics of the plunger is validated...

  11. Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.

    Science.gov (United States)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated NTHMP to address the need for benchmarking models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven benchmarks. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmarking problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  12. A way forward for the development of an exposure computational model to computed tomography dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, C.C., E-mail: cassio.c.ferreira@gmail.co [Nucleo de Fisica, Universidade Federal de Sergipe, Itabaiana-SE, CEP 49500-000 (Brazil); Galvao, L.A., E-mail: lailagalmeida@gmail.co [Departamento de Fisica, Universidade Federal de Sergipe, Sao Cristovao-SE, CEP 49100-000 (Brazil); Vieira, J.W., E-mail: jose.wilson59@uol.com.b [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco, Recife-PE, CEP 50740-540 (Brazil); Escola Politecnica de Pernambuco, Universidade de Pernambuco, Recife-PE, CEP 50720-001 (Brazil); Maia, A.F., E-mail: afmaia@ufs.b [Departamento de Fisica, Universidade Federal de Sergipe, Sao Cristovao-SE, CEP 49100-000 (Brazil)

    2011-04-15

    A way forward for the development of an exposure computational model to computed tomography dosimetry has been presented. In this way, an exposure computational model (ECM) for computed tomography (CT) dosimetry has been developed and validated through comparison with experimental results. For the development of the ECM, X-ray spectra generator codes have been evaluated and the head bow tie filter has been modelled through a mathematical equation. EGS4 and EGSnrc have been used for simulating the radiation transport by the ECM. Geometrical phantoms, commonly used in CT dosimetry, have been modelled by IDN software. MAX06 has also been used to simulate an adult male patient submitted for CT examinations. The evaluation of the X-ray spectra generator codes in CT dosimetry showed dependence with tube filtration (or HVL value). More generally, with the increment of total filtration (or HVL value) the X-raytbc becomes the best X-ray spectra generator code for CT dosimetry. The EGSnrc/X-raytbc combination has calculated C{sub 100,c} in better concordance with C{sub 100,c} measured in two different CT scanners. For a Toshiba CT scanner, the average percentage difference between the calculated C{sub 100,c} values and measured C{sub 100,c} values was 8.2%. Whilst for a GE CT scanner, the average percentage difference was 10.4%. By the measurements of air kerma through a prototype head bow tie filter a third-order exponential decay equation was found. C{sub 100,c} and C{sub 100,p} values calculated by the ECM are in good agreement with values measured at a specific CT scanner. A maximum percentage difference of 2% has been found in the PMMA CT head phantoms, demonstrating effective modelling of the head bow tie filter by the equation. The absorbed and effective doses calculated by the ECM developed in this work have been compared to those calculated by the ECM of Jones and Shrimpton for an adult male patient. For a head examination the absorbed dose values calculated by the

  13. Microservices Validation: Methodology and Implementation

    OpenAIRE

    Savchenko, D.; Radchenko, G.

    2015-01-01

    Due to the widespread adoption of cloud computing, important questions arise about the architecture, design and implementation of cloud applications. The microservice model describes the design and development of loosely coupled cloud applications when computing resources are provided on the basis of automated IaaS and PaaS cloud platforms. Such applications consist of hundreds and thousands of service instances, so automated validation and testing of cloud applications developed on the basis of microservic...

  14. Computational Analysis and Simulation of Empathic Behaviors: a Survey of Empathy Modeling with Behavioral Signal Processing Framework.

    Science.gov (United States)

    Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis; Atkins, David C; Narayanan, Shrikanth S

    2016-05-01

    Empathy is an important psychological process that facilitates human communication and interaction. Enhancement of empathy has profound significance in a range of applications. In this paper, we review emerging directions of research on computational analysis of empathy expression and perception as well as empathic interactions, including their simulation. We summarize the work on empathic expression analysis by the targeted signal modalities (e.g., text, audio, and facial expressions). We categorize empathy simulation studies into theory-based emotion space modeling or application-driven user and context modeling. We summarize challenges in computational study of empathy including conceptual framing and understanding of empathy, data availability, appropriate use and validation of machine learning techniques, and behavior signal processing. Finally, we propose a unified view of empathy computation and offer a series of open problems for future research.

  15. Exploring Ventilation Efficiency in Poultry Buildings: The Validation of Computational Fluid Dynamics (CFD) in a Cross-Mechanically Ventilated Broiler Farm

    Directory of Open Access Journals (Sweden)

    Antonio Hospitaler

    2013-05-01

    Full Text Available Broiler production in modern poultry farms commonly uses mechanical ventilation systems. This mechanical ventilation requires an amount of electric energy and a high level of investment in technology. Nevertheless, broiler production is affected by periodic problems of mortality because of thermal stress, so it is crucial to explore the ventilation efficiency. In this article, we analyze a cross-mechanical ventilation system focusing on air velocity distribution. In this way, two methodologies were used to explore the indoor environment in livestock buildings: Computational Fluid Dynamics (CFD) simulations and direct measurements for verification and validation (V&V) of CFD. In this study, a validation model using a Generalized Linear Model (GLM) was conducted to compare these methodologies. The results showed that both methodologies gave similar results: the average air velocity values were 0.60 ± 0.56 m s−1 for CFD and 0.64 ± 0.54 m s−1 for direct measurements. In conclusion, the air velocity was not affected by the methodology (CFD or direct measurements), and the CFD simulations were therefore validated to analyze the indoor environment of poultry farms and its operations. A better knowledge of the indoor environment may contribute to reducing the demand for electric energy, increasing benefits and improving the thermal comfort of broilers.
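    As a rough illustration of the GLM-based comparison described above, the Python sketch below fits an ordinary linear model with a methodology factor (CFD vs. measurement) to synthetic paired air-velocity data; the numbers, variable names, and model family are assumptions and do not reproduce the study's analysis:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical paired air-velocity samples (m/s) at matched sensor locations.
        rng = np.random.default_rng(2)
        measured = rng.normal(0.64, 0.54, 60).clip(min=0)
        simulated = measured + rng.normal(-0.04, 0.10, 60)

        df = pd.DataFrame({
            "velocity": np.concatenate([measured, simulated]),
            "method": ["measurement"] * 60 + ["CFD"] * 60,
        })

        # Linear model testing whether the methodology factor has a significant
        # effect on air velocity (a non-significant effect supports validation).
        fit = smf.ols("velocity ~ C(method)", data=df).fit()
        print(fit.summary().tables[1])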

  16. Influence of Distal Resistance and Proximal Stiffness on Hemodynamics and RV Afterload in Progression and Treatments of Pulmonary Hypertension: A Computational Study with Validation Using Animal Models

    Directory of Open Access Journals (Sweden)

    Zhenbi Su

    2013-01-01

    Full Text Available We develop a simple computational model based on measurements from a hypoxic neonatal calf model of pulmonary hypertension (PH) to investigate the interplay between vascular and ventricular measures in the setting of progressive PH. Model parameters were obtained directly from in vivo and ex vivo measurements of neonatal calves. Seventeen sets of model-predicted impedance and mean pulmonary arterial pressure (mPAP) show good agreement with the animal measurements, thereby validating the model. Next, we considered a predictive model in which three parameters, PVR, elastic modulus (EM), and arterial thickness, were varied singly from one simulation to the next to study their individual roles in PH progression. Finally, we used the model to predict the individual impacts of clinical (vasodilatory) and theoretical (compliance increasing) PH treatments on improving pulmonary hemodynamics. Our model (1) displayed excellent patient-specific agreement with measured global pulmonary parameters; (2) quantified relationships between PVR and mean pressure and PVS and pulse pressure, as well as studying the right ventricular (RV) afterload, which could be measured as a hydraulic load calculated from spectral analysis of pulmonary artery pressure and flow waves; (3) qualitatively confirmed the derangement of vascular wall shear stress in progressive PH; and (4) established that decreasing proximal vascular stiffness through a theoretical treatment of reversing proximal vascular remodeling could decrease RV afterload.
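    For orientation, the spectral characterization of RV afterload mentioned above is usually expressed through the pulmonary input impedance; the relation below is a standard definition quoted for reference rather than taken from the article (symbols are illustrative):

        % Input impedance at the n-th harmonic of the pressure and flow spectra
        Z(f_n) = \frac{P(f_n)}{Q(f_n)}, \qquad
        Z(0) = \frac{\bar{P}}{\bar{Q}} \quad \text{(mean term, i.e. the total pulmonary resistance)}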

  17. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage

  18. Myocardial segmentation based on coronary anatomy using coronary computed tomography angiography: Development and validation in a pig model

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Mi Sun [Chung-Ang University College of Medicine, Department of Radiology, Chung-Ang University Hospital, Seoul (Korea, Republic of); Yang, Dong Hyun; Seo, Joon Beom; Kang, Joon-Won; Lim, Tae-Hwan [Asan Medical Center, University of Ulsan College of Medicine, Department of Radiology and Research Institute of Radiology, Seoul (Korea, Republic of); Kim, Young-Hak; Kang, Soo-Jin; Jung, Joonho [Asan Medical Center, University of Ulsan College of Medicine, Heart Institute, Seoul (Korea, Republic of); Kim, Namkug [Asan Medical Center, University of Ulsan College of Medicine, Department of Convergence Medicine, Seoul (Korea, Republic of); Heo, Seung-Ho [Asan Medical Center, University of Ulsan College of Medicine, Asan institute for Life Science, Seoul (Korea, Republic of); Baek, Seunghee [Asan Medical Center, University of Ulsan College of Medicine, Department of Clinical Epidemiology and Biostatistics, Seoul (Korea, Republic of); Choi, Byoung Wook [Yonsei University, Department of Diagnostic Radiology, College of Medicine, Seoul (Korea, Republic of)

    2017-10-15

    To validate a method for performing myocardial segmentation based on coronary anatomy using coronary CT angiography (CCTA). Coronary artery-based myocardial segmentation (CAMS) was developed for use with CCTA. To validate and compare this method with the conventional American Heart Association (AHA) classification, a single coronary occlusion model was prepared and validated using six pigs. The unstained occluded coronary territories of the specimens and corresponding arterial territories from CAMS and AHA segmentations were compared using slice-by-slice matching and 100 virtual myocardial columns. CAMS more precisely predicted ischaemic area than the AHA method, as indicated by 95% versus 76% (p < 0.001) of the percentage of matched columns (defined as percentage of matched columns of segmentation method divided by number of unstained columns in the specimen). According to the subgroup analyses, CAMS demonstrated a higher percentage of matched columns than the AHA method in the left anterior descending artery (100% vs. 77%; p < 0.001) and mid- (99% vs. 83%; p = 0.046) and apical-level territories of the left ventricle (90% vs. 52%; p = 0.011). CAMS is a feasible method for identifying the corresponding myocardial territories of the coronary arteries using CCTA. (orig.)

  19. Preliminary validation of a Monte Carlo model for IMRT fields

    International Nuclear Information System (INIS)

    Wright, Tracy; Lye, Jessica; Mohammadi, Mohammad

    2011-01-01

    Full text: A Monte Carlo model of an Elekta linac, validated for medium to large (10-30 cm) symmetric fields, has been investigated for small, irregular and asymmetric fields suitable for IMRT treatments. The model has been validated with field segments using radiochromic film in solid water. The modelled positions of the multileaf collimator (MLC) leaves have been validated using EBT film. In the model, electrons with a narrow energy spectrum are incident on the target and all components of the linac head are included. The MLC is modelled using the EGSnrc MLCE component module. For the validation, a number of single complex IMRT segments with dimensions approximately 1-8 cm were delivered to film in solid water (see Fig. 1). The same segments were modelled using EGSnrc by adjusting the MLC leaf positions in the model validated for 10 cm symmetric fields. Dose distributions along the centre of each MLC leaf as determined by both methods were compared. A picket fence test was also performed to confirm the MLC leaf positions. 95% of the points in the modelled dose distribution along the leaf axis agree with the film measurement to within 1%/1 mm for dose difference and distance to agreement. Areas of most deviation occur in the penumbra region. A system has been developed to calculate the MLC leaf positions in the model for any planned field size.
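    The 1%/1 mm criterion quoted above combines a dose-difference check with a distance-to-agreement (DTA) check. The following simplified one-dimensional Python sketch illustrates the idea on a toy profile; the tolerance handling and data are assumptions, and clinical gamma-analysis tools are considerably more elaborate:

        import numpy as np

        def dd_dta_pass(measured, calculated, positions_mm, dd_pct=1.0, dta_mm=1.0):
            """Per-point pass/fail for a dose-difference OR distance-to-agreement test."""
            passed = np.zeros(measured.size, dtype=bool)
            for i, (x, d_meas) in enumerate(zip(positions_mm, measured)):
                tol = dd_pct / 100.0 * d_meas
                dose_diff_ok = abs(calculated[i] - d_meas) <= tol
                nearby = np.abs(positions_mm - x) <= dta_mm
                dta_ok = np.any(np.abs(calculated[nearby] - d_meas) <= tol)
                passed[i] = dose_diff_ok or dta_ok
            return passed

        x = np.arange(0, 80, 0.5)                     # positions along a leaf (mm)
        meas = 1.0 / (1.0 + np.exp((x - 60) / 2.0))   # toy penumbra-like profile
        calc = meas + np.random.default_rng(3).normal(0, 0.002, x.size)
        print(f"{100 * dd_dta_pass(meas, calc, x).mean():.1f}% of points pass")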

  20. From deep TLS validation to ensembles of atomic models built from elemental motions

    International Nuclear Information System (INIS)

    Urzhumtsev, Alexandre; Afonine, Pavel V.; Van Benschoten, Andrew H.; Fraser, James S.; Adams, Paul D.

    2015-01-01

    Procedures are described for extracting the vibration and libration parameters corresponding to a given set of TLS matrices and their simultaneous validation. Knowledge of these parameters allows the generation of structural ensembles corresponding to these matrices. The translation–libration–screw model first introduced by Cruickshank, Schomaker and Trueblood describes the concerted motions of atomic groups. Using TLS models can improve the agreement between calculated and experimental diffraction data. Because the T, L and S matrices describe a combination of atomic vibrations and librations, TLS models can also potentially shed light on molecular mechanisms involving correlated motions. However, this use of TLS models in mechanistic studies is hampered by the difficulties in translating the results of refinement into molecular movement or a structural ensemble. To convert the matrices into a constituent molecular movement, the matrix elements must satisfy several conditions. Refining the T, L and S matrix elements as independent parameters without taking these conditions into account may result in matrices that do not represent concerted molecular movements. Here, a mathematical framework and the computational tools to analyze TLS matrices, resulting in either explicit decomposition into descriptions of the underlying motions or a report of broken conditions, are described. The description of valid underlying motions can then be output as a structural ensemble. All methods are implemented as part of the PHENIX project
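    For readers who want the algebra behind the decomposition, the anisotropic displacement contribution of a TLS group to an atom at position (x, y, z) relative to the TLS origin is conventionally written as follows (the standard Schomaker-Trueblood relation, quoted here for orientation rather than reproduced from the article):

        U^{\mathrm{TLS}} = T + A\,L\,A^{\mathsf{T}} + A\,S + S^{\mathsf{T}}\!A^{\mathsf{T}},
        \qquad
        A = \begin{pmatrix} 0 & z & -y \\ -z & 0 & x \\ y & -x & 0 \end{pmatrix}
        % T, L, S: translation, libration and screw matrices of the group;
        % A is the skew-symmetric matrix built from the atomic coordinates.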

  1. From deep TLS validation to ensembles of atomic models built from elemental motions

    Energy Technology Data Exchange (ETDEWEB)

    Urzhumtsev, Alexandre, E-mail: sacha@igbmc.fr [Centre for Integrative Biology, Institut de Génétique et de Biologie Moléculaire et Cellulaire, CNRS–INSERM–UdS, 1 Rue Laurent Fries, BP 10142, 67404 Illkirch (France); Université de Lorraine, BP 239, 54506 Vandoeuvre-les-Nancy (France); Afonine, Pavel V. [Lawrence Berkeley National Laboratory, Berkeley, California (United States); Van Benschoten, Andrew H.; Fraser, James S. [University of California, San Francisco, San Francisco, CA 94158 (United States); Adams, Paul D. [Lawrence Berkeley National Laboratory, Berkeley, California (United States); University of California Berkeley, Berkeley, CA 94720 (United States); Centre for Integrative Biology, Institut de Génétique et de Biologie Moléculaire et Cellulaire, CNRS–INSERM–UdS, 1 Rue Laurent Fries, BP 10142, 67404 Illkirch (France)

    2015-07-28

    Procedures are described for extracting the vibration and libration parameters corresponding to a given set of TLS matrices and their simultaneous validation. Knowledge of these parameters allows the generation of structural ensembles corresponding to these matrices. The translation–libration–screw model first introduced by Cruickshank, Schomaker and Trueblood describes the concerted motions of atomic groups. Using TLS models can improve the agreement between calculated and experimental diffraction data. Because the T, L and S matrices describe a combination of atomic vibrations and librations, TLS models can also potentially shed light on molecular mechanisms involving correlated motions. However, this use of TLS models in mechanistic studies is hampered by the difficulties in translating the results of refinement into molecular movement or a structural ensemble. To convert the matrices into a constituent molecular movement, the matrix elements must satisfy several conditions. Refining the T, L and S matrix elements as independent parameters without taking these conditions into account may result in matrices that do not represent concerted molecular movements. Here, a mathematical framework and the computational tools to analyze TLS matrices, resulting in either explicit decomposition into descriptions of the underlying motions or a report of broken conditions, are described. The description of valid underlying motions can then be output as a structural ensemble. All methods are implemented as part of the PHENIX project.

  2. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
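    As a concrete point of reference for the mechanical forces mentioned above, a widely used two-dimensional vertex-model energy (from which vertex forces follow by differentiation) takes the form below; this is a standard formulation from the vertex-model literature and is not necessarily the exact functional used by the authors:

        E = \sum_{\alpha} \frac{K_{\alpha}}{2}\,(A_{\alpha} - A_{\alpha}^{0})^{2}
          + \sum_{\langle i,j \rangle} \Lambda_{ij}\,\ell_{ij}
          + \sum_{\alpha} \frac{\Gamma_{\alpha}}{2}\,L_{\alpha}^{2}
        % A_alpha: cell area, A_alpha^0: target area, l_ij: edge length,
        % L_alpha: cell perimeter; K, Lambda, Gamma are elastic coefficients.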

  3. Characterization and validation of an in silico toxicology model to predict the mutagenic potential of drug impurities*

    Energy Technology Data Exchange (ETDEWEB)

    Valerio, Luis G., E-mail: luis.valerio@fda.hhs.gov [Science and Research Staff, Office of Pharmaceutical Science, Center for Drug Evaluation and Research, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, MD 20993–0002 (United States); Cross, Kevin P. [Leadscope, Inc., 1393 Dublin Road, Columbus, OH, 43215–1084 (United States)

    2012-05-01

    Control and minimization of human exposure to potential genotoxic impurities found in drug substances and products is an important part of preclinical safety assessments of new drug products. The FDA's 2008 draft guidance on genotoxic and carcinogenic impurities in drug substances and products allows use of computational quantitative structure–activity relationships (QSAR) to identify structural alerts for known and expected impurities present at levels below qualified thresholds. This study provides the information necessary to establish the practical use of a new in silico toxicology model for predicting Salmonella t. mutagenicity (Ames assay outcome) of drug impurities and other chemicals. We describe the model's chemical content and toxicity fingerprint in terms of compound space, molecular and structural toxicophores, and have rigorously tested its predictive power using both cross-validation and external validation experiments, as well as case studies. Consistent with desired regulatory use, the model performs with high sensitivity (81%) and high negative predictivity (81%) based on external validation with 2368 compounds foreign to the model and having known mutagenicity. A database of drug impurities was created from proprietary FDA submissions and the public literature which found significant overlap between the structural features of drug impurities and training set chemicals in the QSAR model. Overall, the model's predictive performance was found to be acceptable for screening drug impurities for Salmonella mutagenicity. -- Highlights: ► We characterize a new in silico model to predict mutagenicity of drug impurities. ► The model predicts Salmonella mutagenicity and will be useful for safety assessment. ► We examine toxicity fingerprints and toxicophores of this Ames assay model. ► We compare these attributes to those found in drug impurities known to FDA/CDER. ► We validate the model and find it has a desired predictive
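    The two headline statistics quoted above follow directly from a 2x2 confusion matrix. The short sketch below shows the arithmetic with hypothetical counts chosen only so that they sum to the 2368 external-validation compounds and reproduce the reported 81% values; the actual confusion matrix is not given in this record:

        # Hypothetical external-validation confusion matrix (illustrative only).
        tp, fn = 853, 200   # true mutagens: correctly flagged / missed
        tn, fp = 853, 462   # true non-mutagens: correctly cleared / falsely flagged

        sensitivity = tp / (tp + fn)            # fraction of mutagens detected
        negative_predictivity = tn / (tn + fn)  # fraction of negative calls that are correct
        print(f"N = {tp + fn + tn + fp}, "
              f"sensitivity = {sensitivity:.2f}, "
              f"negative predictivity = {negative_predictivity:.2f}")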

  4. Characterization and validation of an in silico toxicology model to predict the mutagenic potential of drug impurities*

    International Nuclear Information System (INIS)

    Valerio, Luis G.; Cross, Kevin P.

    2012-01-01

    Control and minimization of human exposure to potential genotoxic impurities found in drug substances and products is an important part of preclinical safety assessments of new drug products. The FDA's 2008 draft guidance on genotoxic and carcinogenic impurities in drug substances and products allows use of computational quantitative structure–activity relationships (QSAR) to identify structural alerts for known and expected impurities present at levels below qualified thresholds. This study provides the information necessary to establish the practical use of a new in silico toxicology model for predicting Salmonella t. mutagenicity (Ames assay outcome) of drug impurities and other chemicals. We describe the model's chemical content and toxicity fingerprint in terms of compound space, molecular and structural toxicophores, and have rigorously tested its predictive power using both cross-validation and external validation experiments, as well as case studies. Consistent with desired regulatory use, the model performs with high sensitivity (81%) and high negative predictivity (81%) based on external validation with 2368 compounds foreign to the model and having known mutagenicity. A database of drug impurities was created from proprietary FDA submissions and the public literature which found significant overlap between the structural features of drug impurities and training set chemicals in the QSAR model. Overall, the model's predictive performance was found to be acceptable for screening drug impurities for Salmonella mutagenicity. -- Highlights: ► We characterize a new in silico model to predict mutagenicity of drug impurities. ► The model predicts Salmonella mutagenicity and will be useful for safety assessment. ► We examine toxicity fingerprints and toxicophores of this Ames assay model. ► We compare these attributes to those found in drug impurities known to FDA/CDER. ► We validate the model and find it has a desired predictive performance.

  5. Models of parallel computation :a survey and classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

    In this paper, the state of the art in parallel computational model research is reviewed. We introduce various models that were developed during the past decades. According to the features of their target architectures, especially memory organization, we classify these parallel computational models into three generations. These models and their characteristics are discussed on the basis of this three-generation classification. We believe that with the ever-increasing speed gap between the CPU and memory systems, incorporating a non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms becomes more and more complicated. Describing this complicated parallelism hierarchy in future computational models therefore becomes more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers can reduce the complexity of model analysis, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features to consider in future model design and research.
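
    As a concrete illustration of the kind of parameterized cost model this survey classifies, the bulk-synchronous parallel (BSP) model is one classical example (chosen here for illustration, not taken from the paper). Its program cost combines local work, communication volume, and barrier synchronization across supersteps:

```latex
% BSP cost model (illustrative example of a parallel computational model):
%   w_i = maximum local computation in superstep i
%   h_i = maximum number of messages sent or received by any processor in superstep i
%   g   = per-message communication throughput parameter
%   l   = barrier synchronization latency
T_{\text{program}} = \sum_{i=1}^{S}\left( w_i + h_i\, g + l \right)
```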

  6. Advanced computational model for three-phase slurry reactors

    International Nuclear Information System (INIS)

    Goodarz Ahmadi

    2001-10-01

    material parameters of the model. (2) To provide experimental data for phasic fluctuation and mean velocities, as well as the solid volume fraction in the shear flow devices. (3) To develop an accurate computational capability incorporating the new rate-dependent and anisotropic model for analyzing reacting and nonreacting slurry flows, and to solve a number of technologically important problems related to Fischer-Tropsch (F-T) liquid fuel production processes. (4) To verify the validity of the developed model by comparing the predicted results with the performed and the available experimental data under idealized conditions

  7. Computer assistance in the collection, validation and manipulation of data for epidemiological studies

    International Nuclear Information System (INIS)

    Salmon, L.; Venn, J.B.

    1987-01-01

    The difficulties encountered in assembling and storing adequate data for a large cohort study of 50,000 radiation and other workers are discussed. A computer database management system was designed to permit the storage of information that could be conflicting and incomplete. The way in which it was used to validate data and to match records from a variety of sources is described. (author)

  8. Computer model for ductile fracture

    International Nuclear Information System (INIS)

    Moran, B.; Reaugh, J. E.

    1979-01-01

    A computer model is described for predicting ductile fracture initiation and propagation. The computer fracture model is calibrated by simple and notched round-bar tension tests and a precracked compact tension test. The model is used to predict fracture initiation and propagation in a Charpy specimen and compare the results with experiments. The calibrated model provides a correlation between Charpy V-notch (CVN) fracture energy and any measure of fracture toughness, such as J/sub Ic/. A second simpler empirical correlation was obtained using the energy to initiate fracture in the Charpy specimen rather than total energy CVN, and compared the results with the empirical correlation of Rolfe and Novak

  9. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Affective computing is of great significance for achieving intelligent information processing and harmonious communication between human beings and computers. A new model for an emotional agent is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in hospital is realized; experimental results show that it handles simple emotions efficiently.

  10. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  11. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  12. Opportunity for Realizing Ideal Computing System using Cloud Computing Model

    OpenAIRE

    Sreeramana Aithal; Vaikunth Pai T

    2017-01-01

    An ideal computing system is a computing system with ideal characteristics. The major components and their performance characteristics of such hypothetical system can be studied as a model with predicted input, output, system and environmental characteristics using the identified objectives of computing which can be used in any platform, any type of computing system, and for application automation, without making modifications in the form of structure, hardware, and software coding by an exte...

  13. A physicist's model of computation

    International Nuclear Information System (INIS)

    Fredkin, E.

    1991-01-01

    An attempt is presented to make a statement about what a computer is and how it works from the perspective of physics. The single observation that computation can be a reversible process allows for the same kind of insight into computing as was obtained by Carnot's discovery that heat engines could be modelled as reversible processes. It allows us to bring computation into the realm of physics, where the power of physics allows us to ask and answer questions that seemed intractable from the viewpoint of computer science. Strangely enough, this effort makes it clear why computers get cheaper every year. (author) 14 refs., 4 figs

  14. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Development, Validation and Parametric study of a 3-Year-Old Child Head Finite Element Model

    Science.gov (United States)

    Cui, Shihai; Chen, Yue; Li, Haiyan; Ruan, ShiJie

    2015-12-01

    Traumatic brain injury caused by falls and traffic accidents is an important cause of death and disability in children. Recently, computational finite element (FE) head models have been developed to investigate brain injury mechanisms and biomechanical responses. Based on CT data of a healthy 3-year-old child head, an FE head model with detailed anatomical structure was developed. Deep brain structures such as white matter, gray matter, the cerebral ventricles, and the hippocampus were created in this FE model for the first time. The FE model was validated by comparing simulation results with those of reconstructed child and adult cadaver experiments. In addition, the effects of skull stiffness on the dynamic responses of the child head were further investigated. All the simulation results confirmed the good biofidelity of the FE model.

  16. Basic data, computer codes and integral experiments: The tools for modelling in nuclear technology

    International Nuclear Information System (INIS)

    Sartori, E.

    2001-01-01

    When studying applications in nuclear technology we need to understand and be able to predict the behavior of systems manufactured by human enterprise. First, the underlying basic physical and chemical phenomena need to be understood. We then have to predict the results of the interplay of a large number of different basic events: i.e. the macroscopic effects. In order to build confidence in our modelling capability, we then need to compare these results against measurements carried out on such systems. The different levels of modelling require the solution of different types of equations using different types of parameters. The tools required for carrying out a complete validated analysis are: - The basic nuclear or chemical data; - The computer codes; and - The integral experiments. This article describes the role each component plays in a computational scheme designed for modelling purposes. It also describes which tools have been developed and are internationally available. The roles that the OECD/NEA Data Bank, the Radiation Shielding Information Computational Center (RSICC), and the IAEA Nuclear Data Section play in making these elements available to the community of scientists and engineers are described. (author)

  17. Towards the Selection of an Optimal Global Geopotential Model for the Computation of the Long-Wavelength Contribution: A Case Study of Ghana

    Directory of Open Access Journals (Sweden)

    Caleb Iddissah Yakubu

    2017-11-01

    The selection of a global geopotential model (GGM) for modeling the long-wavelength contribution in geoid computation is imperative, not only because of the plethora of GGMs available but, more importantly, because it influences the accuracy of a geoid model. In this study, we propose using the Gaussian averaging function for selecting an optimal GGM and degree and order (d/o) for the remove-compute-restore technique, as a replacement for the direct comparison of terrestrial gravity anomalies and GGM anomalies, because ground data and GGMs have different frequencies. Overall, EGM2008 performed better than all the tested GGMs, at an optimal d/o of 222. We verified the results by computing geoid models using Heck and Grüninger's modification and validated them against GPS/trigonometric data. The results of the validation were consistent with those of the averaging process, with EGM2008 giving the smallest standard deviation of 0.457 m at d/o 222, resulting in an 8% improvement over the previous geoid model. In addition, this geoid model, the Ghanaian Gravimetric Geoid 2017 (GGG 2017), may be used to replace second-order class II leveling, with an expected error of 6.8 mm/km for baselines ranging from 20 to 225 km.
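
    A minimal sketch of the external check described above: candidate GGM truncation degrees are scored by the standard deviation of differences between model geoid heights and GPS/levelling geoid heights, and the smallest wins. The helper `geoid_height_from_ggm` is a hypothetical stand-in, not part of any specific geodesy library.

```python
# Sketch: pick the GGM degree/order whose geoid heights best match GPS/levelling data.
# `geoid_height_from_ggm(lat, lon, max_degree)` is assumed to return the model
# geoid height in metres; it is a placeholder for an actual spherical-harmonic
# synthesis routine.

import numpy as np

def pick_optimal_degree(points, gps_levelling_N, geoid_height_from_ggm, degrees):
    """Return (best_degree, std of differences in metres) over candidate degrees."""
    best_degree, best_std = None, np.inf
    for d in degrees:
        model_N = np.array([geoid_height_from_ggm(lat, lon, max_degree=d)
                            for lat, lon in points])
        std = np.std(model_N - gps_levelling_N)   # spread of model-minus-GPS differences
        if std < best_std:
            best_degree, best_std = d, std
    return best_degree, best_std
```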

  18. The problem of fouling in submerged membrane bioreactors - Model validation and experimental evidence

    Science.gov (United States)

    Tsibranska, Irene; Vlaev, Serafim; Tylkowski, Bartosz

    2018-01-01

    Integrating biological treatment with membrane separation has found a broad area of application and industrial attention. Submerged membrane bioreactors (SMBRs), based on membrane modules immersed in the bioreactor, or side-stream modules connected in a recycle loop, have been employed in different biotechnological processes for the separation of thermally unstable products. Fouling is one of the most important challenges in integrated SMBRs. A number of works are devoted to fouling analysis and its treatment, especially exploring the opportunities for enhanced fouling control in SMBRs. The main goal of the review is to provide a comprehensive yet concise overview of modeling of fouling in SMBRs, with emphasis on the problem of model validation, either by measurements on real systems at different scales or by analysis of the obtained theoretical results. The review is focused on the current state of research applying computational fluid dynamics (CFD) modeling techniques.

  19. BIOMOVS: an international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1988-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (author)

  20. BIOMOVS: An international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1987-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (orig.)

  1. Challenges and opportunities of modeling plasma–surface interactions in tungsten using high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Wirth, Brian D., E-mail: bdwirth@utk.edu [Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Nuclear Science and Engineering Directorate, Oak Ridge National Laboratory, Oak Ridge, TN (United States); Hammond, K.D. [Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Krasheninnikov, S.I. [University of California, San Diego, La Jolla, CA (United States); Maroudas, D. [University of Massachusetts, Amherst, Amherst, MA 01003 (United States)

    2015-08-15

    The performance of plasma facing components (PFCs) is critical for ITER and future magnetic fusion reactors. The ITER divertor will be tungsten, which is the primary candidate material for future reactors. Recent experiments involving tungsten exposure to low-energy helium plasmas reveal significant surface modification, including the growth of nanometer-scale tendrils of “fuzz” and formation of nanometer-sized bubbles in the near-surface region. The large span of spatial and temporal scales governing plasma surface interactions are among the challenges to modeling divertor performance. Fortunately, recent innovations in computational modeling, increasingly powerful high-performance computers, and improved experimental characterization tools provide a path toward self-consistent, experimentally validated models of PFC and divertor performance. Recent advances in understanding tungsten–helium interactions are reviewed, including such processes as helium clustering, which serve as nuclei for gas bubbles; and trap mutation, dislocation loop punching and bubble bursting; which together initiate surface morphological modification.

  2. Challenges and opportunities of modeling plasma–surface interactions in tungsten using high-performance computing

    International Nuclear Information System (INIS)

    Wirth, Brian D.; Hammond, K.D.; Krasheninnikov, S.I.; Maroudas, D.

    2015-01-01

    The performance of plasma facing components (PFCs) is critical for ITER and future magnetic fusion reactors. The ITER divertor will be tungsten, which is the primary candidate material for future reactors. Recent experiments involving tungsten exposure to low-energy helium plasmas reveal significant surface modification, including the growth of nanometer-scale tendrils of “fuzz” and formation of nanometer-sized bubbles in the near-surface region. The large span of spatial and temporal scales governing plasma surface interactions are among the challenges to modeling divertor performance. Fortunately, recent innovations in computational modeling, increasingly powerful high-performance computers, and improved experimental characterization tools provide a path toward self-consistent, experimentally validated models of PFC and divertor performance. Recent advances in understanding tungsten–helium interactions are reviewed, including such processes as helium clustering, which serve as nuclei for gas bubbles; and trap mutation, dislocation loop punching and bubble bursting; which together initiate surface morphological modification

  3. A Practical Approach to Validating a PD Model

    NARCIS (Netherlands)

    Medema, L.; Koning, de R.; Lensink, B.W.

    2009-01-01

    The capital adequacy framework Basel II aims to promote the adoption of stronger risk management practices by the banking industry. The implementation makes validation of credit risk models more important. Lenders therefore need a validation methodology to convince their supervisors that their

  4. A practical approach to validating a PD model

    NARCIS (Netherlands)

    Medema, Lydian; Koning, Ruud H.; Lensink, Robert; Medema, M.

    The capital adequacy framework Basel II aims to promote the adoption of stronger risk management practices by the banking industry. The implementation makes validation of credit risk models more important. Lenders therefore need a validation methodology to convince their supervisors that their

  5. Base Flow Model Validation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  6. VALIDITY IN COMPUTER-BASED TESTING: A LITERATURE REVIEW OF COMPARABILITY ISSUES AND EXAMINEE PERSPECTIVES

    Directory of Open Access Journals (Sweden)

    Ika Kana Trisnawati

    2015-05-01

    Recent years have seen the growing popularity of Computer-Based Tests (CBTs) in various disciplines and for various purposes, although Paper-and-Pencil Based Tests (P&Ps) are still in use. However, many question whether CBTs outperform P&Ps in effectiveness, or whether CBTs can become a valid measuring tool comparable to P&Ps. This paper compares CBTs and P&Ps, and their respective examinee perspectives, in order to determine whether doubts should arise over the emergence of CBTs alongside the classic P&Ps. Findings showed that CBTs are advantageous in that they are both efficient (reducing testing time) and effective (maintaining test reliability) relative to the P&P versions. Nevertheless, CBTs still need to have their variables well designed (e.g., study design, computer algorithm) in order for the scores to be comparable to those of P&P tests, since score equivalence is one of the pieces of validity evidence needed for a CBT.

  7. Validation of a Computational Model for the SLS Core Stage Oxygen Tank Diffuser Concept and the Low Profile Diffuser - An Advanced Development Design for the SLS

    Science.gov (United States)

    Brodnick, Jacob; Richardson, Brian; Ramachandran, Narayanan

    2015-01-01

    The Low Profile Diffuser (LPD) project originated as an award from the Marshall Space Flight Center (MSFC) Advanced Development (ADO) office to the Main Propulsion Systems Branch (ER22). The task was created to develop and test an LPD concept that could produce comparable performance to a larger, traditionally designed, ullage gas diffuser while occupying a smaller volume envelope. Historically, ullage gas diffusers have been large, bulky devices that occupy a significant portion of the propellant tank, decreasing the tank volume available for propellant. Ullage pressurization of spacecraft propellant tanks is required to prevent boil-off of cryogenic propellants and to provide a positive pressure for propellant extraction. To achieve this, ullage gas diffusers must slow hot, high-pressure gas entering a propellant tank from supersonic speeds to only a few meters per second. Decreasing the incoming gas velocity is typically accomplished through expansion to larger areas within the diffuser, which has traditionally led to large diffuser lengths. The Fluid Dynamics Branch (ER42) developed and applied advanced Computational Fluid Dynamics (CFD) analysis methods in order to mature the LPD design from an initial concept to an optimized test prototype and to provide extremely accurate pre-test predictions of diffuser performance. Additionally, the diffuser concept for the Core Stage of the Space Launch System (SLS) was analyzed in a short amount of time to guide test data collection efforts for the qualification of the device. CFD analysis of the SLS diffuser design provided new insights into the functioning of the device and was qualitatively validated against hot wire anemometry of the exterior flow field. Rigorous data analysis of the measurements was performed on static and dynamic pressure data, data from two microphones, accelerometers and hot wire anemometry with automated traverse. Feasibility of the LPD concept and validation of the computational model were

  8. NUMERICAL MODELLING AND EXPERIMENTAL INFLATION VALIDATION OF A BIAS TWO-WHEEL TIRE

    Directory of Open Access Journals (Sweden)

    CHUNG KET THEIN

    2016-02-01

    This paper presents a parametric study on the development of a computational model for a bias two-wheel tire through finite element analysis (FEA). An 80/90-17 bias two-wheel tire was adopted, which is made up of four major layers of rubber compound with different material properties to strengthen the structure. A Mooney-Rivlin hyperelastic model was applied to represent the behaviour of the incompressible rubber compound. A 3D tire model was built for structural static finite element analysis. The result was validated against the inflation analysis. The structural static finite element analysis method is suitable for evaluating the tire design and improving the tire behaviour toward the desired performance. The experimental tire was inflated at various pressures and the geometries of the numerical and experimental tires were compared. There is good agreement between the numerical simulation model and the experimental results. This indicates that the simulation model can be applied to bias two-wheel tire design in order to predict the tire behaviour and improve its mechanical characteristics.
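
    The Mooney-Rivlin model mentioned above is, in its common two-parameter incompressible form, the strain-energy function below; the material constants are fitted to the rubber compound, and the paper's fitted values are not reproduced here.

```latex
% Two-parameter incompressible Mooney-Rivlin strain-energy function:
%   \bar{I}_1, \bar{I}_2 = first and second deviatoric strain invariants
%   C_{10}, C_{01}       = material constants fitted to test data
W = C_{10}\left(\bar{I}_1 - 3\right) + C_{01}\left(\bar{I}_2 - 3\right)
```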

  9. Validation of the community radiative transfer model

    International Nuclear Information System (INIS)

    Ding Shouguo; Yang Ping; Weng Fuzhong; Liu Quanhua; Han Yong; Delst, Paul van; Li Jun; Baum, Bryan

    2011-01-01

    To validate the Community Radiative Transfer Model (CRTM) developed by the U.S. Joint Center for Satellite Data Assimilation (JCSDA), the discrete ordinate radiative transfer (DISORT) model and the line-by-line radiative transfer model (LBLRTM) are combined in order to provide a reference benchmark. Compared with the benchmark, the CRTM appears quite accurate for both clear sky and ice cloud radiance simulations with RMS errors below 0.2 K, except for clouds with small ice particles. In a computer CPU run time comparison, the CRTM is faster than DISORT by approximately two orders of magnitude. Using the operational MODIS cloud products and the European Center for Medium-range Weather Forecasting (ECMWF) atmospheric profiles as an input, the CRTM is employed to simulate the Atmospheric Infrared Sounder (AIRS) radiances. The CRTM simulations are shown to be in reasonably close agreement with the AIRS measurements (the discrepancies are within 2 K in terms of brightness temperature difference). Furthermore, the impact of uncertainties in the input cloud properties and atmospheric profiles on the CRTM simulations has been assessed. The CRTM-based brightness temperatures (BTs) at the top of the atmosphere (TOA), for both thin (τ 30) clouds, are highly sensitive to uncertainties in atmospheric temperature and cloud top pressure. However, for an optically thick cloud, the CRTM-based BTs are not sensitive to the uncertainties of cloud optical thickness, effective particle size, and atmospheric humidity profiles. On the contrary, the uncertainties of the CRTM-based TOA BTs resulting from effective particle size and optical thickness are not negligible in an optically thin cloud.
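
    The quoted accuracy (RMS errors below 0.2 K against the DISORT/LBLRTM benchmark) is a root-mean-square brightness-temperature difference. A minimal sketch of that comparison is shown below with placeholder values; no actual CRTM output is reproduced.

```python
# Sketch of the benchmark comparison described above: RMS difference between
# model-simulated brightness temperatures and a reference calculation.
# The arrays are placeholders, not CRTM or DISORT/LBLRTM results.

import numpy as np

def rms_error(bt_model, bt_reference):
    """Root-mean-square brightness-temperature difference in kelvin."""
    diff = np.asarray(bt_model) - np.asarray(bt_reference)
    return float(np.sqrt(np.mean(diff ** 2)))

bt_crtm = [271.3, 268.9, 265.4]       # illustrative clear-sky channel BTs (K)
bt_benchmark = [271.2, 269.0, 265.5]  # illustrative reference BTs (K)
print(f"RMS error: {rms_error(bt_crtm, bt_benchmark):.2f} K")
```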

  10. The turbulent viscosity models and their experimental validation; Les modeles de viscosite turbulente et leur validation experimentale

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This workshop on turbulent viscosity models and their experimental validation was organized by the 'convection' section of the French society of thermal engineers. Of the 9 papers presented during this workshop, 8 deal with the modeling of turbulent flows inside combustion chambers, turbomachinery, or other energy-related applications, and have been selected for ETDE. (J.S.)

  11. Direct-contact condensers for open-cycle OTEC applications: Model validation with fresh water experiments for structured packings

    Energy Technology Data Exchange (ETDEWEB)

    Bharathan, D.; Parsons, B.K.; Althof, J.A.

    1988-10-01

    The objective of the reported work was to develop analytical methods for evaluating the design and performance of advanced high-performance heat exchangers for use in open-cycle ocean thermal energy conversion (OC-OTEC) systems. This report describes the progress made on validating a one-dimensional, steady-state analytical computer model against fresh water experiments. The condenser model represents the state of the art in direct-contact heat exchange for condensation for OC-OTEC applications. This is expected to provide a basis for optimizing OC-OTEC plant configurations. Using the model, we examined two condenser geometries, a cocurrent and a countercurrent configuration. This report provides detailed validation results for important condenser parameters for cocurrent and countercurrent flows. Based on the comparisons and uncertainty overlap between the experimental data and predictions, the model is shown to predict critical condenser performance parameters with an uncertainty acceptable for general engineering design and performance evaluations. 33 refs., 69 figs., 38 tabs.

  12. Computational Model Prediction and Biological Validation Using Simplified Mixed Field Exposures for the Development of a GCR Reference Field

    Science.gov (United States)

    Hada, M.; Rhone, J.; Beitman, A.; Saganti, P.; Plante, I.; Ponomarev, A.; Slaba, T.; Patel, Z.

    2018-01-01

    The yield of chromosomal aberrations has been shown to increase in the lymphocytes of astronauts after long-duration missions of several months in space. Chromosome exchanges, especially translocations, are positively correlated with many cancers and are therefore a potential biomarker of cancer risk associated with radiation exposure. Although extensive studies have been carried out on the induction of chromosomal aberrations by low- and high-LET radiation in human lymphocytes, fibroblasts, and epithelial cells exposed in vitro, there is a lack of data on chromosome aberrations induced by low dose-rate chronic exposure and mixed field beams such as those expected in space. Chromosome aberration studies at NSRL will provide the biological validation needed to extend the computational models over a broader range of experimental conditions (more complicated mixed fields leading up to the galactic cosmic rays (GCR) simulator), helping to reduce uncertainties in radiation quality effects and dose-rate dependence in cancer risk models. These models can then be used to answer some of the open questions regarding requirements for a full GCR reference field, including particle type and number, energy, dose rate, and delivery order. In this study, we designed a simplified mixed field beam with a combination of proton, helium, oxygen, and iron ions with shielding, or proton, helium, oxygen, and titanium without shielding. Human fibroblast cells were irradiated with these mixed field beams as well as with each single beam at acute and chronic dose rates, and chromosome aberrations (CA) were measured with 3-color fluorescence in situ hybridization (FISH) chromosome painting methods. The frequency and types of CA induced at acute and chronic dose rates with single and mixed field beams will be discussed. A computational chromosome and radiation-induced DNA damage model, BDSTRACKS (Biological Damage by Stochastic Tracks), was updated to simulate various types of CA induced by

  13. Computer-aided modeling framework for efficient model development, analysis and identification

    DEFF Research Database (Denmark)

    Heitzig, Martina; Sin, Gürkan; Sales Cruz, Mauricio

    2011-01-01

    Model-based computer aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy, and water. This trend is set to continue due to the substantial benefits computer-aided...... methods introduce. The key prerequisite of computer-aided product-process engineering is however the availability of models of different types, forms, and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task....... The methodology has been implemented into a computer-aided modeling framework, which combines expert skills, tools, and database connections that are required for the different steps of the model development work-flow with the goal to increase the efficiency of the modeling process. The framework has two main...

  14. Development and validation of advanced theoretical modeling for churn-turbulent flows and subsequent transitions

    Energy Technology Data Exchange (ETDEWEB)

    Montoya Zabala, Gustavo Adolfo

    2015-07-01

    The applicability of CFD codes for two-phase flows has always been limited to special cases due to the very complex nature of its interface. Due to its tremendous computational cost, methods based on direct resolution of the interface are not applicable to most problems of practical relevance. Instead, averaging procedures are commonly used for these applications, such as the Eulerian-Eulerian approach, which necessarily means losing detailed information on the interfacial structure. In order to allow widespread application of the two-fluid approach, closure models are required to reintroduce in the simulations the correct interfacial mass, momentum, and heat transfer. It is evident that such closure models will strongly depend on the specific flow pattern. When considering vertical pipe flow with low gas volume flow rates, bubbly flow occurs. With increasing gas volume flow rates larger bubbles are generated by bubble coalescence, which further leads to transition to slug, churn-turbulent, and annular flow. Considering, as an example, a heated tube producing steam by evaporation, as in the case of a vertical steam generator, all these flow patterns including transitions are expected to occur in the system. Despite extensive attempts, robust and accurate simulations approaches for such conditions are still lacking. The purpose of this dissertation is the development, testing, and validation of a multifield model for adiabatic gas-liquid flows at high gas volume fractions, for which a multiple-size bubble approach has been implemented by separating the gas structures into a specified number of groups, each of which represents a prescribed range of sizes. A fully-resolved continuous gas phase is also computed, and represents all the gas structures which are large enough to be resolved within the computational mesh. The concept, known as GENeralized TwO Phase flow or GENTOP, is formulated as an extension to the bubble population balance approach known as the
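
    A minimal sketch of the multiple-size-group idea described above: bubbles are binned into a prescribed number of size groups and the gas volume fraction carried by each group is accumulated. The group boundaries and sample values are illustrative assumptions, not parameters from the thesis.

```python
# Sketch: split a bubble size population into a prescribed number of size groups
# and accumulate the gas volume fraction carried by each group.

import numpy as np

def group_volume_fractions(diameters_m, volume_fractions, group_edges_m):
    """Sum the gas volume fraction of individual bubble classes into size groups."""
    groups = np.zeros(len(group_edges_m) - 1)
    for d, alpha in zip(diameters_m, volume_fractions):
        idx = np.searchsorted(group_edges_m, d, side="right") - 1  # locate size group
        if 0 <= idx < len(groups):
            groups[idx] += alpha
    return groups

edges = np.array([0.0, 1e-3, 5e-3, 20e-3, np.inf])    # four illustrative size groups (m)
d_sample = np.array([0.5e-3, 2e-3, 8e-3, 30e-3])      # bubble diameters (m)
alpha_sample = np.array([0.02, 0.05, 0.10, 0.08])     # volume fraction per class
print(group_volume_fractions(d_sample, alpha_sample, edges))
```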

  15. Development and validation of advanced theoretical modeling for churn-turbulent flows and subsequent transitions

    International Nuclear Information System (INIS)

    Montoya Zabala, Gustavo Adolfo

    2015-01-01

    The applicability of CFD codes for two-phase flows has always been limited to special cases due to the very complex nature of its interface. Due to its tremendous computational cost, methods based on direct resolution of the interface are not applicable to most problems of practical relevance. Instead, averaging procedures are commonly used for these applications, such as the Eulerian-Eulerian approach, which necessarily means losing detailed information on the interfacial structure. In order to allow widespread application of the two-fluid approach, closure models are required to reintroduce in the simulations the correct interfacial mass, momentum, and heat transfer. It is evident that such closure models will strongly depend on the specific flow pattern. When considering vertical pipe flow with low gas volume flow rates, bubbly flow occurs. With increasing gas volume flow rates larger bubbles are generated by bubble coalescence, which further leads to transition to slug, churn-turbulent, and annular flow. Considering, as an example, a heated tube producing steam by evaporation, as in the case of a vertical steam generator, all these flow patterns including transitions are expected to occur in the system. Despite extensive attempts, robust and accurate simulations approaches for such conditions are still lacking. The purpose of this dissertation is the development, testing, and validation of a multifield model for adiabatic gas-liquid flows at high gas volume fractions, for which a multiple-size bubble approach has been implemented by separating the gas structures into a specified number of groups, each of which represents a prescribed range of sizes. A fully-resolved continuous gas phase is also computed, and represents all the gas structures which are large enough to be resolved within the computational mesh. The concept, known as GENeralized TwO Phase flow or GENTOP, is formulated as an extension to the bubble population balance approach known as the

  16. Applications of a computer model to the analysis of rock-backfill interaction in pillar recovery operations

    Energy Technology Data Exchange (ETDEWEB)

    Sinclair, T. J.E. [Dames and Moore, London, England, United Kingdom; Shillabeer, J. H. [Dames and Moore, Toronto (Canada); Herget, G. [CANMET, Ottawa (Canada)

    1980-05-15

    This paper describes the application of a computer model to the analysis of backfill stability in pillar recovery operations with particular reference to two case studies. An explicit finite difference computer program was developed for the purpose of modelling the three-dimensional interaction of rock and backfill in underground excavations. Of particular interest was the mechanics of stress transfer from the rock mass to the pillars and then the backfill. The need, therefore, for a model to allow for the three-dimensional effects and the sequence of operations is evident. The paper gives a brief description of the computer program, descriptions of the mines, the sequences of operations and how they were modelled, and the results of the analyses in graphical form. For both case studies, failure of the backfill was predicted at certain stages. Subsequent reports from the mines indicate that such failures did not occur at the relevant stage. The paper discusses the validity of the model and concludes that the approach accurately represents the principles of rock mechanics in cut-and-fill mining and that further research should be directed towards determining the input parameters to an equal degree of sophistication.

  17. Dynamic behaviour of raft and pile foundations tests and computational models. Pt. 1

    International Nuclear Information System (INIS)

    Betbeder, J.; Garnier, J.C.; Gauvain, J.; Jeandidier, C.

    1981-01-01

    Pile foundations are commonly used for many types of buildings where the bearing capacity of soil is poor. For nuclear power plant buildings, however, there seems to be a fairly general reluctance to accept design on piles, as it is considered difficult to demonstrate the safety of these foundations with respect to earthquakes, due to the relative lack of validation of the currently available aseismic design methods. Being conscious that pile foundations might be worth considering for future nuclear sites in France and that the reliability of design methods should be backed by experimental data, ELECTRICITE DE FRANCE decided in 1978 to undertake a series of tests, aimed at assessing the validity of computational models for the seismic behaviour of pile foundations and at defining better models if necessary. These tests on a reduced-scale structure, including various types of raft and pile foundations and different kinds of dynamic excitation (harmonic, earthquake simulation, impulsive release of a static force), have been made at the NICE airport site. The present paper deals with the general description of the tests and the first part of the interpretation work, limited to in-structure harmonic excitation and earthquake simulation tests analyzed by simple spring-dashpot analytical models. The two following papers (K5-6 and K5-7) are devoted to specialized topics in relation to the interpretation of the tests, i.e. ground motion analysis for earthquake simulation and research work on a new computational model. Although preliminary conclusions can be drawn from the results obtained so far, further work will be necessary to reach a conclusive assessment on this difficult subject. (orig.)

  18. Elements of matrix modeling and computing with Matlab

    CERN Document Server

    White, Robert E

    2006-01-01

    As discrete models and computing have become more common, there is a need to study matrix computation and numerical linear algebra. Encompassing a diverse mathematical core, Elements of Matrix Modeling and Computing with MATLAB examines a variety of applications and their modeling processes, showing you how to develop matrix models and solve algebraic systems. Emphasizing practical skills, it creates a bridge from problems with two and three variables to more realistic problems that have additional variables. Elements of Matrix Modeling and Computing with MATLAB focuses on seven basic applicat
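
    The basic task the description refers to, building a matrix model A x = b and solving the algebraic system, looks like the sketch below. The example is illustrative and is not taken from the book, which works in MATLAB; NumPy is used here for consistency with the other sketches in this document.

```python
# Illustrative only: assemble a small matrix model A x = b and solve it.

import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])   # a small tridiagonal model matrix
b = np.array([2.0, 4.0, 10.0])       # right-hand side

x = np.linalg.solve(A, b)            # solve the algebraic system
print("solution:", x)
```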

  19. Vehicle - Bridge interaction, comparison of two computing models

    Science.gov (United States)

    Melcer, Jozef; Kuchárová, Daniela

    2017-07-01

    The paper presents the calculation of the bridge response to a vehicle moving along the bridge at various velocities. A multi-body plane computing model of the vehicle is adopted. The bridge computing models are created in two variants. One computing model represents the bridge as a Bernoulli-Euler beam with continuously distributed mass, and the second one represents the bridge as a lumped-mass model with 1 degree of freedom. The mid-span bridge dynamic deflections are calculated for both computing models. The results are mutually compared and quantitatively evaluated.
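
    For the first bridge model described above, a Bernoulli-Euler beam with continuously distributed mass traversed by a load moving at speed v, the classical governing equation is shown below; the vehicle is idealized here as a single constant force with damping omitted, which is a simplification of the multi-body model used in the paper.

```latex
% Bernoulli-Euler beam under a single moving force (classical idealization):
%   EI     = flexural rigidity of the bridge deck
%   \mu    = mass per unit length
%   F, v   = magnitude and speed of the moving force
%   \delta = Dirac delta function
EI\,\frac{\partial^4 w(x,t)}{\partial x^4} + \mu\,\frac{\partial^2 w(x,t)}{\partial t^2} = F\,\delta(x - v t)
```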

  20. Computational Models Used to Assess US Tobacco Control Policies.

    Science.gov (United States)

    Feirman, Shari P; Glasser, Allison M; Rose, Shyanika; Niaura, Ray; Abrams, David B; Teplitskaya, Lyubov; Villanti, Andrea C

    2017-11-01

    Simulation models can be used to evaluate existing and potential tobacco control interventions, including policies. The purpose of this systematic review was to synthesize evidence from computational models used to project population-level effects of tobacco control interventions. We provide recommendations to strengthen simulation models that evaluate tobacco control interventions. Studies were eligible for review if they employed a computational model to predict the expected effects of a non-clinical US-based tobacco control intervention. We searched five electronic databases on July 1, 2013 with no date restrictions and synthesized studies qualitatively. Six primary non-clinical intervention types were examined across the 40 studies: taxation, youth prevention, smoke-free policies, mass media campaigns, marketing/advertising restrictions, and product regulation. Simulation models demonstrated the independent and combined effects of these interventions on decreasing projected future smoking prevalence. Taxation effects were the most robust, as studies examining other interventions exhibited substantial heterogeneity with regard to the outcomes and specific policies examined across models. Models should project the impact of interventions on overall tobacco use, including nicotine delivery product use, to estimate preventable health and cost-saving outcomes. Model validation, transparency, more sophisticated models, and modeling policy interactions are also needed to inform policymakers to make decisions that will minimize harm and maximize health. In this systematic review, evidence from multiple studies demonstrated the independent effect of taxation on decreasing future smoking prevalence, and models for other tobacco control interventions showed that these strategies are expected to decrease smoking, benefit population health, and are reasonable to implement from a cost perspective. Our recommendations aim to help policymakers and researchers minimize harm and