WorldWideScience

Sample records for validation experiment jaivex

  1. Cloud and Thermodynamic Parameters Retrieved from Satellite Ultraspectral Infrared Measurements

    Science.gov (United States)

    Zhou, Daniel K.; Smith, William L.; Larar, Allen M.; Liu, Xu; Taylor, Jonathan P.; Schluessel, Peter; Strow, L. Larrabee; Mango, Stephen A.

    2008-01-01

    Atmospheric-thermodynamic parameters and surface properties are basic meteorological parameters for weather forecasting. A physical geophysical parameter retrieval scheme dealing with cloudy and cloud-free radiance observed with satellite ultraspectral infrared sounders has been developed and applied to the Infrared Atmospheric Sounding Interferometer (IASI) and the Atmospheric InfraRed Sounder (AIRS). The retrieved parameters presented herein are from radiance data gathered during the Joint Airborne IASI Validation Experiment (JAIVEx). JAIVEx provided intensive aircraft observations obtained from airborne Fourier Transform Spectrometer (FTS) systems, in-situ measurements, and dedicated dropsonde and radiosonde measurements for the validation of the IASI products. Here, IASI atmospheric profile retrievals are compared with those obtained from dedicated dropsondes, radiosondes, and the airborne FTS system. The IASI examples presented here demonstrate the ability to retrieve fine-scale horizontal features with high vertical resolution from satellite ultraspectral sounder radiance spectra.

  2. CFD validation experiments for hypersonic flows

    Science.gov (United States)

    Marvin, Joseph G.

    1992-01-01

    A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building-block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation database are given, and gaps are identified where future experiments could provide new validation data.

  3. The difference between traditional experiments and CFD validation benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Barton L., E-mail: barton.smith@usu.edu

    2017-02-15

    Computational Fluid Dynamics (CFD) provides attractive features for design, and perhaps licensing, of nuclear power plants. The most important of these features is low cost compared to experiments. However, the uncertainty of CFD calculations must accompany these calculations in order for the results to be useful for important decision making. In order to properly assess the uncertainty of a CFD calculation, it must be “validated” against experimental data. Unfortunately, traditional “discovery” experiments are normally ill-suited to providing all of the information necessary for the validation exercise. Traditionally, experiments are performed to discover new physics, determine model parameters, or to test designs. This article will describe a new type of experiment: one that is designed and carried out with the specific purpose of providing CFD validation benchmark data. We will demonstrate that the goals of traditional experiments and validation experiments are often in conflict, making use of traditional experimental results problematic and leading directly to larger predictive uncertainty of the CFD model.

  4. The difference between traditional experiments and CFD validation benchmark experiments

    International Nuclear Information System (INIS)

    Smith, Barton L.

    2017-01-01

    Computational Fluid Dynamics (CFD) provides attractive features for design, and perhaps licensing, of nuclear power plants. The most important of these features is low cost compared to experiments. However, the uncertainty of CFD calculations must accompany these calculations in order for the results to be useful for important decision making. In order to properly assess the uncertainty of a CFD calculation, it must be “validated” against experimental data. Unfortunately, traditional “discovery” experiments are normally ill-suited to providing all of the information necessary for the validation exercise. Traditionally, experiments are performed to discover new physics, determine model parameters, or to test designs. This article will describe a new type of experiment: one that is designed and carried out with the specific purpose of providing CFD validation benchmark data. We will demonstrate that the goals of traditional experiments and validation experiments are often in conflict, making use of traditional experimental results problematic and leading directly to larger predictive uncertainty of the CFD model.

  5. Validity - a matter of resonant experience

    DEFF Research Database (Denmark)

    Revsbæk, Line

    This paper is about doing interview analysis drawing on the researcher's own lived experience concerning the question of inquiry. The paper exemplifies analyzing case study participants' experience from the resonant experience of the researcher's own life, evoked while listening to recorded interview...... across the researcher's past experience from the case study and her own life. The autobiographic way of analyzing conventional interview material is exemplified with the case of a junior researcher researching newcomer innovation of others, drawing on her own experience of being a newcomer in work community...... entry processes. The validity of doing interview analysis drawing on the resonant experience of the researcher is argued from a pragmatist perspective....

  6. CFD validation experiments at the Lockheed-Georgia Company

    Science.gov (United States)

    Malone, John B.; Thomas, Andrew S. W.

    1987-01-01

    Information is given in viewgraph form on computational fluid dynamics (CFD) validation experiments at the Lockheed-Georgia Company. Topics covered include validation experiments on a generic fighter configuration, a transport configuration, and a generic hypersonic vehicle configuration; computational procedures; surface and pressure measurements on wings; laser velocimeter measurements of a multi-element airfoil system; the flowfield around a stiffened airfoil; laser velocimeter surveys of a circulation control wing; circulation control for high lift; and high angle of attack aerodynamic evaluations.

  7. Patient Experiences with the Preoperative Assessment Clinic (PEPAC): validation of an instrument to measure patient experiences

    NARCIS (Netherlands)

    Edward, G. M.; Lemaire, L. C.; Preckel, B.; Oort, F. J.; Bucx, M. J. L.; Hollmann, M. W.; de Haes, J. C. J. M.

    2007-01-01

    Background. Presently, no comprehensive and validated questionnaire to measure patient experiences of the preoperative assessment clinic (PAC) is available. We developed and validated the Patient Experiences with the Preoperative Assessment Clinic (PEPAC) questionnaire, which can be used for

  8. The Childbirth Experience Questionnaire (CEQ) - validation of its use in a Danish population

    DEFF Research Database (Denmark)

    Boie, Sidsel; Glavind, Julie; Uldbjerg, Niels

    Title: The Childbirth Experience Questionnaire (CEQ) - validation of its use in a Danish population. Introduction: Childbirth experience is arguably as important as measuring birth outcomes such as mode of delivery or perinatal morbidity. A robust, validated, Danish tool for evaluating childbirth experience is lacking. The Childbirth Experience Questionnaire (CEQ) was developed in Sweden in 2010 and validated in Swedish women, but never validated in a Danish setting and population. The purpose of our study was to validate the CEQ as a reliable tool for measuring the childbirth experience in Danish...... index of agreement between the two scores. Results: Face validity: All respondents stated that it was easy to understand and complete the questionnaire. Construct validity: Statistically significant higher CEQ scores were...

  9. Validation of KENO V.a: Comparison with critical experiments

    International Nuclear Information System (INIS)

    Jordan, W.C.; Landers, N.F.; Petrie, L.M.

    1986-12-01

    Section 1 of this report documents the validation of KENO V.a against 258 critical experiments. Experiments considered were primarily high- or low-enriched uranium systems. The results indicate that the KENO V.a Monte Carlo Criticality Program accurately calculates a broad range of critical experiments. A substantial number of the calculations showed a positive or negative bias in excess of 1.5% in k-effective (k_eff). Classes of criticals which show a bias include 3% enriched green blocks, highly enriched uranyl fluoride slab arrays, and highly enriched uranyl nitrate arrays. If these biases are properly taken into account, the KENO V.a code can be used with confidence for the design and criticality safety analysis of uranium-containing systems. Section 2 of this report documents the results of an investigation into the cause of the bias observed in Sect. 1. The results of this study indicate that the bias seen in Sect. 1 is caused by code bias, cross-section bias, reporting bias, and modeling bias. There is evidence that many of the experiments used in this validation and in previous validations are not adequately documented. The uncertainty in the experimental parameters overshadows bias caused by the code and cross sections and prohibits code validation to better than about 1% in k_eff. 48 refs., 19 figs., 19 tabs
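
    To make the bias arithmetic concrete: for a critical benchmark the expected k-effective is 1.0, so the bias of a code/cross-section combination over a class of experiments is the deviation of the mean calculated k_eff from unity. A minimal sketch of that estimate, with made-up k_eff values rather than the report's data:

```python
# Illustrative sketch (hypothetical data): validation bias from Monte Carlo
# k-effective results for critical benchmark experiments, where the
# expected k-eff of each experiment is exactly 1.0.
import statistics

k_calc = [0.9982, 1.0041, 0.9895, 1.0123, 0.9967, 1.0088]  # invented values

mean_k = statistics.mean(k_calc)
stdev_k = statistics.stdev(k_calc)   # spread across the experiment class
bias_pct = 100.0 * (mean_k - 1.0)    # positive or negative bias

print(f"mean k-eff = {mean_k:.4f}, bias = {bias_pct:+.2f}% "
      f"(sample st.dev. {stdev_k:.4f})")
```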

  10. Reconceptualising the external validity of discrete choice experiments.

    Science.gov (United States)

    Lancsar, Emily; Swait, Joffre

    2014-10-01

    External validity is a crucial but under-researched topic when considering using discrete choice experiment (DCE) results to inform decision making in clinical, commercial or policy contexts. We present the theory and tests traditionally used to explore external validity that focus on a comparison of final outcomes and review how this traditional definition has been empirically tested in health economics and other sectors (such as transport, environment and marketing) in which DCE methods are applied. While an important component, we argue that the investigation of external validity should be much broader than a comparison of final outcomes. In doing so, we introduce a new and more comprehensive conceptualisation of external validity, closely linked to process validity, that moves us from the simple characterisation of a model as being or not being externally valid on the basis of predictive performance, to the concept that external validity should be an objective pursued from the initial conceptualisation and design of any DCE. We discuss how such a broader definition of external validity can be fruitfully used and suggest innovative ways in which it can be explored in practice.

  11. SAS validation and analysis of in-pile TUCOP experiments

    International Nuclear Information System (INIS)

    Morman, J.A.; Tentner, A.M.; Dever, D.J.

    1985-01-01

    The validation of the SAS4A accident analysis code centers on its capability to calculate the wide range of tests performed in the TREAT (Transient Reactor Test Facility) in-pile experiments program. This paper presents the SAS4A analysis of a simulated TUCOP (Transient-Under-Cooled-Over-Power) experiment using seven full-length PFR mixed oxide fuel pins in a flowing sodium loop. Calculations agree well with measured thermal-hydraulic data, pin failure times and post-failure fuel motion data. The extent of the agreement confirms the validity of the models used in the SAS4A code to describe TUCOP accidents.

  12. Validation of the revised Mystical Experience Questionnaire in experimental sessions with psilocybin.

    Science.gov (United States)

    Barrett, Frederick S; Johnson, Matthew W; Griffiths, Roland R

    2015-11-01

    The 30-item revised Mystical Experience Questionnaire (MEQ30) was previously developed within an online survey of mystical-type experiences occasioned by psilocybin-containing mushrooms. The rated experiences occurred on average eight years before completion of the questionnaire. The current paper validates the MEQ30 using data from experimental studies with controlled doses of psilocybin. Data were pooled and analyzed from five laboratory experiments in which participants (n=184) received a moderate to high oral dose of psilocybin (at least 20 mg/70 kg). Results of confirmatory factor analysis demonstrate the reliability and internal validity of the MEQ30. Structural equation models demonstrate the external and convergent validity of the MEQ30 by showing that latent variable scores on the MEQ30 positively predict persisting change in attitudes, behavior, and well-being attributed to experiences with psilocybin while controlling for the contribution of the participant-rated intensity of drug effects. These findings support the use of the MEQ30 as an efficient measure of individual mystical experiences. A method to score a "complete mystical experience" that was used in previous versions of the mystical experience questionnaire is validated in the MEQ30, and a stand-alone version of the MEQ30 is provided for use in future research. © The Author(s) 2015.

  13. Experience with Aero- and Fluid-Dynamic Testing for Engineering and CFD Validation

    Science.gov (United States)

    Ross, James C.

    2016-01-01

    Ever since computations have been used to simulate aerodynamics, the need to ensure that the computations adequately represent real life has followed. Many experiments have been performed specifically for validation, and as computational methods have improved, so have the validation experiments. Validation is also a moving target because computational methods improve, requiring validation for the new aspects of flow physics that the computations aim to capture. Concurrently, new measurement techniques are being developed that can help capture more detailed flow features; pressure-sensitive paint (PSP) and particle image velocimetry (PIV) come to mind. This paper presents various wind-tunnel tests the author has been involved with and how they were used for validation of various kinds of CFD. A particular focus is the application of advanced measurement techniques to flow fields (and geometries) that had proven difficult to predict computationally. Many of these difficult flow problems arose from engineering and development problems that needed to be solved for a particular vehicle or research program. In some cases the experiments required to solve the engineering problems were refined to provide valuable CFD validation data in addition to the primary engineering data. All of these experiments have provided physical insight and validation data for a wide range of aerodynamic and acoustic phenomena for vehicles ranging from tractor-trailers to crewed spacecraft.

  14. Validation Experiments for Spent-Fuel Dry-Cask In-Basket Convection

    International Nuclear Information System (INIS)

    Smith, Barton L.

    2016-01-01

    This work consisted of the following major efforts: 1. Literature survey on validation of external natural convection; 2. Design the experiment; 3. Build the experiment; 4. Run the experiment; 5. Collect results; 6. Disseminate results; and 7. Perform a CFD validation study using the results. We note that while all tasks are complete, some deviations from the original plan were made. Specifically, geometrical changes in the parameter space were skipped in favor of flow-condition changes, which were found to be much more practical to implement. Changing the geometry required new as-built measurements, which proved extremely costly and impractical given the time and funds available.

  15. Validation Experiments for Spent-Fuel Dry-Cask In-Basket Convection

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Barton L. [Utah State Univ., Logan, UT (United States). Dept. of Mechanical and Aerospace Engineering

    2016-08-16

    This work consisted of the following major efforts: 1. Literature survey on validation of external natural convection; 2. Design the experiment; 3. Build the experiment; 4. Run the experiment; 5. Collect results; 6. Disseminate results; and 7. Perform a CFD validation study using the results. We note that while all tasks are complete, some deviations from the original plan were made. Specifically, geometrical changes in the parameter space were skipped in favor of flow-condition changes, which were found to be much more practical to implement. Changing the geometry required new as-built measurements, which proved extremely costly and impractical given the time and funds available.

  16. Validation results of satellite mock-up capturing experiment using nets

    Science.gov (United States)

    Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil

    2017-05-01

    The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of experiments under microgravity conditions in which a net was launched, capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment was performed over thirty parabolas, each offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launching angles using a dedicated pneumatic mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to determine the initial conditions accurately and to generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly
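
    The knot-tracking step named above rests on iterative closest point (ICP), which alternates nearest-neighbour matching with a rigid least-squares registration. The sketch below is a generic numpy implementation of that loop (Kabsch algorithm for the rigid step); it illustrates the general algorithm, not the PATENDER code, and all data and names are hypothetical.

```python
# Generic ICP sketch: align a 3-D point cloud `src` (e.g. net knots in one
# frame) to `dst` (knots in the next frame). Illustrative only.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch algorithm on matched point pairs)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iters=50, tol=1e-8):
    """Alternate nearest-neighbour matching and rigid registration."""
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        err = float(np.sqrt(d2.min(axis=1)).mean())
        if abs(prev_err - err) < tol:  # stop when the mean residual settles
            break
        prev_err = err
    return cur, err
```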

  17. Site characterization and validation - Tracer migration experiment in the validation drift, report 2, part 1: performed experiments, results and evaluation

    International Nuclear Information System (INIS)

    Birgersson, L.; Widen, H.; Aagren, T.; Neretnieks, I.; Moreno, L.

    1992-01-01

    This report is the second of two reports describing the tracer migration experiment, in which water and tracer flow was monitored in a drift at the 385 m level in the Stripa experimental mine. The tracer migration experiment is one of a large number of experiments performed within the Site Characterization and Validation (SCV) project. The upper part of the 50 m long validation drift was covered with approximately 150 plastic sheets, in which the emerging water was collected. The water emerging into the lower part of the drift was collected in short boreholes, sumpholes. Six different tracer mixtures were injected at distances between 10 and 25 m from the drift. The flowrate and tracer monitoring continued for ten months. Tracer breakthrough curves and flowrate distributions were used to study flow paths, velocities, hydraulic conductivities, dispersivities, interaction with the rock matrix and channelling effects within the rock. The present report describes the structure of the observations, the flowrate measurements and estimated hydraulic conductivities. The main part of this report addresses the interpretation of the tracer movement in fractured rock. The tracer movement, as measured by the more than 150 individual tracer curves, has been analysed with the traditional advection-dispersion model, and a subset of the curves with the advection-dispersion-diffusion model. The tracer experiments have permitted the flow porosity, dispersion and interaction with the rock matrix to be studied. (57 refs.)

  18. ATHLET validation using accident management experiments

    Energy Technology Data Exchange (ETDEWEB)

    Teschendorff, V.; Glaeser, H.; Steinhoff, F. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Garching (Germany)

    1995-09-01

    The computer code ATHLET is being developed as an advanced best-estimate code for the simulation of leaks and transients in PWRs and BWRs, including beyond design basis accidents. The code has features that are of special interest for applications to small leaks and transients with accident management, e.g. initialisation by a steady-state calculation, a full-range drift-flux model, and dynamic mixture level tracking. The General Control Simulation Module of ATHLET is a flexible tool for the simulation of the balance-of-plant and control systems, including the various operator actions in the course of accident sequences with AM measures. The systematic validation of ATHLET is based on a well balanced set of integral and separate effect tests derived from the CSNI proposal, emphasising, however, the German combined ECC injection system, which was investigated in the UPTF, PKL and LOBI test facilities. PKL-III test B 2.1 simulates a cool-down procedure during an emergency power case with three steam generators isolated. Natural circulation under these conditions was investigated in detail in a pressure range of 4 to 2 MPa. The transient was calculated over 22000 s with complicated boundary conditions including manual control actions. The calculations demonstrate the capability to model the following processes successfully: (1) variation of the natural circulation caused by steam generator isolation, (2) vapour formation in the U-tubes of the isolated steam generators, (3) break-down of circulation in the loop containing the isolated steam generator following controlled cool-down of the secondary side, (4) accumulation of vapour in the pressure vessel dome. One conclusion with respect to the suitability of experiments simulating AM procedures for code validation purposes is that complete documentation of control actions during the experiment must be available. Special attention should be given to the documentation of operator actions in the course of the experiment.

  19. Disruption Tolerant Networking Flight Validation Experiment on NASA's EPOXI Mission

    Science.gov (United States)

    Wyatt, Jay; Burleigh, Scott; Jones, Ross; Torgerson, Leigh; Wissler, Steve

    2009-01-01

    In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions. This activity was part of a larger NASA space DTN development program to mature DTN to flight readiness for a wide variety of mission types by the end of 2011. This paper describes the DTN protocols, the flight demo implementation, validation metrics which were created for the experiment, and validation results.

  20. Validating the BISON fuel performance code to integral LWR experiments

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, R.L., E-mail: Richard.Williamson@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Gamble, K.A., E-mail: Kyle.Gamble@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Perez, D.M., E-mail: Danielle.Perez@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Novascone, S.R., E-mail: Stephen.Novascone@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Pastore, G., E-mail: Giovanni.Pastore@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Gardner, R.J., E-mail: Russell.Gardner@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Hales, J.D., E-mail: Jason.Hales@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Liu, W., E-mail: Wenfeng.Liu@anatech.com [ANATECH Corporation, 5435 Oberlin Dr., San Diego, CA 92121 (United States); Mai, A., E-mail: Anh.Mai@anatech.com [ANATECH Corporation, 5435 Oberlin Dr., San Diego, CA 92121 (United States)

    2016-05-15

    Highlights: • The BISON multidimensional fuel performance code is being validated to integral LWR experiments. • Code and solution verification are necessary prerequisites to validation. • Fuel centerline temperature comparisons through all phases of fuel life are very reasonable. • Accuracy in predicting fission gas release is consistent with state-of-the-art modeling and the involved uncertainties. • Rod diameter comparisons are not satisfactory and further investigation is underway. - Abstract: BISON is a modern finite element-based nuclear fuel performance code that has been under development at Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. Code validation is underway and is the subject of this study. A brief overview of BISON's computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described, followed by a summary of the experimental data used to date for validation of Light Water Reactor (LWR) fuel. Validation comparisons focus on fuel centerline temperature, fission gas release, and rod diameter both before and following fuel-clad mechanical contact. Comparisons for 35 LWR rods are consolidated to provide an overall view of how the code is predicting physical behavior, with a few select validation cases discussed in greater detail. Results demonstrate that (1) fuel centerline temperature comparisons through all phases of fuel life are very reasonable with deviations between predictions and experimental data within ±10% for early life through high burnup fuel and only slightly out of these bounds for power ramp experiments, (2) accuracy in predicting fission gas release appears to be consistent with state-of-the-art modeling and with the involved uncertainties and (3) comparison

  1. Validation experiments of nuclear characteristics of the fast-thermal system HERBE

    International Nuclear Information System (INIS)

    Pesic, M.; Zavaljevski, N.; Marinkovic, P.; Stefanovis, D.; Nikolic, D.; Avdic, S.

    1992-01-01

    In 1988-90 a coupled fast-thermal system, HERBE, based on similar facilities, was designed and realized at the RB reactor. The fast core of HERBE is built of natural U fuel in the RB reactor center, surrounded by a neutron filter and a neutron converter located in an independent Al tank. The fast zone is surrounded by a thermal-neutron core driver. The designed nuclear characteristics of the HERBE core were validated in the experiments described in this paper. HERBE cell parameters were calculated with the developed computer codes VESNA and DENEB. HERBE system criticality calculations were performed with the 4G 2D RZ computer codes GALER and TWENTY GRAND, the 1D multi-group AVERY code and the 3D XYZ few-group TRITON computer code. Experiments for the determination of the critical level, dρ/dH, and the reactivity of the safety rods were carried out in order to validate the calculation results. A specific safety experiment was performed to determine the reactivity of the flooded fast zone in a possible accident. Very good agreement with the calculation results was obtained, and the validation procedures are presented. It is expected that HERBE will offer qualitatively new opportunities for work with fast neutrons at the RB reactor, including nuclear data determination. (author)

  2. The Role of Laboratory Experiments in the Validation of Field Data

    DEFF Research Database (Denmark)

    Mouneyrac, Catherine; Lagarde, Fabienne; Chatel, Amelie

    2017-01-01

    The ubiquitous presence and persistency of microplastics (MPs) in aquatic environments are of particular concern, since they constitute a potential threat to marine organisms and ecosystems. However, evaluating this threat and the impacts of MPs on aquatic organisms is challenging. MPs form a very...... and to what degree these complexities are addressed in the current literature, to: (1) evaluate how well laboratory studies investigated so far represent environmentally relevant processes and scenarios and (2) suggest directions for future research....

  3. Construction and Initial Validation of the Multiracial Experiences Measure (MEM)

    Science.gov (United States)

    Yoo, Hyung Chol; Jackson, Kelly; Guevarra, Rudy P.; Miller, Matthew J.; Harrington, Blair

    2015-01-01

    This article describes the development and validation of the Multiracial Experiences Measure (MEM): a new measure that assesses uniquely racialized risks and resiliencies experienced by individuals of mixed racial heritage. Across two studies, there was evidence for the validation of the 25-item MEM with 5 subscales including Shifting Expressions, Perceived Racial Ambiguity, Creating Third Space, Multicultural Engagement, and Multiracial Discrimination. The 5-subscale structure of the MEM was supported by a combination of exploratory and confirmatory factor analyses. Evidence of criterion-related validity was partially supported with MEM subscales correlating with measures of racial diversity in one’s social network, color-blind racial attitude, psychological distress, and identity conflict. Evidence of discriminant validity was supported with MEM subscales not correlating with impression management. Implications for future research and suggestions for utilization of the MEM in clinical practice with multiracial adults are discussed. PMID:26460977

  4. Validation of MORET 4 perturbation against 'physical' type fission products experiments

    International Nuclear Information System (INIS)

    Anno, Jacques; Jacquet, Olivier; Miss, Joachim

    2003-01-01

    After briefly recalling one of the many pertinent recent features of the French criticality package CRISTAL, i.e. the perturbation algorithm (so-called MORET 4 'Perturbation', or MP), this paper presents original MP validations. Numerical and experimental validations are made using close fission product (FP) experiments. As a result, it is shown that, all else being equal, MP can detect FP absorption cross-section variations in the range 0.3-1.2%. (author)

  5. A Validation Study of the Adolescent Dissociative Experiences Scale

    Science.gov (United States)

    Keck Seeley, Susan M.; Perosa, Sandra L.; Perosa, Linda M.

    2004-01-01

    Objective: The purpose of this study was to further the validation process of the Adolescent Dissociative Experiences Scale (A-DES). In this study, a 6-point Likert response format with descriptors was used when responding to the A-DES, rather than the 11-point response format used in the original A-DES. Method: The internal reliability and construct…

  6. The Development and Validation of the Game User Experience Satisfaction Scale (GUESS).

    Science.gov (United States)

    Phan, Mikki H; Keebler, Joseph R; Chaparro, Barbara S

    2016-12-01

    The aim of this study was to develop and psychometrically validate a new instrument that comprehensively measures video game satisfaction based on key factors. Playtesting is often conducted in the video game industry to help game developers build better games by providing insight into the players' attitudes and preferences. However, quality feedback is difficult to obtain from playtesting sessions without a quality gaming assessment tool. There is a need for a psychometrically validated and comprehensive gaming scale that is appropriate for playtesting and game evaluation purposes. The process of developing and validating this new scale followed current best practices of scale development and validation. As a result, a mixed-method design that consisted of item pool generation, expert review, questionnaire pilot study, exploratory factor analysis (N = 629), and confirmatory factor analysis (N = 729) was implemented. A new instrument measuring video game satisfaction, called the Game User Experience Satisfaction Scale (GUESS), with nine subscales emerged. The GUESS was demonstrated to have content validity, internal consistency, and convergent and discriminant validity. The GUESS was developed and validated based on the assessments of over 450 unique video game titles across many popular genres. Thus, it can be applied across many types of video games in the industry both as a way to assess what aspects of a game contribute to user satisfaction and as a tool to aid in debriefing users on their gaming experience. The GUESS can be administered to evaluate user satisfaction of different types of video games by a variety of users. © 2016, Human Factors and Ergonomics Society.

  7. Non-destructive measurements of nuclear wastes. Validation and industrial operating experience

    International Nuclear Information System (INIS)

    Saas, A.; Tchemitciieff, E.

    1993-01-01

    After a short survey of the means employed for the non-destructive measurement of specific activities (γ and X-ray) in waste packages and raw waste, the performance of the device and the ANDRA requirements are presented. The validation of the γ and X-ray measurements on packages is obtained by determining, by destructive means, the same activity on coring samples. The same procedure is used for validating the homogeneity measurements on packages (either homogeneous or heterogeneous). Different operating experiences are then presented for several kinds of packages and waste. Up to now, about twenty different types of packages have been examined, and more than 200 packages have allowed the calibration, validation, and control.

  8. Explicating Experience: Development of a Valid Scale of Past Hazard Experience for Tornadoes.

    Science.gov (United States)

    Demuth, Julie L

    2018-03-23

    People's past experiences with a hazard theoretically influence how they approach future risks. Yet, past hazard experience has been conceptualized and measured in wide-ranging, often simplistic, ways, resulting in mixed findings about its relationship with risk perception. This study develops a scale of past hazard experiences, in the context of tornadoes, that is content and construct valid. A conceptual definition was developed, a set of items were created to measure one's most memorable and multiple tornado experiences, and the measures were evaluated through two surveys of the public who reside in tornado-prone areas. Four dimensions emerged of people's most memorable experience, reflecting their awareness of the tornado risk that day, their personalization of the risk, the intrusive impacts on them personally, and impacts experienced vicariously through others. Two dimensions emerged of people's multiple experiences, reflecting common types of communication received and negative emotional responses. These six dimensions are novel in that they capture people's experience across the timeline of a hazard as well as intangible experiences that are both direct and indirect. The six tornado experience dimensions were correlated with tornado risk perceptions measured as cognitive-affective and as perceived probability of consequences. The varied experience-risk perception results suggest that it is important to understand the nuances of these concepts and their relationships. This study provides a foundation for future work to continue explicating past hazard experience, across different risk contexts, and for understanding its effect on risk assessment and responses. © 2018 Society for Risk Analysis.

  9. Gamma streaming experiments for validation of Monte Carlo code

    International Nuclear Information System (INIS)

    Thilagam, L.; Mohapatra, D.K.; Subbaiah, K.V.; Iliyas Lone, M.; Balasubramaniyan, V.

    2012-01-01

    In-homogeneities in shield structures lead to a considerable amount of leakage radiation (streaming), increasing the radiation levels in accessible areas. Development work on experimental as well as computational methods for quantifying this streaming radiation is still continuing. The Monte Carlo based radiation transport code MCNP is the usual tool for modeling and analyzing such problems involving complex geometries. In order to validate this computational method for streaming analysis, it is necessary to carry out experimental measurements simulating inhomogeneities such as the ducts and voids present in bulk shields for typical cases. The data thus generated will be analysed by simulating the experimental set-up with the MCNP code, and optimized input parameters for the code in finding solutions for similar radiation streaming problems will be formulated. Comparison of experimental data obtained from radiation streaming experiments through ducts will give a set of thumb rules and analytical fits for total radiation dose rates within and outside the duct. The present study highlights the validation of the MCNP code through the gamma streaming experiments carried out with ducts of various shapes and dimensions. Overall, the present study throws light on the suitability of the MCNP code for the analysis of gamma radiation streaming problems for all duct configurations considered. In the present study, only dose rate comparisons have been made. Studies on spectral comparison of streaming radiation are in progress. It is also planned to repeat the experiments with various shield materials. Since penetrations and ducts through bulk shields are unavoidable in an operating nuclear facility, the results of this kind of radiation streaming simulations and experiments will be very useful in shield structure optimization without compromising radiation safety.

  10. ISOTHERMAL AIR INGRESS VALIDATION EXPERIMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Chang H.; Kim, Eung S.

    2011-09-01

    Idaho National Laboratory carried out air ingress experiments as part of validating computational fluid dynamics (CFD) calculations. An isothermal test loop was designed and set up to understand the stratified-flow phenomenon, which is important as the initial air flow into the lower plenum of the very high temperature gas cooled reactor (VHTR) when a large break loss-of-coolant accident occurs. The experiment focused on the unique flow characteristics of the VHTR air-ingress accident, in particular the flow visualization of the stratified flow in the inlet pipe to the vessel lower plenum of General Atomics' Gas Turbine-Modular Helium Reactor (GT-MHR). Brine and sucrose were used as heavy fluids, and water was used to represent a light fluid, mimicking a counter-current flow due to the density difference between the simulant fluids. The density ratios were varied between 0.87 and 0.98. This experiment clearly showed that a stratified flow between simulant fluids was established even for very small density differences. The CFD calculations were compared with the experimental data. A grid sensitivity study on the CFD models was also performed using Richardson extrapolation and the grid convergence index method to establish the numerical accuracy of the CFD calculations. As a result, the calculated current speed showed very good agreement with the experimental data, indicating that the current CFD methods are suitable for predicting density gradient stratified flow phenomena in the air-ingress accident.
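
    The grid sensitivity method cited here (Richardson extrapolation plus the grid convergence index) reduces to a short calculation once solutions on three systematically refined grids are available. A hedged sketch with invented solution values; the refinement ratio r = 2 and safety factor 1.25 are the conventional defaults, not values taken from the study:

```python
# Roache-style grid convergence index (GCI) from three grid solutions.
import math

def gci_fine(f_fine, f_med, f_coarse, r=2.0, fs=1.25):
    """Fine-grid GCI and observed order p for a constant refinement ratio r."""
    # observed order of accuracy via Richardson extrapolation
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    e_fine = abs((f_med - f_fine) / f_fine)   # relative change, finest pair
    return fs * e_fine / (r ** p - 1.0), p

# hypothetical stratified-current speeds (m/s) on fine/medium/coarse grids
gci, p = gci_fine(f_fine=0.1042, f_med=0.1060, f_coarse=0.1100)
print(f"observed order p = {p:.2f}, fine-grid GCI = {100 * gci:.2f}%")
```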

  11. Panamanian women's experience of vaginal examination in labour: A questionnaire validation.

    Science.gov (United States)

    Bonilla-Escobar, Francisco J; Ortega-Lenis, Delia; Rojas-Mirquez, Johanna C; Ortega-Loubon, Christian

    2016-05-01

    Objective: to validate a tool that allows healthcare providers to obtain accurate information regarding Panamanian women's thoughts and feelings about vaginal examination during labour, which can be used in other Latin-American countries. Design: validation study based on a database from a cross-sectional study carried out in two tertiary care hospitals in Panama City, Panama. Women in the immediate postpartum period who had spontaneous labour onset and uncomplicated deliveries were included in the study from April to August 2008. Researchers used a survey designed by Lewin et al. that included 20 questions related to a patient's experience during a vaginal examination. Findings: five constructs (factors) related to a patient's experience of vaginal examination during labour were identified: Approval (Cronbach's alpha 0.72), Perception (0.67), Rejection (0.40), Consent (0.51), and Stress (0.20). Conclusions: the validity of the scale and its constructs for obtaining information related to vaginal examination during labour, including patients' experiences with examination and healthcare staff performance, was demonstrated. Implications for practice: utilisation of the scale will allow institutions to identify items that need improvement and address these areas in order to promote the best care for patients in labour. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Validation method training: nurses' experiences and ratings of work climate.

    Science.gov (United States)

    Söderlund, Mona; Norberg, Astrid; Hansebo, Görel

    2014-03-01

    Training nursing staff in communication skills can impact on the quality of care for residents with dementia and contributes to nurses' job satisfaction. Changing attitudes and practices takes time and energy and can affect the entire nursing staff, not just the nurses directly involved in a training programme. Therefore, it seems important to study nurses' experiences of a training programme and any influence of the programme on work climate among the entire nursing staff. To explore nurses' experiences of a 1-year validation method training programme conducted in a nursing home for residents with dementia and to describe ratings of work climate before and after the programme. A mixed-methods approach. Twelve nurses participated in the training and were interviewed afterwards. These individual interviews were tape-recorded and transcribed, then analysed using qualitative content analysis. The Creative Climate Questionnaire was administered before (n = 53) and after (n = 56) the programme to the entire nursing staff in the participating nursing home wards and analysed with descriptive statistics. Analysis of the interviews resulted in four categories: being under extra strain, sharing experiences, improving confidence in care situations and feeling uncertain about continuing the validation method. The results of the questionnaire on work climate showed higher mean values in the assessment after the programme had ended. The training strengthened the participating nurses in caring for residents with dementia, but posed an extra strain on them. These nurses also described an extra strain on the entire nursing staff that was not reflected in the results from the questionnaire. The work climate at the nursing home wards might have made it easier to conduct this extensive training programme. Training in the validation method could develop nurses' communication skills and improve their handling of complex care situations. © 2013 Blackwell Publishing Ltd.

  13. The Mistra experiment for field containment code validation first results

    International Nuclear Information System (INIS)

    Caron-Charles, M.; Blumenfeld, L.

    2001-01-01

    The MISTRA facility is a large-scale experiment designed for the purpose of validating thermal-hydraulics multi-D codes. A short description of the facility, the set-up of the instrumentation and the test program are presented. Then the first experimental results, studying helium injection into the containment, and their calculations are detailed. (author)

  14. Characterization of a CLYC detector and validation of the Monte Carlo Simulation by measurement experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun Suk; Ye, Sung Joon [Seoul National University, Seoul (Korea, Republic of); Smith, Martin B.; Koslowsky, Martin R. [Bubble Technology Industries Inc., Chalk River (Canada); Kwak, Sung Woo [Korea Institute of Nuclear Nonproliferation And Control (KINAC), Daejeon (Korea, Republic of); Kim, Gee Hyun [Sejong University, Seoul (Korea, Republic of)

    2017-03-15

    Simultaneous detection of neutrons and gamma rays has become much more practicable by taking advantage of good gamma-ray discrimination properties using the pulse shape discrimination (PSD) technique. Recently, we introduced a commercial CLYC system in Korea, and performed initial characterization and simulation studies for the CLYC detector system to provide references for the future implementation of the dual-mode scintillator system in various studies and applications. We evaluated a CLYC detector with 95% 6Li enrichment using various gamma-ray sources and a 252Cf neutron source, with validation of our Monte Carlo simulation results via measurement experiments. Absolute full-energy peak efficiency values were calculated for the gamma-ray sources and the neutron source using MCNP6 and compared with measurement experiments of the calibration sources. In addition, behavioral characteristics of neutrons were validated by comparing simulations and experiments on neutron moderation with various polyethylene (PE) moderator thicknesses. Both results showed good agreement in the overall characteristics of the gamma and neutron detection efficiencies, with a consistent ~20% discrepancy. Furthermore, moderation of neutrons emitted from 252Cf showed similarities between the simulation and the experiment, in terms of their relative ratios depending on the thickness of the PE moderator. The CLYC detector system was characterized for its energy resolution and detection efficiency, and the Monte Carlo simulations of the detector system were validated experimentally. Validation of the simulation results in the overall trend of the CLYC detector behavior will provide the fundamental basis and validity of follow-up Monte Carlo simulation studies for the development of our dual-particle imager using a rotational modulation collimator.
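
    As a point of reference, absolute full-energy peak efficiency is just the fraction of photons emitted by the source that are recorded in the full-energy peak. A minimal sketch; the 0.851 emission probability for the 661.7 keV line of 137Cs is standard nuclear data, while the counts, live time and activity below are invented:

```python
# Absolute full-energy peak efficiency from a calibration-source measurement.
def abs_peak_efficiency(net_counts, live_time_s, activity_bq, branching):
    """Fraction of emitted photons recorded in the full-energy peak."""
    emitted = activity_bq * live_time_s * branching
    return net_counts / emitted

# hypothetical 137Cs check-source run (661.7 keV, emission probability 0.851)
eff = abs_peak_efficiency(net_counts=41_500, live_time_s=600,
                          activity_bq=37_000, branching=0.851)
print(f"absolute full-energy peak efficiency = {eff:.4f}")
```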

  15. ISOTHERMAL AIR INGRESS VALIDATION EXPERIMENTS AT IDAHO NATIONAL LABORATORY: DESCRIPTION AND SUMMARY OF DATA

    International Nuclear Information System (INIS)

    Oh, Chang H.; Kim, Eung S.

    2010-01-01

    Idaho National Laboratory performed air ingress experiments as part of validating a computational fluid dynamics (CFD) code. An isothermal stratified flow experiment was designed and set up to understand stratified flow phenomena in the very high temperature gas cooled reactor (VHTR) and to provide experimental data for validating computer codes. The isothermal experiment focused on three flow characteristics unique to the VHTR air-ingress accident: stratified flow in the horizontal pipe, stratified flow expansion at the pipe and vessel junction, and stratified flow around supporting structures. Brine and sucrose were used as heavy fluids and water was used as the light fluid. The density ratios were varied between 0.87 and 0.98. This experiment clearly showed that a stratified flow between heavy and light fluids is generated even for very small density differences. The code was validated by conducting blind CFD simulations and comparing the results to the experimental data. A grid sensitivity study was also performed based on Richardson extrapolation and the grid convergence index method for modeling confidence. As a result, the calculated current speed showed very good agreement with the experimental data, indicating that the current CFD methods are suitable for predicting density gradient stratified flow phenomena in the air-ingress accident.

  16. Examining students' views about validity of experiments: From introductory to Ph.D. students

    Science.gov (United States)

    Hu, Dehui; Zwickl, Benjamin M.

    2018-06-01

    We investigated physics students' epistemological views on measurements and validity of experimental results. The roles of experiments in physics have been underemphasized in previous research on students' personal epistemology, and there is a need for a broader view of personal epistemology that incorporates experiments. An epistemological framework incorporating the structure, methodology, and validity of scientific knowledge guided the development of an open-ended survey. The survey was administered to students in algebra-based and calculus-based introductory physics courses, upper-division physics labs, and physics Ph.D. students. Within our sample, we identified several differences in students' ideas about validity and uncertainty in measurement. The majority of introductory students justified the validity of results through agreement with theory or with results from others. Alternatively, Ph.D. students frequently justified the validity of results based on the quality of the experimental process and repeatability of results. When asked about the role of uncertainty analysis, introductory students tended to focus on the representational roles (e.g., describing imperfections, data variability, and human mistakes). However, advanced students focused on the inferential roles of uncertainty analysis (e.g., quantifying reliability, making comparisons, and guiding refinements). The findings suggest that lab courses could emphasize a variety of approaches to establish validity, such as by valuing documentation of the experimental process when evaluating the quality of student work. In order to emphasize the role of uncertainty in an authentic way, labs could provide opportunities to iterate, make repeated comparisons, and make decisions based on those comparisons.
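
    The contrast drawn here between representational and inferential uses of uncertainty can be made concrete: the inferential move is deciding whether two results agree within their combined uncertainty. A toy sketch with invented numbers:

```python
# Do two measurements agree within k combined standard uncertainties?
def agree(x1, u1, x2, u2, k=2.0):
    return abs(x1 - x2) <= k * (u1 ** 2 + u2 ** 2) ** 0.5

# e.g. a student's g = 9.75 +/- 0.06 m/s^2 vs an accepted 9.81 +/- 0.01
print(agree(9.75, 0.06, 9.81, 0.01))   # True: consistent at k = 2
```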

  17. International integral experiments databases in support of nuclear data and code validation

    International Nuclear Information System (INIS)

    Briggs, J. Blair; Gado, Janos; Hunter, Hamilton; Kodeli, Ivan; Salvatores, Massimo; Sartori, Enrico

    2002-01-01

    The OECD/NEA Nuclear Science Committee (NSC) has identified the need to establish international databases containing all the important experiments that are available for sharing among the specialists. The NSC has set up or sponsored specific activities to achieve this. The aim is to preserve them in an agreed standard format in computer accessible form, to use them for international activities involving validation of current and new calculational schemes including computer codes and nuclear data libraries, for assessing uncertainties, confidence bounds and safety margins, and to record measurement methods and techniques. The databases so far established or in preparation related to nuclear data validation cover the following areas: SINBAD - A Radiation Shielding Experiments database encompassing reactor shielding, fusion blanket neutronics, and accelerator shielding. ICSBEP - International Criticality Safety Benchmark Experiments Project Handbook, with more than 2500 critical configurations with different combination of materials and spectral indices. IRPhEP - International Reactor Physics Experimental Benchmarks Evaluation Project. The different projects are described in the following including results achieved, work in progress and planned. (author)

  18. Validation of dispersion model of RTARC-DSS based on "Kit" field experiments

    International Nuclear Information System (INIS)

    Duran, J.

    2000-01-01

    The aim of this study is to present the performance of the Gaussian dispersion model RTARC-DSS (Real Time Accident Release Consequences - Decision Support System) on the 'Kit' field experiments. The Model Validation Kit is a collection of three experimental data sets, from the Kincaid, Copenhagen and Lillestrom campaigns, and a supplementary Indianapolis experimental campaign, accompanied by software for model evaluation. The validation of the model has been performed on the basis of the maximum arc-wise concentrations, using the Bootstrap resampling procedure and the variation of the model residuals. Validation was performed for short-range distances (about 1-10 km; maximum for the Kincaid data set, 50 km from the source). The model evaluation procedure and the amount of relative over- or under-prediction by the model are discussed. (author)
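
    The bootstrap procedure mentioned above resamples paired observed/modelled values to attach confidence bounds to a performance measure. A minimal sketch using fractional bias of maximum arc-wise concentrations with fabricated data; the actual Model Validation Kit software is more elaborate:

```python
# Bootstrap confidence interval for fractional bias (FB) of paired
# observed vs modelled maximum arc-wise concentrations. Data invented.
import numpy as np

rng = np.random.default_rng(0)
obs = np.array([12.0, 8.5, 15.2, 6.3, 9.9, 11.4])   # observed maxima
mod = np.array([10.1, 9.2, 12.8, 7.5, 8.0, 13.0])   # modelled maxima

def fractional_bias(o, m):
    return 2.0 * (o.mean() - m.mean()) / (o.mean() + m.mean())

boots = []
for _ in range(2000):
    idx = rng.integers(0, len(obs), len(obs))  # resample pairs w/ replacement
    boots.append(fractional_bias(obs[idx], mod[idx]))

lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"FB = {fractional_bias(obs, mod):.3f}, "
      f"95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```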

  19. Development and Validation of an Instrument for Assessing Patient Experience of Chronic Illness Care

    Directory of Open Access Journals (Sweden)

    José Joaquín Mira

    2016-08-01

    Introduction: The experience of chronic patients with the care they receive, fuelled by the focus on patient-centeredness and the increasing evidence on its positive relation with other dimensions of quality, is being acknowledged as a key element in improving the quality of care. There is a dearth of accepted tools and metrics to assess patient experience from the patient's perspective that have been adapted to the new chronic care context: continued, systemic, with multidisciplinary teams and new technologies. Methods: Development and validation of a scale by means of a literature review, an expert panel, and pilot and field studies with 356 chronic primary care patients, to assess content and face validities and reliability. Results: IEXPAC is an 11+1-item scale with adequate metric properties, measured by Cronbach's alpha, a goodness-of-fit index, and satisfactory convergent validity, around three factors named productive interactions, new relational model and person's self-management. Conclusions: IEXPAC allows measurement of the patient experience of chronic illness care. Together with other indicators, IEXPAC can determine the quality of care provided according to the Triple Aim framework, facilitating health systems' reorientation towards integrated patient-centred care.
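
    Cronbach's alpha, the internal-consistency statistic reported for IEXPAC, is a one-liner once the respondent-by-item matrix is in hand. A minimal sketch with fabricated ratings (not IEXPAC data):

```python
# Cronbach's alpha for a (respondents x items) matrix of scale ratings.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                        # number of items
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

ratings = np.array([[4, 5, 4, 3], [3, 3, 4, 2], [5, 5, 5, 4],
                    [2, 3, 2, 2], [4, 4, 5, 4]])   # invented 5-point ratings
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```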

  20. Validation of ASTEC v2.0 corium jet fragmentation model using FARO experiments

    International Nuclear Information System (INIS)

    Hermsmeyer, S.; Pla, P.; Sangiorgi, M.

    2015-01-01

    Highlights: • Model validation base extended to six FARO experiments. • Focus on the calculation of the fragmented particle diameter. • Capability and limits of the ASTEC fragmentation model. • Sensitivity analysis of model outputs. - Abstract: ASTEC is an integral code for the prediction of Severe Accidents in Nuclear Power Plants. As such, it needs to cover all physical processes that could occur during accident progression, yet keep its models simple enough for the ensemble to stay manageable and produce results within an acceptable time. The present paper is concerned with the validation of the corium jet fragmentation model of ASTEC v2.0 rev3 by means of a selection of six experiments carried out within the FARO facility. The different conditions applied within these six experiments help to analyse the model behaviour in different situations and to expose model limits. In addition to comparing model outputs with experimental measurements, sensitivity analyses are applied to investigate the model. The results of the paper are (i) validation runs, accompanied by an identification of situations where the implemented fragmentation model does not match the experiments well, and a discussion of results; (ii) special attention to the models calculating the diameter of fragmented particles, the identification of a fault in one model as implemented, and a discussion of simplifications and ad hoc modifications to improve the model fit; and (iii) an investigation of the sensitivity of predictions towards inputs and parameters. In this way, the paper offers a thorough investigation of the merits and limitations of the fragmentation model used in ASTEC.

  1. EXQ: development and validation of a multiple-item scale for assessing customer experience quality

    OpenAIRE

    Klaus, Philipp

    2010-01-01

    Positioned in the deliberations related to service marketing, the conceptualisation of service quality, current service quality measurements, and the importance of the evolving construct of customer experience, this thesis develops and validates a measurement for customer experience quality (EXQ) in the context of repeat purchases of mortgage buyers in the United Kingdom. The thesis explores the relationship between the customer experience quality and the important marketing ou...

  2. Validation of two-phase flow code THYC on VATICAN experiment

    International Nuclear Information System (INIS)

    Maurel, F.; Portesse, A.; Rimbert, P.; Thomas, B.

    1997-01-01

    As part of a comprehensive program for THYC validation (THYC is a 3-dimensional two-phase flow computer code for PWR core configurations), an experimental project, VATICAN, has been initiated by the Direction des Etudes et Recherches of Electricite de France. Two mock-ups tested in Refrigerant-114, VATICAN-1 (with simple spacer grids) and VATICAN-2 (with mixing grids), were set up to investigate void fraction distributions using a single-beam gamma densitometer. First, experiments were conducted with the VATICAN-1 mock-up. A set of constitutive laws to be used in rod bundles was determined, but some doubts still remain about the friction-loss closure laws for oblique flow over tubes. From the VATICAN-2 tests, calculations were performed using the standard set of correlations. Comparison with the experimental data shows an underprediction of the void fraction by THYC in disturbed regions. Analyses highlight the poor treatment of the axial relative velocity in these regions. A fitting of the radial and axial relative velocity values in the disturbed region improves the prediction of the void fraction by the code, but without any physical justification. More analytical experiments should be carried out to validate the friction-loss closure laws for oblique flows and the relative velocity downstream of a mixing grid. (author)

  3. Validation of two-phase flow code THYC on VATICAN experiment

    Energy Technology Data Exchange (ETDEWEB)

    Maurel, F.; Portesse, A.; Rimbert, P.; Thomas, B. [EDF/DER, Dept. TTA, 78 - Chatou (France)

    1997-12-31

    As part of a comprehensive program for THYC validation (THYC is a 3-dimensional two-phase flow computer code for PWR core configurations), an experimental project, VATICAN, has been initiated by the Direction des Etudes et Recherches of Electricite de France. Two mock-ups tested in Refrigerant-114, VATICAN-1 (with simple spacer grids) and VATICAN-2 (with mixing grids), were set up to investigate void fraction distributions using a single-beam gamma densitometer. First, experiments were conducted with the VATICAN-1 mock-up. A set of constitutive laws to be used in rod bundles was determined, but some doubts still remain about the friction-loss closure laws for oblique flow over tubes. From the VATICAN-2 tests, calculations were performed using the standard set of correlations. Comparison with the experimental data shows an underprediction of the void fraction by THYC in disturbed regions. Analyses highlight the poor treatment of the axial relative velocity in these regions. A fitting of the radial and axial relative velocity values in the disturbed region improves the prediction of the void fraction by the code, but without any physical justification. More analytical experiments should be carried out to validate the friction-loss closure laws for oblique flows and the relative velocity downstream of a mixing grid. (author)

  4. Fission product release from nuclear fuel II. Validation of ASTEC/ELSA on analytical and large scale experiments

    International Nuclear Information System (INIS)

    Brillant, G.; Marchetto, C.; Plumecocq, W.

    2013-01-01

    Highlights: • A wide range of experiments is presented for the ASTEC/ELSA code validation. • Analytical tests such as AECL, ORNL and VERCORS are considered. • A large-scale experiment, PHEBUS FPT1, is considered. • The good agreement with measurements shows the efficiency of the ASTEC modelling. • Improvements concern the FP release modelling from MOX and high burn-up UO2 fuels. - Abstract: This article is the second of two articles dedicated to the mechanisms of fission product release from a degraded core. The models of fission product release from nuclear fuel in the ASTEC code have been described in detail in the first part of this work (Brillant et al., this issue). In this contribution, the validation of ELSA, the module of ASTEC that deals with fission product and structural material release from a degraded core, is presented. A large range of experimental tests, with various temperatures and conditions of the fuel-surrounding atmosphere (oxidising and reducing), is thus simulated with the ASTEC code. The validation database includes several analytical experiments with both bare fuel (e.g. MCE1 experiments) and cladded fuel (e.g. HCE3, VERCORS). Furthermore, the PHEBUS large-scale experiments are used for the validation of ASTEC. The rather satisfactory comparison between ELSA calculations and experimental measurements demonstrates the efficiency of the analytical models in describing fission product release in severe accident conditions.

  5. Experiment of Laser Pointing Stability on Different Surfaces to validate Micrometric Positioning Sensor

    CERN Document Server

    AUTHOR|(SzGeCERN)721924; Mainaud Durand, Helene; Piedigrossi, Didier; Sandomierski, Jacek; Sosin, Mateusz; Geiger, Alain; Guillaume, Sebastien

    2014-01-01

    CLIC requires 10 μm precision and accuracy over 200 m for the pre-alignment of beam-related components. A solution based on a laser beam as a straight-line reference is being studied at CERN. It involves camera/shutter assemblies as micrometric positioning sensors. To validate the sensors, it is necessary to determine an appropriate material for the shutter in terms of laser pointing stability. Experiments are carried out with paper, metal and ceramic surfaces. This paper presents the standard deviations of the laser spot coordinates obtained on the different surfaces, as well as the measurement error. Our experiments validate the choice of paper and ceramic for the shutter of the micrometric positioning sensor. They also provide an estimate of the achievable precision and accuracy of the determination of the laser spot centre with respect to the shutter coordinate system defined by reference targets.

  6. Towards program theory validation: Crowdsourcing the qualitative analysis of participant experiences.

    Science.gov (United States)

    Harman, Elena; Azzam, Tarek

    2018-02-01

    This exploratory study examines a novel tool for validating program theory through crowdsourced qualitative analysis. It combines a quantitative pattern matching framework traditionally used in theory-driven evaluation with crowdsourcing to analyze qualitative interview data. A sample of crowdsourced participants are asked to read an interview transcript and identify whether program theory components (Activities and Outcomes) are discussed and to highlight the most relevant passage about that component. The findings indicate that using crowdsourcing to analyze qualitative data can differentiate between program theory components that are supported by a participant's experience and those that are not. This approach expands the range of tools available to validate program theory using qualitative data, thus strengthening the theory-driven approach. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. The cross-cultural validity of the Caregiving Experiences Questionnaire (CEQ) among Danish mothers with preschool children

    DEFF Research Database (Denmark)

    Røhder, Katrine; George, Carol; Brennan, Jessica

    2018-01-01

    The present study explored the Danish cross-cultural validity of the Caregiving Experiences Questionnaire (CEQ), a new measure of caregiving representations in parent-child relationships. Low-risk Danish mothers (N = 159) with children aged 1.5–5 years completed the CEQ and predictive validity...

  8. EVLncRNAs: a manually curated database for long non-coding RNAs validated by low-throughput experiments

    Science.gov (United States)

    Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu

    2018-01-01

    Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases have been established for experimentally validated lncRNAs. However, these databases are small in scale (with a few hundred lncRNAs only) and specific in focus (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset of experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNlncRbase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species and is 2.9 times larger than the current largest database of experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing experimentally validated databases. The established database allows users to browse, search and download as well as submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. PMID:28985416

  9. Are screening instruments valid for psychotic-like experiences? A validation study of screening questions for psychotic-like experiences using in-depth clinical interview.

    LENUS (Irish Health Repository)

    Kelleher, Ian

    2012-02-01

    Individuals who report psychotic-like experiences are at increased risk of future clinical psychotic disorder. They constitute a unique "high-risk" group for studying the developmental trajectory to schizophrenia and related illnesses. Previous research has used screening instruments to identify this high-risk group, but the validity of these instruments has not yet been established. We administered a screening questionnaire with 7 items designed to assess psychotic-like experiences to 334 adolescents aged 11-13 years. Detailed clinical interviews were subsequently carried out with a sample of these adolescents. We calculated sensitivity and specificity and positive predictive value (PPV) and negative predictive value (NPV) for each screening question for the specific symptom it enquired about and also in relation to any psychotic-like experience. The predictive power varied substantially between items, with the question on auditory hallucinations ("Have you ever heard voices or sounds that no one else can hear?") providing the best predictive power. For interview-verified auditory hallucinations specifically, this question had a PPV of 71.4% and an NPV of 90.4%. When assessed for its predictive power for any psychotic-like experience (including, but not limited to, auditory hallucinations), it provided a PPV of 100% and an NPV of 88.4%. Two further questions, relating to visual hallucinations and paranoid thoughts, also demonstrated good predictive power for psychotic-like experiences. Our results suggest that it may be possible to screen the general adolescent population for psychotic-like experiences with a high degree of accuracy using a short self-report questionnaire.
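
    The predictive values quoted above follow directly from the 2x2 table of screen results against interview-verified symptoms. A minimal sketch with hypothetical counts, since the study's raw cell counts are not given in the abstract:

    ```python
    def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
        """Standard 2x2 diagnostic accuracy metrics for a screening item
        scored against interview-verified symptoms."""
        return {
            "sensitivity": tp / (tp + fn),   # verified cases the item catches
            "specificity": tn / (tn + fp),   # non-cases the item correctly rejects
            "PPV": tp / (tp + fp),           # screen-positives that are true cases
            "NPV": tn / (tn + fn),           # screen-negatives that are true non-cases
        }

    # Hypothetical counts for one screening question vs. clinical interview
    print(screening_metrics(tp=15, fp=6, fn=8, tn=75))
    ```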

  10. Are screening instruments valid for psychotic-like experiences? A validation study of screening questions for psychotic-like experiences using in-depth clinical interview.

    LENUS (Irish Health Repository)

    Kelleher, Ian

    2011-03-01

    Individuals who report psychotic-like experiences are at increased risk of future clinical psychotic disorder. They constitute a unique "high-risk" group for studying the developmental trajectory to schizophrenia and related illnesses. Previous research has used screening instruments to identify this high-risk group, but the validity of these instruments has not yet been established. We administered a screening questionnaire with 7 items designed to assess psychotic-like experiences to 334 adolescents aged 11-13 years. Detailed clinical interviews were subsequently carried out with a sample of these adolescents. We calculated sensitivity and specificity and positive predictive value (PPV) and negative predictive value (NPV) for each screening question for the specific symptom it enquired about and also in relation to any psychotic-like experience. The predictive power varied substantially between items, with the question on auditory hallucinations ("Have you ever heard voices or sounds that no one else can hear?") providing the best predictive power. For interview-verified auditory hallucinations specifically, this question had a PPV of 71.4% and an NPV of 90.4%. When assessed for its predictive power for any psychotic-like experience (including, but not limited to, auditory hallucinations), it provided a PPV of 100% and an NPV of 88.4%. Two further questions, relating to visual hallucinations and paranoid thoughts, also demonstrated good predictive power for psychotic-like experiences. Our results suggest that it may be possible to screen the general adolescent population for psychotic-like experiences with a high degree of accuracy using a short self-report questionnaire.

  11. Chemometric and biological validation of a capillary electrophoresis metabolomic experiment of Schistosoma mansoni infection in mice.

    Science.gov (United States)

    Garcia-Perez, Isabel; Angulo, Santiago; Utzinger, Jürg; Holmes, Elaine; Legido-Quigley, Cristina; Barbas, Coral

    2010-07-01

    Metabonomic and metabolomic studies are increasingly utilized for biomarker identification in different fields, including biology of infection. The confluence of improved analytical platforms and the availability of powerful multivariate analysis software has rendered the multiparameter profiles generated by these omics platforms a user-friendly alternative to the established analysis methods where the quality and practice of a procedure is well defined. However, unlike traditional assays, validation methods for these new multivariate profiling tools have yet to be established. We propose a validation for models obtained by CE fingerprinting of urine from mice infected with the blood fluke Schistosoma mansoni. We have analysed urine samples from two sets of mice infected in an inter-laboratory experiment where different infection methods and animal husbandry procedures were employed in order to establish the core biological response to a S. mansoni infection. CE data were analysed using principal component analysis. Validation of the scores consisted of permutation scrambling (100 repetitions) and a manual validation method, using a third of the samples (not included in the model) as a test or prediction set. The validation yielded 100% specificity and 100% sensitivity, demonstrating the robustness of these models with respect to deciphering metabolic perturbations in the mouse due to a S. mansoni infection. A total of 20 metabolites across the two experiments were identified that significantly discriminated between S. mansoni-infected and noninfected control samples. Only one of these metabolites, allantoin, was identified as manifesting different behaviour in the two experiments. This study shows the reproducibility of CE-based metabolic profiling methods for disease characterization and screening and highlights the importance of much needed validation strategies in the emerging field of metabolomics.
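
    The validation scheme described here, PCA scores checked against a held-out prediction set plus permutation scrambling, can be illustrated compactly. The sketch below builds hypothetical fingerprint data, fits PCA on a training subset, and classifies a held-out third by the nearest class centroid in score space; repeating the fit with randomly permuted class labels (the scrambling step) should drive accuracy toward chance. The data and the centroid classifier are illustrative assumptions, not the authors' exact workflow.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)

    # Hypothetical CE urine fingerprints: 30 infected, 30 control, 50 features each
    X = rng.normal(size=(60, 50))
    X[:30, :5] += 1.5                       # infection shifts a few metabolite peaks
    y = np.array([1] * 30 + [0] * 30)

    # Hold out a third of the samples as the prediction (test) set
    test = rng.choice(60, size=20, replace=False)
    train = np.setdiff1d(np.arange(60), test)

    pca = PCA(n_components=2).fit(X[train])
    scores_train, scores_test = pca.transform(X[train]), pca.transform(X[test])

    # Classify each held-out sample by the nearer class centroid in score space
    centroids = {c: scores_train[y[train] == c].mean(axis=0) for c in (0, 1)}
    pred = np.array([min(centroids, key=lambda c: np.linalg.norm(s - centroids[c]))
                     for s in scores_test])
    print("accuracy on prediction set:", (pred == y[test]).mean())
    ```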

  12. Validation experiment of a numerically processed millimeter-wave interferometer in a laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Kogi, Y., E-mail: kogi@fit.ac.jp; Higashi, T.; Matsukawa, S. [Department of Information Electronics, Fukuoka Institute of Technology, Fukuoka 811-0295 (Japan); Mase, A. [Art, Science and Technology Center for Cooperative Research, Kyushu University, Kasuga, Fukuoka 816-0811 (Japan); Kohagura, J.; Yoshikawa, M. [Plasma Research Center, University of Tsukuba, Tsukuba, Ibaraki 305-8577 (Japan); Nagayama, Y.; Kawahata, K. [National Institute for Fusion Science, Toki, Gifu 509-5202 (Japan); Kuwahara, D. [Tokyo University of Agriculture and Technology, Koganei, Tokyo 184-8588 (Japan)

    2014-11-15

    We propose a new interferometer system for density profile measurements. This system produces multiple measurement chords by a leaky-wave antenna driven by multiple frequency inputs. The proposed system was validated in laboratory evaluation experiments. We confirmed that the interferometer generates a clear image of a Teflon plate as well as the phase shift corresponding to the plate thickness. In another experiment, we confirmed that quasi-optical mirrors can produce multiple measurement chords; however, the finite spot size of the probe beam degrades the sharpness of the resulting image.

  13. Validation of RNAi Silencing Efficiency Using Gene Array Data shows 18.5% Failure Rate across 429 Independent Experiments

    Directory of Open Access Journals (Sweden)

    Gyöngyi Munkácsy

    2016-01-01

    No independent cross-validation of the success rate of studies utilizing small interfering RNA (siRNA) for gene silencing has been completed before. To assess the influence of experimental parameters like cell line, transfection technique, validation method, and type of control, we have to validate these in a large set of studies. We utilized gene chip data published for siRNA experiments to assess the success rate and to compare the methods used in these experiments. We searched NCBI GEO for samples with whole-transcriptome analysis before and after gene silencing and evaluated the efficiency for the target and off-target genes using the array-based expression data. The Wilcoxon signed-rank test was used to assess silencing efficacy, and Kruskal-Wallis tests and Spearman rank correlation were used to evaluate study parameters. Altogether, 1,643 samples representing 429 experiments published in 207 studies were evaluated. The fold change (FC) of down-regulation of the target gene was above 0.7 in 18.5% and above 0.5 in 38.7% of experiments. Silencing efficiency was lowest in MCF7 and highest in SW480 cells (FC = 0.59 and FC = 0.30, respectively; P = 9.3E−06). Studies utilizing Western blot for validation performed better than those with quantitative polymerase chain reaction (qPCR) or microarray (FC = 0.43, FC = 0.47, and FC = 0.55, respectively; P = 2.8E−04). There was no correlation between type of control, transfection method, publication year, and silencing efficiency. Although gene silencing is a robust feature successfully cross-validated in the majority of experiments, efficiency remained insufficient in a significant proportion of studies. Selection of the cell line model and validation method had the highest influence on silencing proficiency.
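
    Both tests named above are available in SciPy. The sketch below applies the paired Wilcoxon signed-rank test to hypothetical before/after expression values and a Kruskal-Wallis test to hypothetical per-cell-line fold changes; the numbers are invented to echo the reported MCF7/SW480 contrast, not taken from the study's data.

    ```python
    import numpy as np
    from scipy.stats import wilcoxon, kruskal

    rng = np.random.default_rng(3)

    # Hypothetical target-gene expression on 20 arrays before and after siRNA
    before = rng.normal(10.0, 1.0, 20)
    after = before * rng.normal(0.45, 0.10, 20)          # effective knock-down

    fc = float((after / before).mean())
    stat, p = wilcoxon(before, after)                    # paired signed-rank test
    print(f"mean FC = {fc:.2f}, Wilcoxon p = {p:.1e}")   # FC < 0.5 => efficient

    # Kruskal-Wallis across groups, e.g. per-cell-line fold changes
    fc_mcf7 = rng.normal(0.59, 0.10, 15)
    fc_sw480 = rng.normal(0.30, 0.10, 15)
    print(f"Kruskal-Wallis p = {kruskal(fc_mcf7, fc_sw480).pvalue:.1e}")
    ```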

  14. Validation of the ABBN/CONSYST constants system. Part 1: Validation through the critical experiments on compact metallic cores

    International Nuclear Information System (INIS)

    Ivanova, T.T.; Manturov, G.N.; Nikolaev, M.N.; Rozhikhin, E.V.; Semenov, M.Yu.; Tsiboulia, A.M.

    1999-01-01

    The worldwide compilation of criticality safety benchmark experiments evaluated through the activities of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) opens new possibilities for validation of the ABBN-93.1 cross-section library for criticality safety analysis. Results of calculations of small assemblies with metal-fuelled cores are presented in this paper. It is concluded that ABBN-93.1 predicts the criticality of such systems with the required accuracy.

  15. Analysis of Fresh Fuel Critical Experiments Appropriate for Burnup Credit Validation

    International Nuclear Information System (INIS)

    DeHart, M.D.

    1995-01-01

    The ANSI/ANS-8.1 standard requires that calculational methods used in determining criticality safety limits for applications outside reactors be validated by comparison with appropriate critical experiments. This report provides a detailed description of 34 fresh-fuel critical experiments and their analyses using the SCALE-4.2 code system and the 27-group ENDF/B-IV cross-section library. The 34 critical experiments were selected based on geometry, material, and neutron interaction characteristics that are applicable to a transportation cask loaded with pressurized-water-reactor spent fuel. These 34 experiments are a representative subset of a much larger database of low-enriched uranium and mixed-oxide critical experiments. A statistical approach is described and used to obtain an estimate of the bias and uncertainty in the calculational methods and to predict a confidence limit for a calculated neutron multiplication factor. The SCALE-4.2 results for a superset of approximately 100 criticals are included in the uncertainty analyses, but descriptions of the individual criticals are not included.
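
    In its simplest form, the statistical approach sketched here estimates the bias of the calculated multiplication factor over the benchmark set and subtracts a spread-based margin to obtain a conservative limit. A minimal sketch with hypothetical k_eff values and an illustrative multiplier; the report derives its actual tolerance factor from the sample size and the required confidence level.

    ```python
    import numpy as np

    # Hypothetical calculated k_eff for benchmark criticals (true value is 1.0)
    k = np.array([0.9972, 1.0015, 0.9958, 0.9990, 1.0032, 0.9947,
                  0.9969, 1.0008, 0.9981, 0.9935])

    bias = k.mean() - 1.0              # negative => code underpredicts k_eff
    s = k.std(ddof=1)                  # spread about the mean bias

    # One-sided lower limit with a tolerance-like multiplier (2.0 is
    # illustrative only; a real validation derives it statistically)
    k_limit = 1.0 + bias - 2.0 * s
    print(f"bias = {bias:+.4f}, s = {s:.4f}, limit ~ {k_limit:.4f}")
    ```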

  16. Design of an intermediate-scale experiment to validate unsaturated- zone transport models

    International Nuclear Information System (INIS)

    Siegel, M.D.; Hopkins, P.L.; Glass, R.J.; Ward, D.B.

    1991-01-01

    An intermediate-scale experiment is being carried out to evaluate instrumentation and models that might be used for transport-model validation for the Yucca Mountain Site Characterization Project. The experimental test bed is a 6-m high x 3-m diameter caisson filled with quartz sand with a sorbing layer at an intermediate depth. The experiment involves the detection and prediction of the migration of fluid and tracers through an unsaturated porous medium. Pre-test design requires estimation of physical properties of the porous medium, such as the relative permeability, saturation/pressure relations, porosity, and saturated hydraulic conductivity, as well as geochemical properties such as surface complexation constants and empirical Kd values. The pre-test characterization data will be used as input to several computer codes to predict the fluid flow and tracer migration. These include a coupled chemical-reaction/transport model, a stochastic model, and a deterministic model using retardation factors. The calculations will be completed prior to elution of the tracers, providing a basis for validation by comparing the predictions to the observed moisture and tracer behavior.

  17. TRIMS: Validating T2 Molecular Effects for Neutrino Mass Experiments

    Science.gov (United States)

    Lin, Ying-Ting; Trims Collaboration

    2017-09-01

    The Tritium Recoil-Ion Mass Spectrometer (TRIMS) experiment examines the branching ratio of the molecular tritium (T2) beta decay to the bound state (3HeT+). Measuring this branching ratio helps to validate the current molecular final-state theory applied in neutrino mass experiments such as KATRIN and Project 8. TRIMS consists of a magnet-guided time-of-flight mass spectrometer with a detector located on each end. By measuring the kinetic energy and time-of-flight difference of the ions and beta particles reaching the detectors, we will be able to distinguish molecular ions from atomic ones and hence derive the ratio in question. We will give an update on the apparatus, simulation software, and analysis tools, including efforts to improve the resolution of our detectors and to characterize the stability and uniformity of our field sources. We will also share our commissioning results and prospects for physics data. The TRIMS experiment is supported by U.S. Department of Energy Office of Science, Office of Nuclear Physics, Award Number DE-FG02-97ER41020.

  18. BACCHUS 2: an in situ backfill hydration experiment for model validation

    International Nuclear Information System (INIS)

    Volckaert, G.; Bernier, F.; Alonso, E.; Gens, A.

    1995-01-01

    The BACCHUS 2 experiment is an in situ backfill hydration test performed in the HADES underground research facility, situated in the plastic Boom clay layer at 220 m depth. The experiment aims at the optimization and demonstration of an installation procedure for a clay-based backfill material. The instrumentation has been optimized in such a way that the results of the experiment can be used for the validation of hydro-mechanical codes such as NOSAT, developed at the Polytechnic University of Catalonia (UPC), Spain. The experimental set-up consists of a bottom flange and a central filter around which the backfill material was applied. The backfill material consists of a mixture of high-density clay pellets and clay powder. The experimental set-up and its instrumentation are described in detail. The results of the hydro-mechanical characterization of the backfill material are summarized. (authors). 8 refs., 16 figs., 1 tab

  19. Validation of the TRACR3D code for soil water flow under saturated/unsaturated conditions in three experiments

    International Nuclear Information System (INIS)

    Perkins, B.; Travis, B.; DePoorter, G.

    1985-01-01

    Validation of the TRACR3D code in a one-dimensional form was obtained for flow of soil water in three experiments. In the first experiment, a pulse of water entered a crushed-tuff soil and initially moved under conditions of saturated flow, quickly followed by unsaturated flow. In the second experiment, steady-state unsaturated flow took place. In the final experiment, two slugs of water entered crushed tuff under field conditions. In all three experiments, experimentally measured data for volumetric water content agreed, within experimental errors, with the volumetric water content predicted by the code simulations. The experiments and simulations indicated the need for accurate knowledge of boundary and initial conditions, amount and duration of moisture input, and relevant material properties as input into the computer code. During the validation experiments, limitations on monitoring of water movement in waste burial sites were also noted. 5 references, 34 figures, 9 tables

  20. Development and Validation of a Scale Assessing Mental Health Clinicians' Experiences of Associative Stigma.

    Science.gov (United States)

    Yanos, Philip T; Vayshenker, Beth; DeLuca, Joseph S; O'Connor, Lauren K

    2017-10-01

    Mental health professionals who work with people with serious mental illnesses are believed to experience associative stigma. Evidence suggests that associative stigma could play an important role in the erosion of empathy among professionals; however, no validated measure of the construct currently exists. This study examined the convergent and discriminant validity and factor structure of a new scale assessing the associative stigma experiences of clinicians working with people with serious mental illnesses. A total of 473 clinicians were recruited from professional associations in the United States and participated in an online study. Participants completed the Clinician Associative Stigma Scale (CASS) and measures of burnout, quality of care, expectations about recovery, and self-efficacy. Associative stigma experiences were commonly endorsed; eight items on the 18-item scale were endorsed as being experienced "sometimes" or "often" by over 50% of the sample. The new measure demonstrated a logical four-factor structure: "negative stereotypes about professional effectiveness," "discomfort with disclosure," "negative stereotypes about people with mental illness," and "stereotypes about professionals' mental health." The measure had good internal consistency. It was significantly related to measures of burnout and quality of care, but it was not related to measures of self-efficacy or expectations about recovery. Findings suggest that the CASS is internally consistent and shows evidence of convergent validity and that associative stigma is commonly experienced by mental health professionals who work with people with serious mental illnesses.

  1. Ensemble of cell survival experiments after ion irradiation for validation of RBE models

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, Thomas; Scholz, Uwe; Scholz, Michael [GSI Helmholtzzentrum fuer Schwerionenforschung, Darmstadt (Germany); Durante, Marco [GSI Helmholtzzentrum fuer Schwerionenforschung, Darmstadt (Germany); Institut fuer Festkoerperphysik, TU Darmstadt, Darmstadt (Germany)

    2012-07-01

    There is persistent interest in understanding the systematics of the relative biological effectiveness (RBE). Models such as the Local Effect Model (LEM) or the Microdosimetric Kinetic Model aim to predict the RBE. For the validation of these models, a collection of many in-vitro cell survival experiments is most appropriate. The set-up of an ensemble of in-vitro cell survival data comprising about 850 survival experiments after both ion and photon irradiation is reported. The survival curves have been taken from publications. The experiments encompass survival curves obtained in different labs, using different ion species from protons to uranium, varying irradiation modalities (shaped or monoenergetic beam), various energies and linear energy transfers, and a whole variety of cell types (human or rodent; normal, mutagenic or tumor; radioresistant or -sensitive). Each cell survival curve has been parameterized by the linear-quadratic model. The photon parameters have been added to the database to allow calculation of the experimental RBE at any survival level. We report on experimental trends found within the data ensemble. The data will serve as a testing ground for RBE models such as the LEM. Finally, a roadmap for further validation and first model results using the database in combination with the LEM are presented.
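
    Given linear-quadratic fits for an ion curve and its photon reference, the experimental RBE at any survival level follows by inverting S = exp(-(alpha*D + beta*D^2)) for both radiation qualities. A minimal sketch with hypothetical LQ parameters, not values from the database:

    ```python
    import numpy as np

    def dose_at_survival(alpha: float, beta: float, survival: float) -> float:
        """Invert the linear-quadratic model S = exp(-(alpha*D + beta*D^2))."""
        if beta == 0.0:
            return -np.log(survival) / alpha
        return (-alpha + np.sqrt(alpha**2 - 4 * beta * np.log(survival))) / (2 * beta)

    def rbe(alpha_ion, beta_ion, alpha_ph, beta_ph, survival=0.1):
        """RBE at a survival level = photon dose / ion dose for equal effect."""
        return (dose_at_survival(alpha_ph, beta_ph, survival)
                / dose_at_survival(alpha_ion, beta_ion, survival))

    # Hypothetical LQ parameters (Gy^-1, Gy^-2) for one ion/photon curve pair
    print(f"RBE(10%) = {rbe(1.1, 0.05, 0.3, 0.03):.2f}")
    ```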

  2. Design of experiments in medical physics: Application to the AAA beam model validation.

    Science.gov (United States)

    Dufreneix, S; Legrand, C; Di Bartolo, C; Bremaud, M; Mesgouez, J; Tiplica, T; Autret, D

    2017-09-01

    The purpose of this study is to evaluate the usefulness of the design of experiments in the analysis of multiparametric problems related to quality assurance in radiotherapy. The main motivation is to use this statistical method to optimize the quality assurance processes in the validation of beam models. Considering the Varian Eclipse system, eight parameters with several levels were selected: energy, MLC, depth, X, Y1 and Y2 jaw dimensions, wedge and wedge jaw. A Taguchi table was used to define 72 validation tests. Measurements were conducted in water using a CC04 ionization chamber on a TrueBeam STx, a TrueBeam Tx, a Trilogy and a 2300IX accelerator matched by the vendor. Dose was computed using the AAA algorithm. The same raw data were used for all accelerators during the beam modelling. The mean difference between computed and measured doses was 0.1±0.5% for all beams and all accelerators, with a maximum difference of 2.4% (under the 3% tolerance level). For all beams, the measured doses were within 0.6% across accelerators. The energy was found to be an influencing parameter, but the deviations observed were smaller than 1% and not considered clinically significant. Designs of experiments can help define the optimal measurement set to validate a beam model. The proposed method can be used to identify the prognostic factors of dose accuracy. The beam models were validated for the 4 accelerators, which were found dosimetrically equivalent even though the accelerator characteristics differ. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
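
    Once a designed set of validation measurements has been run, the screening step amounts to comparing mean dose deviations across the levels of each factor. A minimal sketch with a hypothetical (and much smaller) run table, not the study's 72-test Taguchi design:

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(4)

    # Hypothetical run table: factor levels and measured-minus-computed dose
    # deviation in percent (the study used a 72-run table with 8 factors)
    runs = pd.DataFrame({
        "energy": rng.choice(["6MV", "10MV", "18MV"], 24),
        "depth_cm": rng.choice([5, 10, 20], 24),
        "wedge": rng.choice(["none", "30deg"], 24),
    })
    runs["dev_pct"] = (rng.normal(0.1, 0.5, 24)
                       + np.where(runs["energy"] == "18MV", 0.4, 0.0))  # energy effect

    # Main-effects screening: mean deviation per level of each factor
    for factor in ["energy", "depth_cm", "wedge"]:
        print(runs.groupby(factor)["dev_pct"].mean().round(2), "\n")
    ```

    A factor whose level means differ noticeably, like the injected energy effect here, is flagged as a prognostic factor of dose accuracy.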

  3. Validation and Scaling of Soil Moisture in a Semi-Arid Environment: SMAP Validation Experiment 2015 (SMAPVEX15)

    Science.gov (United States)

    Colliander, Andreas; Cosh, Michael H.; Misra, Sidharth; Jackson, Thomas J.; Crow, Wade T.; Chan, Steven; Bindlish, Rajat; Chae, Chun; Holifield Collins, Chandra; Yueh, Simon H.

    2017-01-01

    The NASA SMAP (Soil Moisture Active Passive) mission conducted the SMAP Validation Experiment 2015 (SMAPVEX15) in order to support the calibration and validation activities of SMAP soil moisture data products. The main goals of the experiment were to address issues regarding the spatial disaggregation methodologies for improvement of soil moisture products and validation of the in situ measurement upscaling techniques. To support these objectives, high-resolution soil moisture maps were acquired with the airborne PALS (Passive Active L-band Sensor) instrument over an area in southeast Arizona that includes the Walnut Gulch Experimental Watershed (WGEW), and intensive ground sampling was carried out to augment the permanent in situ instrumentation. The objective of the paper was to establish the correspondence and relationship between the highly heterogeneous spatial distribution of soil moisture on the ground and the coarse-resolution radiometer-based soil moisture retrievals of SMAP. The high-resolution mapping conducted with PALS provided the required connection between the in situ measurements and SMAP retrievals. The in situ measurements were used to validate the PALS soil moisture acquired at 1-km resolution. Based on the information from a dense network of rain gauges in the study area, the in situ soil moisture measurements did not capture all the precipitation events accurately. That is, the PALS and SMAP soil moisture estimates responded to precipitation events detected by rain gauges which were in some cases not detected by the in situ soil moisture sensors. It was also concluded that the spatial distribution of soil moisture resulting from the relatively small spatial extents of the typical convective storms in this region was not completely captured with the in situ stations. After removing those cases (approximately 10% of the observations), the following metrics were obtained: RMSD (root mean square difference) of 0.016 m3/m3 and correlation of 0.83.
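
    The two metrics reported, RMSD and correlation, are straightforward to compute from matched retrieval/in situ pairs. A minimal sketch with hypothetical soil moisture series:

    ```python
    import numpy as np

    def rmsd(a: np.ndarray, b: np.ndarray) -> float:
        """Root mean square difference between two soil moisture series (m3/m3)."""
        return float(np.sqrt(np.mean((a - b) ** 2)))

    # Hypothetical PALS retrievals vs. upscaled in situ soil moisture (m3/m3)
    pals   = np.array([0.08, 0.12, 0.21, 0.15, 0.10, 0.18, 0.25, 0.09])
    insitu = np.array([0.09, 0.13, 0.19, 0.17, 0.10, 0.16, 0.27, 0.11])

    print(f"RMSD = {rmsd(pals, insitu):.3f}, r = {np.corrcoef(pals, insitu)[0, 1]:.2f}")
    ```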

  4. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    Science.gov (United States)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors (aligned with the EURO-CORDEX experiment) and 3) pseudo-reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a Europe-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contributions to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data

  5. Use of Sensitivity and Uncertainty Analysis to Select Benchmark Experiments for the Validation of Computer Codes and Data

    International Nuclear Information System (INIS)

    Elam, K.R.; Rearden, B.T.

    2003-01-01

    Sensitivity and uncertainty analysis methodologies under development at Oak Ridge National Laboratory were applied to determine whether existing benchmark experiments adequately cover the area of applicability for the criticality code and data validation of PuO2 and mixed-oxide (MOX) powder systems. The study examined three PuO2 powder systems and four MOX powder systems that would be useful for establishing mass limits for a MOX fuel fabrication facility. Using traditional methods to choose experiments for criticality analysis validation, 46 benchmark critical experiments were identified as applicable to the PuO2 powder systems. However, only 14 experiments were thought to be within the area of applicability for dry MOX powder systems. The applicability of 318 benchmark critical experiments, including the 60 experiments initially identified, was assessed. Each benchmark and powder system was analyzed using the Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) one-dimensional (TSUNAMI-1D) or TSUNAMI three-dimensional (TSUNAMI-3D) sensitivity analysis sequences, which will be included in the next release of the SCALE code system. This sensitivity data and cross-section uncertainty data were then processed with TSUNAMI-IP to determine the correlation of each application to each experiment in the benchmarking set. Correlation coefficients are used to assess the similarity between systems and determine the applicability of one system for the code and data validation of another. The applicability of most of the experiments identified using traditional methods was confirmed by the TSUNAMI analysis. In addition, some PuO2 and MOX powder systems were determined to be within the area of applicability of several other benchmarks that would not have been considered using traditional methods. Therefore, the number of benchmark experiments useful for the validation of these systems exceeds the number previously expected. The TSUNAMI analysis
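
    The similarity coefficient used in this kind of assessment (commonly written c_k) is the covariance-weighted correlation of two systems' sensitivity vectors. A minimal sketch with hypothetical few-group sensitivities and an uncorrelated (diagonal) covariance matrix, far smaller than the data TSUNAMI-IP actually processes:

    ```python
    import numpy as np

    def ck(s_app: np.ndarray, s_exp: np.ndarray, cov: np.ndarray) -> float:
        """c_k = (S_a C S_e) / sqrt((S_a C S_a) * (S_e C S_e)): correlation of the
        two systems' nuclear-data-induced uncertainties from sensitivities S and
        a shared cross-section covariance C."""
        var_a = s_app @ cov @ s_app
        var_e = s_exp @ cov @ s_exp
        return float(s_app @ cov @ s_exp / np.sqrt(var_a * var_e))

    # Hypothetical 4-group k_eff sensitivities and a group-wise covariance matrix
    s_application = np.array([0.10, 0.25, 0.30, 0.05])
    s_benchmark   = np.array([0.12, 0.20, 0.28, 0.10])
    cov = np.diag([2e-4, 1e-4, 3e-4, 5e-4])       # uncorrelated groups for brevity

    print(f"c_k = {ck(s_application, s_benchmark, cov):.3f}")  # ~1 => similar systems
    ```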

  6. An Examination and Validation of an Adapted Youth Experience Scale for University Sport

    Science.gov (United States)

    Rathwell, Scott; Young, Bradley W.

    2016-01-01

    Limited tools assess positive development through university sport. Such a tool was validated in this investigation using two independent samples of Canadian university athletes. In Study 1, 605 athletes completed 99 survey items drawn from the Youth Experience Scale (YES 2.0), and separate a priori measurement models were evaluated (i.e., 99…

  7. Closing the patient experience chasm: A two-level validation of the Consumer Quality Index Inpatient Hospital Care

    NARCIS (Netherlands)

    Smirnova, A.; Lombarts, K.; Arah, O.A.; Vleuten, C.P.M. van der

    2017-01-01

    BACKGROUND: Evaluation of patients' health care experiences is central to measuring patient-centred care. However, different instruments tend to be used at the hospital or departmental level but rarely both, leading to a lack of standardization of patient experience measures. OBJECTIVE: To validate

  8. Service validity and service reliability of search, experience and credence services. A scenario study

    NARCIS (Netherlands)

    Galetzka, Mirjam; Verhoeven, J.W.M.; Pruyn, Adriaan T.H.

    2006-01-01

    The purpose of this research is to add to our understanding of the antecedents of customer satisfaction by examining the effects of service reliability (Is the service “correctly” produced?) and service validity (Is the “correct” service produced?) of search, experience and credence services.

  9. Validation of neutronic methods applied to the analysis of fast subcritical systems. The MUSE-2 experiments

    International Nuclear Information System (INIS)

    Soule, R.; Salvatores, M.; Jacqmin, R.; Martini, M.; Lebrat, J.F.; Bertrand, P.; Broccoli, U.; Peluso, V.

    1997-01-01

    In the framework of the French SPIN program devoted to the separation and the transmutation of radioactive wastes, the CEA has launched the ISAAC program to investigate the potential of accelerator-driven systems and to provide an experimental validation of the physics characteristics of these systems. The neutronics of the subcritical core needs experimental validation. This can be done by decoupling the problem of the neutron source from the problem of the subcritical medium. Experiments with a well known external source placed in a subcritical medium have been performed in the MASURCA facility. The results confirm the high accuracy achievable with such experiments and the good quality of the ERANOS code system predictions. (author)

  10. Validation of neutronic methods applied to the analysis of fast subcritical systems. The MUSE-2 experiments

    Energy Technology Data Exchange (ETDEWEB)

Soule, R; Salvatores, M; Jacqmin, R; Martini, M; Lebrat, J F; Bertrand, P [CEA Centre d'Etudes de Cadarache, Service de Physique des Reacteurs et du Cycle, 13 - Saint-Paul-lez-Durance (France); Broccoli, U; Peluso, V

    1998-12-31

    In the framework of the French SPIN program devoted to the separation and the transmutation of radioactive wastes, the CEA has launched the ISAAC program to investigate the potential of accelerator-driven systems and to provide an experimental validation of the physics characteristics of these systems. The neutronics of the subcritical core needs experimental validation. This can be done by decoupling the problem of the neutron source from the problem of the subcritical medium. Experiments with a well known external source placed in a subcritical medium have been performed in the MASURCA facility. The results confirm the high accuracy achievable with such experiments and the good quality of the ERANOS code system predictions. (author)

  11. The structural validity of the Experience of Work and Life Circumstances Questionnaire (WLQ)

    Directory of Open Access Journals (Sweden)

    Pieter Schaap

    2016-09-01

    Orientation: Best practice frameworks suggest that an assessment practitioner’s choice of an assessment tool should be based on scientific evidence that underpins the appropriate and just use of the instrument. This is a context-specific validity study involving a classified psychological instrument against the background of South African regulatory frameworks and contemporary validity theory principles. Research purpose: The aim of the study was to explore the structural validity of the Experience of Work and Life Circumstances Questionnaire (WLQ) administered to employees in the automotive assembly plant of a South African automotive manufacturing company. Motivation for the study: Although the WLQ has been used by registered health practitioners and numerous researchers, evidence to support its structural validity is lacking. This study, therefore, addressed the need for context-specific empirical support for the validity of score inferences in respect of employees in a South African automotive manufacturing plant. Research design, approach and method: The research was conducted using a convenience sample (N = 217) taken from the automotive manufacturing company where the instrument was used. Reliability and factor analyses were carried out to explore the structural validity of the WLQ. Main findings: The reliability of the WLQ appeared to be acceptable, and the assumptions made about unidimensionality were mostly confirmed. One of the proposed higher-order structural models of the said questionnaire administered to the sample group was confirmed, whereas the other one was partially confirmed. Practical/managerial implications: The conclusion reached was that preliminary empirical grounds existed for considering the continued use of the WLQ (with some suggested refinements) by the relevant company, provided the process of accumulating a body of validity evidence continued. Contribution/value-add: This study identified some of the difficulties

  12. Closing the patient experience chasm: A two-level validation of the Consumer Quality Index Inpatient Hospital Care

    NARCIS (Netherlands)

    Smirnova, Alina; Lombarts, Kiki M. J. M. H.; Arah, Onyebuchi A.; van der Vleuten, Cees P. M.

    2017-01-01

    Background: Evaluation of patients' health care experiences is central to measuring patient-centred care. However, different instruments tend to be used at the hospital or departmental level but rarely both, leading to a lack of standardization of patient experience measures. Objective: To validate the

  13. Discrete fracture modelling for the Stripa tracer validation experiment predictions

    International Nuclear Information System (INIS)

    Dershowitz, W.; Wallmann, P.

    1992-02-01

    Groundwater flow and transport through three-dimensional networks of discrete fractures was modeled to predict the recovery of tracer from tracer injection experiments conducted during phase 3 of the Stripa site characterization and validation project. Predictions were made on the basis of an updated version of the site-scale discrete fracture conceptual model used for flow predictions and preliminary transport modelling. In this model, individual fractures were treated as stochastic features described by probability distributions of geometric and hydrologic properties. Fractures were divided into three populations: fractures in fracture zones near the drift, non-fracture-zone fractures within 31 m of the drift, and fractures in fracture zones over 31 m from the drift axis. Fractures outside fracture zones are not modelled beyond 31 m from the drift axis. Transport predictions were produced using the FracMan discrete fracture modelling package for each of five tracer experiments. Output was produced in the seven formats specified by the Stripa task force on fracture flow modelling. (au)

  14. Reactivity loss validation of high burn-up PWR fuels with pile-oscillation experiments in MINERVE

    Energy Technology Data Exchange (ETDEWEB)

    Leconte, P.; Vaglio-Gaudard, C.; Eschbach, R.; Di-Salvo, J.; Antony, M.; Pepino, A. [CEA, DEN, DER, Cadarache, F-13108 Saint-Paul-Lez-Durance (France)

    2012-07-01

    The ALIX experimental program relies on the experimental validation of the spent fuel inventory, by chemical analysis of samples irradiated in a PWR for between 5 and 7 cycles, and also on the experimental validation of the spent fuel reactivity loss with burn-up, obtained by pile-oscillation measurements in the MINERVE reactor. These latter experiments provide an overall validation of both the fuel inventory and the nuclear data responsible for the reactivity loss. The program also offers unique experimental data for fuels with a burn-up reaching 85 GWd/t, as spent fuel in French PWRs has so far never exceeded 70 GWd/t. The analysis of these experiments is done in two steps with the APOLLO2/SHEM-MOC/CEA2005v4 package. In the first step, the fuel inventory of each sample is obtained by assembly calculations. The calculation route consists in the self-shielding of cross sections on the 281-energy-group SHEM mesh, followed by the flux calculation by the Method of Characteristics in a 2D-exact heterogeneous geometry of the assembly, and finally a depletion calculation by an iterative resolution of the Bateman equations. In the second step, the fuel inventory is used in the analysis of pile-oscillation experiments in which the reactivity of the ALIX spent fuel samples is compared to the reactivity of fresh fuel samples. The comparison between experiment and calculation shows satisfactory results with the JEFF3.1.1 library, which predicts the reactivity loss within 2% for a burn-up of ~75 GWd/t and within 4% for a burn-up of ~85 GWd/t. (authors)

  15. Characterization of materials for a reactive transport model validation experiment: Interim report on the caisson experiment. Yucca Mountain Site Characterization Project

    International Nuclear Information System (INIS)

    Siegel, M.D.; Cheng, W.C.; Ward, D.B.; Bryan, C.R.

    1995-08-01

    Models used in performance assessment and site characterization activities related to nuclear waste disposal rely on simplified representations of solute/rock interactions, hydrologic flow field and the material properties of the rock layers surrounding the repository. A crucial element in the design of these models is the validity of these simplifying assumptions. An intermediate-scale experiment is being carried out at the Experimental Engineered Test Facility at Los Alamos Laboratory by the Los Alamos and Sandia National Laboratories to develop a strategy to validate key geochemical and hydrological assumptions in performance assessment models used by the Yucca Mountain Site Characterization Project

  16. Characterization of materials for a reactive transport model validation experiment: Interim report on the caisson experiment. Yucca Mountain Site Characterization Project

    Energy Technology Data Exchange (ETDEWEB)

    Siegel, M.D.; Cheng, W.C. [Sandia National Labs., Albuquerque, NM (United States); Ward, D.B.; Bryan, C.R. [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Earth and Planetary Sciences

    1995-08-01

    Models used in performance assessment and site characterization activities related to nuclear waste disposal rely on simplified representations of solute/rock interactions, hydrologic flow field and the material properties of the rock layers surrounding the repository. A crucial element in the design of these models is the validity of these simplifying assumptions. An intermediate-scale experiment is being carried out at the Experimental Engineered Test Facility at Los Alamos Laboratory by the Los Alamos and Sandia National Laboratories to develop a strategy to validate key geochemical and hydrological assumptions in performance assessment models used by the Yucca Mountain Site Characterization Project.

  17. Proceedings of the workshop on integral experiment covariance data for critical safety validation

    Energy Technology Data Exchange (ETDEWEB)

    Stuke, Maik (ed.)

    2016-04-15

    For some time, attempts to quantify the statistical dependencies of critical experiments and to account for them properly in validation procedures have been discussed in the literature by various groups. Besides the development of suitable methods, the quality and modeling issues of the freely available experimental data are in particular the focus of current discussions, carried out for example in the Expert Group on Uncertainty Analysis for Criticality Safety Assessment (UACSA) of the OECD-NEA Nuclear Science Committee. The same committee also compiles and publishes the freely available experimental data in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. Most of these experiments were performed as series and might share parts of their experimental setups, leading to correlated results. The quality of the determination of these correlations and of the underlying covariance data depends strongly on the quality of the documentation of the experiments.

  18. Proceedings of the workshop on integral experiment covariance data for critical safety validation

    International Nuclear Information System (INIS)

    Stuke, Maik

    2016-04-01

    For some time, attempts to quantify the statistical dependencies of critical experiments and to account for them properly in validation procedures have been discussed in the literature by various groups. Besides the development of suitable methods, the quality and modeling issues of the freely available experimental data are in particular the focus of current discussions, carried out for example in the Expert Group on Uncertainty Analysis for Criticality Safety Assessment (UACSA) of the OECD-NEA Nuclear Science Committee. The same committee also compiles and publishes the freely available experimental data in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. Most of these experiments were performed as series and might share parts of their experimental setups, leading to correlated results. The quality of the determination of these correlations and of the underlying covariance data depends strongly on the quality of the documentation of the experiments.

  19. Experiments of Laser Pointing Stability in Air and in Vacuum to Validate Micrometric Positioning Sensor

    CERN Document Server

    Stern, G; Piedigrossi, D; Sandomierski, J; Sosin, M; Geiger, A; Guillaume, S

    2014-01-01

    Aligning accelerator components over 200m with 10 μm accuracy is a challenging task within the Compact Linear Collider (CLIC) study. A solution based on laser beam in vacuum as straight line reference is proposed. The positions of the accelerator’s components are measured with respect to the laser beam by sensors made of camera/shutter assemblies. To validate these sensors, laser pointing stability has to be studied over 200m. We perform experiments in air and in vacuum in order to know how laser pointing stability varies with the distance of propagation and with the environment. The experiments show that the standard deviations of the laser spot coordinates increase with the distance of propagation. They also show that the standard deviations are much smaller in vacuum (8 μm at 35m) than in air (2000 μm at 200m). Our experiment validates the concept of laser beam in vacuum with camera/shutter assembly for micrometric positioning over 35m. It also gives an estimation of the achievable precision.

  20. Optimal Design and Model Validation for Combustion Experiments in a Shock Tube

    KAUST Repository

    Long, Quan

    2014-01-06

    We develop a Bayesian framework for the optimal experimental design of shock tube experiments being carried out at the KAUST Clean Combustion Center. The unknown parameters are the pre-exponential factors and the activation energies in the reaction rate functions. The control parameters are the initial hydrogen concentration and the temperature. First, we build a polynomial-based surrogate model for the observable related to the reactions in the shock tube. Second, we use a novel MAP-based approach to estimate the expected information gain in the proposed experiments and select the best experimental set-ups corresponding to the optimal expected information gains. Third, we use synthetic data to carry out a virtual validation of our methodology.
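
    Expected information gain is the prior-to-posterior information gain averaged over the data the experiment might produce. The sketch below estimates it with a plain double-loop Monte Carlo estimator for a toy Arrhenius-style observable; the model, priors, noise level and candidate temperatures are hypothetical stand-ins, and the paper's surrogate-plus-MAP approach is replaced here by brute-force sampling.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def model(theta, temp):
        """Toy Arrhenius-style observable, e.g. a log ignition delay: E/T - log A."""
        logA, E = theta
        return E / temp - logA

    def sample_prior(n):
        """Hypothetical priors on log A and the activation temperature E (K)."""
        return np.column_stack([rng.normal(10.0, 1.0, n), rng.normal(8000.0, 500.0, n)])

    def eig(temp, n_outer=400, n_inner=400, sigma=0.05):
        """Double-loop Monte Carlo estimate of the expected information gain."""
        theta_out = sample_prior(n_outer)
        y = model(theta_out.T, temp) + rng.normal(0.0, sigma, n_outer)
        theta_in = sample_prior(n_inner)
        gains = np.empty(n_outer)
        for i in range(n_outer):
            loglik = -0.5 * ((y[i] - model(theta_out[i], temp)) / sigma) ** 2
            log_w = -0.5 * ((y[i] - model(theta_in.T, temp)) / sigma) ** 2
            log_evidence = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
            gains[i] = loglik - log_evidence   # Gaussian normalising constants cancel
        return gains.mean()

    # Rank hypothetical initial temperatures (K); larger EIG => more informative run
    for T in (900.0, 1100.0, 1300.0):
        print(T, round(eig(T), 2))
    ```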

  1. Development and validation of the Consumer Quality index instrument to measure the experience and priority of chronic dialysis patients

    NARCIS (Netherlands)

    van der Veer, Sabine N.; Jager, Kitty J.; Visserman, Ella; Beekman, Robert J.; Boeschoten, Els W.; de Keizer, Nicolette F.; Heuveling, Lara; Stronks, Karien; Arah, Onyebuchi A.

    2012-01-01

    Patient experience is an established indicator of quality of care. Validated tools that measure both experiences and priorities are lacking for chronic dialysis care, hampering identification of negative experiences that patients actually rate important. We developed two Consumer Quality (CQ) index

  2. [Questionnaire on dissociative symptoms. German adaptation, reliability and validity of the American Dissociative Experience Scale (DES)].

    Science.gov (United States)

    Freyberger, H J; Spitzer, C; Stieglitz, R D; Kuhn, G; Magdeburg, N; Bernstein-Carlson, E

    1998-06-01

    The "Fragebogen zu dissoziativen Symptomen (FDS)" represents the authorised German translation and adaptation of the "Dissociative Experience Scale" (DES; Bernstein and Putnam 1986). The original scale comprises 28 items covering dissociative experiences with regard to memory, identity, awareness and cognition according to DSM-III-R and DSM-IV. For the German version, 16 items were added to cover dissociative phenomena according to ICD-10, mainly pseudoneurological conversion symptoms. Reliability and validity of the German version were studied in a total sample of 813 persons and were compared to the results of the original version. Test-retest reliability of the FDS was rtt = 0.88 and Cronbach's consistency coefficient was alpha = 0.93, which is comparable to the results of the DES. The instrument differentiates between different samples (healthy control subjects, students, unselected neurological and psychiatric inpatients, neurological and psychiatric patients with a dissociative disorder and schizophrenics). The FDS is an easily applicable, reliable and valid measure to quantify dissociative experiences.

  3. CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave/Boundary-Layer Interaction

    Science.gov (United States)

    Davis, David O.

    2015-01-01

    Experimental investigations of specific flow phenomena, e.g., Shock-Wave/Boundary-Layer Interactions (SWBLI), provide great insight into the flow behavior but often lack the details necessary to be useful as CFD validation experiments. Reasons include: (1) undefined boundary conditions; (2) inconsistent results; (3) undocumented 3D effects (CL-only measurements); and (4) lack of uncertainty analysis. While there are a number of good subsonic experimental investigations that are sufficiently documented to be considered test cases for CFD and turbulence model validation, the number of supersonic and hypersonic cases is much smaller. This was highlighted by Settles and Dodson's [1] comprehensive review of available supersonic and hypersonic experimental studies. In all, several hundred studies were considered for their database. Of these, over a hundred were subjected to rigorous acceptance criteria. Based on their criteria, only 19 (12 supersonic, 7 hypersonic) were considered of sufficient quality to be used for validation purposes. Aeschliman and Oberkampf [2] recognized the need to develop a specific methodology for experimental studies intended specifically for validation purposes.

  4. Validation experiments of the chimney model for the operational simulation of hydrogen recombiners

    International Nuclear Information System (INIS)

    Simon, Berno

    2013-01-01

    The calculation program REKO-DIREKT allows the simulation of the operational behavior of a hydrogen recombiner during accidents with hydrogen release. The interest is focused on the interaction between the catalyst insert and the chimney, which significantly influences the natural ventilation and thus the throughput through the recombiner. For validation, experiments were performed with a small-scale recombiner model in the test facility REKO-4. The results show the correlation between the hydrogen concentration at the recombiner inlet, the temperature of the catalyst sheets, and the inlet velocity for different chimney heights. The inlet velocity increases with the height of the installed chimney, which significantly influences the natural ventilation. The results provide a broad database for the validation of the computer code REKO-DIREKT.

  5. Validation of a numerical FSI simulation of an aortic BMHV by in vitro PIV experiments.

    Science.gov (United States)

    Annerel, S; Claessens, T; Degroote, J; Segers, P; Vierendeels, J

    2014-08-01

    In this paper, a validation of a recently developed fluid-structure interaction (FSI) coupling algorithm to simulate numerically the dynamics of an aortic bileaflet mechanical heart valve (BMHV) is performed. This validation is done by comparing the numerical simulation results with in vitro experiments. For the in vitro experiments, the leaflet kinematics and flow fields are obtained via the particle image velocimetry (PIV) technique. Subsequently, the same case is numerically simulated by the coupling algorithm and the resulting leaflet kinematics and flow fields are obtained. Finally, the results are compared, revealing great similarity in leaflet motion and flow fields between the numerical simulation and the experimental test. Therefore, it is concluded that the developed algorithm is able to capture very accurately all the major leaflet kinematics and dynamics and can be used to study and optimize the design of BMHVs.

  6. Generation of integral experiment covariance data and their impact on criticality safety validation

    Energy Technology Data Exchange (ETDEWEB)

    Stuke, Maik; Peters, Elisabeth; Sommer, Fabian

    2016-11-15

    The quantification of statistical dependencies in data of critical experiments and how to account for them properly in validation procedures has been discussed in the literature by various groups. However, these subjects are still an active topic in the Expert Group on Uncertainty Analysis for Criticality Safety Assessment (UACSA) of the OECD/NEA Nuclear Science Committee. The latter compiles and publishes the freely available experimental data collection, the International Handbook of Evaluated Criticality Safety Benchmark Experiments, ICSBEP. Most of the experiments were performed as series and share parts of experimental setups, consequently leading to correlation effects in the results. The correct consideration of correlated data seems to be inevitable if the experimental data in a validation procedure is limited or one cannot rely on a sufficient number of uncorrelated data sets, e.g. from different laboratories using different setups. The general determination of correlations and the underlying covariance data, as well as their consideration in a validation procedure, is the focus of the following work. We discuss and demonstrate possible effects on calculated keff's, their uncertainties, and the corresponding covariance matrices due to the interpretation of evaluated experimental data and its translation into calculation models. The work shows effects of various modeling approaches and varying distribution functions of parameters, and compares and discusses results from the applied Monte-Carlo sampling method with available data on correlations. Our findings indicate that for the reliable determination of integral experimental covariance matrices or the correlation coefficients, a detailed study of the underlying experimental data, the modeling approach and assumptions made, and the resulting sensitivity analysis seems to be inevitable. Further, a Bayesian method is discussed to include integral experimental covariance data when estimating an
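
    The Monte-Carlo sampling idea named above can be illustrated with a toy model: perturb a parameter shared across two cases of one series plus case-specific parameters, propagate to keff through assumed linear sensitivities, and read off the correlation. All numbers below are invented, not taken from ICSBEP evaluations:

```python
import numpy as np

def sampled_keff_correlation(n_samples=10000, seed=2):
    """Toy Monte-Carlo estimate of the correlation between two benchmark
    cases of an experiment series (linear sensitivity model, made-up data)."""
    rng = np.random.default_rng(seed)
    # Shared systematic uncertainty (e.g. enrichment of the common fuel
    # batch) vs. independent ones (e.g. local geometry tolerances).
    enrichment = rng.normal(size=n_samples)   # shared between both cases
    geom_a = rng.normal(size=n_samples)       # case A only
    geom_b = rng.normal(size=n_samples)       # case B only
    # keff perturbations via assumed sensitivities (pcm per 1-sigma).
    keff_a = 1.0000 + 1e-5 * (300 * enrichment + 150 * geom_a)
    keff_b = 0.9990 + 1e-5 * (280 * enrichment + 200 * geom_b)
    return np.corrcoef(keff_a, keff_b)[0, 1]

print(f"correlation coefficient ~ {sampled_keff_correlation():.2f}")
```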

  7. Generation of integral experiment covariance data and their impact on criticality safety validation

    International Nuclear Information System (INIS)

    Stuke, Maik; Peters, Elisabeth; Sommer, Fabian

    2016-11-01

    The quantification of statistical dependencies in data of critical experiments and how to account for them properly in validation procedures has been discussed in the literature by various groups. However, these subjects are still an active topic in the Expert Group on Uncertainty Analysis for Criticality Safety Assessment (UACSA) of the OECD/NEA Nuclear Science Committee. The latter compiles and publishes the freely available experimental data collection, the International Handbook of Evaluated Criticality Safety Benchmark Experiments, ICSBEP. Most of the experiments were performed as series and share parts of experimental setups, consequently leading to correlation effects in the results. The correct consideration of correlated data seems to be inevitable if the experimental data in a validation procedure is limited or one cannot rely on a sufficient number of uncorrelated data sets, e.g. from different laboratories using different setups. The general determination of correlations and the underlying covariance data, as well as their consideration in a validation procedure, is the focus of the following work. We discuss and demonstrate possible effects on calculated keff's, their uncertainties, and the corresponding covariance matrices due to the interpretation of evaluated experimental data and its translation into calculation models. The work shows effects of various modeling approaches and varying distribution functions of parameters, and compares and discusses results from the applied Monte-Carlo sampling method with available data on correlations. Our findings indicate that for the reliable determination of integral experimental covariance matrices or the correlation coefficients, a detailed study of the underlying experimental data, the modeling approach and assumptions made, and the resulting sensitivity analysis seems to be inevitable. Further, a Bayesian method is discussed to include integral experimental covariance data when estimating an application

  8. The Sensed Presence Questionnaire (SenPQ): initial psychometric validation of a measure of the “Sensed Presence” experience

    Directory of Open Access Journals (Sweden)

    Joseph M. Barnby

    2017-03-01

    Background: The experience of ‘sensed presence’ (a feeling or sense that another entity, individual or being is present despite no clear sensory or perceptual evidence) is known to occur in the general population, appears more frequently in religious or spiritual contexts, is prominent in certain psychiatric or neurological conditions, and may reflect specific functions of social cognition or body-image representation systems in the brain. Previous research has relied on ad-hoc measures of the experience, and no specific psychometric scale to measure it exists to date. Methods: Based on phenomenological descriptions in the literature, we created the 16-item Sensed Presence Questionnaire (SenPQ). We recruited participants from (i) a general population sample and (ii) a sample including specific selection for religious affiliation, to complete the SenPQ and additional measures of well-being, schizotypy, social anxiety, social imagery, and spiritual experience. We completed an analysis to test internal reliability, the ability of the SenPQ to distinguish between religious and non-religious participants, and whether the SenPQ was specifically related to positive schizotypal experiences and social imagery. A factor analysis was also conducted to examine underlying latent variables. Results: The SenPQ was found to be reliable and valid, with religious participants endorsing significantly more items than non-religious participants, and the scale showing a selective relationship with construct-relevant measures. Principal components analysis indicated two potential underlying factors, interpreted as reflecting ‘benign’ and ‘malign’ sensed presence experiences. Discussion: The SenPQ appears to be a reliable and valid measure of the sensed presence experience, although further validation in neurological and psychiatric conditions is warranted.
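
    The principal components step reported above is standard; here is a compact sketch with synthetic two-factor data loosely mimicking a benign/malign item split. It is illustrative only, not the SenPQ data:

```python
import numpy as np

def principal_components(scores, n_keep=2):
    """Eigen-decomposition of the item correlation matrix, the first step
    of a principal components analysis of questionnaire items."""
    z = (scores - scores.mean(0)) / scores.std(0, ddof=1)
    corr = np.corrcoef(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order][:n_keep], eigvecs[:, order][:, :n_keep]

# Synthetic 16-item data driven by two latent factors.
rng = np.random.default_rng(3)
benign, malign = rng.normal(size=(2, 500, 1))
items = np.hstack([
    0.7 * benign + rng.normal(scale=0.7, size=(500, 8)),
    0.7 * malign + rng.normal(scale=0.7, size=(500, 8)),
])
eigvals, loadings = principal_components(items)
print("variance explained by first two components:",
      np.round(eigvals / items.shape[1], 2))
```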

  9. Validation of large-angle scattering data via shadow-bar experiment

    Energy Technology Data Exchange (ETDEWEB)

    Ohnishi, S., E-mail: ohnishi@nmri.go.jp [National Maritime Research Institute, 6-38-1, Shinkawa, Mitaka, Tokyo 181-0004 (Japan); Tamaki, S.; Murata, I. [Osaka University, 1-14-16-1, Yamadaoka, Suita-si, Osaka 565-0871 (Japan)

    2016-11-15

    Highlights: • An experiment to validate large-angle scattering cross sections is conducted. • Pieces of Nb foil are set behind a shadow bar to obtain the 92mNb production rates. • The results calculated using ENDF/B-VI library data exhibit a 57% overestimation. • Adjusting the cross section in the large-angle region brings the C/E close to 1. - Abstract: An experiment emphasizing the influence of large-angle scattering on nuclear data was conducted, in which an Fe shadow bar and an Fe slab target were placed before a deuterium-tritium fusion (DT) neutron source. Two Nb foils were set on both sides of the shadow bar in order to monitor the neutron source intensity and to measure the neutrons scattered from the slab target. The 93Nb(n,2n)92mNb reaction rate of the foil was measured following the DT neutron irradiation and calculated using the MCNP5 Monte Carlo radiation transport code. The 92mNb production rates calculated using data from the JEFF-3.1 and JENDL-4.0 libraries agreed with that measured in the experiment, while the result calculated using data from the ENDF/B-VI library exhibited a 57% overestimation. Because the sensitivity of the 92mNb production rate to the scattering angular distribution was large in the angular region between scattering direction cosines of −0.9 and −0.4, the scattering angular distribution was adjusted in that region. This adjustment resulted in a calculation-to-experiment ratio close to 1, but had little influence on the existing integral benchmark experiment.
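
    The quantity compared as C/E is a reaction rate, i.e., the foil activation cross section folded with the neutron flux. A toy group-wise version with made-up numbers (not the experiment's flux or the evaluated 93Nb(n,2n) data):

```python
import numpy as np

def reaction_rate(flux_per_group, xs_per_group):
    """Group-wise folding R = sum_g phi_g * sigma_g, per target atom."""
    return float(np.dot(flux_per_group, xs_per_group))

# Hypothetical 3-group flux (n/cm^2/s) and 93Nb(n,2n) cross sections (barn),
# nonzero only above the reaction threshold (~9 MeV), peaking near 14 MeV.
flux = np.array([1.0e8, 5.0e7, 2.0e9])
xs_cm2 = np.array([0.0, 0.1, 0.46]) * 1e-24   # barn -> cm^2
calc = reaction_rate(flux, xs_cm2)
measured = 1.05e-15                           # assumed value, per atom per s
print(f"C/E = {calc / measured:.2f}")
```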

  10. Decay heat experiment and validation of calculation code systems for fusion reactor

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Wada, Masayuki

    1999-10-01

    Although accurate estimation of the decay heat value is essential for safety analyses of fusion reactors against loss-of-coolant accidents and so on, no experimental work has been devoted to validating the estimation. Hence, a decay heat measurement experiment was performed as a task (T-339) of ITER/EDA. A new detector, the Whole Energy Absorption Spectrometer (WEAS), was developed for accurate and efficient measurements of decay heat. Decay heat produced in the thirty-two sample materials which were irradiated by 14-MeV neutrons at FNS/JAERI was measured with WEAS for a wide cooling-time period from 1 min to 400 days. The data presently obtained were the first experimental decay heat data in the field of fusion. The validity of the decay heat calculation codes ACT4 and CINAC-V4, the activation cross-section libraries FENDL/A-2.0 and the JENDL Activation File, and decay data was investigated through analyses of the experiment. As a result, several points that should be modified were found in the codes and data. After solving these problems, it was demonstrated that the decay heat values calculated for most of the samples were in good agreement with the experimental data. Especially for stainless steel 316 and copper, which are important materials for ITER, decay heat could be predicted with an accuracy of ±10%. (author)
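
    Decay heat itself is computed by summation over the activated inventory. A minimal sketch for independently decaying nuclides, with placeholder nuclide data rather than evaluated decay data or the ACT4/CINAC-V4 methods:

```python
import numpy as np

def decay_heat(t_seconds, inventory):
    """Decay-heat summation for independently decaying nuclides (no chains):
    P(t) = sum_i lambda_i * N_i(0) * exp(-lambda_i * t) * Q_i."""
    watts = 0.0
    for n0, half_life_s, q_mev in inventory:
        lam = np.log(2.0) / half_life_s
        watts += lam * n0 * np.exp(-lam * t_seconds) * q_mev * 1.602e-13
    return watts

# (initial atoms, half-life in s, mean decay energy in MeV) per nuclide;
# values are illustrative placeholders.
inventory = [(1.0e18, 6.0e2, 1.2), (5.0e17, 8.6e4, 0.8)]
for t in (60.0, 3.6e3, 8.64e4):
    print(f"t = {t:>8.0f} s : {decay_heat(t, inventory):.3e} W")
```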

  11. Decay heat experiment and validation of calculation code systems for fusion reactor

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Ikeda, Yujiro; Wada, Masayuki

    1999-10-01

    Although accurate estimation of the decay heat value is essential for safety analyses of fusion reactors against loss-of-coolant accidents and so on, no experimental work has been devoted to validating the estimation. Hence, a decay heat measurement experiment was performed as a task (T-339) of ITER/EDA. A new detector, the Whole Energy Absorption Spectrometer (WEAS), was developed for accurate and efficient measurements of decay heat. Decay heat produced in the thirty-two sample materials which were irradiated by 14-MeV neutrons at FNS/JAERI was measured with WEAS for a wide cooling-time period from 1 min to 400 days. The data presently obtained were the first experimental decay heat data in the field of fusion. The validity of the decay heat calculation codes ACT4 and CINAC-V4, the activation cross-section libraries FENDL/A-2.0 and the JENDL Activation File, and decay data was investigated through analyses of the experiment. As a result, several points that should be modified were found in the codes and data. After solving these problems, it was demonstrated that the decay heat values calculated for most of the samples were in good agreement with the experimental data. Especially for stainless steel 316 and copper, which are important materials for ITER, decay heat could be predicted with an accuracy of ±10%. (author)

  12. Development and validation of the Consumer Quality index instrument to measure the experience and priority of chronic dialysis patients.

    Science.gov (United States)

    van der Veer, Sabine N; Jager, Kitty J; Visserman, Ella; Beekman, Robert J; Boeschoten, Els W; de Keizer, Nicolette F; Heuveling, Lara; Stronks, Karien; Arah, Onyebuchi A

    2012-08-01

    Patient experience is an established indicator of quality of care. Validated tools that measure both experiences and priorities are lacking for chronic dialysis care, hampering identification of negative experiences that patients actually rate important. We developed two Consumer Quality (CQ) index questionnaires, one for in-centre haemodialysis (CHD) and the other for peritoneal dialysis and home haemodialysis (PHHD) care. The instruments were validated using exploratory factor analyses, reliability analysis of identified scales and assessing the association between reliable scales and global ratings. We investigated opportunities for improvement by combining suboptimal experience with patient priority. Sixteen dialysis centres participated in our study. The pilot CQ index for CHD care consisted of 71 questions. Based on data of 592 respondents, we identified 42 core experience items in 10 scales with Cronbach's α ranging from 0.38 to 0.88; five were reliable (α ≥ 0.70). The instrument identified information on centres' fire procedures as the aspect of care exhibiting the biggest opportunity for improvement. The pilot CQ index PHHD comprised 56 questions. The response of 248 patients yielded 31 core experience items in nine scales with Cronbach's α ranging between 0.53 and 0.85; six were reliable. Information on kidney transplantation during pre-dialysis showed most room for improvement. However, for both types of care, opportunities for improvement were mostly limited. The CQ index reliably and validly captures dialysis patient experience. Overall, most care aspects showed limited room for improvement, mainly because patients participating in our study rated their experience to be optimal. To evaluate items with high priority, but with which relatively few patients have experience, more qualitative instruments should be considered.

  13. Recent validation experience with multigroup cross-section libraries and scale

    International Nuclear Information System (INIS)

    Bowman, S.M.; Wright, R.Q.; DeHart, M.D.; Parks, C.V.; Petrie, L.M.

    1995-01-01

    This paper will discuss the results obtained and lessons learned from an extensive validation of new ENDF/B-V and ENDF/B-VI multigroup cross-section libraries using analyses of critical experiments. The KENO V.a Monte Carlo code in version 4.3 of the SCALE computer code system was used to perform the critical benchmark calculations via the automated SCALE sequence CSAS25. The cross-section data were processed by the SCALE automated problem-dependent resonance-processing procedure included in this sequence. Prior to calling KENO V.a, CSAS25 accesses BONAMI to perform resonance self-shielding for nuclides with Bondarenko factors and NITAWL-II to process nuclides with resonance parameter data via the Nordheim Integral Treatment.

  14. Development and validation of the BRIGHTLIGHT Survey, a patient-reported experience measure for young people with cancer.

    Science.gov (United States)

    Taylor, Rachel M; Fern, Lorna A; Solanki, Anita; Hooker, Louise; Carluccio, Anna; Pye, Julia; Jeans, David; Frere-Smith, Tom; Gibson, Faith; Barber, Julie; Raine, Rosalind; Stark, Dan; Feltbower, Richard; Pearce, Susie; Whelan, Jeremy S

    2015-07-28

    Patient experience is increasingly used as an indicator of high quality care in addition to more traditional clinical end-points. Surveys are generally accepted as appropriate methodology to capture patient experience. No validated patient experience surveys exist specifically for adolescents and young adults (AYA) aged 13-24 years at diagnosis with cancer. This paper describes early work undertaken to develop and validate a descriptive patient experience survey for AYA with cancer that encompasses both their cancer experience and age-related issues. We aimed to develop, with young people, an experience survey meaningful and relevant to AYA to be used in a longitudinal cohort study (BRIGHTLIGHT), ensuring high levels of acceptability to maximise study retention. A three-stage approach was employed: Stage 1 involved developing a conceptual framework, conducting literature/Internet searches and establishing content validity of the survey; Stage 2 confirmed the acceptability of methods of administration and consisted of four focus groups involving 11 young people (14-25 years), three parents and two siblings; and Stage 3 established survey comprehension through telephone-administered cognitive interviews with a convenience sample of 23 young people aged 14-24 years. Stage 1: Two hundred and thirty-eight questions were developed from qualitative reports of young people's cancer and treatment-related experience. Stage 2: The focus groups identified three core themes: (i) issues directly affecting young people, e.g. impact of treatment-related fatigue on ability to complete the survey; (ii) issues relevant to the actual survey, e.g. ability to answer questions anonymously; and (iii) administration issues, e.g. confusing format in some supporting documents. Stage 3: Cognitive interviews indicated high levels of comprehension, requiring only minor survey amendments. Collaborating with young people with cancer has enabled a survey to be developed that is both meaningful and relevant to young people.

  15. EPIC Calibration/Validation Experiment Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Koch, Steven E [National Severe Storm Laboratory/NOAA; Chilson, Phillip [University of Oklahoma; Argrow, Brian [University of Colorado

    2017-03-15

    A field exercise involving several different kinds of Unmanned Aerial Systems (UAS) and supporting instrumentation systems provided by DOE/ARM and NOAA/NSSL was conducted at the ARM SGP site in Lamont, Oklahoma on 29-30 October 2016. This campaign was part of a larger National Oceanic and Atmospheric Administration (NOAA) UAS Program Office program awarded to the National Severe Storms Laboratory (NSSL), named Environmental Profiling and Initiation of Convection (EPIC). The EPIC Field Campaign (Test and Calibration/Validation) proposed to ARM was a test or “dry-run” for a follow-up campaign to be requested for spring/summer 2017. The EPIC project addresses NOAA’s objective to “evaluate options for UAS profiling of the lower atmosphere with applications for severe weather.” The project goal is to demonstrate that fixed-wing and rotary-wing small UAS have the combined potential to provide a unique observing system capable of providing detailed profiles of temperature, moisture, and winds within the atmospheric boundary layer (ABL) to help determine the potential for severe weather development. Specific project objectives are: 1) to develop small UAS capable of acquiring needed wind and thermodynamic profiles and transects of the ABL using one fixed-wing UAS operating in tandem with two different rotary-wing UAS pairs; 2) to adapt and test miniaturized, high-precision, fast-response atmospheric sensors with high accuracy in strong winds characteristic of the pre-convective ABL in Oklahoma; 3) to conduct targeted short-duration experiments at the ARM Southern Great Plains site in northern Oklahoma concurrently with a second site to be chosen in real time from the Oklahoma Mesonet in coordination with the National Weather Service (NWS) Norman Forecast Office; and 4) to gain valuable experience in pursuit of NOAA’s goals for determining the value of airborne, mobile observing systems for monitoring rapidly evolving high-impact severe weather.

  16. The inventory for déjà vu experiences assessment. Development, utility, reliability, and validity

    NARCIS (Netherlands)

    Sno, H. N.; Schalken, H. F.; de Jonghe, F.; Koeter, M. W.

    1994-01-01

    In this article the development, utility, reliability, and validity of the Inventory for Déjà vu Experiences Assessment (IDEA) are described. The IDEA is a 23-item self-administered questionnaire consisting of a general section of nine questions and a qualitative section of 14 questions. The latter

  17. Improving the quality of discrete-choice experiments in health: how can we assess validity and reliability?

    Science.gov (United States)

    Janssen, Ellen M; Marshall, Deborah A; Hauber, A Brett; Bridges, John F P

    2017-12-01

    The recent endorsement of discrete-choice experiments (DCEs) and other stated-preference methods by regulatory and health technology assessment (HTA) agencies has placed a greater focus on demonstrating the validity and reliability of preference results. Areas covered: We present a practical overview of tests of validity and reliability that have been applied in the health DCE literature and explore other study qualities of DCEs. From the published literature, we identify a variety of methods to assess the validity and reliability of DCEs. We conceptualize these methods to create a conceptual model with four domains: measurement validity, measurement reliability, choice validity, and choice reliability. Each domain consists of three categories that can be assessed using one to four procedures (for a total of 24 tests). We present how these tests have been applied in the literature and direct readers to applications of these tests in the health DCE literature. Based on a stakeholder engagement exercise, we consider the importance of study characteristics beyond traditional concepts of validity and reliability. Expert commentary: We discuss study design considerations to assess the validity and reliability of a DCE, consider limitations to the current application of tests, and discuss future work to consider the quality of DCEs in healthcare.

  18. STORMVEX: The Storm Peak Lab Cloud Property Validation Experiment Science and Operations Plan

    Energy Technology Data Exchange (ETDEWEB)

    Mace, J; Matrosov, S; Shupe, M; Lawson, P; Hallar, G; McCubbin, I; Marchand, R; Orr, B; Coulter, R; Sedlacek, A; Avallone, L; Long, C

    2010-09-29

    During the Storm Peak Lab Cloud Property Validation Experiment (STORMVEX), a substantial correlative data set of remote sensing observations and direct in situ measurements from fixed and airborne platforms will be created in a winter-season, mountainous environment. This will be accomplished by combining mountaintop observations at Storm Peak Laboratory and the airborne National Science Foundation-supported Colorado Airborne Multi-Phase Cloud Study campaign with collocated measurements from the second ARM Mobile Facility (AMF2). We describe in this document the operational plans and motivating science for this experiment, which includes deployment of AMF2 to Steamboat Springs, Colorado. The intensive STORMVEX field phase will begin nominally on 1 November 2010 and extend to approximately early April 2011.

  19. Validation of mechanistic models for gas precipitation in solids during postirradiation annealing experiments

    Science.gov (United States)

    Rest, J.

    1989-12-01

    A number of different phenomenological models for gas precipitation in solids during postirradiation annealing experiments have been proposed. Validation of such mechanistic models for gas release and swelling is complicated by the use of data containing large systematic errors, and phenomena characterized by synergistic effects as well as uncertainties in materials properties. Statistical regression analysis is recommended for the selection of a reasonably well characterized data base for gas release from irradiated fuel under transient heating conditions. It is demonstrated that an appropriate data selection method is required in order to realistically examine the impact of differing descriptions of the phenomena, and uncertainties in selected materials properties, on the validation results. The results of the analysis show that the kinetics of gas precipitation in solids depend on bubble overpressurization effects and need to be accounted for during the heatup phase of isothermal heating experiments. It is shown that if only the total gas release values (as opposed to time-dependent data) were available, differentiation between different gas precipitation models would be ambiguous. The observed sustained increase in the fractional release curve at relatively high temperatures after the total precipitation of intragranular gas in fission gas bubbles is ascribed to the effects of a grain-growth/grain-boundary sweeping mechanism.

  20. Validation of mechanistic models for gas precipitation in solids during postirradiation annealing experiments

    International Nuclear Information System (INIS)

    Rest, J.

    1989-01-01

    A number of different phenomenological models for gas precipitation in solids during postirradiation annealing experiments have been proposed. Validation of such mechanistic models for gas release and swelling is complicated by the use of data containing large systematic errors, and phenomena characterized by synergistic effects as well as uncertainties in materials properties. Statistical regression analysis is recommended for the selection of a reasonably well characterized data base for gas release from irradiated fuel under transient heating conditions. It is demonstrated that an appropriate data selection method is required in order to realistically examine the impact of differing descriptions of the phenomena, and uncertainties in selected materials properties, on the validation results. The results of the analysis show that the kinetics of gas precipitation in solids depend on bubble overpressurization effects and need to be accounted for during the heatup phase of isothermal heating experiments. It is shown that if only the total gas release values (as opposed to time-dependent data) were available, differentiation between different gas precipitation models would be ambiguous. The observed sustained increase in the fractional release curve at relatively high temperatures after the total precipitation of intragranular gas in fission gas bubbles is ascribed to the effects of a grain-growth/grain-boundary sweeping mechanism. (orig.)

  1. [Caregiver's health: adaption and validation in a Spanish population of the Experience of Caregiving Inventory (ECI)].

    Science.gov (United States)

    Crespo-Maraver, Mariacruz; Doval, Eduardo; Fernández-Castro, Jordi; Giménez-Salinas, Jordi; Prat, Gemma; Bonet, Pere

    2018-04-04

    To adapt and to validate the Experience of Caregiving Inventory (ECI) in a Spanish population, providing empirical evidence of its internal consistency, internal structure and validity. Psychometric validation of the adapted version of the ECI. One hundred and seventy-two caregivers (69.2% women), mean age 57.51 years (range: 21-89), participated. Demographic and clinical data and standardized measures (ECI, suffering scale of the SCL-90-R, Zarit burden scale) were used. The two scales of negative evaluation of the ECI most related to serious mental disorders (disruptive behaviours [DB] and negative symptoms [NS]) and the two scales of positive appreciation (positive personal experiences [PPE] and good aspects of the relationship [GAR]) were analyzed. Exploratory structural equation modelling was used to analyze the internal structure. The relationship between the ECI scales and the SCL-90-R and Zarit scores was also studied. The four-factor model presented a good fit. Cronbach's alpha (DB: 0.873; NS: 0.825; PPE: 0.720; GAR: 0.578) showed higher homogeneity in the negative scales. The SCL-90-R scores correlated with the negative ECI scales, and none of the ECI scales correlated with the Zarit scale. The Spanish version of the ECI can be considered a valid, reliable, understandable and feasible self-report measure for administration in health and community contexts.

  2. Validation of the containment code Sirius: interpretation of an explosion experiment on a scale model

    International Nuclear Information System (INIS)

    Blanchet, Y.; Obry, P.; Louvet, J.; Deshayes, M.; Phalip, C.

    1979-01-01

    The explicit 2-D axisymmetric Lagrangian code SIRIUS, developed at the CEA/DRNR, Cadarache, deals with transient compressive flows in deformable primary tanks with more or less complex internal component geometries. This code has been subjected to a two-year intensive validation program on scale model experiments, and a number of improvements have been incorporated. This paper presents a recent calculation of one of these experiments using the SIRIUS code, and the comparison with experimental results shows the encouraging possibilities of this Lagrangian code.

  3. Father for the first time - development and validation of a questionnaire to assess fathers’ experiences of first childbirth (FTFQ)

    Directory of Open Access Journals (Sweden)

    Premberg Åsa

    2012-05-01

    Background: A father’s experience of the birth of his first child is important not only for his birth-giving partner but also for the father himself, his relationship with the mother, and the newborn. No validated questionnaire assessing first-time fathers' experiences during childbirth is currently available. Hence, the aim of this study was to develop and validate an instrument to assess first-time fathers’ experiences of childbirth. Method: Domains and items were initially derived from interviews with first-time fathers, and supplemented by a literature search and a focus group interview with midwives. The comprehensibility, comprehension and relevance of the items were evaluated by four paternity research experts, and a preliminary questionnaire was pilot tested in eight first-time fathers. A revised questionnaire was completed by 200 first-time fathers (response rate = 81%). Exploratory factor analysis using principal component analysis with varimax rotation was performed, and multitrait scaling analysis was used to test scaling assumptions. External validity was assessed by means of known-groups analysis. Results: Factor analysis yielded four factors comprising 22 items and accounting for 48% of the variance. The domains found were Worry, Information, Emotional support and Acceptance. Multitrait analysis confirmed the convergent and discriminant validity of the domains; however, Cronbach’s alpha did not meet conventional reliability standards in two domains. The questionnaire was sensitive to differences between groups of fathers hypothesized to differ on important sociodemographic or clinical variables. Conclusions: The questionnaire adequately measures important dimensions of first-time fathers’ childbirth experience and may be used to assess aspects of fathers’ experiences during childbirth. To obtain the FTFQ and permission for its use, please contact the corresponding author.

  4. Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD)

    Science.gov (United States)

    Generazio, Edward R.

    2015-01-01

    Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD), Manual v.1.2. The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that there is 95% confidence that the POD is greater than 90% (90/95 POD). Directed design of experiments for validating probability of detection capability of nondestructive evaluation (NDE) systems (DOEPOD) is a methodology implemented via software to serve as a diagnostic tool providing detailed analysis of POD test data, guidance on establishing data distribution requirements, and resolution of test issues. DOEPOD relies on the direct observation of detection occurrences. The DOEPOD capability has been developed to provide an efficient and accurate methodology that yields observed POD and confidence bounds for both hit-miss and signal-amplitude testing. DOEPOD does not assume prescribed POD logarithmic or similar functions with assumed adequacy over a wide range of flaw sizes and inspection system technologies, so multi-parameter curve fitting or model optimization approaches to generate a POD curve are not required. Applications of DOEPOD for supporting inspector qualifications are included.
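
    The 90/95 POD criterion cited above can be illustrated with elementary binomial statistics: when every flaw in a demonstration set is found, the one-sided 95% lower confidence bound on POD is p = 0.05^(1/n), which first exceeds 0.90 at n = 29. This is generic statistics, not the DOEPOD algorithm itself:

```python
def pod_lower_bound_all_hits(n_trials, confidence=0.95):
    """One-sided lower confidence bound on POD when every one of n_trials
    flaws is detected (Clopper-Pearson with zero misses):
    solve p**n = 1 - confidence for p."""
    return (1.0 - confidence) ** (1.0 / n_trials)

for n in (28, 29):
    lcb = pod_lower_bound_all_hits(n)
    verdict = "meets" if lcb >= 0.90 else "fails"
    print(f"{n}/{n} hits: 95% lower bound on POD = {lcb:.4f}"
          f" ({verdict} 90/95)")
```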

  5. Containment Code Validation Matrix

    International Nuclear Information System (INIS)

    Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah

    2014-01-01

    The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal-hydraulic code validation (NEA/CSNI/R(1993)14) and in-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of PWR, BWR, CANDU and VVER reactors. It also provides an overview of ex-vessel corium retention (the core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA, and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis is provided for each test. Along with a test description

  6. Analysis and evaluation of critical experiments for validation of neutron transport calculations

    International Nuclear Information System (INIS)

    Bazzana, S.; Blaumann, H; Marquez Damian, J.I

    2009-01-01

    The calculation schemes, computational codes and nuclear data used in neutronic design require validation to obtain reliable results. In the nuclear criticality safety field this reliability also translates into a higher level of safety in procedures involving fissile material. The International Criticality Safety Benchmark Evaluation Project is an OECD/NEA activity led by the United States, in which participants from over 20 countries evaluate and publish criticality safety benchmarks. The product of this project is a set of benchmark experiment evaluations that are published annually in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. With the recent participation of Argentina, this information is now available for use by the neutron calculation and criticality safety groups in Argentina. This work presents the methodology used for the evaluation of experimental data, some results obtained by the application of these methods, and some examples of the data available in the Handbook. [es

  7. Benchmark validation by means of pulsed sphere experiment at OKTAVIAN

    Energy Technology Data Exchange (ETDEWEB)

    Ichihara, Chihiro [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.; Hayashi, Shu A.; Kimura, Itsuro; Yamamoto, Junji; Takahashi, Akito

    1998-03-01

    In order to perform benchmark validation of the existing evaluated nuclear data for fusion-related materials, neutron leakage spectra from spherical piles were measured with a time-of-flight technique using the intense 14 MeV neutron source OKTAVIAN, in the energy range from 0.1 to 15 MeV. The neutron energy spectra were obtained as absolute values normalized per source neutron. The measured spectra were compared with those from theoretical calculations using a Monte Carlo neutron transport code, MCNP, with several libraries processed from the evaluated nuclear data files. The comparison considered the spectrum shape, the C/E values of neutron numbers integrated over 4 energy regions, and the calculated spectra unfolded by the number of collisions, especially those after a single collision. The new libraries predicted the experiment fairly well for Li, Cr, Mn, Cu and Mo. For Al, Si, Zr, Nb and W, the new data files gave fair predictions; however, C/E differed by more than 20% in several regions. For LiF, CF2, Ti and Co, no calculation could predict the experiment. A detailed discussion is given for the Cr, Mn and Cu samples. The EFF-2 calculation overestimated the Cr experiment by 24% in the 1-5 MeV neutron energy region, presumably because of overestimation of the inelastic cross section and the 52Cr(n,2n) cross section, and because of problems in the energy and angular distributions of secondary neutrons in EFF-2. For Cu, ENDF/B-VI and EFF-2 overestimated the experiment by about 20 to 30% in the energy range between 5 and 12 MeV, presumably from problems in the inelastic scattering cross section. (author)

  8. Langmuir probe-based observables for plasma-turbulence code validation and application to the TORPEX basic plasma physics experiment

    International Nuclear Information System (INIS)

    Ricci, Paolo; Theiler, C.; Fasoli, A.; Furno, I.; Labit, B.; Mueller, S. H.; Podesta, M.; Poli, F. M.

    2009-01-01

    The methodology for plasma-turbulence code validation is discussed, with focus on the quantities to use for the simulation-experiment comparison, i.e., the validation observables, and application to the TORPEX basic plasma physics experiment [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)]. The considered validation observables are deduced from Langmuir probe measurements and are ordered into a primacy hierarchy, according to the number of model assumptions and to the combinations of measurements needed to form each of them. The lowest levels of the primacy hierarchy correspond to observables that require the lowest number of model assumptions and measurement combinations, such as the statistical and spectral properties of the ion saturation current time trace, while at the highest levels, quantities such as particle transport are considered. The comparison of the observables at the lowest levels in the hierarchy is more stringent than at the highest levels. Examples of the use of the proposed observables are applied to a specific TORPEX plasma configuration characterized by interchange-driven turbulence.

  9. Fluid-structure interaction in non-rigid pipeline systems - large scale validation experiments

    International Nuclear Information System (INIS)

    Heinsbroek, A.G.T.J.; Kruisbrink, A.C.H.

    1993-01-01

    The fluid-structure interaction computer code FLUSTRIN, developed by DELFT HYDRAULICS, enables the user to determine dynamic fluid pressures, structural stresses and displacements in a liquid-filled pipeline system under transient conditions. As such, the code is a useful tool for process and mechanical engineers in the safe design and operation of pipeline systems in nuclear power plants. To validate FLUSTRIN, experiments have been performed in a large-scale 3D test facility. The test facility consists of a flexible pipeline system which is suspended by wires, bearings and anchors. Pressure surges, which excite the system, are generated by a fast-acting shut-off valve. Dynamic pressures, structural displacements and strains (in total 70 signals) have been measured under well determined initial and boundary conditions. The experiments have been simulated with FLUSTRIN, which solves the acoustic equations using the method of characteristics (fluid) and the finite element method (structure). The agreement between experiments and simulations is shown to be good: frequencies, amplitudes and wave phenomena are well predicted by the numerical simulations. It is demonstrated that an uncoupled water hammer computation would render unreliable and useless results. (author)
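
    For contrast with the coupled FSI solution, the uncoupled water-hammer baseline mentioned in the last sentence can be sketched with the method of characteristics on a single reservoir-pipe-valve line. All parameters are illustrative, not the test facility's:

```python
import numpy as np

# Rigid-pipe (uncoupled) water hammer by the method of characteristics,
# with instantaneous valve closure at t = 0.
L, a, D, f = 200.0, 1200.0, 0.2, 0.02   # length m, wave speed m/s, bore m, friction
g, H0, Q0 = 9.81, 50.0, 0.05            # gravity, reservoir head m, initial flow m^3/s
N = 20                                   # pipe reaches
A = np.pi * D**2 / 4
B = a / (g * A)                          # characteristic impedance
R = f * (L / N) / (2 * g * D * A**2)     # friction coefficient per reach

H = np.full(N + 1, H0)                   # crude steady state (friction ignored)
Q = np.full(N + 1, Q0)
history = []
for step in range(400):                  # dt = (L/N)/a per step
    Hn, Qn = H.copy(), Q.copy()
    for i in range(1, N):                # interior: intersect C+ and C-
        cp = H[i - 1] + B * Q[i - 1] - R * Q[i - 1] * abs(Q[i - 1])
        cm = H[i + 1] - B * Q[i + 1] + R * Q[i + 1] * abs(Q[i + 1])
        Hn[i] = 0.5 * (cp + cm)
        Qn[i] = (cp - cm) / (2 * B)
    Hn[0] = H0                           # upstream reservoir: fixed head
    cm = H[1] - B * Q[1] + R * Q[1] * abs(Q[1])
    Qn[0] = (Hn[0] - cm) / B
    Qn[N] = 0.0                          # downstream valve: closed
    Hn[N] = H[N - 1] + B * Q[N - 1] - R * Q[N - 1] * abs(Q[N - 1])
    H, Q = Hn, Qn
    history.append(H[N])

# The Joukowsky rise a*V0/g should bound the first surge at the valve.
print(f"peak head at valve ~ {max(history):.1f} m "
      f"(Joukowsky estimate {H0 + a * (Q0 / A) / g:.1f} m)")
```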

  10. Experiences using IAEA Code of practice for radiation sterilization of tissue allografts: Validation and routine control

    Energy Technology Data Exchange (ETDEWEB)

    Hilmy, N. [Batan Research Tissue Bank (BRTB), Centre for Research and Development of Isotopes and Radiation Technology, P.O. Box 7002, JKSKL, Jakarta 12070 (Indonesia)], E-mail: nazly@batan.go.id; Febrida, A.; Basril, A. [Batan Research Tissue Bank (BRTB), Centre for Research and Development of Isotopes and Radiation Technology, P.O. Box 7002, JKSKL, Jakarta 12070 (Indonesia)

    2007-11-15

    The problems in using International Standard (ISO) 11137 for validation of the radiation sterilization dose (RSD) of tissue allografts are the limited and low numbers of uniform samples per production batch, i.e., products obtained from one donor. An allograft is a graft transplanted between two different individuals of the same species. The minimum number of uniform samples needed for a verification dose (VD) experiment at the selected sterility assurance level (SAL) per production batch according to the IAEA Code is 20, i.e., 10 for bioburden determination and the remaining 10 for the sterility test. Three methods of the IAEA Code have been used for validation of the RSD, i.e., method A1, which is a modification of method 1 of ISO 11137:1995, method B (ISO 13409:1996), and method C (AAMI TIR 27:2001). This paper describes VD experiments using uniform products obtained from one cadaver donor, i.e., cancellous bones and demineralized bone powders, and amnion grafts from one living donor. Results of the verification dose experiments show that the RSD is 15.4 kGy for cancellous and demineralized bone grafts and 19.2 kGy for amnion grafts according to method A1, and 25 kGy according to methods B and C.

  11. Use of integral experiments in support to the validation of JEFF-3.2 nuclear data evaluation

    Science.gov (United States)

    Leclaire, Nicolas; Cochet, Bertrand; Jinaphanh, Alexis; Haeck, Wim

    2017-09-01

    For many years now, IRSN has developed its own Monte Carlo continuous energy capability, which allows testing various nuclear data libraries. In that prospect, a validation database of 1136 experiments was built from cases used for the validation of the APOLLO2-MORET 5 multigroup route of the CRISTAL V2.0 package. In this paper, the keff values obtained for more than 200 benchmarks using the JEFF-3.1.1 and JEFF-3.2 libraries are compared to benchmark keff values, and the main discrepancies are analyzed with regard to the neutron spectrum. Special attention is paid to benchmarks for which the results have been strongly modified between the two JEFF-3 versions.
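
    The benchmark comparison described above reduces, per library, to statistics over C/E values. A minimal sketch of an uncertainty-weighted bias in pcm, with invented numbers rather than actual IRSN or ICSBEP results:

```python
import numpy as np

def keff_bias_pcm(calc, expt, sigma_expt):
    """Uncertainty-weighted mean C/E bias over a set of benchmarks, in pcm."""
    ce = np.asarray(calc) / np.asarray(expt)
    w = 1.0 / np.asarray(sigma_expt) ** 2
    return 1e5 * (np.average(ce, weights=w) - 1.0)

calc = [1.0012, 0.9985, 1.0004]     # calculated keff (one library)
expt = [1.0000, 1.0000, 0.9998]     # benchmark keff
sigma = [0.0015, 0.0030, 0.0011]    # benchmark uncertainties (1 sigma)
print(f"weighted bias = {keff_bias_pcm(calc, expt, sigma):+.0f} pcm")
```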

  12. Monitoring Ground Subsidence in Hong Kong via Spaceborne Radar: Experiments and Validation

    Directory of Open Access Journals (Sweden)

    Yuxiao Qin

    2015-08-01

    The persistent scatterers interferometry (PSI) technique is gradually becoming known for its capability of providing up to millimeter accuracy of measurement of ground displacement. Nevertheless, there is still considerable doubt regarding its correctness or accuracy. In this paper, we carried out an experiment corroborating the capability of the PSI technique with the help of a traditional survey method in the urban area of Hong Kong, China. Seventy-three TerraSAR-X (TSX) and TanDEM-X (TDX) images spanning over four years are used for the data processing. There are three aims of this study. The first is to generate a displacement map of urban Hong Kong and to check for spots with possible ground movements. This information will be provided to the local surveyors so that they can check these specific locations. The second is to validate whether the accuracy of the PSI technique can indeed reach the millimeter level in this real application scenario. For validating the accuracy of PSI, four corner reflectors (CR) were installed at a construction site on reclaimed land in Hong Kong. They were manually moved up or down by a few to tens of millimeters, and the values derived from the PSI analysis were compared to the true values. The experiment, carried out in non-ideal conditions, nevertheless proved that millimeter accuracy can be achieved by the PSI technique. The last is to evaluate the advantages and limitations of the PSI technique. Overall, the PSI technique can be extremely useful if used in collaboration with other techniques, so that the advantages can be highlighted and the drawbacks avoided.
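
    The conversion underlying the corner-reflector check is from differential interferometric phase to line-of-sight displacement. A sketch using the X-band wavelength of TSX/TDX (~3.11 cm) with a made-up phase value; the sign convention is an assumption:

```python
import numpy as np

WAVELENGTH_M = 0.0311   # TerraSAR-X / TanDEM-X X-band wavelength

def los_displacement_mm(dphi_rad):
    """d = -lambda/(4*pi) * dphi; positive d means motion toward the sensor
    (sign convention assumed here)."""
    return -WAVELENGTH_M / (4.0 * np.pi) * dphi_rad * 1000.0

# A corner reflector raised by 5 mm corresponds to ~ -2.02 rad of phase.
print(f"{los_displacement_mm(-2.02):.2f} mm")
```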

  13. Validation of a CFD Analysis Model for Predicting CANDU-6 Moderator Temperature Against SPEL Experiments

    International Nuclear Information System (INIS)

    Churl Yoon; Bo Wook Rhee; Byung-Joo Min

    2002-01-01

    A validation of a 3D CFD model for predicting local subcooling of the moderator in the vicinity of calandria tubes in a CANDU-6 reactor is performed. The small-scale moderator experiments performed at Sheridan Park Experimental Laboratory (SPEL) in Ontario, Canada [1] are used for the validation. Also, a comparison is made between previous CFD analyses based on 2DMOTH and PHOENICS and the current analysis for the same SPEL experiment. For the current model, a set of grid structures for the same geometry as the experimental test section is generated, and the momentum, heat and continuity equations are solved by CFX-4.3, a CFD code developed by AEA Technology. The matrix of calandria tubes is simplified by the porous media approach. The standard k-ε turbulence model associated with logarithmic wall treatment and the SIMPLEC algorithm on the body-fitted grid are used. Buoyancy effects are accounted for by the Boussinesq approximation. For the test conditions simulated in this study, the flow pattern identified is the buoyancy-dominated flow, which is generated by the interaction between the dominant buoyancy force by heating and the inertial momentum forces by the inlet jets. As a result, the current CFD moderator analysis model predicts the moderator temperature reasonably well, and the maximum error against the experimental data is kept at less than 2.0 deg. C over the whole domain. The simulated velocity field matches the visualization of the SPEL experiments quite well. (authors)

  14. A systematic review of the reliability and validity of discrete choice experiments in valuing non-market environmental goods.

    Science.gov (United States)

    Rakotonarivo, O Sarobidy; Schaafsma, Marije; Hockley, Neal

    2016-12-01

    While discrete choice experiments (DCEs) are increasingly used in the field of environmental valuation, they remain controversial because of their hypothetical nature and the contested reliability and validity of their results. We systematically reviewed evidence on the validity and reliability of environmental DCEs from the past thirteen years (Jan 2003-February 2016). 107 articles met our inclusion criteria. These studies provide limited and mixed evidence of the reliability and validity of DCEs. Valuation results were susceptible to small changes in survey design in 45% of outcomes reporting reliability measures. DCE results were generally consistent with those of other stated preference techniques (convergent validity), but hypothetical bias was common. Evidence supporting theoretical validity (consistency with assumptions of rational choice theory) was limited. In content validity tests, 2-90% of respondents protested against a feature of the survey, and a considerable proportion found DCEs to be incomprehensible or inconsequential (17-40% and 10-62%, respectively). DCE remains useful for non-market valuation, but its results should be used with caution. Given the sparse and inconclusive evidence base, we recommend that tests of reliability and validity are more routinely integrated into DCE studies and suggest how this might be achieved.

  15. TU-D-201-05: Validation of Treatment Planning Dose Calculations: Experience Working with MPPG 5.a

    Energy Technology Data Exchange (ETDEWEB)

    Xue, J; Park, J; Kim, L; Wang, C [MD Anderson Cancer Center at Cooper, Camden, NJ (United States); Balter, P; Ohrt, J; Kirsner, S; Ibbott, G [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: The newly published medical physics practice guideline (MPPG 5.a) has set the minimum requirements for commissioning and QA of treatment planning dose calculations. We present our experience in the validation of a commercial treatment planning system based on MPPG 5.a. Methods: In addition to tests traditionally performed to commission a model-based dose calculation algorithm, extensive tests were carried out at short and extended SSDs, various depths, oblique gantry angles and off-axis conditions to verify the robustness and limitations of the dose calculation algorithm. A comparison between measured and calculated dose was performed based on the validation tests and evaluation criteria recommended by MPPG 5.a. An ion chamber was used for the measurement of dose at points of interest, and diodes were used for photon IMRT/VMAT validations. Dose profiles were measured with a three-dimensional scanning system and calculated in the TPS using a virtual water phantom. Results: Calculated and measured absolute dose profiles were compared at each specified SSD and depth for open fields. Disagreement is easily identifiable with the difference curve. Subtle discrepancies revealed the limitations of the measurement, e.g., a spike in the high-dose region and an asymmetrical penumbra observed in the tests with an oblique MLC beam. The excellent results (> 98% pass rate on the 3%/3 mm gamma index) on the end-to-end tests for both IMRT and VMAT are attributed to the quality of the beam data and a good understanding of the modeling. The limitations of the model and the uncertainty of measurement were considered when comparing the results. Conclusion: The extensive tests recommended by the MPPG encourage us to understand the accuracy and limitations of a dose algorithm as well as the uncertainty of measurement. Our experience has shown how the suggested tests can be performed effectively to validate dose calculation models.
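
    The 3%/3 mm gamma pass rate quoted above follows the usual gamma-index definition. A 1-D textbook sketch (global normalization, synthetic profiles), not a clinical implementation:

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
    """1-D global gamma index: dose difference dd as a fraction of the
    reference maximum, distance-to-agreement dta in mm."""
    d_norm = dd * ref_dose.max()
    gammas = np.empty(ref_pos.size)
    for i, (rp, rd) in enumerate(zip(ref_pos, ref_dose)):
        dist2 = ((eval_pos - rp) / dta) ** 2
        dose2 = ((eval_dose - rd) / d_norm) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas

# Measured profile vs. a calculation shifted by 1 mm and scaled by 1%.
x = np.linspace(-50, 50, 201)                        # mm
measured = np.exp(-(x / 20.0) ** 2)                  # arbitrary profile
calculated = 1.01 * np.exp(-((x - 1.0) / 20.0) ** 2)
g = gamma_1d(x, measured, x, calculated)
print(f"pass rate (gamma <= 1): {100.0 * np.mean(g <= 1.0):.1f}%")
```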

  16. Use of integral experiments in support to the validation of JEFF-3.2 nuclear data evaluation

    Directory of Open Access Journals (Sweden)

    Leclaire Nicolas

    2017-01-01

    For many years now, IRSN has developed its own Monte Carlo continuous energy capability, which allows testing various nuclear data libraries. In that prospect, a validation database of 1136 experiments was built from cases used for the validation of the APOLLO2-MORET 5 multigroup route of the CRISTAL V2.0 package. In this paper, the keff values obtained for more than 200 benchmarks using the JEFF-3.1.1 and JEFF-3.2 libraries are compared to benchmark keff values, and the main discrepancies are analyzed with regard to the neutron spectrum. Special attention is paid to benchmarks for which the results have been strongly modified between the two JEFF-3 versions.

  17. Validation analysis of pool fire experiment (Run-F7) using SPHINCS code

    International Nuclear Information System (INIS)

    Yamaguchi, Akira; Tajima, Yuji

    1998-04-01

    The SPHINCS (Sodium Fire Phenomenology IN multi-Cell System) code has been developed for the safety analysis of sodium fire accidents in a Fast Breeder Reactor. The main features of the SPHINCS code with respect to sodium pool fire phenomena are multi-dimensional modeling of the thermal behavior in the sodium pool and steel liner, modeling of the extension of the sodium pool area based on sodium mass conservation, and an equilibrium model for the chemical reaction of the pool fire on the flame sheet at the surface of the sodium pool. Therefore, the SPHINCS code is capable of detailed temperature evaluation of the steel liner during small and/or medium-scale sodium leakage accidents. In this study, the Run-F7 experiment, in which the sodium leakage rate was 11.8 kg/hour, has been analyzed. In the experiment the diameter of the sodium pool was approximately 60 cm and the maximum steel liner temperature was 616 deg. C. The analytical results show that the agreement between the SPHINCS analysis and the experiment is excellent with respect to the time history and spatial distribution of the liner temperature, the sodium pool extension behavior, as well as the atmosphere gas temperature. It is concluded that the pool fire modeling of the SPHINCS code has been validated for this experiment. The SPHINCS code is currently applicable to sodium pool fire phenomena and the temperature evaluation of the steel liner. The experiment series is being continued to vary some parameters, i.e., the sodium leakage rate and the height of the sodium leakage. The author will analyze the subsequent experiments to check the influence of these parameters and apply SPHINCS to the sodium fire consequence analysis of fast reactors. (author)

  18. Validity And Practicality of Experiment Integrated Guided Inquiry-Based Module on Topic of Colloidal Chemistry for Senior High School Learning

    Science.gov (United States)

    Andromeda, A.; Lufri; Festiyed; Ellizar, E.; Iryani, I.; Guspatni, G.; Fitri, L.

    2018-04-01

    This Research & Development study aims to produce a valid and practical experiment-integrated, guided inquiry-based module on the topic of colloidal chemistry. The 4D instructional design model was selected for this study. A limited trial of the product was conducted at SMAN 7 Padang. The instruments used were validity and practicality questionnaires, and the data were analyzed using the Kappa moment. The Kappa moment for validity was 0.88, indicating a very high degree of validity; the Kappa moments for practicality from students and teachers were 0.89 and 0.95, respectively, indicating a high degree of practicality. Analysis of the modules filled in by students shows that 91.37% of students could correctly answer the critical thinking, exercise, prelab, postlab, and worksheet questions in the module. These findings indicate that the experiment-integrated, guided inquiry-based module on colloidal chemistry is valid and practical for chemistry learning in senior high school.
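    The Kappa moment used here is a chance-corrected agreement statistic. As a hedged, Cohen's-kappa-style sketch of the idea (the rating data are invented, and this is not necessarily the authors' exact computation):

```python
from sklearn.metrics import cohen_kappa_score

# hypothetical validator ratings of module items (1 = adequate, 0 = not)
rater_a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
rater_b = [1, 1, 0, 0, 1, 1, 1, 1, 1, 1]

# kappa corrects raw agreement for chance; values near 1 mean strong agreement
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"kappa = {kappa:.2f}")
```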

  19. Validation of Friction Models in MARS-MultiD Module with Two-Phase Cross Flow Experiment

    International Nuclear Information System (INIS)

    Choi, Chi-Jin; Yang, Jin-Hwa; Cho, Hyoung-Kyu; Park, Goon-Cher; Euh, Dong-Jin

    2015-01-01

    In the downcomer of the Advanced Power Reactor 1400 (APR1400), which has direct vessel injection (DVI) lines as part of its emergency core cooling system, a multidimensional two-phase flow may occur during a Loss-of-Coolant Accident (LOCA). Accurate prediction of this flow is highly relevant to evaluating the integrity of the reactor core. For this reason, Yang performed an experiment investigating the two-dimensional film flow that simulates the two-phase cross flow in the upper downcomer, and obtained local liquid film velocity and thickness data. These data make it possible to validate the multidimensional modules of system analysis codes. In this study, MARS-MultiD was used to simulate Yang's experiment and obtain the local variables, and the friction models used in MARS-MultiD were then validated by comparing the calculated local variables with the two-phase flow experimental results. Compared with the experiment, the code properly reproduced mass conservation, as seen in the relation between liquid film velocity and thickness at the same flow rate. The magnitude and direction of the liquid film flow, however, did not agree well with the experimental results. According to the results of Case-2, the wall friction should be increased and the interfacial friction decreased in MARS-MultiD. These results show that the friction models in MARS-MultiD need to be modified to simulate the two-phase cross flow.
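    The friction closures at issue are typically algebraic correlations relating shear stress to local film and gas conditions. As a rough, hedged illustration of how an interfacial friction model enters such a calculation, the sketch below uses the classic Wallis-type roughened-film correlation, which is not necessarily the model implemented in MARS-MultiD; all numbers are invented.

```python
def interfacial_shear_wallis(delta, D, rho_g, u_g, u_l):
    """Interfacial shear stress with a Wallis-type friction factor.

    delta    : liquid film thickness [m]
    D        : hydraulic diameter [m]
    rho_g    : gas density [kg/m^3]
    u_g, u_l : gas and liquid film velocities [m/s]
    """
    f_i = 0.005 * (1.0 + 300.0 * delta / D)        # Wallis (1969) correlation
    u_rel = u_g - u_l
    return 0.5 * f_i * rho_g * u_rel * abs(u_rel)  # tau_i [Pa]

# hypothetical cross-flow conditions in the downcomer gap
tau_i = interfacial_shear_wallis(delta=0.8e-3, D=0.25,
                                 rho_g=1.2, u_g=15.0, u_l=1.5)
print(f"interfacial shear ~ {tau_i:.2f} Pa")
```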

  20. Integral large scale experiments on hydrogen combustion for severe accident code validation-HYCOM

    International Nuclear Information System (INIS)

    Breitung, W.; Dorofeev, S.; Kotchourko, A.; Redlinger, R.; Scholtyssek, W.; Bentaib, A.; L'Heriteau, J.-P.; Pailhories, P.; Eyink, J.; Movahed, M.; Petzold, K.-G.; Heitsch, M.; Alekseev, V.; Denkevits, A.; Kuznetsov, M.; Efimenko, A.; Okun, M.V.; Huld, T.; Baraldi, D.

    2005-01-01

    A joint research project was carried out in the EU Fifth Framework Programme concerning hydrogen risk in nuclear power plants. The goals were: first, to create a new database of results on hydrogen combustion experiments in the slow to turbulent combustion regimes; second, to validate the partners' CFD and lumped parameter codes on the experimental data and to evaluate suitable parameter sets for application calculations; and third, to conduct a benchmark exercise by applying the codes to the full-scale analysis of a postulated hydrogen combustion scenario in a light water reactor containment after a core melt accident. The paper describes the work programme of the project and the partners' activities. Significant progress has been made in the experimental area, where test series in medium and large scale facilities were carried out with a focus on the specific effects of scale, multi-compartment geometry, heat losses, and venting. The data were used for the validation of the partners' CFD and lumped parameter codes, which included blind predictive calculations and pre- and post-test intercomparison exercises. Finally, a benchmark exercise was conducted by applying the codes to the full-scale analysis of a hydrogen combustion scenario. The comparison and assessment of the results of the validation phase and of the challenging containment calculation exercise give deep insight into the quality, capabilities, and limits of the CFD and lumped parameter tools currently in use at various research laboratories.

  1. The Development of the Functional Literacy Experience Scale Based upon Ecological Theory (FLESBUET) and Validity-Reliability Study

    Science.gov (United States)

    Özenç, Emine Gül; Dogan, M. Cihangir

    2014-01-01

    This study aims to perform a validity-reliability test by developing the Functional Literacy Experience Scale based upon Ecological Theory (FLESBUET) for primary education students. The study group includes 209 fifth grade students at Sabri Taskin Primary School in the Kartal District of Istanbul, Turkey during the 2010-2011 academic year.…

  2. Numerical studies and metric development for validation of magnetohydrodynamic models on the HIT-SI experiment

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, C., E-mail: hansec@uw.edu [PSI-Center, University of Washington, Seattle, Washington 98195 (United States); Columbia University, New York, New York 10027 (United States); Victor, B.; Morgan, K.; Hossack, A.; Sutherland, D. [HIT-SI Group, University of Washington, Seattle, Washington 98195 (United States); Jarboe, T.; Nelson, B. A. [HIT-SI Group, University of Washington, Seattle, Washington 98195 (United States); PSI-Center, University of Washington, Seattle, Washington 98195 (United States); Marklin, G. [PSI-Center, University of Washington, Seattle, Washington 98195 (United States)

    2015-05-15

    We present the application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to the validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
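    In practice, a biorthogonal decomposition of a probe-array signal is an SVD of the space-time data matrix, and mode-structure agreement can then be scored with inner products between spatial modes. The sketch below is a hedged illustration on synthetic data, not the HIT-SI metrics themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

def bd_modes(data, n_modes):
    """Biorthogonal decomposition: SVD of an (n_probes x n_times) matrix."""
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    return u[:, :n_modes], s[:n_modes], vt[:n_modes]  # spatial, weights, temporal

# synthetic 'experiment' and 'simulation' probe data (192 probes, 500 samples)
t = np.linspace(0, 1e-3, 500)
probes = np.arange(192)
exp_data = np.outer(np.sin(2 * np.pi * probes / 192),
                    np.sin(2 * np.pi * 14.5e3 * t))
sim_data = exp_data + 0.1 * rng.standard_normal(exp_data.shape)

u_exp, _, _ = bd_modes(exp_data, 3)
u_sim, _, _ = bd_modes(sim_data, 3)

# one possible scalar metric: |inner product| of the leading spatial modes
overlap = abs(u_exp[:, 0] @ u_sim[:, 0])
print(f"leading-mode spatial overlap: {overlap:.3f}")
```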

  3. Assessing decentering: validation, psychometric properties, and clinical usefulness of the Experiences Questionnaire in a Spanish sample.

    Science.gov (United States)

    Soler, Joaquim; Franquesa, Alba; Feliu-Soler, Albert; Cebolla, Ausias; García-Campayo, Javier; Tejedor, Rosa; Demarzo, Marcelo; Baños, Rosa; Pascual, Juan Carlos; Portella, Maria J

    2014-11-01

    Decentering is defined as the ability to observe one's thoughts and feelings in a detached manner. The Experiences Questionnaire (EQ) is a self-report instrument that originally assessed decentering and rumination. The purpose of this study was to evaluate the psychometric properties of the Spanish version of the EQ-Decentering and to explore its clinical usefulness. The 11-item EQ-Decentering subscale was translated into Spanish and its psychometric properties were examined in a sample of 921 adult individuals, 231 with psychiatric disorders and 690 without. The subsample of nonpsychiatric participants was also split according to previous meditative experience (meditative participants, n=341; nonmeditative participants, n=349). Additionally, differences among these three subgroups were explored to determine the clinical validity of the scale. Finally, the EQ-Decentering was administered twice to a group of patients with borderline personality disorder, before and after a 10-week mindfulness intervention. Confirmatory factor analysis indicated acceptable model fit, S-B χ² = 243.8836 (p < …; convergent validity: r > .46; divergent validity: r < −.35). The scale detected changes in decentering after a 10-session intervention in mindfulness (t = −4.692, p < .00001). Differences among groups were significant (F = 134.8, p < .000001), with psychiatric participants showing the lowest scores compared to nonpsychiatric meditative and nonmeditative participants. The Spanish version of the EQ-Decentering is a valid and reliable instrument to assess decentering in both clinical and nonclinical samples. In addition, the findings show that the EQ-Decentering appears to be an adequate outcome instrument for detecting changes after mindfulness-based interventions. Copyright © 2014. Published by Elsevier Ltd.

  4. Continuously revised assurance cases with stakeholders’ cross-validation: a DEOS experience

    Directory of Open Access Journals (Sweden)

    Kimio Kuramitsu

    2016-12-01

    Full Text Available Recently, assurance cases have received much attention in the field of software-based computer systems and IT services. However, software changes very often, and there are no strong regulations for software; these two facts are the main challenges to be addressed in the development of software assurance cases. We propose a method of developing assurance cases by means of continuous revision at every stage of the system life cycle, including operation and service recovery in failure cases. Instead of relying on a regulator, dependability arguments are validated by multiple stakeholders competing with each other. This paper reports our experience with the proposed method in the case of the Aspen education service. The case study demonstrates that continuous revision enables stakeholders to share dependability problems across software life cycle stages, leading to the long-term improvement of service dependability.

  5. A CFD validation roadmap for hypersonic flows

    Science.gov (United States)

    Marvin, Joseph G.

    1993-01-01

    A roadmap for computational fluid dynamics (CFD) code validation is developed. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation data base are given, and gaps are identified where future experiments would provide the needed validation data.

  6. Computational Design and Discovery of Ni-Based Alloys and Coatings: Thermodynamic Approaches Validated by Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zi-Kui [Pennsylvania State University; Gleeson, Brian [University of Pittsburgh; Shang, Shunli [Pennsylvania State University; Gheno, Thomas [University of Pittsburgh; Lindwall, Greta [Pennsylvania State University; Zhou, Bi-Cheng [Pennsylvania State University; Liu, Xuan [Pennsylvania State University; Ross, Austin [Pennsylvania State University

    2018-04-23

    This project developed computational tools that can complement and support experimental efforts to enable discovery and more efficient development of Ni-based structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagrams) modeling, and experimental investigations of compositions relevant to Ni-based superalloys and coatings in terms of oxide layer growth and microstructure stability. The developed description includes composition ranges typical of coating alloys and hence allows prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fractions, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools required to meet the increasing demand for strong, ductile, and environmentally protective coatings. Specifically, a suitable thermodynamic description of the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from the experiments aided in the development of a thermodynamic modeling automation tool, ESPEI/pycalphad, for more rapid discovery and development of new materials.
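    pycalphad, mentioned above, exposes CALPHAD equilibrium calculations directly from Python. A hedged usage sketch follows; the database file name is a placeholder (a real Ni-Al TDB is assumed), and the available phases depend entirely on that database.

```python
from pycalphad import Database, equilibrium, variables as v

# placeholder thermodynamic database; a real Ni-Al TDB file is assumed here
dbf = Database("Ni-Al.tdb")
comps = ["NI", "AL", "VA"]        # VA = vacancies, required by CALPHAD models
phases = list(dbf.phases.keys())  # e.g. LIQUID, FCC_A1, ... (database-dependent)

# equilibrium at one fixed temperature, pressure, and composition point
eq = equilibrium(dbf, comps, phases,
                 {v.X("AL"): 0.10, v.T: 1273.0, v.P: 101325})
print(eq.Phase.values.squeeze())  # stable phases at these conditions
```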

  7. Strain gauge validation experiments for the Sandia 34-meter VAWT (Vertical Axis Wind Turbine) test bed

    Science.gov (United States)

    Sutherland, Herbert J.

    1988-08-01

    Sandia National Laboratories has erected a research-oriented, 34-meter diameter Darrieus vertical axis wind turbine near Bushland, Texas. This machine, designated the Sandia 34-m VAWT Test Bed, is equipped with a large array of strain gauges placed at critical positions on the blades. This manuscript details a series of four-point bend experiments conducted to validate the output of the blade strain gauge circuits. The output of a particular gauge circuit is validated by comparing it to equivalent gauge circuits (in this stress state) and to theoretical predictions. With only a few exceptions, the difference between measured and predicted strain values for a gauge circuit was found to be of the order of the estimated repeatability of the measurement system.
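    In a four-point bend test, the theoretical strain at a gauge between the inner load points follows from simple beam theory, ε = Mc/(EI), which is presumably the kind of prediction the gauge outputs were checked against. A hedged numerical sketch with invented section properties and an invented gauge reading:

```python
def four_point_bend_strain(P, a, c, E, I):
    """Outer-fiber strain between the inner load points.

    P : total applied load [N], split between the two load points
    a : distance from a support to the nearer load point [m]
    c : distance from the neutral axis to the gauge [m]
    E : Young's modulus [Pa]
    I : second moment of area [m^4]
    """
    M = (P / 2.0) * a       # constant bending moment between the load points
    return M * c / (E * I)  # strain [m/m]

# hypothetical test: 10 kN total load on an aluminum blade section
eps_pred = four_point_bend_strain(P=10e3, a=0.5, c=0.05, E=69e9, I=2.0e-6)
eps_meas = 9.2e-4           # invented gauge reading
print(f"predicted {eps_pred:.3e}, measured {eps_meas:.3e}, "
      f"diff {100 * (eps_meas - eps_pred) / eps_pred:+.1f} %")
```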

  8. Validation study of the reactor physics lattice transport code WIMSD-5B by TRX and BAPL critical experiments of light water reactors

    International Nuclear Information System (INIS)

    Khan, M.J.H.; Alam, A.B.M.K.; Ahsan, M.H.; Mamun, K.A.A.; Islam, S.M.A.

    2015-01-01

    Highlights: • To validate the reactor physics lattice code WIMSD-5B through this analysis. • To model the TRX and BAPL critical experiments using WIMSD-5B. • To compare the calculated results with experimental and MCNP results. • To build confidence in WIMSD-5B for TRIGA calculations. - Abstract: The aim of this analysis is to validate the reactor physics lattice transport code WIMSD-5B against the TRX (thermal reactor, one-region lattice) and BAPL (Bettis Atomic Power Laboratory, one-region lattice) critical experiments of light water reactors for the neutronics analysis of the 3 MW TRIGA Mark-II research reactor at AERE, Dhaka, Bangladesh. This is achieved through the analysis of the integral parameters of five light water reactor critical experiments, TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3, based on the evaluated nuclear data libraries JEFF-3.1 and ENDF/B-VII.1. In integral measurements, these experiments are considered standard benchmark lattices for validating the reactor physics lattice transport code WIMSD-5B as well as evaluated nuclear data libraries. The integral parameters of the critical experiments were calculated using WIMSD-5B and compared to the measured values as well as to earlier published MCNP results based on the Chinese evaluated nuclear data library CENDL-3.0, to assess the deterministic calculation. The calculated integral parameters were found to be mostly reasonable and globally consistent with the experiment and the MCNP results. In addition, the group constants in WIMS format for the isotopes U-235 and U-238 from the two data files were compared using the WIMS library utility code WILLIE and found to be consistent with each other. This analysis therefore constitutes a validation study of the reactor physics lattice transport code WIMSD-5B based on the JEFF-3.1 and ENDF/B-VII.1 libraries and can also be essential to

  9. The role of CFD combustion modeling in hydrogen safety management-II: Validation based on homogeneous hydrogen-air experiments

    Energy Technology Data Exchange (ETDEWEB)

    Sathiah, Pratap, E-mail: sathiah@nrg.eu [Nuclear Research and Consultancy Group (NRG), Westerduinweg 3, 1755 ZG Petten (Netherlands); Haren, Steven van, E-mail: vanharen@nrg.eu [Nuclear Research and Consultancy Group (NRG), Westerduinweg 3, 1755 ZG Petten (Netherlands); Komen, Ed, E-mail: komen@nrg.eu [Nuclear Research and Consultancy Group (NRG), Westerduinweg 3, 1755 ZG Petten (Netherlands); Roekaerts, Dirk, E-mail: d.j.e.m.roekaerts@tudelft.nl [Department of Multi-Scale Physics, Delft University of Technology, P.O. Box 5, 2600 AA Delft (Netherlands)

    2012-11-15

    Highlights: • A CFD-based method is proposed for the simulation of hydrogen deflagration. • A dynamic grid adaptation method is proposed to resolve the turbulent flame brush thickness. • The predictions obtained using this method are in good agreement with the static grid method. • TFC model results are in good agreement with large-scale homogeneous hydrogen-air experiments. - Abstract: During a severe accident in a PWR, large quantities of hydrogen can be generated and released into the containment. The generated hydrogen, when mixed with air, can lead to hydrogen combustion. The dynamic pressure loads resulting from hydrogen combustion can be detrimental to the structural integrity of the reactor safety systems and the reactor containment. Therefore, accurate prediction of these pressure loads is an important safety issue. In a previous article, we presented a CFD-based method to determine these pressure loads, based on the application of a turbulent flame speed closure combustion model. The validation analyses in our previous paper demonstrated that it is of utmost importance to apply successive mesh and time step refinement in order to obtain reliable results. In this article, we first determined to what extent the computational effort required for our CFD approach can be reduced by the application of adaptive mesh refinement, while maintaining the accuracy requirements. Experiments performed within a small fan-stirred explosion bomb were used for this purpose. It could be concluded that adaptive grid refinement is a reliable and efficient method for use in hydrogen deflagration analyses. For the two-dimensional validation analyses, the application of dynamic grid adaptation reduced the required computational effort by about one order of magnitude. In a second step, the considered CFD approach including adaptive

  10. Validation suite for MCNP

    International Nuclear Information System (INIS)

    Mosteller, Russell D.

    2002-01-01

    Two validation suites, one for criticality and another for radiation shielding, have been defined and tested for the MCNP Monte Carlo code. All of the cases in the validation suites are based on experiments, so that calculated and measured results can be compared in a meaningful way. The cases in the validation suites are described, and results from those cases are discussed. For several years, the distribution package for the MCNP Monte Carlo code has included an installation test suite to verify that MCNP has been installed correctly. However, the cases in that suite were constructed primarily to test options within the code and to execute quickly. Consequently, they do not produce well-converged answers, and many of them are physically unrealistic. To remedy these deficiencies, sets of validation suites are being defined and tested for specific types of applications. All of the cases in the validation suites are based on benchmark experiments. Consequently, the results from the measurements are reliable and quantifiable, and calculated results can be compared with them in a meaningful way. Currently, validation suites exist for criticality and radiation-shielding applications.

  11. Measuring Black men's police-based discrimination experiences: Development and validation of the Police and Law Enforcement (PLE) Scale.

    Science.gov (United States)

    English, Devin; Bowleg, Lisa; Del Río-González, Ana Maria; Tschann, Jeanne M; Agans, Robert P; Malebranche, David J

    2017-04-01

    Although social science research has examined police and law enforcement-perpetrated discrimination against Black men using policing statistics and implicit bias studies, there is little quantitative evidence detailing this phenomenon from the perspective of Black men. Consequently, there is a dearth of research detailing how Black men's perspectives on police and law enforcement-related stress predict negative physiological and psychological health outcomes. This study addresses these gaps with the qualitative development and quantitative test of the Police and Law Enforcement (PLE) Scale. In Study 1, we used thematic analysis on transcripts of individual qualitative interviews with 90 Black men to assess key themes and concepts and develop quantitative items. In Study 2, we used 2 focus groups comprised of 5 Black men each (n = 10), intensive cognitive interviewing with a separate sample of Black men (n = 15), and piloting with another sample of Black men (n = 13) to assess the ecological validity of the quantitative items. For Study 3, we analyzed data from a sample of 633 Black men between the ages of 18 and 65 to test the factor structure of the PLE, as well as its concurrent validity and convergent/discriminant validity. Qualitative analyses and confirmatory factor analyses suggested that a 5-item, 1-factor measure appropriately represented respondents' experiences of police/law enforcement discrimination. As hypothesized, the PLE was positively associated with measures of racial discrimination and depressive symptoms. Preliminary evidence suggests that the PLE is a reliable and valid measure of Black men's experiences of discrimination with police/law enforcement. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. The Depressive Experiences Questionnaire: validity and psychological correlates in a clinical sample.

    Science.gov (United States)

    Riley, W T; McCranie, E W

    1990-01-01

    This study sought to compare the original and revised scoring systems of the Depressive Experiences Questionnaire (DEQ) and to assess the construct validity of the Dependent and Self-Critical subscales of the DEQ in a clinically depressed sample. Subjects were 103 depressed inpatients who completed the DEQ, the Beck Depression Inventory (BDI), the Hopelessness Scale, the Automatic Thoughts Questionnaire (ATQ), the Rathus Assertiveness Schedule (RAS), and the Minnesota Multiphasic Personality Inventory (MMPI). The original and revised scoring systems of the DEQ evidenced good concurrent validity for each factor scale, but the revised system did not sufficiently discriminate dependent and self-critical dimensions. Using the original scoring system, self-criticism was significantly and positively related to severity of depression, whereas dependency was not, particularly for males. Factor analysis of the DEQ scales and the other scales used in this study supported the dependent and self-critical dimensions. For men, the correlation of the DEQ with the MMPI scales indicated that self-criticism was associated with psychotic symptoms, hostility/conflict, and a distress/exaggerated response set, whereas dependency did not correlate significantly with any MMPI scales. Females, however, did not exhibit a differential pattern of correlations between either the Dependency or the Self-Criticism scales and the MMPI. These findings suggest possible gender differences in the clinical characteristics of male and female dependent and self-critical depressive subtypes.

  13. Benchmark validation by means of pulsed sphere experiment at OKTAVIAN

    Energy Technology Data Exchange (ETDEWEB)

    Ichihara, Chihiro [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.; Hayashi, S.A.; Kimura, Itsuro; Yamamoto, Junji; Takahashi, Akito

    1997-03-01

    The new version of the Japanese nuclear data library, JENDL-3.2, has recently been released. The JENDL Fusion File, which adopted DDX representations for secondary neutrons, was also improved with the new evaluation method. In parallel, the FENDL nuclear data project, which compiles a nuclear data library for fusion-related research, has been conducted partly under the auspices of the International Atomic Energy Agency (IAEA). The first version, FENDL-1, consists of JENDL-3.1, ENDF/B-VI, BROND-2 and EFF-1 and was released in 1995. Work on the second version, FENDL-2, is ongoing. Benchmark validation of the nuclear data libraries has been performed to help select the candidates for FENDL-2. The benchmark experiments were conducted at OKTAVIAN of Osaka University. Sample spheres were constructed by filling spherical shells with the sample material, and the leakage neutron spectra from the sphere piles were measured with a time-of-flight method. The measured spectra were compared with theoretical calculations using MCNP 4A and the processed libraries from JENDL-3.1, JENDL-3.2, the JENDL Fusion File, and FENDL-1. The JENDL Fusion File and JENDL-3.2 gave almost the same predictions for the experiment, and both are satisfactory for Li, Cr, Mn, Cu, Zr, Nb and Mo, whereas for Al, LiF, CF2, Si, Ti, Co and W there is some discrepancy. Nevertheless, they gave better predictions than calculations using the library from FENDL-1, except for W. (author)

  14. A validation of DRAGON based on lattice experiments

    International Nuclear Information System (INIS)

    Marleau, G.

    1996-01-01

    Here we address the validation of DRAGON using the Chalk River Laboratories experimental database, which has already been used for the validation of other codes. Because of the large variety of information for different fuel and moderator types compiled in this database, the most basic modules of DRAGON are thoroughly tested. The general behaviour observed with DRAGON is very good. Its main weakness is seen in the self-shielding calculation, where the correction applied to the inner fuel pin seems to be overestimated with respect to the outer fuel pins. One question left open in this paper concerns the need for inserting end-regions in the DRAGON cells when the heterogeneous B1 leakage model is used. (author)

  15. Measuring Black Men’s Police-Based Discrimination Experiences: Development and Validation of the Police and Law Enforcement (PLE) Scale

    Science.gov (United States)

    English, Devin; Bowleg, Lisa; del Río-González, Ana Maria; Tschann, Jeanne M.; Agans, Robert; Malebranche, David J

    2017-01-01

    Objectives Although social science research has examined police and law enforcement-perpetrated discrimination against Black men using policing statistics and implicit bias studies, there is little quantitative evidence detailing this phenomenon from the perspective of Black men. Consequently, there is a dearth of research detailing how Black men's perspectives on police and law enforcement-related stress predict negative physiological and psychological health outcomes. This study addresses these gaps with the qualitative development and quantitative test of the Police and Law Enforcement (PLE) scale. Methods In Study 1, we employed thematic analysis on transcripts of individual qualitative interviews with 90 Black men to assess key themes and concepts and develop quantitative items. In Study 2, we used 2 focus groups comprised of 5 Black men each (n=10), intensive cognitive interviewing with a separate sample of Black men (n=15), and piloting with another sample of Black men (n=13) to assess the ecological validity of the quantitative items. For Study 3, we analyzed data from a sample of 633 Black men between the ages of 18 and 65 to test the factor structure of the PLE, as well as its concurrent validity and convergent/discriminant validity. Results Qualitative analyses and confirmatory factor analyses suggested that a 5-item, 1-factor measure appropriately represented respondents' experiences of police/law enforcement discrimination. As hypothesized, the PLE was positively associated with measures of racial discrimination and depressive symptoms. Conclusions Preliminary evidence suggests that the PLE is a reliable and valid measure of Black men's experiences of discrimination with police/law enforcement. PMID:28080104

  16. Two-phase CFD PTS validation in an extended range of thermohydraulics conditions covered by the COSI experiment

    International Nuclear Information System (INIS)

    Coste, P.; Ortolan, A.

    2014-01-01

    Highlights: • Models for large interfaces in two-phase CFD were developed for PTS. • The COSI experiment is used for NEPTUNE_CFD integral validation. • COSI is a PWR cold leg scaled 1/100 for volume. • Fifty runs are calculated, covering a large range of flow configurations. • The CFD predictive capability is analysed using global and local measurements. - Abstract: In the context of Pressurized Water Reactor (PWR) lifetime safety studies, models were developed to address the Pressurized Thermal Shock (PTS) from the two-phase CFD angle, dealing with interfaces much larger than the cell size and with direct contact condensation. These models were implemented in NEPTUNE_CFD, a 3D transient Eulerian two-fluid code. The COSI experiment is used for its integral validation. COSI represents a cold leg scaled 1/100 for volume and power from a 900 MW PWR under a large range of LOCA PTS conditions. In this study, the CFD is evaluated over the whole range of parameters and flow configurations covered by the experiment. In a first step, a single choice of mesh and CFD model parameters is fixed and justified. In a second step, fifty runs are calculated. The CFD predictive capability is analysed by comparing the liquid temperature and the total condensation rate with the experiment, and by discussing their dependence on the inlet cold liquid rate, on the liquid level in the cold leg, and on the difference between co-current and counter-current runs. It is shown that NEPTUNE_CFD 1.0.8 calculates a large range of flow configurations related to ECCS injection and steam condensation with fair agreement

  17. Preliminary characterization of materials for a reactive transport model validation experiment

    International Nuclear Information System (INIS)

    Siegel, M.D.; Ward, D.B.; Cheng, W.C.; Bryant, C.; Chocas, C.S.; Reynolds, C.G.

    1993-01-01

    The geochemical properties of a porous sand and several tracers (Ni, Br, and Li) have been characterized for use in a caisson experiment designed to validate sorption models used in models of reactive transport. The surfaces of the sand grains have been examined by a combination of techniques including potentiometric titration, acid leaching, optical microscopy, and scanning electron microscopy with energy-dispersive spectroscopy. The surface studies indicate the presence of small amounts of carbonate, kaolinite and iron-oxyhydroxides. Adsorption of nickel, lithium and bromide by the sand was measured using batch techniques. Bromide was not sorbed by the sand. A linear (Kd) or an isotherm sorption model may adequately describe the transport of Li; however, a model describing the changes in pH and in the concentrations of other solution species as a function of time and position within the caisson, and the concomitant effects on Ni sorption, may be required for accurate predictions of nickel transport.
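    For the linear (Kd) sorption model mentioned above, the standard consequence for transport is a retardation factor R = 1 + (ρb/θ)·Kd relative to a nonsorbing tracer such as bromide. A brief sketch with assumed (not measured) sand properties:

```python
def retardation_factor(K_d, bulk_density, porosity):
    """R = 1 + (rho_b / theta) * K_d for linear equilibrium sorption."""
    return 1.0 + (bulk_density / porosity) * K_d

# hypothetical values for the caisson sand (illustrative only)
rho_b = 1.6e3    # bulk density [kg/m^3]
theta = 0.35     # porosity [-]
K_d_li = 1.0e-4  # Li distribution coefficient [m^3/kg]

R_li = retardation_factor(K_d_li, rho_b, theta)
R_br = retardation_factor(0.0, rho_b, theta)  # bromide: no sorption
print(f"R(Li) = {R_li:.2f}, R(Br) = {R_br:.2f}")
```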

  18. Validation of ASTECV2.1 based on the QUENCH-08 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Gómez-García-Toraño, Ignacio, E-mail: ignacio.torano@kit.edu [Karlsruhe Institute of Technology, Institute for Neutron Physics and Reactor Technology (INR), Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Sánchez-Espinoza, Víctor-Hugo; Stieglitz, Robert [Karlsruhe Institute of Technology, Institute for Neutron Physics and Reactor Technology (INR), Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Stuckert, Juri [Karlsruhe Institute of Technology, Institute for Applied Materials-Applied Materials Physics (IAM-AWP), Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Laborde, Laurent; Belon, Sébastien [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), Nuclear Safety Division/Safety Research/Severe Accident Department, Saint Paul Lez Durance 13115 (France)

    2017-04-01

    Highlights: • ASTECV2.1 can reproduce QUENCH-08 experimental trends, e.g. hydrogen generation. • The radial temperature gradient and heat transfer through the argon gap are underestimated. • Mesh sizes smaller than 55 mm are needed to capture the strong axial temperature gradient. • Minor variations of the external electrical resistance strongly affect bundle heat-up. • Modelling of a bypass and inclusion of currents partially resolve discrepancies. - Abstract: The Fukushima accidents have shown that further improvements of Severe Accident Management Guidelines (SAMGs) are still necessary. Hence, the enhancement of severe accident codes and their validation against integral experiments is pursued worldwide. In particular, the capabilities of the European integral severe accident code ASTECV2.1 are being extended within the CESAM project through the improvement of physical models, code numerics, and an extensive code validation. Among the different strategies encompassed in plant SAMGs, one of the most important for preventing core damage is the injection of water into the overheated core (reflooding). However, under certain conditions, reflooding may trigger a sharp increase in hydrogen generation that may jeopardize the containment. Within this work, ASTECV2.1 models describing the early in-vessel phase of a severe accident and its termination by core reflooding are validated against data from the QUENCH test facility. The QUENCH-08 test, involving the injection of 15 g/s (about 0.6 g/s/rod) of saturated steam at a bundle temperature of 2073 K, has been selected for this comparison. Results show that ASTECV2.1 is able to reproduce the experimental temperatures and oxide thicknesses at representative bundle locations. The predicted total hydrogen generation (76 g) is similar to the experimental one (84 g). In addition, the choices of an axial mesh size below 55 mm and of an external electrical resistance of 7 mΩ/rod have been justified by parametric analyses. Finally, new

  19. Validation of NEPTUNE-CFD on ULPU-V experiments

    Energy Technology Data Exchange (ETDEWEB)

    Jamet, Mathieu, E-mail: mathieu.jamet@edf.fr; Lavieville, Jerome; Atkhen, Kresna; Mechitoua, Namane

    2015-11-15

    In-vessel retention (IVR) of molten corium through external cooling of the reactor pressure vessel is one possible means of severe accident mitigation for a class of nuclear power plants. The aim is to terminate the progression of a core melt within the reactor vessel. The probability of success depends on the efficacy of the cooling strategy; hence one of the key aspects of an IVR demonstration relates to the heat removal capability through the vessel wall by convection and boiling in the external water flow. This is only possible if the in-vessel thermal loading is lower than the local critical heat flux expected along the outer wall of the vessel, which is in turn highly dependent on the flow characteristics between the vessel and the insulator. The NEPTUNE-CFD multiphase flow solver is used to obtain a better understanding at the local scale of the thermal hydraulics involved in this situation. The validation of the NEPTUNE-CFD code against the ULPU-V facility experiments carried out at the University of California, Santa Barbara is presented as a first attempt at using CFD codes at EDF to address this issue. Two types of computation are performed: on the one hand, a steady-state algorithm is used to compute natural circulation flow rates and differential pressures; on the other, a transient computation reveals the oscillatory nature of the pressure data recorded in the ULPU facility, and several dominant frequencies are highlighted. In both cases, the CFD simulations reproduce the experimental data for these quantities reasonably well.

  20. The role of CFD combustion modeling in hydrogen safety management – V: Validation for slow deflagrations in homogeneous hydrogen-air experiments

    Energy Technology Data Exchange (ETDEWEB)

    Sathiah, Pratap [Nuclear Research and Consultancy Group (NRG), Westerduinweg 3, 1755 ZG Petten (Netherlands); Holler, Tadej, E-mail: tadej.holler@ijs.si [Jozef Stefan Institute (JSI), Jamova cesta 39, 1000 Ljubljana (Slovenia); Kljenak, Ivo [Jozef Stefan Institute (JSI), Jamova cesta 39, 1000 Ljubljana (Slovenia); Komen, Ed [Nuclear Research and Consultancy Group (NRG), Westerduinweg 3, 1755 ZG Petten (Netherlands)

    2016-12-15

    Highlights: • Validation of the modeling approach for hydrogen deflagration is presented. • The modeling approach is based on two combustion models implemented in ANSYS Fluent. • Experiments with various initial hydrogen concentrations were used for validation. • The effects of the choice of heat transfer mechanisms were also investigated. • A grid sensitivity analysis was performed as well. - Abstract: The control of hydrogen in the containment is an important safety issue following rapid oxidation of the uncovered reactor core during a severe accident in a Nuclear Power Plant (NPP), because dynamic pressure loads from possible hydrogen combustion can be detrimental to the structural integrity of the reactor safety systems and the reactor containment. In our previous papers, a CFD-based method to assess the consequences of fast combustion of uniform hydrogen-air mixtures was presented, followed by its validation for hydrogen-air mixtures with diluents and for non-uniform hydrogen-air mixtures. In the present paper, the extension of this model to the slow deflagration regime is presented and validated using hydrogen deflagration experiments performed in the medium-scale experimental facility THAI. The proposed method is implemented in the CFD software ANSYS Fluent using user-defined functions. The paper describes the combustion model and the main results of the code validation. It addresses questions regarding turbulence model selection, the effect of heat transfer mechanisms, and grid sensitivity, and provides insights into the importance of the combustion model choice for the slow deflagration regime of hydrogen combustion in medium-scale and large-scale experimental vessels mimicking the NPP containment.

  1. Roll-up of validation results to a target application.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy

    2013-09-01

    Suites of experiments are performed over a validation hierarchy to test computational simulation models for complex applications. Experiments within the hierarchy can be performed at conditions and configurations different from those of the intended application, with each experiment testing only part of the physics relevant to the application. The purpose of the present work is to develop methodology to roll up validation results to an application and to assess the impact the validation hierarchy design has on the roll-up results. The roll-up is accomplished through the development of a meta-model that relates validation measurements throughout a hierarchy to the desired response quantities for the target application. The meta-model is developed using the computational simulation models for the experiments and the application. The meta-model approach is applied to a series of example transport problems that represent complete and incomplete coverage of the physics of the target application by the validation experiments.
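    The roll-up idea, a meta-model mapping validation-experiment responses to the application quantity of interest and then propagating observed model-experiment discrepancies, can be caricatured in a few lines. The following is a hedged linear least-squares illustration with synthetic data, not the method of the report:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical: 5 hierarchy experiments, each yielding one response;
# the simulator predicts both the experiment responses and the application QoI
n_samples = 200  # parameter-space samples of the simulator
exp_resp_sim = rng.standard_normal((n_samples, 5))
app_qoi_sim = exp_resp_sim @ np.array([0.5, 0.3, 0.1, 0.05, 0.05]) \
              + 0.02 * rng.standard_normal(n_samples)

# fit the meta-model: application QoI as a linear map of experiment responses
coef, *_ = np.linalg.lstsq(exp_resp_sim, app_qoi_sim, rcond=None)

# roll-up: push the measured-vs-simulated discrepancies through the meta-model
discrepancy = np.array([0.10, -0.05, 0.02, 0.00, 0.03])  # invented C-E values
print(f"estimated QoI bias from validation suite: {coef @ discrepancy:+.4f}")
```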

  2. Development and initial validation of the Parental PELICAN Questionnaire (PaPEQu)--an instrument to assess parental experiences and needs during their child's end-of-life care.

    Science.gov (United States)

    Zimmermann, Karin; Cignacco, Eva; Eskola, Katri; Engberg, Sandra; Ramelet, Anne-Sylvie; Von der Weid, Nicolas; Bergstraesser, Eva

    2015-12-01

    To develop and test the Parental PELICAN Questionnaire, an instrument to retrospectively assess parental experiences and needs during their child's end-of-life care. To offer appropriate care for dying children, healthcare professionals need to understand the illness experience from the family perspective. A questionnaire specific to the end-of-life experiences and needs of parents losing a child is needed to evaluate the perceived quality of paediatric end-of-life care. This is an instrument development study applying mixed methods based on recommendations for questionnaire design and validation. The Parental PELICAN Questionnaire was developed in four phases between August 2012 and March 2014: phase 1, item generation; phase 2, validity testing; phase 3, translation; phase 4, pilot testing. Psychometric properties were assessed after administering the Parental PELICAN Questionnaire to a sample of 224 bereaved parents in April 2014. Validity testing covered evidence based on test content, internal structure, and relations to other variables. The Parental PELICAN Questionnaire consists of approximately 90 items in four slightly different versions accounting for the particularities of the four diagnostic groups. The questionnaire's items were structured according to six quality domains described in the literature. Evidence of initial validity and reliability was demonstrated with the involvement of healthcare professionals and bereaved parents. The Parental PELICAN Questionnaire holds promise as a measure of parental experiences and needs and is applicable to a broad range of paediatric specialties and settings. Future validation is needed to evaluate its suitability in different cultures. © 2015 John Wiley & Sons Ltd.

  3. Validation of the Danish language Injustice Experience Questionnaire

    DEFF Research Database (Denmark)

    la Cour, Peter; Schultz, Rikke; Smith, Anne Agerskov

    2017-01-01

    … somatoform symptoms. These patients also completed questionnaires concerning sociodemographics, anxiety and depression, subjective well-being, and overall physical and mental functioning. Our results showed satisfactory interpretability and face validity, and high internal consistency (Cronbach's alpha = .90) …

  4. The benchmark experiment on slab beryllium with D–T neutrons for validation of evaluated nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Nie, Y., E-mail: nieyb@ciae.ac.cn [Science and Technology on Nuclear Data Laboratory, China Institute of Atomic Energy, Beijing 102413 (China); Ren, J.; Ruan, X.; Bao, J. [Science and Technology on Nuclear Data Laboratory, China Institute of Atomic Energy, Beijing 102413 (China); Han, R. [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Zhang, S. [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Inner Mongolia University for the Nationalities, Inner Mongolia, Tongliao 028000 (China); Huang, H.; Li, X. [Science and Technology on Nuclear Data Laboratory, China Institute of Atomic Energy, Beijing 102413 (China); Ding, Y. [Science and Technology on Nuclear Data Laboratory, China Institute of Atomic Energy, Beijing 102413 (China); School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Wu, H.; Liu, P.; Zhou, Z. [Science and Technology on Nuclear Data Laboratory, China Institute of Atomic Energy, Beijing 102413 (China)

    2016-04-15

    Highlights: • Evaluated data for beryllium are validated by a high-precision benchmark experiment. • Leakage neutron spectra from pure beryllium slabs are measured at 61° and 121° using the time-of-flight method. • The experimental results are compared with MCNP-4B calculations using evaluated data from different libraries. - Abstract: Beryllium is the most favored neutron multiplier candidate for solid breeder blankets of future fusion power reactors. However, beryllium nuclear data are presented differently in modern nuclear data evaluations. In order to validate the evaluated nuclear data on beryllium, a benchmark experiment has been performed in the present study at the China Institute of Atomic Energy (CIAE). Neutron leakage spectra from pure beryllium slab samples were measured at 61° and 121° using the time-of-flight method. The experimental results were compared with those calculated by MCNP-4B simulation, using the evaluated data of beryllium from the CENDL-3.1, ENDF/B-VII.1 and JENDL-4.0 libraries. From the comparison between the measured and the calculated spectra, it was found that the calculations based on CENDL-3.1 overestimated the spectra in the energy range from about 3 to 12 MeV at 61°, while at 121° all the libraries led to underestimation below 3 MeV.

  5. The benchmark experiment on slab beryllium with D–T neutrons for validation of evaluated nuclear data

    International Nuclear Information System (INIS)

    Nie, Y.; Ren, J.; Ruan, X.; Bao, J.; Han, R.; Zhang, S.; Huang, H.; Li, X.; Ding, Y.; Wu, H.; Liu, P.; Zhou, Z.

    2016-01-01

    Highlights: • Evaluated data for beryllium are validated by a high-precision benchmark experiment. • Leakage neutron spectra from pure beryllium slabs are measured at 61° and 121° using the time-of-flight method. • The experimental results are compared with MCNP-4B calculations using evaluated data from different libraries. - Abstract: Beryllium is the most favored neutron multiplier candidate for solid breeder blankets of future fusion power reactors. However, beryllium nuclear data are presented differently in modern nuclear data evaluations. In order to validate the evaluated nuclear data on beryllium, a benchmark experiment has been performed in the present study at the China Institute of Atomic Energy (CIAE). Neutron leakage spectra from pure beryllium slab samples were measured at 61° and 121° using the time-of-flight method. The experimental results were compared with those calculated by MCNP-4B simulation, using the evaluated data of beryllium from the CENDL-3.1, ENDF/B-VII.1 and JENDL-4.0 libraries. From the comparison between the measured and the calculated spectra, it was found that the calculations based on CENDL-3.1 overestimated the spectra in the energy range from about 3 to 12 MeV at 61°, while at 121° all the libraries led to underestimation below 3 MeV.

  6. First experience from in-core sensor validation based on correlation and neuro-fuzzy techniques

    International Nuclear Information System (INIS)

    Figedy, S.

    2011-01-01

    In this work, new types of nuclear reactor in-core sensor validation methods are outlined. The first is based on a combination of correlation coefficients and mutual information indices, which reflect the correlation of signals in the linear and nonlinear regions. This method may be supplemented by wavelet-transform-based signal feature extraction, pattern recognition by artificial neural networks, and fuzzy logic based decision making. The second is based on neuro-fuzzy modeling of the residuals between experimental values and their theoretical counterparts obtained from reactor core simulator calculations. The first experience with this approach is described and further improvements to enhance the reliability of the outcome are proposed (Author)
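    The pairing of a linear correlation coefficient with a mutual information index can be sketched for a pair of redundant in-core signals. This is a hedged example on synthetic data; scikit-learn's estimator stands in for whatever mutual information estimator the authors actually used.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(2)

# synthetic redundant in-core signals: s2 depends on s1 nonlinearly
s1 = rng.uniform(-1, 1, 2000)
s2 = s1 ** 2 + 0.05 * rng.standard_normal(2000)  # weak linear, strong nonlinear link

r = np.corrcoef(s1, s2)[0, 1]                          # linear association
mi = mutual_info_regression(s1.reshape(-1, 1), s2)[0]  # nonlinear association

# a faulted sensor would show low values of BOTH indices against its neighbors
print(f"correlation = {r:+.2f}, mutual information = {mi:.2f} nats")
```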

  7. In Situ Experiment and Numerical Model Validation of a Borehole Heat Exchanger in Shallow Hard Crystalline Rock

    Directory of Open Access Journals (Sweden)

    Mateusz Janiszewski

    2018-04-01

    Full Text Available Accurate and fast numerical modelling of the borehole heat exchanger (BHE is required for simulation of long-term thermal energy storage in rocks using boreholes. The goal of this study was to conduct an in situ experiment to validate the proposed numerical modelling approach. In the experiment, hot water was circulated for 21 days through a single U-tube BHE installed in an underground research tunnel located at a shallow depth in crystalline rock. The results of the simulations using the proposed model were validated against the measurements. The numerical model simulated the BHE’s behaviour accurately and compared well with two other modelling approaches from the literature. The model is capable of replicating the complex geometrical arrangement of the BHE and is considered to be more appropriate for simulations of BHE systems with complex geometries. The results of the sensitivity analysis of the proposed model have shown that low thermal conductivity, high density, and high heat capacity of rock are essential for maximising the storage efficiency of a borehole thermal energy storage system. Other characteristics of BHEs, such as a high thermal conductivity of the grout, a large radius of the pipe, and a large distance between the pipes, are also preferred for maximising efficiency.
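    Ground temperature response around a BHE is often benchmarked against the infinite line source solution, ΔT(r,t) = q/(4πλ)·E1(r²/(4αt)). The sketch below uses assumed crystalline-rock properties and is not the paper's model, which replicates the full 3-D U-tube geometry.

```python
import numpy as np
from scipy.special import exp1

def line_source_dT(q, lam, alpha, r, t):
    """Infinite line source temperature rise [K].

    q     : heat injection rate per borehole length [W/m]
    lam   : rock thermal conductivity [W/(m K)]
    alpha : rock thermal diffusivity [m^2/s]
    r     : radial distance from the borehole axis [m]
    t     : time since the start of injection [s]
    """
    return q / (4.0 * np.pi * lam) * exp1(r ** 2 / (4.0 * alpha * t))

# assumed granite-like properties; 21 days matches the experiment's duration
t = 21 * 24 * 3600.0
dT = line_source_dT(q=50.0, lam=3.0, alpha=1.4e-6, r=0.5, t=t)
print(f"temperature rise at r = 0.5 m after 21 days: {dT:.2f} K")
```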

  8. An enhanced fire hazard assessment model and validation experiments for vertical cable trays

    International Nuclear Information System (INIS)

    Li, Lu; Huang, Xianjia; Bi, Kun; Liu, Xiaoshuang

    2016-01-01

    Highlights: • An enhanced model was developed for vertical cable fire hazard assessment in NPPs. • Validation experiments on vertical cable tray fires were conducted. • The capability of the model for cable trays with different cable spacing was tested. - Abstract: The model, referred to as FLASH-CAT (Flame Spread over Horizontal Cable Trays), was developed to estimate the heat release rate of vertical cable tray fires. The focus of this work is to investigate the application of an enhanced model to single vertical cable tray fires with different cable spacing. Experiments on vertical cable tray fires with three typical cable spacings were conducted, and the histories of mass loss rate and flame length were recorded during each fire. From the experimental results, it is found that the space between cable lines intensifies the cable combustion and accelerates the flame spread. The predictions of the enhanced model show good agreement with the experimental data. At the same time, it is shown that the enhanced model is capable of predicting the different behaviors of cable fires with different cable spacing by adjusting the flame spread speed only.
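    A FLASH-CAT-type estimate builds the tray heat release rate from a spreading burning front. The sketch below is a loose, hedged caricature assuming a constant spread speed and a constant heat release rate per unit area, which is a simplification of the actual model; all parameter values are invented.

```python
def tray_hrr(t, v, width, hrr_pua, t_burn, l0=0.1):
    """Heat release rate [kW] of a vertically spreading cable tray fire.

    t       : time since ignition [s]
    v       : upward flame spread speed [m/s]
    width   : burning width of the tray [m]
    hrr_pua : heat release rate per unit area [kW/m^2]
    t_burn  : local burning duration before burnout [s]
    l0      : initially ignited length [m]
    """
    front = l0 + v * t                              # pyrolysis front position
    tail = v * (t - t_burn) if t > t_burn else 0.0  # burnout front position
    return (front - tail) * width * hrr_pua         # burning area x HRR/area

# invented parameters for illustration
for t in (30, 120, 600):
    print(f"t = {t:4d} s  HRR ~ {tray_hrr(t, 2e-3, 0.45, 250.0, 300.0):6.1f} kW")
```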

  9. An enhanced fire hazard assessment model and validation experiments for vertical cable trays

    Energy Technology Data Exchange (ETDEWEB)

    Li, Lu [State Key Laboratory of Fire Science, University of Science and Technology of China, Hefei 230027 (China); Huang, Xianjia, E-mail: huangxianjia@gziit.ac.cn [Joint Laboratory of Fire Safety in Nuclear Power Plants, Institute of Industry Technology Guangzhou & Chinese Academy of Sciences, Guangzhou 511458 (China); Bi, Kun; Liu, Xiaoshuang [China Nuclear Power Design Co., Ltd., Shenzhen 518045 (China)

    2016-05-15

    Highlights: • An enhanced model was developed for vertical cable fire hazard assessment in NPPs. • Validation experiments on vertical cable tray fires were conducted. • The capability of the model for cable trays with different cable spacing was tested. - Abstract: The model, referred to as FLASH-CAT (Flame Spread over Horizontal Cable Trays), was developed to estimate the heat release rate of vertical cable tray fires. The focus of this work is to investigate the application of an enhanced model to single vertical cable tray fires with different cable spacing. Experiments on vertical cable tray fires with three typical cable spacings were conducted, and the histories of mass loss rate and flame length were recorded during each fire. From the experimental results, it is found that the space between cable lines intensifies the cable combustion and accelerates the flame spread. The predictions of the enhanced model show good agreement with the experimental data. At the same time, it is shown that the enhanced model is capable of predicting the different behaviors of cable fires with different cable spacing by adjusting the flame spread speed only.

  10. SCALE Validation Experience Using an Expanded Isotopic Assay Database for Spent Nuclear Fuel

    International Nuclear Information System (INIS)

    Gauld, Ian C.; Radulescu, Georgeta; Ilas, Germina

    2009-01-01

    The availability of measured isotopic assay data to validate computer code predictions of spent fuel compositions applied in burnup-credit criticality calculations is an essential component for bias and uncertainty determination in safety and licensing analyses. In recent years, as many countries move closer to implementing or expanding the use of burnup credit in criticality safety for licensing, there has been growing interest in acquiring additional high-quality assay data. The well-known open sources of assay data are viewed as potentially limiting for validating depletion calculations for burnup credit due to the relatively small number of isotopes measured (primarily actinides with relatively few fission products), sometimes large measurement uncertainties, incomplete documentation, and the limited burnup and enrichment range of the fuel samples. Oak Ridge National Laboratory (ORNL) recently initiated an extensive isotopic validation study that includes most of the public data archived in the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) electronic database, SFCOMPO, and new datasets obtained through participation in commercial experimental programs. To date, ORNL has analyzed approximately 120 different spent fuel samples from pressurized-water reactors that span a wide enrichment and burnup range and represent a broad class of assembly designs. The validation studies, completed using SCALE 5.1, are being used to support a technical basis for expanded implementation of burnup credit for spent fuel storage facilities, and other spent fuel analyses including radiation source term, dose assessment, decay heat, and waste repository safety analyses. This paper summarizes the isotopic assay data selected for this study, presents validation results obtained with SCALE 5.1, and discusses some of the challenges and experience associated with evaluating the results. Preliminary results obtained using SCALE 6 and ENDF

  11. Instrumented anvil-on-rod impact experiments for validating constitutive strength model for simulating transient dynamic deformation response of metals

    International Nuclear Information System (INIS)

    Martin, M.; Shen, T.; Thadhani, N.N.

    2008-01-01

    Instrumented anvil-on-rod impact experiments were performed to assess the applicability of this approach for validating a constitutive strength model for dynamic, transient-state deformation and elastic-plastic wave interactions in vanadium, 21-6-9 stainless steel, titanium, and Ti-6Al-4V. In addition to soft-catching the impacted rod-shaped samples, their transient deformation states were captured by high-speed imaging, and velocity interferometry was used to record the sample back (free) surface velocity and monitor elastic-plastic wave interactions. Simulations using the AUTODYN-2D hydrocode with the Steinberg-Guinan constitutive equation were used to generate simulated free surface velocity traces and final/transient deformation profiles for comparison with experiments. The simulations were observed to under-predict the radial strain for bcc vanadium and fcc steel, but over-predict the radial strain for hcp titanium and Ti-6Al-4V. The correlations illustrate the applicability of the instrumented anvil-on-rod impact test as a method for providing robust model validation based on the entire deformation event, and not just the final deformed state

  12. CFD Validation with a Multi-Block Experiment to Evaluate the Core Bypass Flow in VHTR

    International Nuclear Information System (INIS)

    Yoon, Su Jong; Lee, Jeong Hun; Park, Goon Cherl; Kim, Min Hwan

    2010-01-01

    Core bypass flow in the Very High Temperature Reactor (VHTR) is defined as the ineffective coolant that passes through the bypass gaps between the block columns and the crossflow gaps between the stacked blocks. These flows alter the flow distribution in the core and affect the core thermal margin and the safety of the VHTR. Therefore, the bypass flow should be investigated and quantified. This is not a simple task, however, because the flow path of the VHTR core is very complex. In particular, since the dimensions of the bypass gap and the crossflow gap are of the order of a few millimeters, it is very difficult to measure and to analyze the flow field at those gaps. The Seoul National University (SNU) multi-block experiment was carried out to evaluate the bypass flow distribution and the flow characteristics. The coolant flow rate through the outlet of each block column was measured, but the local flow field could only be measured to a limited extent in the experiment. Instead, CFD analysis was carried out to investigate the local phenomena of the experiment. The commercial CFD code CFX-12 was validated by comparing the simulation results with the experimental data

  13. The role of CFD combustion modeling in hydrogen safety management – III: Validation based on homogeneous hydrogen–air–diluent experiments

    Energy Technology Data Exchange (ETDEWEB)

    Sathiah, Pratap, E-mail: pratap.sathiah78@gmail.com [Shell Global Solutions Ltd., Brabazon House, Concord Business Park, Threapwood Road, Manchester M220RR (United Kingdom); Komen, Ed [Nuclear Research and Consultancy Group – NRG, P.O. Box 25, 1755 ZG Petten (Netherlands); Roekaerts, Dirk [Delft University of Technology, P.O. Box 5, 2600 AA Delft (Netherlands)

    2015-08-15

    Highlights: • A CFD based method proposed in the previous article is used for the simulation of the effect of CO{sub 2}–He dilution on hydrogen deflagration. • A theoretical study is presented to verify whether CO{sub 2}–He diluent can be used as a replacement for H{sub 2}O as diluent. • The CFD model used for the validation work is described. • TFC combustion model results are in good agreement with large-scale homogeneous hydrogen–air–CO{sub 2}–He experiments. - Abstract: Large quantities of hydrogen can be generated and released into the containment during a severe accident in a PWR. The generated hydrogen, when mixed with air, can lead to hydrogen combustion. The dynamic pressure loads resulting from hydrogen combustion can be detrimental to the structural integrity of the reactor safety systems and the reactor containment. Therefore, accurate prediction of these pressure loads is an important safety issue. In our previous article, a CFD based method to determine these pressure loads was presented. This CFD method is based on the application of a turbulent flame speed closure combustion model. The method was validated against three uniform hydrogen–air deflagration experiments with different blockage ratios performed in the ENACCEF facility. It was concluded that the maximum pressures were predicted within 13% accuracy, while the rate of pressure rise dp/dt was predicted within about 30%. The eigenfrequencies of the residual pressure wave phenomena were predicted within a few %. In the present article, we perform additional validation of the CFD based method against three uniform hydrogen–air–CO{sub 2}–He deflagration experiments with three different concentrations of the CO{sub 2}–He diluent. The trends of decrease in the flame velocity, the intermediate peak pressure, the rate of pressure rise dp/dt, and the maximum value of the mean pressure with an increase in the CO{sub 2}–He dilution are captured well in the simulations. From the

  14. Reactivity worth measurements on the CALIBAN reactor: interpretation of integral experiments for the nuclear data validation

    International Nuclear Information System (INIS)

    Richard, B.

    2012-01-01

    Good knowledge of nuclear data, the input parameters for neutron transport calculation codes, is necessary to support the advances of the nuclear industry. The purpose of this work is to bring pertinent information to the nuclear data integral validation process. Reactivity worth measurements have been performed on the Caliban reactor; they concern four materials of interest for the nuclear industry: gold, lutetium, plutonium and uranium 238. Experiments conducted in order to improve the characterization of the core are also described and discussed; the latter are necessary for the proper interpretation of the reactivity worth measurements. The experimental procedures are described with their associated uncertainties, and the measurements are then compared to numerical results. The methods used in the numerical calculations are reported, especially the multigroup cross-section generation for deterministic codes. The modeling of the experiments is presented along with the associated uncertainties. This comparison led to an interpretation concerning the qualification of nuclear data libraries. Discrepancies are reported, discussed, and justify the need for such experiments. (author) [fr]

  15. Continuous validation of ASTEC containment models and regression testing

    International Nuclear Information System (INIS)

    Nowack, Holger; Reinke, Nils; Sonnenkalb, Martin

    2014-01-01

    The focus of the ASTEC (Accident Source Term Evaluation Code) development at GRS is primarily on the containment module CPA (Containment Part of ASTEC), whose modelling is to a large extent based on the GRS containment code COCOSYS (COntainment COde SYStem). Validation is usually understood as the approval of the modelling capabilities by calculations of appropriate experiments performed by external users different from the code developers. During the development process of ASTEC CPA, bugs and unintended side effects may occur, which can change the results of the initially conducted validation. Due to the involvement of a considerable number of developers in the coding of ASTEC modules, validation of the code alone, even if executed repeatedly, is not sufficient. Therefore, a regression testing procedure has been implemented in order to ensure that the initially obtained validation results remain valid in succeeding code versions. Within the regression testing procedure, calculations of experiments and plant sequences are performed with the same input deck but applying two different code versions. For every test case, the up-to-date code version is compared to the preceding one on the basis of physical parameters deemed to be characteristic of the test case under consideration. For post-calculations of experiments, a comparison with experimental data is also carried out. Three validation cases from the regression testing procedure are presented in this paper. The very good post-calculation of the HDR E11.1 experiment shows the high-quality modelling of thermal-hydraulics in ASTEC CPA. Aerosol behaviour is validated against the BMC VANAM M3 experiment, and the results also show very good agreement with experimental data. Finally, iodine behaviour is checked in the validation test case of the THAI IOD-11 experiment. Within this test case, the comparison of the ASTEC versions V2.0r1 and V2.0r2 shows how an error was detected by the regression testing
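
    The comparison step of such a regression-testing procedure can be pictured with the Python sketch below. Test-case names, parameter names, values, and the tolerance are hypothetical, not the GRS implementation.

      # Illustrative sketch of a regression-testing check in the spirit
      # described above: results of two code versions, run on the same
      # input deck, are compared for characteristic parameters of each
      # test case. Names, values, and the tolerance are hypothetical.
      def regression_check(results_old, results_new, rel_tol=0.01):
          """Flag parameters whose relative change exceeds rel_tol."""
          report = {}
          for case, params in results_new.items():
              for name, new_val in params.items():
                  old_val = results_old[case][name]
                  rel_change = abs(new_val - old_val) / abs(old_val)
                  report[(case, name)] = (rel_change, rel_change <= rel_tol)
          return report

      results_v1 = {"HDR_E11.1": {"peak_pressure_bar": 2.31, "peak_temp_C": 142.0}}
      results_v2 = {"HDR_E11.1": {"peak_pressure_bar": 2.33, "peak_temp_C": 141.5}}

      for (case, name), (change, ok) in regression_check(results_v1, results_v2).items():
          print(f"{case}/{name}: {change:.2%} change -> {'OK' if ok else 'INVESTIGATE'}")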

  16. Measures of agreement between computation and experiment: validation metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Oberkampf, William Louis

    2005-08-01

    With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables and sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric and also features that should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
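
    A minimal sketch of a confidence-interval-based metric in this spirit is given below. It assumes repeated experimental measurements at a single setting of the control variable; the critical t value and all data are illustrative, not taken from the paper's example problems.

      # Minimal sketch of a confidence-interval-based validation metric:
      # the model prediction is compared with the mean of repeated
      # experimental measurements, and a Student-t confidence interval on
      # that mean bounds the estimated model error. Data are illustrative.
      import math
      import statistics

      def validation_metric(y_model, y_exp_samples, t_crit=2.776):
          """Estimated error E = y_model - mean(y_exp), with a confidence
          half-width from the experimental scatter. t_crit is the
          two-sided 95% Student-t value for n-1 dof (2.776 for n=5)."""
          n = len(y_exp_samples)
          y_bar = statistics.mean(y_exp_samples)
          s = statistics.stdev(y_exp_samples)
          error = y_model - y_bar
          half_width = t_crit * s / math.sqrt(n)
          return error, half_width

      E, hw = validation_metric(y_model=412.0,
                                y_exp_samples=[405.1, 398.7, 410.3, 402.9, 407.6])
      print(f"estimated model error = {E:.1f} +/- {hw:.1f} (95% CI)")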

  17. Preliminary validation and principal components analysis of the Control of Eating Questionnaire (CoEQ) for the experience of food craving.

    Science.gov (United States)

    Dalton, M; Finlayson, G; Hill, A; Blundell, J

    2015-12-01

    The Control of Eating Questionnaire (CoEQ) comprises 21 items designed to assess the severity and type of food cravings an individual has experienced over the previous 7 days. The CoEQ has been used in clinical trials as a multi-dimensional measure of appetite, craving and mood regulation; however, its underlying component structure has yet to be determined. The current paper has two aims: (1) to examine the psychometric properties and internal consistency of the CoEQ; and (2) to provide a preliminary examination of the underlying components by exploring their construct and predictive validity. Data were pooled from four studies in which a total of 215 adults (80% women; Age=29.7 ± 10.3; BMI=26.5 ± 5.2) had completed the CoEQ alongside measures of psychometric eating behaviour traits, ad libitum food intake, and body composition. A principal components analysis (PCA) with parallel analysis was conducted to examine the underlying structure of the questionnaire. The resulting subscales were tested for internal consistency (Cronbach's α=0.66-0.88). PCA revealed four components that explained 54.5% of the variance. The components were identified as: Craving Control, Positive Mood, Craving for Sweet, and Craving for Savoury. Associations between the underlying CoEQ subscales and measures of body composition and eating behaviour traits confirmed the construct validity of the subscales. The associations between the subscales and snack food selection and intake of palatable snack foods supported the CoEQ's predictive validity. The CoEQ has good psychometric properties with a clear component structure and acceptable internal consistency. This preliminary validation supports the CoEQ as a measure of the experience of food cravings.
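
    The two core computations named here, a principal components analysis of the item correlation matrix and Cronbach's alpha for a resulting subscale, can be sketched in a few lines of Python. The response matrix below is randomly generated for illustration only; it is not the pooled study data, so the printed values are meaningless placeholders.

      # Illustrative sketch (not the authors' code) of PCA on item
      # responses and Cronbach's alpha for a subscale. Random placeholder
      # data stand in for the 215 respondents x 21 items.
      import numpy as np

      rng = np.random.default_rng(0)
      responses = rng.integers(0, 101, size=(215, 21)).astype(float)

      # PCA via eigendecomposition of the item correlation matrix
      corr = np.corrcoef(responses, rowvar=False)
      eigvals, eigvecs = np.linalg.eigh(corr)
      explained = np.sort(eigvals)[::-1] / eigvals.sum()
      print("variance explained by first 4 components:", explained[:4].sum())

      def cronbach_alpha(items):
          """items: (n_respondents, k_items) array for one subscale."""
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      # Alpha of a hypothetical 5-item subscale (random data, so ~0 here)
      print("alpha of items 0-4:", cronbach_alpha(responses[:, :5]))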

  18. Validation Study of Unnotched Charpy and Taylor-Anvil Impact Experiments using Kayenta

    Energy Technology Data Exchange (ETDEWEB)

    Kamojjala, Krishna [Idaho National Lab. (INL), Idaho Falls, ID (United States); Lacy, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Chu, Henry S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Brannon, Rebecca [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    Validation of a single computational model with multiple available strain-to-failure fracture theories is presented through experimental tests and numerical simulations of the standardized unnotched Charpy and Taylor-anvil impact tests, both run using the same material model (Kayenta). Unnotched Charpy tests are performed on rolled homogeneous armor steel. The fracture patterns obtained with Kayenta's various failure options, which include aleatory uncertainty and scale effects, are compared against the experiments. Other quantities of interest include the average value of the absorbed energy and the bend angle of the specimen. Taylor-anvil impact tests are performed on Ti6Al4V titanium alloy. The impact speeds of the specimens were 321 m/s and 393 m/s. The goal of the numerical work is to reproduce the damage patterns observed in the laboratory. For the numerical study, the Johnson-Cook failure model is used as the ductile fracture criterion, and aleatory uncertainty is applied to the rate-dependence parameters to explore its effect on the fracture patterns.
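
    As a rough illustration of applying aleatory scatter to a rate-dependence parameter, the Python sketch below perturbs the D4 constant of the standard Johnson-Cook failure-strain form. All constants are placeholders rather than the Ti6Al4V calibration used in the study.

      # Minimal sketch of the standard Johnson-Cook failure-strain form,
      # with aleatory scatter applied to the rate-dependence parameter D4.
      # All constants are placeholders, not the study's calibration.
      import math
      import random

      def jc_failure_strain(triax, rate_ratio, T_star,
                            D1=-0.09, D2=0.25, D3=-0.5, D4=0.014, D5=3.87):
          """eps_f = [D1 + D2*exp(D3*triax)]*[1 + D4*ln(rate)]*[1 + D5*T*]."""
          return ((D1 + D2 * math.exp(D3 * triax))
                  * (1.0 + D4 * math.log(rate_ratio))
                  * (1.0 + D5 * T_star))

      random.seed(1)
      for _ in range(3):
          D4_sample = random.gauss(0.014, 0.002)  # aleatory perturbation
          eps_f = jc_failure_strain(triax=0.33, rate_ratio=1.0e4,
                                    T_star=0.1, D4=D4_sample)
          print(f"D4 = {D4_sample:.4f} -> failure strain = {eps_f:.4f}")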

  19. Validation philosophy

    International Nuclear Information System (INIS)

    Vornehm, D.

    1994-01-01

    To determine when a set of calculations falls under the umbrella of existing validation documentation, it is necessary to generate a quantitative definition of the range of applicability (our current definition is only qualitative) for two reasons: (1) the current trend in our regulatory environment will soon make it impossible to support the legitimacy of a validation without quantitative guidelines; and (2) in my opinion, the lack of support by DOE for further critical experiment work is directly tied to our inability to draw a quantitative 'line in the sand' beyond which we will not use computer-generated values

  20. EXCALIBUR-at-CALIBAN: a neutron transmission experiment for {sup 238}U(n,n'{sub continuum}γ) nuclear data validation

    Energy Technology Data Exchange (ETDEWEB)

    Bernard, David; Leconte, Pierre; Destouches, Christophe [CEA, DEN, DER, SPRC et SPEX, Cadarache F-13108 SAINT-PAUL-LEZ-DURANCE (France); Casoli, Pierre; Chambru, Laurent; Chanussot, Didier; Chateauvieux, Herve; Gevrey, Gaetan; Guilbert, Frederique; Lereuil, Hugues; Rousseau, Guillaume; Schaub, Muriel [CEA, DAM, Valduc F-21120 IS-SUR-TILLE (France); Heusch, Murielle; Meplan, Olivier; Ramdhane, Mourad [CNRS/IN2P3, 53 rue des Martyrs, F-38026 Grenoble, Cedex (France)

    2015-07-01

    Two recent papers justified a new experimental program intended to provide a basis for the validation of {sup 238}U nuclear data, namely neutron-induced inelastic scattering, and of transport codes at fission neutron energies. The general idea is to perform a neutron transmission experiment through natural uranium material. As shown by Hans Bethe, neutron transmissions measured by dosimetric responses are linked to inelastic cross sections. This paper describes the principle and the results of such an experiment, called EXCALIBUR, performed recently (January and October 2014) at the CALIBAN reactor facility. (authors)
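
    The transmission principle can be illustrated with the short Python sketch below, which inverts a simple exponential attenuation law for an effective cross section above a dosimeter's threshold. The number density, path length, and response ratios are illustrative, not EXCALIBUR results.

      # Simple illustration of the transmission principle: the ratio of a
      # threshold dosimeter's response with and without the uranium block
      # approximates exp(-n*sigma_eff*t), from which an effective
      # (removal) cross section can be inferred. Numbers are illustrative.
      import math

      N_U238 = 4.8e22      # atoms/cm^3, metallic uranium (approximate)
      THICKNESS = 8.5      # cm traversed (illustrative path length)

      def effective_cross_section(transmission, n=N_U238, t=THICKNESS):
          """Invert T = exp(-n * sigma * t); returns sigma in barns."""
          sigma_cm2 = -math.log(transmission) / (n * t)
          return sigma_cm2 * 1.0e24

      for T in (0.30, 0.45, 0.60):  # with-to-without response ratios
          print(f"T = {T:.2f} -> sigma_eff = {effective_cross_section(T):.2f} b")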

  1. Reactivity worth measurements on fast burst reactor Caliban - description and interpretation of integral experiments for the validation of nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Richard, B. [Commissariat a l' Energie Atomique et Aux Energies Alternatives CEA, DAM, VALDUC, F-21120 Is-sur-Tille (France)

    2012-07-01

    Reactivity perturbation experiments using various materials are being performed on the HEU fast core CALIBAN, an experimental device operated by the CEA VALDUC Criticality and Neutron Transport Research Laboratory. These experiments provide valuable information contributing to the validation of nuclear data for the materials used in such measurements. This paper presents the results obtained in a first series of measurements performed with Au-197 samples. Experiments conducted in order to improve the characterization of the core are also described and discussed. The experimental results have been compared to numerical calculations using both deterministic and Monte Carlo neutron transport codes with a simplified model of the reactor. This early work led to a methodology which will be applied to future experiments concerning other materials of interest. (authors)

  2. The COSIMA-experiments, a data base for validation of two-phase flow computer codes

    International Nuclear Information System (INIS)

    Class, G.; Meyder, R.; Stratmanns, E.

    1985-12-01

    The report presents an overview of the large data base generated with COSIMA. The data base is to be used to validate and develop computer codes for two-phase flow. In terms of fuel rod behavior, it was found that during blowdown under realistic conditions only small strains are reached. For clad rupture, extremely high rod internal pressure is necessary. Additional important results concern the behavior of a fuel rod simulator and the effect of thermocouples attached to the cladding outer surface. Post-test calculations performed with the codes RELAP and DRUFAN show good agreement with the experiments. This could be improved further if the phase separation models in the codes were updated. (orig./HP) [de]

  3. Examining the Internal Validity and Statistical Precision of the Comparative Interrupted Time Series Design by Comparison with a Randomized Experiment

    Science.gov (United States)

    St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly

    2014-01-01

    Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…

  4. SAS4A and FPIN2X validation for slow ramp TOP accidents: experiments TS-1 and TS-2

    International Nuclear Information System (INIS)

    Hill, D.J.

    1986-01-01

    The purpose of this paper is to present further results in the series of experimental analyses being performed using SAS4A and FPIN2X in order to provide a systematic validation of these codes. The two experiments discussed here, TS-1 and TS-2, were performed by Westinghouse Hanford/Hanford Engineering Development Laboratory (WHC/HEDL) in the Transient Reactor Test (TREAT) Facility. They were slow-ramp transient-overpower (TOP) experiments on single pins in flowing sodium loops, at a ramp rate equivalent to ∼5 cents/s in the Fast Flux Test Facility (FFTF). The good agreement found here adds significantly to the experimental data base that provides the foundation for SAS4A and FPIN2X validation. It also shows that prefailure internal fuel motion is a phenomenon that has to be correctly accounted for, not only as a potential inherent safety mechanism, but also before any accurate prediction of fuel failure and subsequent fuel motion and the associated reactivity effects can be made. This is also true for metal-fueled pins. This capability is provided by PINACLE, which is being incorporated into SAS4A

  5. EOS Terra Validation Program

    Science.gov (United States)

    Starr, David

    2000-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low-altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra

  6. Validation of the Child HCAHPS survey to measure pediatric inpatient experience of care in Flanders.

    Science.gov (United States)

    Bruyneel, Luk; Coeckelberghs, Ellen; Buyse, Gunnar; Casteels, Kristina; Lommers, Barbara; Vandersmissen, Jo; Van Eldere, Johan; Van Geet, Chris; Vanhaecht, Kris

    2017-07-01

    The recently developed Child HCAHPS provides a standard to measure US hospitals' performance on pediatric inpatient experiences of care. We field-tested Child HCAHPS in Belgium to initiate international comparison. In the development stage, forward/backward translation was conducted, and patients assessed the content validity index as excellent. The draft Flemish Child HCAHPS included 63 items: 38 items for five topics hypothesized to be similar to those proposed in the US (communication with parent, communication with child, attention to safety and comfort, hospital environment, and global rating), 10 screeners, a 14-item demographic and descriptive section, and one open-ended item. A 6-week pilot test was subsequently performed in three pediatric wards (general ward, hematology and oncology ward, infant and toddler ward) at a JCI-accredited university hospital. An overall response rate of 90.99% (303/333) was achieved and was consistent across wards. Confirmatory factor analysis largely confirmed the configuration of the proposed composites. Composite and single-item measures related well to patients' global rating of the hospital. Interpretation of different patient experiences across types of wards merits further investigation. Child HCAHPS provides an opportunity for systematic and cross-national assessment of pediatric inpatient experiences. Sharing and implementing international best practices are the next logical step. What is Known: • Patient experience surveys are increasingly used to reflect on the quality, safety, and centeredness of patient care. • While adult inpatient experience surveys are routinely used across countries around the world, the measurement of pediatric inpatient experiences is a young field of research that is essential to reflect on family-centered care. What is New: • We demonstrate that the US-developed Child HCAHPS provides an opportunity for international benchmarking of pediatric inpatient experiences with care through parents

  7. Validating presupposed versus focused text information.

    Science.gov (United States)

    Singer, Murray; Solar, Kevin G; Spear, Jackie

    2017-04-01

    There is extensive evidence that readers continually validate discourse accuracy and congruence, but that they may also overlook conspicuous text contradictions. Validation may be thwarted when the inaccurate ideas are embedded as sentence presuppositions. In four experiments, we examined readers' validation of presupposed ("given") versus new text information. Throughout, a critical concept, such as a truck versus a bus, was introduced early in a narrative. Later, a character stated or thought something about the truck, which therefore matched or mismatched its antecedent. Furthermore, truck was presented as either given or new information. Mismatch target reading times uniformly exceeded the matching ones by similar magnitudes for given and new concepts. We obtained this outcome using different grammatical constructions and with different antecedent-target distances. In Experiment 4, we examined only given critical ideas, but varied both their matching and the main verb's factivity (e.g., factive know vs. nonfactive think). The Match × Factivity interaction closely resembled that previously observed for new target information (Singer, 2006). Thus, readers can successfully validate given target information. Although contemporary theories tend to emphasize either deficient or successful validation, both types of theory can accommodate the discourse and reader variables that may regulate validation.

  8. Direct-contact condensers for open-cycle OTEC applications: Model validation with fresh water experiments for structured packings

    Energy Technology Data Exchange (ETDEWEB)

    Bharathan, D.; Parsons, B.K.; Althof, J.A.

    1988-10-01

    The objective of the reported work was to develop analytical methods for evaluating the design and performance of advanced high-performance heat exchangers for use in open-cycle ocean thermal energy conversion (OC-OTEC) systems. This report describes the progress made on validating a one-dimensional, steady-state analytical computer model against fresh-water experiments. The condenser model represents the state of the art in direct-contact heat exchange for condensation for OC-OTEC applications. This is expected to provide a basis for optimizing OC-OTEC plant configurations. Using the model, we examined two condenser geometries, a cocurrent and a countercurrent configuration. This report provides detailed validation results for important condenser parameters for cocurrent and countercurrent flows. Based on the comparisons and the uncertainty overlap between the experimental data and predictions, the model is shown to predict critical condenser performance parameters with an uncertainty acceptable for general engineering design and performance evaluations. 33 refs., 69 figs., 38 tabs.

  9. Monitoring Building Deformation with InSAR: Experiments and Validation

    Science.gov (United States)

    Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng

    2016-01-01

    Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR on building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments on two landmark buildings, the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples for comparing InSAR and leveling approaches to building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed considering the diversity in both direction and spatial distribution, and were compared with true leveling values in both Ordinary Least Squares (OLS) regression and measurement-of-error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between InSAR results and the leveling values. At the same time, the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that a millimeter level of accuracy can be achieved by means of the InSAR technique when measuring building deformation. We discuss the differences in accuracy between OLS regression and measurement-of-error analyses, and compare the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated. PMID:27999403

  10. Monitoring Building Deformation with InSAR: Experiments and Validation

    Directory of Open Access Journals (Sweden)

    Kui Yang

    2016-12-01

    Full Text Available Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR on building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments on two landmark buildings, the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples for comparing InSAR and leveling approaches to building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed considering the diversity in both direction and spatial distribution, and were compared with true leveling values in both Ordinary Least Squares (OLS) regression and measurement-of-error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between InSAR results and the leveling values. At the same time, the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that a millimeter level of accuracy can be achieved by means of the InSAR technique when measuring building deformation. We discuss the differences in accuracy between OLS regression and measurement-of-error analyses, and compare the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated.
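
    Both accuracy analyses described in these records, an OLS regression of InSAR-derived values against leveling values and the RMSE of their differences, reduce to a few lines of Python. The sketch below uses placeholder deformation values, not the Tianjin measurements.

      # Illustrative sketch: OLS fit between InSAR and leveling
      # deformation values, plus the RMSE of their differences.
      # Values are placeholders, not the Tianjin data.
      import numpy as np

      leveling = np.array([-3.1, -2.4, -1.8, -0.9, 0.2, 1.1])  # mm, reference
      insar    = np.array([-2.8, -2.6, -1.5, -1.2, 0.4, 0.9])  # mm

      # OLS regression: insar = a * leveling + b
      a, b = np.polyfit(leveling, insar, deg=1)
      r = np.corrcoef(leveling, insar)[0, 1]
      rmse = np.sqrt(np.mean((insar - leveling) ** 2))

      print(f"slope={a:.2f}, intercept={b:.2f} mm, r={r:.3f}, RMSE={rmse:.2f} mm")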

  11. Further Validation of the Coach Identity Prominence Scale

    Science.gov (United States)

    Pope, J. Paige; Hall, Craig R.

    2014-01-01

    This study was designed to examine select psychometric properties of the Coach Identity Prominence Scale (CIPS), including the reliability, factorial validity, convergent validity, discriminant validity, and predictive validity. Coaches (N = 338) who averaged 37 (SD = 12.27) years of age, had a mean of 13 (SD = 9.90) years of coaching experience,…

  12. Human Factors methods concerning integrated validation of nuclear power plant control rooms; Metodutveckling foer integrerad validering (Method development for integrated validation)

    Energy Technology Data Exchange (ETDEWEB)

    Oskarsson, Per-Anders; Johansson, Bjoern J.E.; Gonzalez, Natalia (Swedish Defence Research Agency, Information Systems, Linkoeping (Sweden))

    2010-02-15

    The frame of reference for this work comprised existing recommendations and instructions from the NPP area, experience from the review of the Turbic Validation, and experience from system validations performed for the Swedish Armed Forces, e.g. concerning military control rooms and fighter pilots. These enterprises are characterized by complex systems in extreme environments, often with high risks, where human error can lead to serious consequences. A focus group was conducted with representatives responsible for Human Factors issues from all Swedish NPPs. The questions discussed included, among other things, for whom an integrated validation (IV) is performed and for what purpose, what should be included in an IV, the comparison with baseline measures, the design process, the role of SSM, which methods of measurement should be used, and how the methods are affected by changes in the control room. The report brings several questions concerning the validation process up for discussion. Supplementary methods of measurement for integrated validation are discussed, e.g. dynamic, psychophysiological, and qualitative methods for the identification of problems. Supplementary methods for statistical analysis are presented. The study points out a number of deficiencies in the validation process, e.g. the need for common guidelines for validation and design, criteria for different types of measurements, clarification of the role of SSM, and recommendations on the responsibility of external participants in the validation process. The authors propose 12 measures to address the identified problems

  13. The role of CFD combustion modeling in hydrogen safety management – IV: Validation based on non-homogeneous hydrogen–air experiments

    Energy Technology Data Exchange (ETDEWEB)

    Sathiah, Pratap, E-mail: pratap.sathiah78@gmail.com [Nuclear Research and Consultancy Group (NRG), Westerduinweg 3, 1755 ZG Petten (Netherlands); Komen, Ed, E-mail: komen@nrg.eu [Nuclear Research and Consultancy Group (NRG), Westerduinweg 3, 1755 ZG Petten (Netherlands); Roekaerts, Dirk, E-mail: d.j.e.m.roekaerts@tudelft.nl [Delft University of Technology, Department of Process and Energy, Section Fluid Mechanics, Mekelweg 2, 2628 CD Delft (Netherlands)

    2016-12-15

    Highlights: • The TFC combustion model is further extended to simulate flame propagation in non-homogeneous hydrogen–air mixtures. • TFC combustion model results are in good agreement with large-scale non-homogeneous hydrogen–air experiments. • The model is further extended to account for the effect of non-uniform hydrogen–air–steam mixtures and the presence of PARs on hydrogen deflagration. - Abstract: The control of hydrogen in the containment is an important safety issue in NPPs during a loss of coolant accident, because the dynamic pressure loads from hydrogen combustion can be detrimental to the structural integrity of the reactor safety systems and the reactor containment. In Sathiah et al. (2012b), we presented a computational fluid dynamics based method to assess the consequences of the combustion of uniform hydrogen–air mixtures. In the present article, the extension of this method to, and its validation for, non-uniform hydrogen–air mixtures is described. The method is implemented in the CFD software ANSYS FLUENT using user defined functions. The extended code is validated against non-uniform hydrogen–air experiments in the ENACCEF facility. It is concluded that the maximum pressure and the intermediate peak pressure were predicted within 12% and 18% accuracy, respectively. The eigenfrequencies of the residual pressure wave phenomena were predicted within 4%. Overall, it is concluded that the current model predicts the considered ENACCEF experiments well.

  14. Copper benchmark experiment at the Frascati Neutron Generator for nuclear data validation

    Energy Technology Data Exchange (ETDEWEB)

    Angelone, M., E-mail: maurizio.angelone@enea.it; Flammini, D.; Loreti, S.; Moro, F.; Pillon, M.; Villari, R.

    2016-11-01

    Highlights: • A benchmark experiment was performed using pure copper with 14 MeV neutrons. • The experiment was performed at the Frascati Neutron Generator (FNG). • Activation foils, thermoluminescent dosimeters and scintillators were used to measure reaction rates (RR), nuclear heating and neutron spectra. • The paper presents the RR measurements and the post-analysis using MCNP5 and the JEFF-3.1.1, JEFF-3.2 and FENDL-3.1 libraries. • C/Es are presented, showing the need for a deep revision of the Cu cross sections. - Abstract: A neutronics benchmark experiment on a pure copper block (dimensions 60 × 70 × 60 cm{sup 3}), aimed at testing and validating the recent nuclear data libraries for fusion applications, was performed at the 14-MeV Frascati Neutron Generator (FNG) as part of an F4E specific grant (F4E-FPA-395-01) assigned to the European Consortium on Nuclear Data and Experimental Techniques. The relevant neutronics quantities (e.g., reaction rates, neutron flux spectra, doses, etc.) were measured using different experimental techniques, and the results were compared to the calculated quantities using fusion-relevant nuclear data libraries. This paper focuses on the analyses carried out by ENEA using the activation foil technique. The {sup 197}Au(n,γ){sup 198}Au, {sup 186}W(n,γ){sup 187}W, {sup 115}In(n,n′){sup 115}In, {sup 58}Ni(n,p){sup 58}Co, {sup 27}Al(n,α){sup 24}Na, and {sup 93}Nb(n,2n){sup 92}Nb{sup m} activation reactions were used. The foils were placed at eight different positions along the Cu block and irradiated with 14 MeV neutrons. Activation measurements were performed by means of a High-Purity Germanium (HPGe) detector. Detailed simulation of the experiment was carried out using the MCNP5 Monte Carlo code and the European JEFF-3.1.1 and 3.2 nuclear cross-section data files for neutron transport, and the IRDFF-v1.05 library for the reaction rates in activation foils. The calculated reaction rates (C) were compared to the experimental quantities (E) and
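
    The C/E comparison at the heart of such an analysis can be sketched as follows in Python; the reaction-rate values and uncertainties are placeholders rather than the FNG results, with independent relative uncertainties combined in quadrature.

      # Minimal sketch of a C/E comparison for activation-foil reaction
      # rates: calculated rates (C) divided by measured rates (E), with
      # independent relative uncertainties combined in quadrature.
      # Numbers are illustrative placeholders.
      import math

      foils = {
          # reaction: (C, E, rel_unc_C, rel_unc_E)
          "197Au(n,g)":  (1.02e-24, 1.10e-24, 0.020, 0.030),
          "115In(n,n')": (4.51e-25, 4.40e-25, 0.020, 0.025),
          "27Al(n,a)":   (7.80e-26, 7.95e-26, 0.015, 0.030),
      }

      for reaction, (C, E, uc, ue) in foils.items():
          ce = C / E
          unc = ce * math.sqrt(uc ** 2 + ue ** 2)
          print(f"{reaction}: C/E = {ce:.3f} +/- {unc:.3f}")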

  15. Fission Product Experimental Program: Validation and Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Leclaire, N.; Ivanova, T.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Girault, E. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-02-15

    From 1998 to 2004, a series of critical experiments referred to as the fission product (FP) experimental program was performed at the Commissariat a l'Energie Atomique Valduc research facility. The experiments were designed by the Institut de Radioprotection et de Surete Nucleaire (IRSN) and funded by AREVA NC and IRSN within the French program supporting development of a technical basis for burnup credit validation. The experiments were performed with the following six key fission products encountered in solution either individually or as mixtures: {sup 103}Rh, {sup 133}Cs, {sup nat}Nd, {sup 149}Sm, {sup 152}Sm, and {sup 155}Gd. The program aimed at compensating for the lack of information on critical experiments involving FPs and at establishing a basis for FP credit validation. One hundred forty-five critical experiments were performed, evaluated, and analyzed with the French CRISTAL criticality safety package and the American SCALE 5.1 code system employing different cross-section libraries. The aim of the paper is to show the potential of the experimental data to improve the ability to perform validation of full burnup credit calculations. The paper describes the three phases of the experimental program: the results of the preliminary evaluation, the calculations, and the sensitivity/uncertainty study of the FP experiments used to validate the APOLLO2-MORET 4 route in the CRISTAL criticality package for burnup credit applications. (authors)

  16. Development of a monitoring tool to validate trigger level analysis in the ATLAS experiment

    CERN Document Server

    Hahn, Artur

    2014-01-01

    This report summarizes my thirteen-week summer student project at CERN from June 30th until September 26th of 2014. My task was to contribute to a monitoring tool for the ATLAS experiment that compares jets reconstructed by the trigger to fully offline-reconstructed and saved events, by creating a set of insightful histograms to be used during run 2 of the Large Hadron Collider, planned to start in early 2015. The motivation behind this project is to validate the use of data taken solely from the high-level trigger for analysis purposes. Once the code generating the plots was completed, it was tested on data collected during run 1 up to the year 2012 and on Monte Carlo simulated events with center-of-mass energies √s = 8 TeV and √s = 14 TeV.

  17. WSRC approach to validation of criticality safety computer codes

    International Nuclear Information System (INIS)

    Finch, D.R.; Mincey, J.F.

    1991-01-01

    Recent hardware and operating system changes at the Westinghouse Savannah River Site (WSRC) have necessitated a review of the validation for the JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy is illustrated here with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (k{sub eff}) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should (1) be repeatable; (2) be demonstrated with defined confidence; and (3) identify the range of neutronic conditions (area of applicability) for which the correlations are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems whose principal second isotope is {sup 236}U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed
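
    A toy version of the correlation step described here, computing a bias from benchmark k{sub eff} results and folding their scatter into a margin, is sketched below in Python. The values and the margin recipe are illustrative only, not the WSRC methodology or any regulatory formula.

      # Illustrative sketch: computed k_eff values for critical benchmarks
      # (expected k_eff = 1) yield a bias and a scatter-based margin.
      # Values and the margin recipe are placeholders, not a licensed
      # statistical procedure.
      import statistics

      calculated_keff = [0.9952, 0.9987, 1.0021, 0.9968, 0.9990, 1.0005]

      bias = statistics.mean(calculated_keff) - 1.0
      sigma = statistics.stdev(calculated_keff)

      # A simple (not statistically rigorous) upper subcritical limit:
      # 2-sigma scatter plus a hypothetical administrative margin.
      margin = 2.0 * sigma + 0.02
      usl = 1.0 + bias - margin
      print(f"bias = {bias:+.4f}, sigma = {sigma:.4f}, USL = {usl:.4f}")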

  18. The Space Technology-7 Disturbance Reduction System Precision Control Flight Validation Experiment Control System Design

    Science.gov (United States)

    O'Donnell, James R.; Hsu, Oscar C.; Maghami, Peirman G.; Markley, F. Landis

    2006-01-01

    As originally proposed, the Space Technology-7 Disturbance Reduction System (DRS) project, managed out of the Jet Propulsion Laboratory, was designed to validate technologies required for future missions such as the Laser Interferometer Space Antenna (LISA). The two technologies to be demonstrated by DRS were Gravitational Reference Sensors (GRSs) and Colloidal MicroNewton Thrusters (CMNTs). Control algorithms designed by the Dynamic Control System (DCS) team at the Goddard Space Flight Center would control the spacecraft so that it flew about a freely-floating GRS test mass, keeping it centered within its housing. For programmatic reasons, the GRSs were descoped from DRS. The primary goals of the new mission are to validate the performance of the CMNTs and to demonstrate precise spacecraft position control. DRS will fly as a part of the European Space Agency (ESA) LISA Pathfinder (LPF) spacecraft along with a similar ESA experiment, the LISA Technology Package (LTP). With no GRS, the DCS attitude and drag-free control systems make use of the sensor being developed by ESA as a part of the LTP. The control system is designed to maintain the spacecraft's position with respect to the test mass to within 10 nm/√Hz over the DRS science frequency band of 1 to 30 mHz.

  19. The reactor kinetics code tank: a validation against selected SPERT-1b experiments

    International Nuclear Information System (INIS)

    Ellis, R.J.

    1990-01-01

    The two-dimensional space-time analysis code TANK is being developed for the simulation of transient behaviour in the MAPLE class of research reactors. MAPLE research reactor cores are compact, light-water-cooled and -moderated, with a high degree of forced subcooling. The SPERT-1B(24/32) reactor core had many similarities to MAPLE-X10, and the results of the SPERT transient experiments are well documented. As a validation of TANK, a series of simulations of certain SPERT reactor transients was undertaken. Special features were added to the TANK code to model reactors with plate-type fuel and to allow for the simulation of rapid void production. The results of a series of super-prompt-critical reactivity step-insertion transient simulations are presented. The selected SPERT transients were all initiated from low power, at ambient temperatures, and with negligible coolant flow. The results of the TANK simulations are in good agreement with the trends in the experimental SPERT data

  20. Assessing movement quality in persons with severe mental illness - Reliability and validity of the Body Awareness Scale Movement Quality and Experience.

    Science.gov (United States)

    Hedlund, Lena; Gyllensten, Amanda Lundvik; Waldegren, Tomas; Hansson, Lars

    2016-05-01

    Motor disturbances and disturbed self-recognition are common features that affect mobility in persons with schizophrenia spectrum disorder and bipolar disorder. Physiotherapists in Scandinavia assess and treat movement difficulties in persons with severe mental illness. The Body Awareness Scale Movement Quality and Experience (BAS MQ-E) is a new, shortened version of the commonly used Body Awareness Scale-Health (BAS-H). The purpose of this study was to investigate the inter-rater reliability and the concurrent validity of the BAS MQ-E in persons with severe mental illness. Concurrent validity was examined by investigating the relationships with neurological soft signs, alexithymia, fatigue, anxiety, and mastery. Sixty-two persons with severe mental illness participated in the study. The results showed satisfactory inter-rater reliability (n = 53) and concurrent validity (n = 62) with neurological soft signs, especially cognitively and perceptually based signs. There was also concurrent validity linked to physical fatigue and aspects of alexithymia. The scores of the BAS MQ-E were in general higher for persons with schizophrenia than for persons with other diagnoses within the schizophrenia spectrum disorders and bipolar disorder. The clinical implications are presented in the discussion.

  1. Verification and validation in computational fluid dynamics

    Science.gov (United States)

    Oberkampf, William L.; Trucano, Timothy G.

    2002-04-01

    Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized. The fundamental strategy of validation is to assess how accurately the computational results compare with the experimental data, with quantified error and uncertainty estimates for both. This strategy employs a hierarchical methodology that segregates and simplifies the physical and coupling phenomena involved in the complex engineering system of interest. A hypersonic cruise missile is used as an example of how this hierarchical structure is formulated. The discussion of validation assessment also encompasses a number of other important topics. A set of guidelines is proposed for designing and conducting validation experiments, supported by an explanation of how validation experiments are different
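
    One standard verification practice surveyed in such reviews, estimating the observed order of accuracy of a numerical solution from three systematically refined grids via Richardson extrapolation, can be sketched in a few lines. The solution values and refinement ratio below are placeholders, not an example from the paper.

      # Illustrative sketch of a code-verification check: the observed
      # order of accuracy p is estimated from solutions on three grids
      # related by a constant refinement ratio r, and a Richardson-
      # extrapolated value is formed. Values are placeholders.
      import math

      f_coarse, f_medium, f_fine = 0.97342, 0.98510, 0.98802  # solution functional
      r = 2.0                                                  # refinement ratio

      p = math.log((f_medium - f_coarse) / (f_fine - f_medium)) / math.log(r)
      f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)

      print(f"observed order p = {p:.2f}")
      print(f"Richardson-extrapolated value = {f_exact:.5f}")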

  2. Psychological and interactional characteristics of patients with somatoform disorders: Validation of the Somatic Symptoms Experiences Questionnaire (SSEQ) in a clinical psychosomatic population.

    Science.gov (United States)

    Herzog, Annabel; Voigt, Katharina; Meyer, Björn; Wollburg, Eileen; Weinmann, Nina; Langs, Gernot; Löwe, Bernd

    2015-06-01

    The new DSM-5 Somatic Symptom Disorder (SSD) emphasizes the importance of psychological processes related to somatic symptoms in patients with somatoform disorders. To address this, the Somatic Symptoms Experiences Questionnaire (SSEQ), the first self-report scale that assesses a broad range of psychological and interactional characteristics relevant to patients with a somatoform disorder or SSD, was developed. This prospective study was conducted to validate the SSEQ. The 15-item SSEQ was administered along with a battery of self-report questionnaires to psychosomatic inpatients. Patients were assessed with the Structured Clinical Interview for DSM-IV to confirm a somatoform, depressive, or anxiety disorder. Confirmatory factor analyses, tests of internal consistency and tests of validity were performed. Patients (n=262) with a mean age of 43.4 years, 60.3% women, were included in the analyses. The previously observed four-factor model was replicated and internal consistency was good (Cronbach's α=.90). Patients with a somatoform disorder had significantly higher scores on the SSEQ (t=4.24, p<.001) … quality of life. Sensitivity to change was shown by significantly higher effect sizes of the SSEQ change scores for improved patients than for patients without improvement. The SSEQ appears to be a reliable, valid, and efficient instrument to assess a broad range of psychological and interactional features related to the experience of somatic symptoms. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Status Update on the GPM Ground Validation Iowa Flood Studies (IFloodS) Field Experiment

    Science.gov (United States)

    Petersen, Walt; Krajewski, Witold

    2013-04-01

    The overarching objective of integrated hydrologic ground validation activities supporting the Global Precipitation Measurement Mission (GPM) is to provide better understanding of the strengths and limitations of the satellite products, in the context of hydrologic applications. To this end, the GPM Ground Validation (GV) program is conducting the first of several hydrology-oriented field efforts: the Iowa Flood Studies (IFloodS) experiment. IFloodS will be conducted in the central to northeastern part of Iowa in the Midwestern United States during April-June 2013. Specific science objectives and related goals for the IFloodS experiment can be summarized as follows: 1. Quantify the physical characteristics and space/time variability of rain (rates, DSD, process/"regime") and map to satellite rainfall retrieval uncertainty. 2. Assess satellite rainfall retrieval uncertainties at instantaneous to daily time scales and evaluate propagation/impact of uncertainty in flood prediction. 3. Assess hydrologic predictive skill as a function of space/time scales, basin morphology, and land use/cover. 4. Discern the relative roles of rainfall quantities such as rate and accumulation as compared to other factors (e.g. transport of water in the drainage network) in flood genesis. 5. Refine approaches to the "integrated hydrologic GV" concept based on IFloodS experiences and apply them to future GPM integrated GV field efforts. These objectives will be achieved via the deployment of the NASA NPOL S-band and D3R Ka/Ku-band dual-polarimetric radars, University of Iowa X-band dual-polarimetric radars, a large network of paired rain gauge platforms with attendant soil moisture and temperature probes, a large network of both 2D Video and Parsivel disdrometers, and USDA-ARS gauge and soil-moisture measurements (in collaboration with the NASA SMAP mission). The aforementioned measurements will be used to complement existing operational WSR-88D S-band polarimetric radar measurements

  4. Analysis of the impact of correlated benchmark experiments on the validation of codes for criticality safety analysis

    International Nuclear Information System (INIS)

    Bock, M.; Stuke, M.; Behler, M.

    2013-01-01

    The validation of a code for criticality safety analysis requires the recalculation of benchmark experiments. The selected benchmark experiments are chosen such that they have properties similar to the application case that is to be assessed. A common source of benchmark experiments is the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) compiled by the 'International Criticality Safety Benchmark Evaluation Project' (ICSBEP). In order to take full advantage of the information provided by the individual benchmark descriptions for the application case, the recommended procedure is to perform an uncertainty analysis. The latter is based on the uncertainties of experimental results included in most of the benchmark descriptions, and can be performed by means of the Monte Carlo sampling technique. The consideration of uncertainties is also being introduced in the supplementary sheet of DIN 25478 'Application of computer codes in the assessment of criticality safety'. However, for a correct treatment of uncertainties, taking into account only the individual uncertainties of the benchmark experiments is insufficient. In addition, correlations between benchmark experiments have to be handled correctly. For example, such correlations can arise when different cases of a benchmark experiment share the same components, like fuel pins or fissile solutions. Thus, manufacturing tolerances of these components (e.g. the diameter of the fuel pellets) have to be considered in a consistent manner in all cases of the benchmark experiment. At the 2012 meeting of the Expert Group on 'Uncertainty Analysis for Criticality Safety Assessment' (UACSA) of the OECD/NEA, a benchmark proposal was outlined that aimed at determining the impact of benchmark correlations on the estimation of the computational bias of the neutron multiplication factor (k{sub eff}). The analysis presented here is based on this proposal. (orig.)
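
    The mechanism can be made concrete with a toy Monte Carlo sketch in Python: sampling a shared pellet-diameter tolerance once per trial for two benchmark cases induces a correlation in the sampled k{sub eff} values. The linear response model and its coefficients are hypothetical stand-ins for a transport calculation.

      # Toy sketch of benchmark correlation from a shared manufacturing
      # tolerance: the pellet diameter is sampled once per trial for both
      # cases, while case-specific tolerances are sampled independently.
      # The linear k_eff response is a hypothetical stand-in for a
      # transport calculation.
      import numpy as np

      rng = np.random.default_rng(42)
      n_trials = 10_000

      # Shared tolerance: pellet diameter (cm), same pins in both cases
      d = rng.normal(0.820, 0.002, n_trials)
      # Case-specific tolerances, sampled independently
      noise_a = rng.normal(0.0, 1.0e-4, n_trials)
      noise_b = rng.normal(0.0, 1.0e-4, n_trials)

      # Hypothetical sensitivity of k_eff to pellet diameter
      keff_a = 1.0000 + 0.05 * (d - 0.820) + noise_a
      keff_b = 0.9980 + 0.05 * (d - 0.820) + noise_b

      print("correlation(k_a, k_b) =", round(np.corrcoef(keff_a, keff_b)[0, 1], 3))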

  5. Content validity and its estimation

    Directory of Open Access Journals (Sweden)

    Yaghmale F

    2003-04-01

    Full Text Available Background: Measuring the content validity of instruments is important. This type of validity can help to ensure construct validity and give confidence to readers and researchers about an instrument. Content validity refers to the degree to which the instrument covers the content that it is supposed to measure. For content validity, two judgments are necessary: the measurable extent of each item for defining the traits, and whether the set of items represents all aspects of the traits. Purpose: To develop a content-valid scale for assessing experience with computer usage. Methods: First, a review of 2 volumes of the International Journal of Nursing Studies was conducted; only 1 article out of 13 documented content validity, and it did so by a 4-point content validity index (CVI) and the judgment of 3 experts. Then a scale with 38 items was developed. The experts were asked to rate each item for relevance, clarity, simplicity and ambiguity on the four-point scale. The Content Validity Index (CVI) for each item was determined. Result: Of the 38 items, those with a CVI over 0.75 were retained and the rest were discarded, resulting in a 25-item scale. Conclusion: Although documenting the content validity of an instrument may seem expensive in terms of time and human resources, its importance warrants greater attention when a valid assessment instrument is to be developed. Keywords: Content Validity, Measuring Content Validity
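
    The per-item CVI computation described here is straightforward to sketch in Python: the proportion of experts rating an item 3 or 4 on the four-point scale, with items retained when the CVI exceeds 0.75. The ratings below are hypothetical.

      # Illustrative per-item content validity index (CVI): the fraction
      # of experts rating the item 3 or 4 on a 1-4 relevance scale; items
      # with CVI <= 0.75 are discarded. Ratings are hypothetical.
      def item_cvi(ratings, threshold=3):
          """ratings: one rating per expert, each on a 1-4 scale."""
          return sum(r >= threshold for r in ratings) / len(ratings)

      expert_ratings = {            # item -> ratings from 3 experts
          "item_01": [4, 4, 3],
          "item_02": [2, 3, 2],
          "item_03": [4, 3, 4],
      }

      retained = {item: round(item_cvi(r), 2)
                  for item, r in expert_ratings.items() if item_cvi(r) > 0.75}
      print("retained items:", retained)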

  6. Validation of the U-238 inelastic scattering neutron cross section through the EXCALIBUR dedicated experiment

    Directory of Open Access Journals (Sweden)

    Leconte Pierre

    2017-01-01

    Full Text Available EXCALIBUR is an integral transmission experiment based on the fast neutron source produced by the bare, highly enriched fast burst reactor CALIBAN, located at CEA/DAM Valduc (France). Two experimental campaigns have been performed, one using a sphere of 17 cm diameter and one using two cylinders of 17 cm diameter and 9 cm height, both made of metallic uranium 238. A set of 15 different dosimeters with specific threshold energies was employed to provide information on the neutron flux attenuation as a function of incident energy. Measurement uncertainties are typically in the range of 0.5–3% (1σ). The analysis of these experiments is performed with the TRIPOLI4 continuous-energy Monte Carlo code. A calculation benchmark with validated simplifications is defined in order to keep the statistical uncertainty under 2%. Various 238U evaluations have been tested: JEFF-3.1.1, ENDF/B-VII.1 and the IB36 evaluation from IAEA. A sensitivity analysis is presented to identify the contribution of each reaction cross section to the integral transmission rate. This feedback may be of interest for the international effort on 238U, through the CIELO project.

  7. Validation of the U-238 inelastic scattering neutron cross section through the EXCALIBUR dedicated experiment

    Science.gov (United States)

    Leconte, Pierre; Bernard, David

    2017-09-01

    EXCALIBUR is an integral transmission experiment based on the fast neutron source produced by the bare highly enriched fast burst reactor CALIBAN, located in CEA/DAM Valduc (France). Two experimental campaigns have been performed, one using a sphere of 17 cm diameter and one using two cylinders of 17 cm diameter and 9 cm height, both made of metallic uranium-238. A set of 15 different dosimeters with specific threshold energies has been employed to provide information on the neutron flux attenuation as a function of incident energy. Measurement uncertainties are typically in the range of 0.5-3% (1σ). The analysis of these experiments is performed with the TRIPOLI4 continuous-energy Monte Carlo code. A calculation benchmark with validated simplifications is defined in order to bring the statistical convergence below 2%. Various 238U evaluations have been tested: JEFF-3.1.1, ENDF/B-VII.1 and the IB36 evaluation from the IAEA. A sensitivity analysis is presented to identify the contribution of each reaction cross section to the integral transmission rate. This feedback may be of interest for the international effort on 238U, through the CIELO project.

  8. Experiment designs offered for discussion preliminary to an LLNL field scale validation experiment in the Yucca Mountain Exploratory Shaft Facility

    International Nuclear Information System (INIS)

    Lowry, B.; Keller, C.

    1988-01-01

    It has been proposed ("Progress Report on Experiment Rationale for Validation of LLNL Models of Ground Water Behavior Near Nuclear Waste Canisters," Keller and Lowry, Dec. 7, 1988) that a heat-generating spent fuel canister emplaced in unsaturated tuff, in a ventilated hole, will cause a net flux of water into the borehole during the heating cycle of the spent fuel. Accompanying this mass flux will be the formation of mineral deposits near the borehole wall as the water evaporates and leaves behind its dissolved solids. The net effect of this process upon the containment of radioactive wastes is a function of (1) where and how much solid material is deposited in the tuff matrix and cracks, and (2) the resultant effect on the medium flow characteristics. Experimental concepts described in this report are designed to quantify the magnitude and relative location of solid mineral deposit formation due to a heated and vented borehole environment. The simplest tests address matrix effects only; after the process is understood in the homogeneous matrix, fracture effects would be investigated. Three experiment concepts have been proposed. Each has unique advantages and allows investigation of specific aspects of the precipitate formation process. All could be done in a reasonable time (less than a year) and none of them is extremely expensive (the most expensive is probably the structurally loaded block test). The calculational ability exists to analyze the "real" situation and each of the experiment designs, and to produce a credible series of tests. None of the designs requires the acquisition of material property data beyond current capabilities. The tests could be extended, if our understanding is consistent with the data produced, to analyze fracture effects. 7 figs

  9. Validation: an overview of definitions

    International Nuclear Information System (INIS)

    Pescatore, C.

    1995-01-01

    The term validation is featured prominently in the literature on radioactive high-level waste disposal and is generally understood to be related to model testing using experiments. In a first class of definitions, validation is linked to the goal of predicting the physical world as faithfully as possible; this goal is unattainable, however, and unsuitable for setting goals for safety analyses. In a second class, validation is associated with split-sampling or blind-test predictions. In a third class of definitions, validation focuses on the quality of the decision-making process. Most prominent in the present review is the observed lack of use of the term validation in the field of low-level radioactive waste disposal. The continued informal use of the term validation in the field of high-level waste disposal can become a cause of misperceptions and endless speculations. The paper proposes either abandoning the use of this term or agreeing on a definition that would be common to all. (J.S.). 29 refs

  10. Validation and comparison of dispersion models of RTARC DSS

    International Nuclear Information System (INIS)

    Duran, J.; Pospisil, M.

    2004-01-01

    RTARC DSS (Real Time Accident Release Consequences - Decision Support System) is a computer code developed at VUJE Trnava, Inc. (Stubna, M. et al., 1993). The code calculations include atmospheric transport and diffusion, dose assessment, evaluation and display of the affected zones, concentration and dose rate time dependence in selected sites, and evaluation of the early health effects. The simulation of protective measures (sheltering, iodine administration) is included. The aim of this paper is to present the process of validation of the RTARC dispersion models. RTARC includes models for calculating releases over very short (Method Monte Carlo - MEMOC), short (Gaussian straight-line model) and long distances (Puff Trajectory Model - PTM). Validation of the code RTARC was performed using the results of the comparisons and experiments summarized in Table 1 (experiment or comparison - distance - model):
    - Wind tunnel experiments (Universitaet der Bundeswehr, Muenchen) - area of the NPP - Method Monte Carlo
    - INEL (Idaho National Engineering Laboratory) multi-tracer atmospheric experiment - short/medium distances - Gaussian model and PTM
    - Model Validation Kit - short distances - Gaussian model
    - STEP II.b 'Realistic Case Studies' - long distances - PTM
    - ENSEMBLE comparison - long distances - PTM (orig.)

  11. Validation of scaffold design optimization in bone tissue engineering: finite element modeling versus designed experiments.

    Science.gov (United States)

    Uth, Nicholas; Mueller, Jens; Smucker, Byran; Yousefi, Azizeh-Mitra

    2017-02-21

    This study reports the development of biological/synthetic scaffolds for bone tissue engineering (TE) via 3D bioplotting. These scaffolds were composed of poly(L-lactic-co-glycolic acid) (PLGA), type I collagen, and nano-hydroxyapatite (nHA) in an attempt to mimic the extracellular matrix of bone. The solvent used for processing the scaffolds was 1,1,1,3,3,3-hexafluoro-2-propanol. The produced scaffolds were characterized by scanning electron microscopy, microcomputed tomography, thermogravimetric analysis, and unconfined compression testing. This study also sought to validate the use of finite-element optimization in COMSOL Multiphysics for scaffold design. Scaffold topology was simplified to three factors: nHA content, strand diameter, and strand spacing. These factors affect the ability of the scaffold to bear mechanical loads and how porous the structure can be. Twenty-four scaffolds were constructed according to an I-optimal, split-plot designed experiment (DE) in order to generate experimental models of the factor-response relationships. Within the design region, the DE and COMSOL models agreed in their recommended optimal nHA content (30%) and strand diameter (460 μm). However, the two methods disagreed by more than 30% in strand spacing (908 μm for DE; 601 μm for COMSOL). Seven scaffolds were 3D-bioplotted to validate the predictions of the DE and COMSOL models (4.5-9.9 MPa measured moduli). The predictions for these scaffolds showed relative agreement for scaffold porosity (mean absolute percentage error of 4% for DE and 13% for COMSOL), but were substantially poorer for scaffold modulus (51% for DE; 21% for COMSOL), partly due to some simplifying assumptions made by the models. Expanding the design region in future experiments (e.g., higher nHA content and strand diameter), developing an efficient solvent evaporation method, and exerting greater control over layer overlap could allow developing PLGA-nHA-collagen scaffolds to meet the mechanical requirements for
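    The mean absolute percentage error quoted above is a standard aggregate for this kind of model-versus-measurement comparison. A minimal sketch of the computation, using hypothetical moduli rather than the study's data:

```python
def mape(predicted, measured):
    """Mean absolute percentage error between model predictions
    and measured values, in percent."""
    return 100.0 / len(measured) * sum(
        abs(p - m) / abs(m) for p, m in zip(predicted, measured))

# Hypothetical measured vs. model-predicted moduli (MPa) for a few
# validation scaffolds; these numbers are illustrative only.
measured = [4.5, 6.2, 7.8, 9.9]
de_pred  = [7.1, 9.0, 11.5, 15.2]
print(f"DE model MAPE: {mape(de_pred, measured):.0f}%")
```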

  12. Measuring experience of hospitality : scale development and validation

    NARCIS (Netherlands)

    Pijls-Hoekstra, Ruth; Groen, Brenda H.; Galetzka, Mirjam; Pruyn, Adriaan T.H.

    This paper describes the development of the Experience of Hospitality Scale (EH-Scale) for assessing hospitality in service environments from a guest point of view. In contrast to other scales, which focus specifically on staff behaviour, the present scale focuses on the experience of hospitality

  13. Microgravity Flammability Experiments for Spacecraft Fire Safety

    DEFF Research Database (Denmark)

    Legros, Guillaume; Minster, Olivier; Tóth, Balazs

    2012-01-01

    As fire behaviour in manned spacecraft still remains poorly understood, an international topical team has been created to design a validation experiment of unprecedentedly large scale for a microgravity flammability experiment. While the validation experiment is being designed for a re-sup...

  14. Validation of a 16-Item Short Form of the Czech Version of the Experiences in Close Relationships Revised Questionnaire in a Representative Sample

    Czech Academy of Sciences Publication Activity Database

    Kaščáková, N.; Husárová, D.; Hašto, J.; Kolarčik, P.; Poláčková Šolcová, Iva; Madarasová Gecková, A.; Tavel, P.

    2016-01-01

    Vol. 119, No. 3 (2016), pp. 804-825 ISSN 0033-2941 Institutional support: RVO:68081740 Keywords: Short form of the ECR-R * Experiences in Close Relationships Revised Questionnaire * validation * attachment anxiety * attachment avoidance * attachment styles * representative sample Subject RIV: AN - Psychology Impact factor: 0.629, year: 2016

  15. Measuring the experience of hospitality : Scale development and validation

    NARCIS (Netherlands)

    Pijls-Hoekstra, Ruth; Groen, Brenda H.; Galetzka, Mirjam; Pruyn, Adriaan T.H.

    2017-01-01

    This paper identifies what customers experience as hospitality and subsequently presents a novel and compact assessment scale for measuring customers’ experience of hospitality at any kind of service organization. The Experience of Hospitality Scale (EH-Scale) takes a broader perspective compared to

  16. Prospective Validation of the Decalogue, a Set of Doctor-Patient Communication Recommendations to Improve Patient Illness Experience and Mood States within a Hospital Cardiologic Ambulatory Setting

    Directory of Open Access Journals (Sweden)

    Piercarlo Ballo

    2017-01-01

    Full Text Available Strategies to improve doctor-patient communication may have a beneficial impact on a patient's illness experience and mood, with potential favorable clinical effects. We prospectively tested the psychometric and clinical validity of the Decalogue, a tool utilizing 10 communication recommendations for patients and physicians. The Decalogue was administered to 100 consecutive patients referred for a cardiologic consultation, whereas 49 patients served as controls. The POMS-2 questionnaire was used to measure the total mood disturbance at the end of the consultation. Structural equation modeling showed high internal consistency (Cronbach alpha 0.93), good test-retest reproducibility, and high validity of the psychometric construct (all > 0.80), suggesting a positive effect on patients' illness experience. The total mood disturbance was lower in the patients exposed to the Decalogue as compared to the controls (1.4±12.1 versus 14.8±27.6, p=0.0010). In an additional questionnaire, patients in the Decalogue group showed a trend towards a better understanding of their state of health (p=0.07). In a cardiologic ambulatory setting, the Decalogue shows good validity and reliability as a tool to improve patients' illness experience and could have a favorable impact on mood states. These effects might potentially improve patient engagement in care and adherence to therapy, as well as clinical outcome.

  17. Validation of VHTRC calculation benchmark of critical experiment using the MCB code

    Directory of Open Access Journals (Sweden)

    Stanisz Przemysław

    2016-01-01

    Full Text Available The calculation benchmark problem Very High Temperature Reactor Critical (VHTRC), a pin-in-block type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest version of the Nuclear Data Library based on the ENDF format. The benchmark was executed on the basis of the VHTRC benchmark available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for verifying the discrepancies in keff values between various libraries and experimental values, which makes it possible to improve the accuracy of the neutron transport calculations that may help in designing high-performance commercial VHTRs. Almost all safety parameters depend on the accuracy of the neutron transport calculation results, which, in turn, depend on the accuracy of the nuclear data libraries. Thus, evaluation of the libraries' applicability to VHTR modelling is one of the important subjects. We compared the numerical experiment results with experimental measurements using two versions of the available nuclear data (ENDF-B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations have been performed with the MCB code, which allows obtaining a very precise representation of the complex VHTR geometry, including the double heterogeneity of a fuel element. In this paper, together with the impact of nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The discrepancies in keff have been successfully observed and show good agreement with each other and with the experimental data within the 1σ range of the experimental uncertainty. Because some propagated discrepancies were observed, we proposed appropriate corrections in experimental constants which can improve the reactivity coefficient dependency. The obtained results confirm the accuracy of the new Nuclear Data Libraries.
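    Library-to-experiment comparisons of this kind are commonly expressed as C/E (calculated-over-experimental) ratios checked against the 1σ experimental uncertainty. A minimal sketch with invented keff values, not the benchmark's actual results:

```python
# Hypothetical calculated keff per library vs. one experimental value
# with its 1-sigma uncertainty (all numbers illustrative).
k_exp, sigma_exp = 1.0000, 0.0024

results = {"ENDF/B-VII.1": 1.0012, "JEFF-3.2": 0.9991}

for lib, k_calc in results.items():
    ce = k_calc / k_exp                        # C/E ratio
    within = abs(k_calc - k_exp) <= sigma_exp  # agreement at 1 sigma
    print(f"{lib}: C/E = {ce:.4f}, within 1 sigma: {within}")
```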

  18. RCCS Experiments and Validation for High Temperature Gas-Cooled Reactor

    International Nuclear Information System (INIS)

    Chang Oh; Cliff Davis; Goon C. Park

    2007-01-01

    A reactor cavity cooling system (RCCS) consisting of an air-cooled helical coil unit immersed in a water pool was proposed to overcome the disadvantages of the weak cooling ability of air-cooled RCCS designs and the complex structure of water-cooled RCCS designs for the high temperature gas-cooled reactor (HTGR). An experimental apparatus was constructed to investigate the various heat transfer phenomena in the water pool type RCCS, such as the natural convection of air inside the cavity, radiation in the cavity, the natural convection of water in the water pool and the forced convection of air in the cooling pipe. The RCCS experimental results were compared with published correlations. The CFX code was validated using data from the air-cooled portion of the RCCS. The RELAP5 code was validated using measured temperatures from the reactor vessel and cavity walls

  19. Optimization of the Severe Accident Management Strategy for Domestic Plants and Validation Experiments

    International Nuclear Information System (INIS)

    Kim, S. B.; Kim, H. D.; Koo, K. M.; Park, R. J.; Hong, S. H.; Cho, Y. R.; Kim, J. T.; Ha, K. S.; Kang, K. H.

    2007-04-01

    nuclear power plants, a technical basis report and computational aid tools were developed in parallel with the experimental and analytical works for the resolution of the uncertain safety issues. ELIAS experiments were carried out to quantify the boiling heat removal rate at the upper surface of a metallic layer for precise evaluation of the effect of a late in-vessel coolant injection. T-HERMES experiments were performed to examine the two-phase natural circulation phenomena through the gap between the reactor vessel and the insulator in the APR1400. Detailed analyses of hydrogen control in the APR1400 containment were performed, focusing on the effect of spray system actuation on hydrogen burning and on the evaluation of hydrogen behavior in the IRWST. To develop the technical basis report for severe accident management, analyses using the SCDAP/RELAP5 code were performed for the accident sequences of the OPR1000. Based on the experimental and analytical results obtained in this study, computational aids for the evaluation of hydrogen flammability in the containment, criteria for in-vessel corium cooling, and criteria for external reactor vessel cooling were developed. An ASSA code was developed to validate the signals from the instrumentation during severe accidents and to process abnormal signals. Since ASSA can process signals from the direct input of the nuclear power plant during a severe accident, it can serve as a platform for the computational aids. In this study, ASSA was linked with the computational aids for hydrogen flammability

  20. Optimization of the Severe Accident Management Strategy for Domestic Plants and Validation Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. B.; Kim, H. D.; Koo, K. M.; Park, R. J.; Hong, S. H.; Cho, Y. R.; Kim, J. T.; Ha, K. S.; Kang, K. H

    2007-04-15

    nuclear power plants, a technical basis report and computational aid tools were developed in parallel with the experimental and analytical works for the resolution of the uncertain safety issues. ELIAS experiments were carried out to quantify the boiling heat removal rate at the upper surface of a metallic layer for precise evaluation of the effect of a late in-vessel coolant injection. T-HERMES experiments were performed to examine the two-phase natural circulation phenomena through the gap between the reactor vessel and the insulator in the APR1400. Detailed analyses of hydrogen control in the APR1400 containment were performed, focusing on the effect of spray system actuation on hydrogen burning and on the evaluation of hydrogen behavior in the IRWST. To develop the technical basis report for severe accident management, analyses using the SCDAP/RELAP5 code were performed for the accident sequences of the OPR1000. Based on the experimental and analytical results obtained in this study, computational aids for the evaluation of hydrogen flammability in the containment, criteria for in-vessel corium cooling, and criteria for external reactor vessel cooling were developed. An ASSA code was developed to validate the signals from the instrumentation during severe accidents and to process abnormal signals. Since ASSA can process signals from the direct input of the nuclear power plant during a severe accident, it can serve as a platform for the computational aids. In this study, ASSA was linked with the computational aids for hydrogen flammability.

  1. Literature research concerning alternative methods for validation of criticality calculation systems

    International Nuclear Information System (INIS)

    Behler, Matthias

    2016-05-01

    Besides radiochemical analysis of irradiated fuel and critical experiments, which have become well-established bases for the validation of depletion codes and criticality codes respectively, the results of oscillation experiments or the operating conditions of power reactors and research reactors can also provide useful information for the validation of the above-mentioned codes. Based on a literature review, the potential of oscillation experiment measurements for the validation of criticality codes is estimated. It is found that the reactivity measurements for actinides and fission products within the CERES program on the reactors DIMPLE (Winfrith, UK) and MINERVE (Cadarache, France) can provide a valuable addition to the commonly used critical experiments for criticality code validation. However, while approaches exist, there is as yet no generally satisfactory solution for integrating the reactivity measurements into a quantitative bias determination for the neutron multiplication factor of typical application cases, including irradiated spent fuel outside reactor cores, calculated using common criticality codes.

  2. CosmoQuest:Using Data Validation for More Than Just Data Validation

    Science.gov (United States)

    Lehan, C.; Gay, P.

    2016-12-01

    It is often taken for granted that different scientists completing the same task (e.g. mapping geologic features) will get the same results, and data validation is often skipped or under-utilized due to time and funding constraints. Robbins et al. (2014), however, demonstrated that this is a needed step, as large variation can exist even among collaborating team members completing straightforward tasks like marking craters. Data validation should be much more than a simple post-project verification of results. The CosmoQuest virtual research facility employs regular data validation for a variety of benefits, including real-time user feedback, real-time tracking to observe user activity while it is happening, and using pre-solved data to analyze users' progress and to help them retain skills. Some creativity in this area can drastically improve project results. We discuss methods of validating data in citizen science projects and outline the variety of uses for validation, which, when used properly, improves the scientific output of the project and the user experience for the citizens doing the work. More than just a tool for scientists, validation can assist users in both learning and retaining important information and skills, improving the quality and quantity of data gathered. Real-time analysis of user data can give key information on the effectiveness of the project that a broad glance would miss, and properly presenting that analysis is vital. Training users to validate their own data, or the data of others, can significantly improve the accuracy of misinformed or novice users.

  3. Validation of the new filters configuration for the RPC gas systems at LHC experiments

    CERN Document Server

    Mandelli, Beatrice; Guida, Roberto; Hahn, Ferdinand; Haider, Stefan

    2012-01-01

    Resistive Plate Chambers (RPCs) are widely employed as muon trigger systems at the Large Hadron Collider (LHC) experiments. Their large detector volume and the use of a relatively expensive gas mixture make a closed-loop gas circulation unavoidable. The return gas of RPCs operated in conditions similar to the experimental background foreseen at the LHC contains a large amount of impurities potentially dangerous for long-term operation. Several gas-cleaning agents, characterized during the past years, are currently in use. New tests allowed understanding of the properties and performance of a large number of purifiers. On that basis, an optimal combination of different filters consisting of Molecular Sieve (MS) 5Å and 4Å, and a Cu catalyst R11 has been chosen and validated by irradiating a set of RPCs at the CERN Gamma Irradiation Facility (GIF) for several years. A very important feature of this new configuration is the increase of the cycle duration for each purifier, which results in better system stabilit...

  4. Validation of models with multivariate output

    International Nuclear Information System (INIS)

    Rebba, Ramesh; Mahadevan, Sankaran

    2006-01-01

    This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities. Multiple univariate comparisons may give conflicting inferences. Therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated in the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading
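    One common aggregate metric for comparing correlated multivariate predictions and measurements is the squared Mahalanobis distance, which under normality and a valid model follows a chi-square distribution. A brief sketch with illustrative numbers and an assumed combined covariance (not the paper's data or notation), assuming SciPy is available:

```python
import numpy as np
from scipy import stats

# y_pred: model prediction of m correlated response quantities
# y_obs:  corresponding measurements; cov: combined covariance matrix
y_pred = np.array([1.2, 3.4, 0.8])
y_obs  = np.array([1.0, 3.9, 0.7])
cov    = np.diag([0.04, 0.25, 0.01])  # assumed diagonal for simplicity

d = y_obs - y_pred
m2 = d @ np.linalg.inv(cov) @ d       # squared Mahalanobis distance

# Under the null hypothesis (model valid), m2 ~ chi-square with m dof.
p_value = 1.0 - stats.chi2.cdf(m2, df=len(d))
print(f"Mahalanobis^2 = {m2:.2f}, p = {p_value:.3f}")
```

    A single test of this kind avoids the conflicting inferences that can arise from multiple univariate comparisons on correlated outputs.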

  5. Site characterization and validation - Inflow to the validation drift

    International Nuclear Information System (INIS)

    Harding, W.G.C.; Black, J.H.

    1992-01-01

    Hydrogeological experiments have had an essential role in the characterization of the drift site on the Stripa project. This report focuses on the methods employed and the results obtained from inflow experiments performed on the excavated drift in stage 5 of the SCV programme. Inflows were collected in sumps on the floor and in plastic sheeting on the upper walls and ceiling, and measured by means of the differential humidity of ventilated air at the bulkhead. Detailed evaporation experiments were also undertaken on uncovered areas of the excavated drift. The inflow distribution was determined on the basis of a system of roughly equal-sized grid rectangles. The results have highlighted the overriding importance of fractures in the supply of water to the drift site. The validation drift experiment has revealed that in excess of 99% of the inflow comes from a 5 m section corresponding to the 'H' zone, and that as much as 57% was observed coming from a single grid square (267). There was considerable heterogeneity even within the 'H' zone, with 38% of such sample areas yielding no flow at all. Model predictions in stage 4 underestimated the very substantial declines in inflow observed in the validation drift when compared to the SDE; this was especially so in the 'good' rock areas. Increased drawdowns in the drift have generated less flow and reduced head responses in nearby boreholes by a similar proportion. This behaviour has been the focus of considerable study in the latter part of the SCV project, and a number of potential processes have been proposed. These include 'transience', stress redistribution resulting from the creation of the drift, chemical precipitation, blast-induced dynamic unloading and related gas intrusion, and degassing. (au)

  6. Development and preliminary validation of a screen for ...

    African Journals Online (AJOL)

    Development and preliminary validation of a screen for interpersonal childhood trauma experiences among school-going youth in Durban, South Africa. ... validity in the sense that all scales were significantly correlated with scores on clinical measures of post-traumatic stress disorder (PTSD) and/or complex PTSD.

  7. Patient experience and satisfaction with inpatient service: development of short form survey instrument measuring the core aspect of inpatient experience.

    Directory of Open Access Journals (Sweden)

    Eliza L Y Wong

    Full Text Available Patient experience reflects the quality of care from the patients' perspective; therefore, patients' experiences are important data in the evaluation of the quality of health services. The development of an abbreviated, reliable and valid instrument for measuring inpatients' experience would reflect the key aspects of inpatient care from the patients' perspective, as well as facilitate quality improvement by cultivating patient engagement and allow the trends in patient satisfaction and experience to be measured regularly. The study developed a short-form inpatient instrument and tested its ability to capture a core set of inpatients' experiences. The Hong Kong Inpatient Experience Questionnaire (HKIEQ) was established in 2010; it is an adaptation of the General Inpatient Questionnaire of the Care Quality Commission created by the Picker Institute in the United Kingdom. This study used a consensus conference and a cross-sectional validation survey to create and validate a short form of the Hong Kong Inpatient Experience Questionnaire (SF-HKIEQ). The short form, the SF-HKIEQ, consisted of 18 items derived from the HKIEQ. The 18 items mainly covered relational aspects of care under four dimensions of the patient's journey: hospital staff, patient care and treatment, information on leaving the hospital, and overall impression. The SF-HKIEQ had a high degree of face validity, construct validity and internal reliability. The validated SF-HKIEQ reflects the relevant core aspects of inpatients' experience in a hospital setting. It provides a quick reference tool for quality improvement purposes and a platform that allows both healthcare staff and patients to monitor the quality of hospital care over time.
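    Internal reliability figures of the sort reported for instruments like the SF-HKIEQ are usually Cronbach's alpha. A minimal sketch of the computation on toy data (the scores below are invented, not from the study):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1).sum() # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total score
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Toy data: 5 respondents answering 4 items on a 5-point scale.
demo = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 4, 4, 5], [3, 3, 2, 3], [5, 4, 5, 4]]
print(f"alpha = {cronbach_alpha(demo):.2f}")
```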

  8. Validation experience with the core calculation program karate

    International Nuclear Information System (INIS)

    Hegyi, Gy.; Hordosy, G.; Kereszturi, A.; Makai, M.; Maraczy, Cs.

    1995-01-01

    A relatively fast and easy-to-handle modular code system named KARATE-440 has been elaborated for steady-state operational calculations of VVER-440 type reactors. It is built up from cell, assembly and global calculations. Within the framework of the program, the neutron-physical and thermohydraulic processes of the core at normal startup, in steady state and during slow transients can be simulated. The verification and validation of the global code have been prepared recently. The test cases include mathematical benchmarks and measurements on operating VVER-440 units. A summary of the results, such as startup parameters, boron letdown curves, and radial and axial power distributions for some cycles of the Paks NPP, is presented. (author)

  9. The correlation of in vivo and ex vivo tissue dielectric properties to validate electromagnetic breast imaging: initial clinical experience

    International Nuclear Information System (INIS)

    Halter, Ryan J; Zhou, Tian; Meaney, Paul M; Hartov, Alex; Barth, Richard J Jr; Rosenkranz, Kari M; Wells, Wendy A; Kogel, Christine A; Borsic, Andrea; Rizzo, Elizabeth J; Paulsen, Keith D

    2009-01-01

    Electromagnetic (EM) breast imaging provides a low-cost, safe and potentially more specific modality for cancer detection than conventional imaging systems. A primary difficulty in validating these EM imaging modalities is that the true dielectric property values of the particular breast being imaged are not readily available on an individual subject basis. Here, we describe our initial experience in seeking to correlate tomographic EM imaging studies with discrete point spectroscopy measurements of the dielectric properties of breast tissue. The protocol we have developed involves measurement of in vivo tissue properties during partial and full mastectomy procedures in the operating room (OR), followed by ex vivo tissue property recordings at the same locations in the excised tissue specimens in the pathology laboratory immediately after resection. We have successfully applied all of the elements of this validation protocol in a series of six women with cancer diagnoses. Conductivity and permittivity gauged from ex vivo samples over the frequency range 100 Hz–8.5 GHz are found to be similar to those reported in the literature. A decrease in both conductivity and permittivity is observed when these properties are gauged from ex vivo samples instead of in vivo. We present these results in addition to a case study demonstrating how discrete point spectroscopy measurements of the tissue can be correlated with and used to validate EM imaging studies

  10. Light ion fusion experiment (L.I.F.E.) concept validation studies. Final report, July 1979-May 1980

    International Nuclear Information System (INIS)

    Christensen, T.E.; Orthel, J.L.; Thomson, J.J.

    1980-12-01

    This report reflects the considerable advances made towards the objectives of the contractual program, validating by detailed analytical studies the concept of a new Light Ion Fusion Experiment for Inertial Confinement Fusion. The studies have produced an analytical design of a novel electrostatic accelerator based on separate-function and strong channel focusing principles, to launch 3 to 10 MeV, 23 kA, He+ neutralized beams in 400 ns pulses, delivering 50 kJ of implosion energy in approx. 20 ns impact times on a 5 mm radius target located 10 m downstream. The control, stability and focusing of the beams are achieved by electrostatic quadrupoles, producing an overall beam normalized emittance of approx. 3 x 10^-5 m-rad

  11. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    Energy Technology Data Exchange (ETDEWEB)

    Aly, A. [North Carolina State Univ., Raleigh, NC (United States); Avramova, Maria [North Carolina State Univ., Raleigh, NC (United States); Ivanov, Kostadin [Pennsylvania State Univ., University Park, PA (United States); Motta, Arthur [Pennsylvania State Univ., University Park, PA (United States); Lacroix, E. [Pennsylvania State Univ., University Park, PA (United States); Manera, Annalisa [Univ. of Michigan, Ann Arbor, MI (United States); Walter, D. [Univ. of Michigan, Ann Arbor, MI (United States); Williamson, R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gamble, K. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-10-29

    To correctly describe and predict this hydrogen distribution, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code, as well as with a computational fluid dynamics (CFD) tool, have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.
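    The hydrogen migration component of such models typically combines Fickian diffusion with a thermo-diffusion (Soret) term driven by the temperature gradient, which is why the detailed temperature field matters so much. A schematic one-dimensional form (the heat-of-transport value below is illustrative, not taken from the report):

```python
R = 8.314  # universal gas constant, J/(mol*K)

def hydrogen_flux(D, C, dC_dx, T, dT_dx, Q_star=25_500.0):
    """One-dimensional hydrogen flux in the cladding metal:
    J = -D * dC/dx - D * C * Q* / (R * T**2) * dT/dx
    D:      hydrogen diffusion coefficient in the metal
    C:      local hydrogen concentration in solid solution
    T:      local temperature (K); dT_dx is its gradient
    Q_star: heat of transport (J/mol); value here is illustrative."""
    return -D * dC_dx - D * C * Q_star / (R * T**2) * dT_dx

# Example: with a negative temperature gradient, the Soret term drives
# hydrogen toward the colder side of the cladding (positive flux here).
print(hydrogen_flux(D=1e-11, C=100.0, dC_dx=0.0, T=600.0, dT_dx=-5e4))
```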

  12. Mini-channel flow experiments and CFD validation analyses with the IFMIF Thermo- Hydraulic Experimental facility (ITHEX)

    International Nuclear Information System (INIS)

    Arbeiter, F.; Heinzel, V.; Leichtle, D.; Stratmanns, E.; Gordeev, S.

    2006-01-01

    The design of the IFMIF High Flux Test Module (HFTM) is based on the predictions for the heat transfer in narrow channels conducting helium flow of 50 °C inlet temperature at 0.3 MPa. The emerging helium flow conditions are in the transition regime from laminar to turbulent flow. The rectangular cooling channels are too short for the full development of the coolant flow. Relaminarization along the cooling passage is expected. At the shorter sides of the channels secondary flow occurs, which may have an impact on the temperature field inside the irradiation specimen stack. As those conditions are not covered by available experimental data, the dedicated gas loop ITHEX has been constructed to operate up to a pressure of 0.42 MPa and temperatures of 200 °C. Its objective is to conduct experiments for the validation of the STAR-CD CFD code used for the design of the HFTM. As a first stage, two annular test sections with a hydraulic diameter of 1.2 mm have been used, where the experiments have been varied with respect to gas species (N2, He), inlet pressure, dimensionless heating span and Reynolds number, encompassing the range of operational parameters of the HFTM. Local friction factors and Nusselt numbers have been obtained, giving evidence that the transition regime extends up to a Reynolds number of 10,000. For heating rates comparable to the HFTM filled with RAFM steels, local heat transfer coefficients are consistent with the measured friction data. To validate local velocity profiles, the ITHEX facility was further equipped with a flat rectangular test section and a Laser Doppler Anemometry (LDA) system. An appropriate optical system has been developed and tested for the tiny observation volume of 40 μm diameter. Velocity profiles as induced by the transition from a wide inlet plenum to the flat mini-channels have been measured. Whereas the CFD models were able to reproduce the patterns far away from the nozzle, they show some disagreement for the conditions at the

  13. Analysis of the Reliability and Validity of the Course Experience Questionnaire (CEQ)

    Directory of Open Access Journals (Sweden)

    Carlos González

    2012-04-01

    Full Text Available The objective of this study is to analyze the validity of the Course Experience Questionnaire (CEQ), an instrument used to assess students' perceptions of the quality of learning in higher education. The questionnaire was translated and administered to 325 engineering students at a public university in the metropolitan region of Chile. Descriptive statistics were generated, and both the reliability levels and the validity analyses showed largely adequate results. The CEQ can be employed to measure the quality of teaching in Latin American universities, and its use for research purposes is suggested. Further research should continue the validation process and incorporate other variables considered key by the Student Learning Research tradition in order to investigate the learning experience of university students.

  14. Validation of the updated ArthroS simulator: face and construct validity of a passive haptic virtual reality simulator with novel performance metrics.

    Science.gov (United States)

    Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L

    2017-02-01

    To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.
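    Construct validity claims of this kind are typically backed by a between-groups test on each candidate metric: a metric is informative only if it separates experience levels. A minimal sketch using a Kruskal-Wallis test on hypothetical task times (invented data, and the study's actual statistical procedure is not specified here), assuming SciPy is available:

```python
from scipy import stats

# Hypothetical "time taken" (seconds) per experience group on one task:
novice       = [541, 602, 488, 575]
intermediate = [402, 390, 455, 371]
expert       = [298, 315, 276, 330]

# A significant result suggests the metric distinguishes the groups,
# i.e. it shows construct validity for arthroscopic experience.
h, p = stats.kruskal(novice, intermediate, expert)
print(f"H = {h:.2f}, p = {p:.4f}")
```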

  15. Reliability and Validity of Qualitative and Operational Research Paradigm

    Directory of Open Access Journals (Sweden)

    Muhammad Bashir

    2008-01-01

    Full Text Available Both qualitative and quantitative paradigms try to find the same result: the truth. Qualitative studies are tools used in understanding and describing the world of human experience. Since we maintain our humanity throughout the research process, it is largely impossible to escape the subjective experience, even for the most experienced of researchers. Reliability and validity are issues that have been described in great detail by advocates of quantitative research. The validity and the norms of rigor that are applied to quantitative research are not entirely applicable to qualitative research. Validity in qualitative research means the extent to which the data are plausible, credible and trustworthy, and thus can be defended when challenged. Reliability and validity remain appropriate concepts for attaining rigor in qualitative research. Qualitative researchers have to take responsibility for reliability and validity by implementing verification strategies that are integral and self-correcting during the conduct of the inquiry itself. This ensures the attainment of rigor using strategies inherent within each qualitative design, and moves the responsibility for incorporating and maintaining reliability and validity from external reviewers' judgments to the investigators themselves. There are different opinions on validity, with some suggesting that the concept of validity is incompatible with qualitative research and should be abandoned, while others argue that efforts should be made to ensure validity so as to lend credibility to the results. This paper is an attempt to clarify the meaning and use of reliability and validity in the qualitative research paradigm.

  16. Validation of the Vanderbilt Holistic Face Processing Test.

    Science.gov (United States)

    Wang, Chao-Chih; Ross, David A; Gauthier, Isabel; Richler, Jennifer J

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the same construct as the composite task, which is a group-based measure at the center of the large literature on holistic face processing. In Experiment 1, we found a significant correlation between holistic processing measured in the VHPT-F and the composite task. Although this correlation was small, it was comparable to the correlation between holistic processing measured in the composite task with the same faces, but different target parts (top or bottom), which represents a reasonable upper limit for correlations between the composite task and another measure of holistic processing. These results confirm the validity of the VHPT-F by demonstrating shared variance with another measure of holistic processing based on the same operational definition. These results were replicated in Experiment 2, but only when the demographic profile of our sample matched that of Experiment 1.

  17. Validation of the Vanderbilt Holistic Face Processing Test.

    Directory of Open Access Journals (Sweden)

    Chao-Chih Wang

    2016-11-01

    Full Text Available The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the same construct as the composite task, which is a group-based measure at the center of the large literature on holistic face processing. In Experiment 1, we found a significant correlation between holistic processing measured in the VHPT-F and the composite task. Although this correlation was small, it was comparable to the correlation between holistic processing measured in the composite task with the same faces, but different target parts (top or bottom), which represents a reasonable upper limit for correlations between the composite task and another measure of holistic processing. These results confirm the validity of the VHPT-F by demonstrating shared variance with another measure of holistic processing based on the same operational definition. These results were replicated in Experiment 2, but only when the demographic profile of our sample matched that of Experiment 1.

  18. Validation of wind loading codes by experiments

    NARCIS (Netherlands)

    Geurts, C.P.W.

    1998-01-01

    Between 1994 and 1997, full scale measurements of the wind and wind induced pressures were carried out on the main building of Eindhoven University of Technology. Simultaneously, a comparative wind tunnel experiment was performed in an atmospheric boundary layer wind tunnel. In this paper, the

  19. A comparison of measurements and calculations for the Stripa validation drift inflow experiment

    International Nuclear Information System (INIS)

    Hodgkinson, D.P.; Cooper, N.S.

    1992-01-01

    This paper presents a comparison of measurements and predictions for groundwater flow to the validation drift and remaining portions of the D-holes in the Site Characterisation and Validation (SCV) block. The comparison was carried out on behalf of the Stripa task force on fracture flow modelling. The paper summarises the characterisation data and their preliminary interpretation, and reviews the fracture flow modelling approaches and predictions made by teams from AEA Technology/Fracflow, Golder Associates and Lawrence Berkeley Laboratory. The predictions are compared with the inflow measurements on the basis of the validation process and criteria defined by the task force. The results of all three modelling groups meet the validation criteria, with the predictions of the inflow being of the same order of magnitude as the observations. Also, the AEA/Fracflow and Golder approaches allow the inflow pattern to be predicted, and this too is reproduced with reasonable accuracy. The successful completion of this project demonstrates the feasibility of discrete fracture flow modelling, and in particular the ability to collect and analyse all the necessary characterisation data in a timely and economic manner. (32 refs.) (au)

  20. WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ruehl, Kelley; Michelen, Carlos; Bosma, Bret; Yu, Yi-Hsiang

    2016-08-01

    The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation are necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.

  1. Our experience with the acceptance and dosimetric validation of Somatom Force dual head MDCT in the Royal Hospital, Oman

    International Nuclear Information System (INIS)

    Al-Harthi, Ruqaia; Al-Kalbani, Munira; Arun Kumar, L.S.; Al-Shanfari, Jamal

    2017-01-01

    Computed Tomography (CT) has revolutionized diagnostic imaging since its introduction in the early 1970s. In Oman, 70,353 CT examinations were carried out in the year 2015. The increase in CT examinations will eventually result in an increase in the population dose and the consequent risk of cancer in adults and particularly in children. Here, we discuss and share our experience with the acceptance and dosimetric validation of the second Dual Head Somatom Force MDCT installed in the Royal Hospital, Oman, using the Ministry of Health's radiation acceptance and quality assurance protocol, before handing it over for routine patient care work

  2. ER-2 #809 on the SAGE III Ozone Loss and Validation Experiment (SOLVE) with pilot Dee Porter prepari

    Science.gov (United States)

    2000-01-01

    Lockheed Martin pilot Dee Porter climbs up the ladder wearing a heavy tan pressure suit, preparing to board NASA ER-2 #809 at Kiruna, Sweden, for the third flight in the SAGE III Ozone Loss and Validation Experiment. Assisting him is Jim Sokolik, a Lockheed Martin life support technician. Number 809, one of Dryden's two high-flying ER-2 Airborne Science aircraft, a civilian variant of Lockheed's U-2, and another NASA flying laboratory, Dryden's DC-8, were based north of the Arctic Circle in Kiruna, Sweden, during the winter of 2000 to study ozone depletion as part of the SAGE III Ozone Loss and Validation Experiment (SOLVE). A large hangar built especially for research, 'Arena Arctica', housed the instrumented aircraft and the scientists. Scientists have observed unusually low levels of ozone over the Arctic during recent winters, raising concerns that ozone depletion there could become more widespread, as in the Antarctic ozone hole. The NASA-sponsored international mission took place between November 1999 and March 2000 and was divided into three phases. The DC-8 was involved in all three phases, returning to Dryden between each phase. The ER-2 flew sample collection flights between January and March, remaining in Sweden from Jan. 9 through March 16. 'The collaborative campaign will provide an immense new body of information about the Arctic stratosphere,' said program scientist Dr. Michael Kurylo, NASA Headquarters. 'Our understanding of the Earth's ozone will be greatly enhanced by this research.' ER-2s bearing tail numbers 806 and 809 are used as airborne science platforms by NASA's Dryden Flight Research Center. The aircraft are platforms for a variety of high-altitude science missions flown over various parts of the world. They are also used for earth science and atmospheric sensor research and development, satellite calibration and data validation. The ER-2s are capable of carrying a maximum payload of 2,600 pounds of experiments in a nose bay, the main

  3. The Challenging Experience Questionnaire: Characterization of challenging experiences with psilocybin mushrooms.

    Science.gov (United States)

    Barrett, Frederick S; Bradstreet, Matthew P; Leoutsakos, Jeannie-Marie S; Johnson, Matthew W; Griffiths, Roland R

    2016-12-01

    Acute adverse psychological reactions to classic hallucinogens ("bad trips" or "challenging experiences"), while usually benign with proper screening, preparation, and support in controlled settings, remain a safety concern in uncontrolled settings (such as illicit use contexts). Anecdotal and case reports suggest potential adverse acute symptoms including affective (panic, depressed mood), cognitive (confusion, feelings of losing sanity), and somatic (nausea, heart palpitation) symptoms. Responses to items from several hallucinogen-sensitive questionnaires (Hallucinogen Rating Scale, the States of Consciousness Questionnaire, and the Five-Dimensional Altered States of Consciousness questionnaire) in an Internet survey of challenging experiences with the classic hallucinogen psilocybin were used to construct and validate a Challenging Experience Questionnaire. The stand-alone Challenging Experience Questionnaire was then validated in a separate sample. Seven Challenging Experience Questionnaire factors (grief, fear, death, insanity, isolation, physical distress, and paranoia) provide a phenomenological profile of challenging aspects of experiences with psilocybin. Factor scores were associated with difficulty, meaningfulness, spiritual significance, and change in well-being attributed to the challenging experiences. The factor structure did not differ based on gender or prior struggle with anxiety or depression. The Challenging Experience Questionnaire provides a basis for future investigation of predictors and outcomes of challenging experiences with classic hallucinogens. © The Author(s) 2016.

  4. The List of Threatening Experiences: the reliability and validity of a brief life events questionnaire.

    Science.gov (United States)

    Brugha, T S; Cragg, D

    1990-07-01

    During the 23 years since the original work of Holmes & Rahe, research into stressful life events on human subjects has tended towards the development of longer and more complex inventories. The List of Threatening Experiences (LTE) of Brugha et al., by virtue of its brevity, overcomes difficulties of clinical application. In a study of 50 psychiatric patients and informants, the questionnaire version of the list (LTE-Q) was shown to have high test-retest reliability, and good agreement with informant information. Concurrent validity, based on the criterion of independently rated adversity derived from a semistructured life events interview, making use of the Life Events and Difficulties Scales (LEDS) method developed by Brown & Harris, showed both high specificity and sensitivity. The LTE-Q is particularly recommended for use in psychiatric, psychological and social studies in which other intervening variables such as social support, coping, and cognitive variables are of interest, and resources do not allow for the use of extensive interview measures of stress.

  5. An attempt to calibrate and validate a simple ductile failure model against axial-torsion experiments on Al 6061-T651

    Energy Technology Data Exchange (ETDEWEB)

    Reedlunn, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lu, Wei -Yang [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-01-01

    This report details a work in progress. We have attempted to calibrate and validate a von Mises plasticity model with the Johnson-Cook failure criterion (Johnson & Cook, 1985) against a set of experiments on various specimens of Al 6061-T651. As will be shown, the effort was not successful, despite considerable attention to detail. When the model was compared against axial-torsion experiments on tubes, it overpredicted failure by 3× in tension, and never predicted failure in torsion, even when the tube was twisted 4× further than in the experiment. While this result is unfortunate, it is not surprising. Ductile failure is not well understood. In future work, we will explore whether more sophisticated material models of plasticity and failure will improve the predictions. Selecting the appropriate advanced material model and interpreting the results of said model are not trivial exercises, so it is worthwhile to fully investigate the behavior of a simple plasticity model before moving on to an anisotropic yield surface or a similarly complicated model.
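    For reference, the Johnson-Cook failure criterion expresses the failure strain as a product of stress-triaxiality, strain-rate and temperature terms. A minimal sketch with placeholder damage constants (not a calibrated Al 6061-T651 set from this report):

```python
import math

def jc_failure_strain(tri, eps_rate, T_h,
                      D=(0.07, 1.25, -1.14, 0.01, 0.0)):
    """Johnson-Cook failure strain:
    eps_f = [D1 + D2*exp(D3*tri)] * [1 + D4*ln(eps_rate)] * [1 + D5*T_h]
    tri:      stress triaxiality, sigma_mean / sigma_equivalent
    eps_rate: plastic strain rate normalized by the reference rate
    T_h:      homologous temperature, (T - T_room)/(T_melt - T_room)
    D:        damage constants D1..D5; defaults are placeholders."""
    d1, d2, d3, d4, d5 = D
    return ((d1 + d2 * math.exp(d3 * tri))
            * (1.0 + d4 * math.log(max(eps_rate, 1e-12)))
            * (1.0 + d5 * T_h))

# Higher triaxiality (tension) gives a lower failure strain than the
# near-zero triaxiality of pure torsion, which is why the two loading
# paths in the experiments probe the criterion so differently.
print(jc_failure_strain(tri=0.33, eps_rate=1.0, T_h=0.0))  # tension-like
print(jc_failure_strain(tri=0.0,  eps_rate=1.0, T_h=0.0))  # torsion-like
```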

  6. DebrisInterMixing-2.3: a finite volume solver for three-dimensional debris-flow simulations with two calibration parameters – Part 2: Model validation with experiments

    Directory of Open Access Journals (Sweden)

    A. von Boetticher

    2017-11-01

    Here, we present validation tests of the fluid dynamic solver presented in von Boetticher et al. (2016), simulating both laboratory-scale and large-scale debris-flow experiments. The new solver combines a Coulomb viscoplastic rheological model with a Herschel–Bulkley model based on material properties and rheological characteristics of the analyzed debris flow. For the selected experiments in this study, all necessary material properties were known – the content of sand, clay (including its mineral composition) and gravel as well as the water content and the angle of repose of the gravel. Given these properties, two model parameters are sufficient for calibration, and a range of experiments with different material compositions can be reproduced by the model without recalibration. One calibration parameter, the Herschel–Bulkley exponent, was kept constant for all simulations. The model validation focuses on different case studies illustrating the sensitivity of debris flows to water and clay content, channel curvature, channel roughness and the angle of repose. We characterize the accuracy of the model using experimental observations of flow head positions, front velocities, run-out patterns and basal pressures.
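
    The rheology combined in this solver lends itself to a compact illustration: a pressure-dependent Coulomb yield stress added to a Herschel–Bulkley viscous stress. The blending actually used in DebrisInterMixing is given in von Boetticher et al. (2016); the form and parameter values below are a simplified sketch only.

      # Simplified Coulomb-viscoplastic / Herschel-Bulkley stress; parameters illustrative.
      import numpy as np

      def shear_stress(gamma_dot, pressure, phi_deg=35.0, cohesion=50.0, K=10.0, n=1.0):
          tau_y = cohesion + pressure * np.tan(np.radians(phi_deg))  # Coulomb yield part
          return tau_y + K * gamma_dot ** n                          # Herschel-Bulkley part

      def apparent_viscosity(gamma_dot, pressure, eps=1e-6):
          g = np.maximum(gamma_dot, eps)  # regularized to avoid division at zero shear rate
          return shear_stress(g, pressure) / g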

  7. Blast Load Simulator Experiments for Computational Model Validation: Report 2

    Science.gov (United States)

    2017-02-01

    Simulations of these explosive events and their effects rely on computational codes that are continuously improving but still require validation against experimental data.

  8. Contact Modelling in Resistance Welding, Part II: Experimental Validation

    DEFF Research Database (Denmark)

    Song, Quanfeng; Zhang, Wenqi; Bay, Niels

    2006-01-01

    Contact algorithms in resistance welding presented in the previous paper are experimentally validated in the present paper. In order to verify the mechanical contact algorithm, two types of experiments, i.e. sandwich upsetting of circular, cylindrical specimens and compression tests of discs with a solid ring projection towards a flat ring, are carried out at room temperature. The complete algorithm, involving not only the mechanical model but also the thermal and electrical models, is validated by projection welding experiments. The experimental results are in satisfactory agreement...

  9. Evaluating the experiences and support needs of people living with chronic cancer: development and initial validation of the Chronic Cancer Experiences Questionnaire (CCEQ).

    Science.gov (United States)

    Harley, Clare; Pini, Simon; Kenyon, Lucille; Daffu-O'Reilly, Amrit; Velikova, Galina

    2016-08-10

    Many advanced cancers are managed as chronic diseases, yet there are currently no international guidelines for the support of patients living with chronic cancer. It is important to understand whether care and service arrangements meet the needs of this rapidly growing patient group. This study aimed to develop and validate a questionnaire to capture patients' experiences of living with chronic cancer and their views of clinical and support services. The research was carried out between 1 July 2010 and 21 February 2013. A conceptual framework and initial item bank were derived from prior interviews with 56 patients with chronic cancer. Items were reviewed by 4 oncologists and 1 clinical nurse specialist and during 2 focus groups with 9 patients. Pilot questionnaires were completed by 416 patients across 5 cancer units. Item selection and scale reliability were explored using descriptive data, exploratory factor analysis, internal consistency analyses, multitrait scaling analyses and known-groups comparisons. The final Chronic Cancer Experiences Questionnaire (CCEQ) includes 75 items. 62 items contribute to 14 subscales with internal consistency between α 0.68 and 0.88 and minimal scaling errors. Known-groups comparisons confirmed subscale utility in distinguishing between patient groups. Subscales were labelled: managing appointments, coordination of care, general practitioner involvement, clinical trials, information and questions, making treatment decisions, symptom non-reporting, key worker, limitations, sustaining normality, financial advice, worries and anxieties, sharing feelings with others, and accessing support. 13 items assessing symptom experiences were retained as single items. The CCEQ has the potential to be used as a clinical instrument to assess patient experiences of chronic cancer or to screen for patient needs. It may also be used as an outcome measure for evaluating programmes and models of care and may identify areas for service development that
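
    The internal-consistency values quoted above (α between 0.68 and 0.88) are Cronbach's alpha, which can be computed directly from an item-response matrix (rows are respondents, columns are the items of one subscale). The responses below are synthetic, for illustration only.

      # Cronbach's alpha for one subscale; synthetic responses for illustration.
      import numpy as np

      def cronbach_alpha(items):
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_var / total_var)

      rng = np.random.default_rng(1)
      subscale = rng.integers(1, 6, size=(416, 5))  # 416 patients, a 5-item subscale
      print(round(cronbach_alpha(subscale), 2))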

  10. THX Experiment Overview

    Science.gov (United States)

    Wernet, Mark; Wroblewski, Adam; Locke, Randy; Georgiadis, Nick

    2016-01-01

    This presentation provides an overview of experiments conducted at NASA GRC to provide turbulent flow measurements needed for new turbulence model development and validation. The experiments include particle image velocimetry (PIV) and hot-wire measurements of mean flow velocity and temperature fields, as well as fluctuating components.

  11. Validating the Patient Experience with Treatment and Self-Management (PETS), a patient-reported measure of treatment burden, in people with diabetes.

    Science.gov (United States)

    Rogers, Elizabeth A; Yost, Kathleen J; Rosedahl, Jordan K; Linzer, Mark; Boehm, Deborah H; Thakur, Azra; Poplau, Sara; Anderson, Roger T; Eton, David T

    2017-01-01

    To validate a comprehensive general measure of treatment burden, the Patient Experience with Treatment and Self-Management (PETS), in people with diabetes. We conducted a secondary analysis of a cross-sectional survey study with 120 people diagnosed with type 1 or type 2 diabetes and at least one additional chronic illness. Surveys included established patient-reported outcome measures and a 48-item version of the PETS, a new measure comprised of multi-item scales assessing the burden of chronic illness treatment and self-care as it relates to nine domains: medical information, medications, medical appointments, monitoring health, interpersonal challenges, health care expenses, difficulty with health care services, role activity limitations, and physical/mental exhaustion from self-management. Internal reliability of PETS scales was determined using Cronbach's alpha. Construct validity was determined through correlation of PETS scores with established measures (measures of chronic condition distress, medication satisfaction, self-efficacy, and global well-being), and known-groups validity through comparisons of PETS scores across clinically distinct groups. In an exploratory test of predictive validity, step-wise regressions were used to determine which PETS scales were most associated with outcomes of chronic condition distress, overall physical and mental health, and medication adherence. Respondents were 37-88 years old, 59% female, 29% non-white, and 67% college-educated. PETS scales showed good reliability (Cronbach's alphas ≥0.74). Higher PETS scale scores (greater treatment burden) were correlated with more chronic condition distress, less medication convenience, lower self-efficacy, and worse general physical and mental health. Participants less (versus more) adherent to medications and those with more (versus fewer) health care financial difficulties had higher mean PETS scores. Medication burden was the scale that was most consistently associated with

  12. HTC Experimental Program: Validation and Calculational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fernex, F.; Ivanova, T.; Bernard, F.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Fouillaud, P. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-05-15

    In the 1980s a series of the Haut Taux de Combustion (HTC) critical experiments with fuel pins in a water-moderated lattice was conducted at the Apparatus B experimental facility in Valduc (Commissariat à l'Energie Atomique, France) with the support of the Institut de Radioprotection et de Sûreté Nucléaire and AREVA NC. Four series of experiments were designed to assess the benefit associated with actinide-only burnup credit in the criticality safety evaluation for fuel handling, pool storage, and spent-fuel cask conditions. The HTC rods, specifically fabricated for the experiments, simulated typical pressurized water reactor uranium oxide spent fuel that had an initial enrichment of 4.5 wt% ²³⁵U and was burned to 37.5 GWd/tonne U. The configurations have been modeled with the CRISTAL criticality package and the SCALE 5.1 code system. Sensitivity/uncertainty analysis has been employed to evaluate the HTC experiments and to study their applicability for validation of burnup credit calculations. This paper presents the experimental program, the principal results of the experiment evaluation, and modeling. The HTC data applicability to burnup credit validation is demonstrated with an example of spent-fuel storage models. (authors)

  13. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    Science.gov (United States)

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  14. Measures of aggression and victimization in portuguese adolescents: Cross-cultural validation of the Revised Peer Experience Questionnaire.

    Science.gov (United States)

    Queirós, Andreia N; Vagos, Paula

    2016-10-01

    The goal of this research was to develop and psychometrically evaluate the Portuguese version of the Revised Peer Experience Questionnaire, which assesses aggression, victimization and prosocial behavior. Victimization and aggression among adolescents in school settings is a growing problem, not yet fully understood or properly evaluated, particularly in Portugal. A sample of 1320 adolescents was recruited (52.7% female), with ages varying from 10 to 18 years old, attending middle and high school. Confirmatory factor analysis confirmed the measurement model of the instrument's bully and victim versions, which evaluate overt, relational, and reputational aggression/victimization and providing/receiving prosocial behavior, respectively. This measurement model was invariant across schooling and gender, showed adequate internal consistency indicators, and presented evidence for construct validity in relation to other variables. Descriptive analyses indicate that boys are more aggressive in overt and relational forms and victimized through overt aggression, whereas girls are more aggressive and victimized relationally. More than any form of aggression or victimization, boys and girls revealed higher values for engaging in and receiving prosocial behavior. These results suggest that this instrument is a reliable, valid, and structurally sound measure of aggression, victimization and prosocial behavior in this Portuguese school-based community sample. Hence, its use may assist researchers in gaining a better understanding of adolescent aggression and victimization.

  15. Model and experiences of initiating collaboration with traditional healers in validation of ethnomedicines for HIV/AIDS in Namibia

    Directory of Open Access Journals (Sweden)

    Chinsembu Kazhila C

    2009-10-01

    Many people with Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome (HIV/AIDS) in Namibia have access to antiretroviral drugs but some still use traditional medicines to treat opportunistic infections and offset side-effects from antiretroviral medication. Namibia has a rich biodiversity of indigenous plants that could contain novel anti-HIV agents. However, such medicinal plants have not been identified and properly documented. Various ethnomedicines used to treat HIV/AIDS opportunistic infections have not been scientifically validated for safety and efficacy. These limitations are mostly attributable to the lack of collaboration between biomedical scientists and traditional healers. This paper presents a five-step contextual model for initiating collaboration with Namibian traditional healers so that candidate plants that may contain novel anti-HIV agents can be identified, and traditional medicines used to treat HIV/AIDS opportunistic infections can be subjected to scientific validation. The model includes key structures and processes used to initiate collaboration with traditional healers in Namibia; namely, the National Biosciences Forum, a steering committee with the University of Namibia (UNAM) as the focal point, a study tour to Zambia and South Africa where other collaborative frameworks were examined, commemorations of the African Traditional Medicine Day (ATMD), and consultations with stakeholders in north-eastern Namibia. Experiences from these structures and processes are discussed. All traditional healers in north-eastern Namibia were willing to collaborate with UNAM so that their traditional medicines could be subjected to scientific validation. The current study provides a framework for future collaboration with traditional healers and the selection of candidate anti-HIV medicinal plants and ethnomedicines for scientific testing in Namibia.

  16. Blast Load Simulator Experiments for Computational Model Validation Report 3

    Science.gov (United States)

    2017-07-01

    Simulations of these explosive events and their effects rely on computational codes that are continuously improving but still require validation against experimental data. The report also considers the effect of the contact surface on the measurement at gauge locations where a clearly defined initial peak is not present.

  17. Experiments to populate and validate a processing model for polyurethane foam. BKC 44306 PMDI-10

    Energy Technology Data Exchange (ETDEWEB)

    Mondy, Lisa Ann [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rao, Rekha Ranjana [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shelden, Bion [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Soehnel, Melissa Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); O'Hern, Timothy J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grillet, Anne [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Celina, Mathias C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wyatt, Nicholas B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Russick, Edward Mark [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bauer, Stephen J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hileman, Michael Bryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Urquhart, Alexander [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle Richard [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, David Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-03-01

    We are developing computational models to elucidate the expansion and dynamic filling process of a polyurethane foam, PMDI. The polyurethane of interest is chemically blown, where carbon dioxide is produced via the reaction of water, the blowing agent, and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. Here we detail the experiments needed to populate a processing model and provide parameters for the model based on these experiments. The model entails solving the conservation equations, including the equations of motion, an energy balance, and two rate equations for the polymerization and foaming reactions, following a simplified mathematical formalism that decouples these two reactions. Parameters for the polymerization kinetics model are reported based on infrared spectrophotometry. Parameters describing the gas generating reaction are reported based on measurements of volume, temperature and pressure evolution with time. A foam rheology model is proposed and parameters determined through steady-shear and oscillatory tests. Heat of reaction and heat capacity are determined through differential scanning calorimetry. Thermal conductivity of the foam as a function of density is measured using a transient method based on the theory of the transient plane source technique. Finally, density variations of the resulting solid foam in several simple geometries are directly measured by sectioning and sampling mass, as well as through x-ray computed tomography. These density measurements will be useful for model validation once the complete model is implemented in an engineering code.
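
    The decoupled two-reaction scheme described above can be illustrated with a toy version: one rate equation for the gelling (polymerization) reaction and one for the blowing (gas-generating) reaction, each integrated in time. The Arrhenius constants and first-order rate forms below are placeholders, not the fitted PMDI parameters from the report.

      # Toy decoupled gelling/blowing kinetics; constants and rate forms are placeholders.
      import numpy as np
      from scipy.integrate import solve_ivp

      def rates(t, y, A=(1e5, 5e4), Ea=(50e3, 45e3), T=330.0, R=8.314):
          x_gel, x_blow = y  # extents of polymerization and gas generation
          k_gel = A[0] * np.exp(-Ea[0] / (R * T))
          k_blow = A[1] * np.exp(-Ea[1] / (R * T))
          return [k_gel * (1.0 - x_gel), k_blow * (1.0 - x_blow)]

      sol = solve_ivp(rates, (0.0, 300.0), [0.0, 0.0])
      print(sol.y[:, -1])  # final reaction extents
      # x_blow drives the CO2 source (foam expansion); x_gel drives polymer build-up.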

  18. SASSYS validation with the EBR-II shutdown heat removal tests

    International Nuclear Information System (INIS)

    Herzog, J.P.

    1989-01-01

    SASSYS is a coupled neutronic and thermal-hydraulic code developed for the analysis of transients in liquid metal cooled reactors (LMRs). The code is especially suited for evaluating normal reactor transients -- both protected (design basis) and unprotected (anticipated transient without scram). Because SASSYS is heavily used in support of the IFR concept and of innovative LMR designs, such as PRISM, a strong validation base for the code must exist. Part of the validation process for SASSYS is analysis of experiments performed on operating reactors, such as the metal-fueled Experimental Breeder Reactor-II (EBR-II). During the course of a series of historic whole-plant experiments, EBR-II illustrated key safety features of metal-fueled LMRs. These experiments, the Shutdown Heat Removal Tests (SHRT), culminated in unprotected loss-of-flow and loss-of-heat-sink transients from full power and flow. Analysis of these and earlier SHRT experiments constitutes a vital part of SASSYS validation, because it facilitates scrutiny of specific SASSYS models and of integrated code capability. 12 refs., 11 figs

  19. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
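
    The manufactured-solution benchmarks recommended above can be generated mechanically: choose a smooth analytic solution, substitute it into the governing equations to obtain a source term, add that source to the code under test, and confirm the discretization error decays at the expected order. A sketch for a 1-D heat equation:

      # Method of manufactured solutions for u_t = alpha * u_xx + s(x, t).
      import sympy as sp

      x, t, alpha = sp.symbols("x t alpha")
      u_m = sp.sin(sp.pi * x) * sp.exp(-t)                   # chosen manufactured solution
      source = sp.diff(u_m, t) - alpha * sp.diff(u_m, x, 2)  # source that makes u_m exact
      print(sp.simplify(source))  # add this s(x, t) to the solver being verified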

  20. Site characterization and validation - Tracer migration experiment in the validation drift, report 2, Part 2: breakthrough curves in the validation drift appendices 5-9

    International Nuclear Information System (INIS)

    Birgersson, L.; Widen, H.; Aagren, T.; Neretnieks, I.; Moreno, L.

    1992-01-01

    Flowrate curves for the 53 sampling areas in the validation drift with measurable flowrates are given. The sampling area 267 is treated as three separate sampling areas: 267:1, 267:2 and 267:3. The total flowrate for these three sampling areas is given in a separate plot. The flowrates are given in ml/h. The time is given in hours since April 27 00:00, 1990. Disturbances in flowrates are observed after 8500 hours due to opening of boreholes C1 and W1. Results from flowrate measurements after 8500 hours are therefore excluded. The tracer breakthrough curves for 38 sampling areas in the validation drift are given as concentration values versus time. The sampling area 267 is treated as three separate sampling areas: 267:1, 267:2 and 267:3. This gives a total of 40 breakthrough curves for each tracer. (au)

  1. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods are weak in completeness: validation is carried out at a single scale and depends on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by carrying out validation for a reactor model.
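
    A minimal sketch of the SDG idea outlined above: nodes carry qualitative deviations (+1, 0, -1) and signed edges propagate them, so forward ("positive") inference from a postulated fault enumerates testing scenarios. The graph below is hypothetical.

      # Qualitative trend propagation on a toy signed directed graph.
      SIGNED_EDGES = {("feed", "level"): +1,
                      ("level", "outflow"): +1,
                      ("outflow", "level"): -1}

      def propagate(fault_node, deviation, steps=2):
          state = {fault_node: deviation}
          for _ in range(steps):
              for (src, dst), sign in SIGNED_EDGES.items():
                  if src in state and dst not in state:
                      state[dst] = sign * state[src]  # propagate the qualitative trend
          return state

      print(propagate("feed", +1))  # {'feed': 1, 'level': 1, 'outflow': 1}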

  2. 41 CFR 60-3.14 - Technical standards for validity studies.

    Science.gov (United States)

    2010-07-01

    ..., such as intelligence, aptitude, personality, commonsense, judgment, leadership, and spatial ability.... (6) Prior training or experience. A requirement for or evaluation of specific prior training or experience based on content validity, including a specification of level or amount of training or experience...

  3. Monte Carlo validation experiments for the gas Cherenkov detectors at the National Ignition Facility and Omega

    Energy Technology Data Exchange (ETDEWEB)

    Rubery, M. S.; Horsfield, C. J. [Plasma Physics Department, AWE plc, Reading RG7 4PR (United Kingdom); Herrmann, H.; Kim, Y.; Mack, J. M.; Young, C.; Evans, S.; Sedillo, T.; McEvoy, A.; Caldwell, S. E. [Plasma Physics Department, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Grafil, E.; Stoeffl, W. [Physics, Lawrence Livermore National Laboratory, Livermore, California 94551 (United States); Milnes, J. S. [Photek Limited UK, 26 Castleham Road, St. Leonards-on-sea TN38 9NS (United Kingdom)

    2013-07-15

    The gas Cherenkov detectors at NIF and Omega measure several ICF burn characteristics by detecting multi-MeV nuclear γ emissions from the implosion. Of primary interest are γ bang-time (GBT) and burn width, defined as the time between initial laser-plasma interaction and the peak of the fusion reaction history, and the FWHM of the reaction history, respectively. To accurately calculate such parameters, the collaboration relies on Monte Carlo codes, such as GEANT4 and ACCEPT, for diagnostic properties that cannot be measured directly. This paper describes a series of experiments performed at the High Intensity γ Source (HIγS) facility at Duke University to validate the geometries and material data used in the Monte Carlo simulations. Results published here show that model-driven parameters such as intensity and temporal response can be used with less than 50% uncertainty for all diagnostics and facilities.
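
    Both quantities defined above reduce to simple operations on a sampled reaction history: γ bang-time is the time of the peak, and burn width is the history's FWHM. A sketch with a synthetic Gaussian burn:

      # Bang-time and burn width (FWHM) from a sampled reaction history; synthetic data.
      import numpy as np

      def gbt_and_burn_width(time_ns, history):
          peak = np.argmax(history)
          above = np.where(history >= history[peak] / 2.0)[0]  # samples above half max
          return time_ns[peak], time_ns[above[-1]] - time_ns[above[0]]

      t = np.linspace(0.0, 30.0, 3001)                # ns
      hist = np.exp(-0.5 * ((t - 16.0) / 0.07) ** 2)  # ~165 ps FWHM Gaussian burn
      print(gbt_and_burn_width(t, hist))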

  4. Empirical Validation and Application of the Computing Attitudes Survey

    Science.gov (United States)

    Dorn, Brian; Elliott Tew, Allison

    2015-01-01

    Student attitudes play an important role in shaping learning experiences. However, few validated instruments exist for measuring student attitude development in a discipline-specific way. In this paper, we present the design, development, and validation of the computing attitudes survey (CAS). The CAS is an extension of the Colorado Learning…

  5. Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Gougar, Hans [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-02-01

    The Department of Energy (DOE) has made significant progress developing simulation tools to predict the behavior of nuclear systems with greater accuracy and of increasing our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user). To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center to address the challenges of modern code validation and to

  6. Airborne campaigns for CryoSat pre-launch calibration and validation

    DEFF Research Database (Denmark)

    Hvidegaard, Sine Munk; Forsberg, René; Skourup, Henriette

    2010-01-01

    From 2003 to 2008 DTU Space, together with ESA and several international partners, carried out airborne and ground field campaigns in preparation for CryoSat validation, called CryoVEx (CryoSat Validation Experiments), covering the main ice caps in Greenland, Canada and Svalbard and sea ice in the Arctic Ocean. The main goal of the airborne surveys was to acquire coincident scanning laser and CryoSat-type radar elevation measurements of the surface, either sea ice or land ice. Selected lines have been surveyed, along with detailed mapping of validation sites coordinated with in-situ field work and helicopter electromagnetic surveying. This paper summarises the pre-launch campaigns and presents some of the results from the coincident airborne and ground observations.

  7. Construct Validity and Reliability of Structured Assessment of endoVascular Expertise in a Simulated Setting

    DEFF Research Database (Denmark)

    Bech, B; Lönn, L; Falkenberg, M

    2011-01-01

    Objectives: To study the construct validity and reliability of a novel endovascular global rating scale, Structured Assessment of endoVascular Expertise (SAVE). Design: A clinical, experimental study. Materials: Twenty physicians with endovascular experience ranging from complete novices to highly experienced. ... Validity was analysed by correlating experience with performance results. Reliability was analysed according to generalisability theory. Results: The mean score on the 29 items of the SAVE scale correlated well with clinical experience (R = 0.84, P ...) ... correlated with clinical experience (R = -0.53, P ...). Validity and reliability of assessment with the SAVE scale were high when applied to performances in a simulation setting with advanced realism. No ceiling effect...

  8. The Second Victim Experience and Support Tool: Validation of an Organizational Resource for Assessing Second Victim Effects and the Quality of Support Resources.

    Science.gov (United States)

    Burlison, Jonathan D; Scott, Susan D; Browne, Emily K; Thompson, Sierra G; Hoffman, James M

    2017-06-01

    Medical errors and unanticipated negative patient outcomes can damage the well-being of health care providers. These affected individuals, referred to as "second victims," can experience various psychological and physical symptoms. Support resources provided by health care organizations to prevent and reduce second victim-related harm are often inadequate. In this study, we present the development and psychometric evaluation of the Second Victim Experience and Support Tool (SVEST), a survey instrument that can assist health care organizations to implement and track the performance of second victim support resources. The SVEST (29 items representing 7 dimensions and 2 outcome variables) was completed by 303 health care providers involved in direct patient care. The survey collected responses on second victim-related psychological and physical symptoms and the quality of support resources. Desirability of possible support resources was also measured. The SVEST was assessed for content validity, internal consistency, and construct validity with confirmatory factor analysis. Confirmatory factor analysis results suggested good model fit for the survey. Cronbach α reliability scores for the survey dimensions ranged from 0.61 to 0.89. The most desired second victim support option was "A respected peer to discuss the details of what happened." The SVEST can be used by health care organizations to evaluate second victim experiences of their staff and the quality of existing support resources. It can also provide health care organization leaders with information on second victim-related support resources most preferred by their staff. The SVEST can be administered before and after implementing new second victim resources to measure perceptions of effectiveness.

  9. Fundamental validation of simulation method for thermal stratification in upper plenum of fast reactors. Analysis of sodium experiment

    International Nuclear Information System (INIS)

    Ohno, Shuji; Ohshima, Hiroyuki; Sugahara, Akihiro; Ohki, Hiroshi

    2010-01-01

    Three-dimensional thermal-hydraulic analyses have been carried out for a sodium experiment in a relatively simple axisymmetric geometry using a commercial CFD code, in order to validate simulation methods for thermal stratification behavior in the upper plenum of a sodium-cooled fast reactor. Detailed comparison between simulated results and experimental measurements has demonstrated that the code reproduced the fundamental thermal stratification behaviors fairly well, such as the vertical temperature gradient and the upward movement of the stratification interface, when a high-order discretization scheme and an appropriate mesh size were utilized. Furthermore, the investigation has clarified the influence of RANS-type turbulence models on the predictability of the phenomena, i.e. the standard k-ε model, the RNG k-ε model and the Reynolds stress model. (author)

  10. Validation of ASTEC core degradation and containment models

    International Nuclear Information System (INIS)

    Kruse, Philipp; Brähler, Thimo; Koch, Marco K.

    2014-01-01

    Ruhr-Universitaet Bochum performed validation of the in-vessel and containment models of the integral code ASTEC V2, jointly developed by IRSN (France) and GRS (Germany), within a German-funded project. In this paper selected results of this validation are presented. In the in-vessel part, the main point of interest was the validation of the code capability concerning cladding oxidation and hydrogen generation. The ASTEC calculations of the QUENCH experiments QUENCH-03 and QUENCH-11 show satisfactory results, despite some necessary adjustments in the input deck. Furthermore, the oxidation models based on the Cathcart–Pawel and Urbanic–Heidrick correlations are not suitable for higher temperatures, while the ASTEC model BEST-FIT, based on the Prater–Courtright approach at high temperature, gives sufficiently reliable results. One part of the containment model validation was the assessment of three hydrogen combustion models of ASTEC against the experiment BMC Ix9. The simulation results of these models differ from each other, and therefore the quality of the simulations depends on the characteristics of each model. Accordingly, the CPA FRONT model, which requires the simplest input parameters, provides the best agreement with the experimental data.

  11. Tackling the Reproducibility Problem in Systems Research with Declarative Experiment Specifications

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Ivo [Univ. of California, Santa Cruz, CA (United States); Maltzahn, Carlos [Univ. of California, Santa Cruz, CA (United States); Lofstead, Jay [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Moody, Adam [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Arpaci-Dusseau, Remzi [Univ. of Wisconsin, Madison, WI (United States); Arpaci-Dusseau, Andrea [Univ. of Wisconsin, Madison, WI (United States)

    2015-05-04

    Validating experimental results in the field of computer systems is a challenging task, mainly due to the many changes in software and hardware that computational environments go through. Determining if an experiment is reproducible entails two separate tasks: re-executing the experiment and validating the results. Existing reproducibility efforts have focused on the former, envisioning techniques and infrastructures that make it easier to re-execute an experiment. In this work we focus on the latter by analyzing the validation workflow that an experiment re-executioner goes through. We notice that validating results is done on the basis of experiment design and high-level goals, rather than exact quantitative metrics. Based on this insight, we introduce a declarative format for specifying the high-level components of an experiment as well as describing generic, testable conditions that serve as the basis for validation. We present a use case in the area of storage systems to illustrate the usefulness of this approach. We also discuss limitations and potential benefits of using this approach in other areas of experimental systems research.
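
    The paper's declarative format is not reproduced here, but the idea can be sketched: a specification lists the experiment's high-level components together with generic, testable conditions, which a re-executioner checks against fresh results. The schema and field names below are hypothetical.

      # Hypothetical declarative experiment spec with a testable validation condition.
      experiment = {
          "name": "checkpoint-write-throughput",
          "independent_vars": {"num_clients": [1, 2, 4, 8]},
          "dependent_vars": ["throughput_mb_s"],
          "validations": ["throughput increases monotonically with num_clients"],
      }

      def check_monotonic(results):
          # results: iterable of (num_clients, throughput) pairs from a re-execution
          ys = [y for _, y in sorted(results)]
          return all(a <= b for a, b in zip(ys, ys[1:]))

      print(check_monotonic([(1, 90.0), (2, 160.0), (4, 270.0), (8, 410.0)]))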

  12. The Validity and Incremental Validity of Knowledge Tests, Low-Fidelity Simulations, and High-Fidelity Simulations for Predicting Job Performance in Advanced-Level High-Stakes Selection

    Science.gov (United States)

    Lievens, Filip; Patterson, Fiona

    2011-01-01

    In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of…
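
    Incremental validity of this kind is usually tested hierarchically: regress the criterion on the low-fidelity predictor alone, add the high-fidelity predictor, and inspect the gain in explained variance. The sketch below uses synthetic scores; in the studies discussed, the criterion would be job performance.

      # Hierarchical regression for incremental validity (delta R^2); synthetic data.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      n = 300
      sjt = rng.normal(size=n)                          # low-fidelity (SJT) score
      ac = 0.5 * sjt + rng.normal(scale=0.9, size=n)    # high-fidelity (AC) score
      perf = 0.4 * sjt + 0.3 * ac + rng.normal(size=n)  # criterion measure

      r2_base = LinearRegression().fit(sjt[:, None], perf).score(sjt[:, None], perf)
      X = np.column_stack([sjt, ac])
      r2_full = LinearRegression().fit(X, perf).score(X, perf)
      print(f"delta R^2 from adding the AC: {r2_full - r2_base:.3f}")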

  13. In-Trail Procedure Air Traffic Control Procedures Validation Simulation Study

    Science.gov (United States)

    Chartrand, Ryan C.; Hewitt, Katrin P.; Sweeney, Peter B.; Graff, Thomas J.; Jones, Kenneth M.

    2012-01-01

    In August 2007, Airservices Australia (Airservices) and the United States National Aeronautics and Space Administration (NASA) conducted a validation experiment of the air traffic control (ATC) procedures associated with the Automatic Dependent Surveillance-Broadcast (ADS-B) In-Trail Procedure (ITP). ITP is an Airborne Traffic Situation Awareness (ATSA) application designed for near-term use in procedural airspace, in which ADS-B data are used to facilitate climb and descent maneuvers. NASA and Airservices conducted the experiment in Airservices' simulator in Melbourne, Australia. Twelve current operational air traffic controllers participated in the experiment, which identified aspects of the ITP that could be improved (mainly in the communication and controller approval process). Results showed that controllers viewed the ITP as valid and acceptable. This paper describes the experiment design and results.

  14. The daily spiritual experiences scale and well-being: demographic comparisons and scale validation with older jewish adults and a diverse internet sample.

    Science.gov (United States)

    Kalkstein, Solomon; Tower, Roni Beth

    2009-12-01

    A substantive literature connects spirituality to positive physical, social, and mental health. In this study, the Daily Spiritual Experiences Scale (DSES) was administered to 410 subjects who participated in a community study and to 87 residents at the Hebrew Home for the Aged at Riverdale (HHAR), the latter sample consisting primarily of older Jewish respondents. Internal consistency of the DSES in both samples was high and exploratory factor analyses revealed one dominant factor and a second factor, which included 14 and 2 items, respectively, consistent with the scale's original validation (Underwood and Teresi 2002). Demographic subgroup comparison among religious groups revealed significantly fewer daily spiritual experiences among Jews, and lowest scores among those respondents endorsing no religious affiliation. Women exhibited more frequent daily spiritual experience than men, and attainment of higher levels of education was associated with less frequent daily spiritual experience. All but one of the outcome measures of physical and psychologic well-being were found to be positively associated with the DSES so that more frequent daily spiritual experience correlated with less psychopathology, more close friendships, and better self-rated health. Directions for future research, study interpretation and limitations, and clinical implications for use of the DSES are discussed.

  15. CFD Validation Studies for Hypersonic Flow Prediction

    Science.gov (United States)

    Gnoffo, Peter A.

    2001-01-01

    A series of experiments to measure pressure and heating for code validation involving hypersonic, laminar, separated flows was conducted at the Calspan-University at Buffalo Research Center (CUBRC) in the Large Energy National Shock (LENS) tunnel. The experimental data serve as a focus for a code validation session but were not available to the authors until the conclusion of that session. The first set of experiments considered here involves Mach 9.5 and Mach 11.3 N2 flow over a hollow cylinder-flare with 30 degree flare angle at several Reynolds numbers sustaining laminar, separated flow. Truncated and extended flare configurations are considered. The second set of experiments, at similar conditions, involves flow over a sharp, double cone with fore-cone angle of 25 degrees and aft-cone angle of 55 degrees. Both sets of experiments involve 30 degree compressions. Location of the separation point in the numerical simulation is extremely sensitive to the level of grid refinement in the numerical predictions. The numerical simulations also show a significant influence of Reynolds number on extent of separation. Flow unsteadiness was easily introduced into the double cone simulations using aggressive relaxation parameters that normally promote convergence.
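
    The grid sensitivity noted above is commonly quantified with Richardson extrapolation on three systematically refined grids: the observed order of convergence follows from the three solutions, and with it a grid-converged estimate of the quantity of interest. The separation-point values below are made up for illustration.

      # Observed order and Richardson extrapolation from three grids (refinement ratio r).
      import numpy as np

      def observed_order(f_coarse, f_medium, f_fine, r=2.0):
          return np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)

      def richardson(f_medium, f_fine, p, r=2.0):
          return f_fine + (f_fine - f_medium) / (r ** p - 1.0)

      f3, f2, f1 = 0.302, 0.286, 0.280  # coarse, medium, fine solutions (illustrative)
      p = observed_order(f3, f2, f1)
      print(p, richardson(f2, f1, p))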

  16. [Validity of psychoprophylaxis in obstetrics. Authors' experience].

    Science.gov (United States)

    D'Alfonso, A; Zaurito, V; Facchini, D; Di Stefano, L; Patacchiola, F; Cappa, F

    1990-12-01

    The authors report results based on 20 years of practice of obstetric psychoprophylaxis (PPO). Data on course attendance, frequency, the primiparae/pluriparae ratio, labour, and the timing and mode of delivery are assembled. Moreover, neonatal status at birth and at the 10th day of life is investigated. The data obtained were compared with a control group consisting of women who received no treatment before delivery. The acquired experience confirms the utility of PPO in ordinary clinical practice.

  17. Validation of OpenFoam for heavy gas dispersion applications

    NARCIS (Netherlands)

    Mack, A.; Spruijt, M.P.N.

    2013-01-01

    In the present paper heavy gas dispersion calculations were performed with OpenFoam. For a wind tunnel test case, numerical data were validated against experiments. For a full-scale numerical experiment, a code-to-code comparison was performed with numerical results obtained from Fluent. The validation was

  18. GPM GROUND VALIDATION CITATION VIDEOS IPHEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Citation Videos IPHEx data were collected during the Integrated Precipitation and Hydrology Experiment (IPHEx) in the Southern...

  19. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  1. Fire Intensity Data for Validation of the Radiative Transfer Equation

    Energy Technology Data Exchange (ETDEWEB)

    Blanchat, Thomas K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jernigan, Dann A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    A set of experiments and test data are outlined in this report that provide radiation intensity data for the validation of models for the radiative transfer equation. The experiments were performed with lightly-sooting liquid hydrocarbon fuels that yielded fully turbulent fires (2 m diameter). In addition, supplemental measurements of air flow and temperature, fuel temperature and burn rate, and flame surface emissive power, wall heat, and flame height and width provide a complete set of boundary condition data needed for validation of models used in fire simulations.

  2. Mold-filling experiments for validation of modeling encapsulation. Part 1, "wine glass" mold.

    Energy Technology Data Exchange (ETDEWEB)

    Castaneda, Jaime N.; Grillet, Anne Mary; Altobelli, Stephen A. (New Mexico Resonance, Albuquerque, NM); Cote, Raymond O.; Mondy, Lisa Ann

    2005-06-01

    The C6 project 'Encapsulation Processes' has been designed to obtain experimental measurements for discovery of phenomena critical to improving these processes, as well as data required in the verification and validation plan (Rao et al. 2001) for model validation of flow in progressively complex geometries. We have observed and recorded the flow of clear, Newtonian liquids and opaque, rheologically complex suspensions in two mold geometries. The first geometry is a simple wineglass geometry in a cylinder and is reported here in Part 1. The results in a more realistic encapsulation geometry are reported in Part 2.

  3. Physics validation of detector simulation tools for LHC

    International Nuclear Information System (INIS)

    Beringer, J.

    2004-01-01

    Extensive studies aimed at validating the physics processes built into the detector simulation tools Geant4 and Fluka are in progress within all Large Hadron Collider (LHC) experiments, within the collaborations developing these tools, and within the LHC Computing Grid (LCG) Simulation Physics Validation Project, which has become the primary forum for these activities. This work includes detailed comparisons with test beam data, as well as benchmark studies of simple geometries and materials with single incident particles of various energies for which experimental data is available. We give an overview of these validation activities with emphasis on the latest results.

  4. Shielding experiments for accelerator facilities

    Energy Technology Data Exchange (ETDEWEB)

    Nakashima, Hiroshi; Tanaka, Susumu; Sakamoto, Yukio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment] [and others

    2000-06-01

    A series of shielding experiments was carried out by using AVF cyclotron accelerator of TIARA at JAERI in order to validate shielding design methods for accelerator facilities in intermediate energy region. In this paper neutron transmission experiment through thick shields and radiation streaming experiment through a labyrinth are reported. (author)

  6. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  7. The Tromso Infant Faces Database (TIF): Development, Validation and Application to Assess Parenting Experience on Clarity and Intensity Ratings.

    Science.gov (United States)

    Maack, Jana K; Bohne, Agnes; Nordahl, Dag; Livsdatter, Lina; Lindahl, Åsne A W; Øvervoll, Morten; Wang, Catharina E A; Pfuhl, Gerit

    2017-01-01

    Newborns and infants depend heavily on successfully communicating their needs, e.g., through crying and facial expressions. Although there is growing interest in the mechanisms of, and possible influences on, the recognition of facial expressions in infants, until now no validated database of emotional infant faces has existed. In the present article we introduce a standardized and freely available face database containing Caucasian infant face images from 18 infants 4 to 12 months old. The development and validation of the Tromsø Infant Faces (TIF) database is presented in Study 1. Over 700 adults categorized the photographs into seven emotion categories (happy, sad, disgusted, angry, afraid, surprised, neutral) and rated their intensity, clarity and valence. In order to examine the relevance of TIF, we then present its first application in Study 2, investigating differences in emotion recognition across different stages of parenthood. We found a small gender effect, in that women gave higher intensity and clarity ratings than men. Moreover, parents of young children rated the images as clearer than all the other groups, and parents rated "neutral" expressions as clearer and more intense. Our results suggest that caretaking experience provides an implicit advantage in the processing of emotional expressions in infant faces, especially for the more difficult, ambiguous expressions.

  8. Development and validation of sodium fire analysis code ASSCOPS

    International Nuclear Information System (INIS)

    Ohno, Shuji

    2001-01-01

    Version 2.1 of the ASSCOPS sodium fire analysis code was developed to evaluate the thermal consequences of a sodium leak and consequent fire in LMFBRs. This report describes the computational models and the validation studies using the code. ASSCOPS calculates sodium droplet and pool fires and the consequent heat and mass transfer behavior. Analyses of sodium pool and spray fire experiments confirmed that this code and the parameters used in the validation studies gave valid results for the thermal consequences of sodium leaks and fires. (author)

  9. Assessment of juveniles testimonies’ validity

    Directory of Open Access Journals (Sweden)

    Dozortseva E.G.

    2015-12-01

    The article presents a review of English-language publications concerning the history and current state of differential psychological assessment of the validity of testimonies produced by child and adolescent victims of crimes. The topic is highly relevant in Russia because Russian specialists tend to use methods and instruments developed abroad in this field for forensic assessments of witness testimony veracity. A system of Statement Validity Analysis (SVA) by means of Criteria-Based Content Analysis (CBCA) and the Validity Checklist is described. The results of laboratory and field studies of the validity of the CBCA criteria with child and adult witnesses are discussed. The data show a good differentiating capacity for the method, but also a high probability of error. The researchers recommend implementation of SVA in the criminal investigation process, but not in forensic assessment. Promising new developments in methods for differentiating witness statements based on real versus invented experience are noted. The conclusion is drawn that empirical studies and special work on the adaptation and development of new approaches should precede their implementation in Russian criminal investigation and forensic assessment practice.

  10. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of detailed simulation models is presented. This kind of validation is always related to an experimental case. Empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures of the simulation model, and it can also be used as a guide in the design of later experiments. Three steps can be clearly differentiated. Sensitivity analysis: it can be made with a DSA, differential sensitivity analysis, and with a MCSA, Monte-Carlo sensitivity analysis. Finding the optimal domains of the input parameters: a procedure based on Monte-Carlo methods and cluster techniques has been developed to find the optimal domains of these parameters. Residual analysis: this analysis has been made in the time domain and in the frequency domain, using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings in Spain is presented, studying the behavior of building components in a Test Cell of the LECE at CIEMAT. (Author) 17 refs.
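
    The residual-analysis step, in both domains mentioned above, can be sketched directly: form the residual series between measurement and simulation, then examine its autocorrelation (time domain) and power spectrum (frequency domain); structure in either view flags a model deficiency. The implementation below is a generic illustration, not the paper's code.

      # Residual analysis of simulated vs. measured outputs in time and frequency domains.
      import numpy as np

      def residual_analysis(measured, simulated, dt=1.0):
          r = np.asarray(measured, float) - np.asarray(simulated, float)
          r -= r.mean()
          # Normalized autocorrelation at lags 0..N-1 (time-domain view):
          acf = np.correlate(r, r, mode="full")[r.size - 1:] / (r.var() * r.size)
          # Periodogram of the residuals (frequency-domain view):
          freqs = np.fft.rfftfreq(r.size, d=dt)
          psd = np.abs(np.fft.rfft(r)) ** 2 / r.size
          return acf, freqs, psd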

  11. Validation of heat transfer models for gap cooling

    International Nuclear Information System (INIS)

    Okano, Yukimitsu; Nagae, Takashi; Murase, Michio

    2004-01-01

    For severe accident assessment of a light water reactor, models of heat transfer in a narrow annular gap between overheated core debris and the reactor pressure vessel are important for evaluating vessel integrity and accident management. The authors developed and improved such heat transfer models; however, validation was insufficient regarding the applicability of the gap heat flux correlation to debris cooling in the vessel lower head and the applicability of the local boiling heat flux correlations to high-pressure conditions. Therefore, in this paper, we evaluate the validity of the heat transfer models and correlations through analyses of the ALPHA and LAVA experiments, in which molten aluminum oxide (Al2O3) at about 2700 K was poured into a high-pressure water pool in a small-scale simulated vessel lower head. In the heating process of the vessel wall, the calculated heating rate and peak temperature agreed well with the measured values, confirming the validity of the heat transfer models and the gap heat flux correlation. In the cooling process of the vessel wall, the calculated cooling rate was compared with the measured value, confirming the validity of the nucleate boiling heat flux correlation. The peak temperatures of the vessel wall in the ALPHA and LAVA experiments were lower than the temperature at the minimum heat flux point between film boiling and transition boiling, so the minimum heat flux correlation could not be validated. (author)

  12. Stratification issues in the primary system. Review of available validation experiments and State-of-the-Art in modelling capabilities (StratRev)

    International Nuclear Information System (INIS)

    Westin, J.; Henriksson, M.; Paettikangas, T.; Toppila, T.; Raemae, T.; Kudinov, P.; Anglart, H.

    2009-08-01

    The objective of the present report is to review available validation experiments and the state of the art in modelling of stratification and mixing in the primary system of Light Water Reactors. A topical workshop was arranged in Älvkarleby in June 2008 within the framework of BWR-OG, and the presentations from various utilities showed that stratification issues are not unusual and can cause costly production stops. It is desirable to take action to reduce the probability of stratification occurring, and to develop well-validated and accepted tools and procedures for analyzing stratification events when they arise. A research plan covering the main questions is outlined, and a few suggestions regarding more limited research activities are given. Since many stratification events result in thermal loads that are localized in time and space, CFD is a suitable tool. However, the often very large and complex geometry poses a great challenge to CFD, and it is important to increase complexity step by step, with intermediate validation against relevant experimental data. The ultimate goal is to establish Best Practice Guidelines that can be followed both by utilities and authorities in case of an event involving stratification and thermal loads. An extension of the existing Best Practice Guidelines for CFD in nuclear safety applications developed by OECD/NEA is thus suggested as a relevant target for a continuation project. (au)

  14. Performance support system in higher engineering education - introduction and empirical validation

    NARCIS (Netherlands)

    Stoyanov, S.; Stoyanov, Slavi; Kommers, Petrus A.M.; Bastiaens, T.J.; Martinez Mediano, Catalina

    2008-01-01

    The paper defines and empirically validates the concept of a performance support system in higher engineering education. The validation of the concept is based upon two studies, a pilot and an experiment, on the effect of a performance support system on the achievements and attitudes of students.

  15. Development and validation of the Alcohol Myopia Scale.

    Science.gov (United States)

    Lac, Andrew; Berger, Dale E

    2013-09-01

    Alcohol myopia theory conceptualizes the ability of alcohol to narrow attention and how this demand on mental resources produces the impairments of self-inflation, relief, and excess. The current research was designed to develop and validate a scale based on this framework. People who were alcohol users rated items representing myopic experiences arising from drinking episodes in the past month. In Study 1 (N = 260), the preliminary 3-factor structure was supported by exploratory factor analysis. In Study 2 (N = 289), the 3-factor structure was substantiated with confirmatory factor analysis, and it was superior in fit to an empirically indefensible 1-factor structure. The final 14-item scale was evaluated with internal consistency reliability, discriminant validity, convergent validity, criterion validity, and incremental validity. The alcohol myopia scale (AMS) illuminates conceptual underpinnings of this theory and yields insights for understanding the tunnel vision that arises from intoxication.
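
    The exploratory step of such scale development is straightforward to sketch. The example below generates synthetic Likert-type responses (not AMS data) and recovers a 3-factor structure with the factor_analyzer package; the confirmatory step would then typically be run in a dedicated SEM package.

    ```python
    # Sketch of the exploratory step in a scale-validation workflow, using
    # synthetic responses rather than AMS data. Requires: pip install factor_analyzer
    import numpy as np
    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(300, 3))                 # 3 latent factors
    loadings = np.kron(np.eye(3), np.ones((5, 1)))     # 15 items, 5 per factor
    items = pd.DataFrame(latent @ loadings.T + rng.normal(0, 0.7, (300, 15)),
                         columns=[f"item{i+1}" for i in range(15)])

    fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
    fa.fit(items)
    print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))
    print("cumulative variance explained:", fa.get_factor_variance()[2][-1].round(2))
    ```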

  16. Assessment of validity with polytrauma Veteran populations.

    Science.gov (United States)

    Bush, Shane S; Bass, Carmela

    2015-01-01

    Veterans with polytrauma have suffered injuries to multiple body parts and organ systems, including the brain. The injuries can generate a triad of physical, neurologic/cognitive, and emotional symptoms. Accurate diagnosis is essential for the treatment of these conditions and for fair allocation of benefits. To accurately diagnose polytrauma disorders and their related problems, clinicians take into account the validity of reported history and symptoms, as well as clinical presentations. The purpose of this article is to describe the assessment of validity with polytrauma Veteran populations. Review of scholarly and other relevant literature and clinical experience are utilized. A multimethod approach to validity assessment that includes objective, standardized measures increases the confidence that can be placed in the accuracy of self-reported symptoms and physical, cognitive, and emotional test results. Due to the multivariate nature of polytrauma and the multiple disciplines that play a role in diagnosis and treatment, an ideal model of validity assessment with polytrauma Veteran populations utilizes neurocognitive, neurological, neuropsychiatric, and behavioral measures of validity. An overview of these validity assessment approaches as applied to polytrauma Veteran populations is presented. Veterans, the VA, and society are best served when accurate diagnoses are made.

  17. Site characterization and validation - monitoring of saline tracer transport by borehole radar measurements

    International Nuclear Information System (INIS)

    Olsson, O.; Andersson, P.; Gustafsson, E.

    1991-08-01

    The objective of this experiment was to map tracer transport in fractured crystalline rock through a combination of radar difference tomography and measurements of tracer concentration in boreholes and the validation drift. The experiment was performed twice: first the D-boreholes were used as a sink, and then they were replaced by the validation drift and the experiment was repeated. In both experiments a saline tracer (200 ml/min, 2% salinity) was injected into fracture zone H about 25 m from the validation drift. The experiment revealed an inhomogeneous transmissivity distribution in Zone H. A significant portion of the tracer is transported upwards along Zone H and towards boreholes T1, T2, and W1. The breakthrough data from both experiments indicate that there are two major transport paths from borehole C2 to the D-boreholes/validation drift: one slow and diluted path to the bottom of the drift, which carries the bulk of the mass, and one fast path to the crown of the drift with high tracer concentration. The radar difference tomograms show that some tracer is lost through Zone S, which intersects Zone H and is nearly perpendicular to it. The intersection between the two zones seems to constitute a preferred flow path. The breakthrough data and the radar difference tomograms have also been used to estimate flow porosity; the estimates obtained are of the same order, approximately 10^-4. (au) (28 refs.)
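
    A rough sense of how flow porosity falls out of breakthrough data can be given with a back-of-envelope calculation. Except for the 25 m transport distance quoted above, all inputs below are invented for illustration; the published estimate of about 10^-4 rests on the actual swept volume and sink flow rate.

    ```python
    # Back-of-envelope flow-porosity estimate from a tracer breakthrough:
    # porosity ~ Q*t_b / (A*L). Except for the ~25 m distance quoted in the
    # abstract, all inputs are invented; the published ~1e-4 estimate used
    # the actual swept volume and sink flow rate.
    Q = 200e-6 / 60         # injection rate: 200 ml/min, in m^3/s
    t_b = 20 * 24 * 3600.0  # assumed breakthrough time: 20 days, in s
    L = 25.0                # transport distance to the drift [m]
    A = 5.0 * 0.5           # assumed flow cross-section [m^2]

    darcy_flux = Q / A      # specific discharge [m/s]
    velocity = L / t_b      # average linear (seepage) velocity [m/s]
    porosity = darcy_flux / velocity
    print(f"flow porosity ~ {porosity:.1e}")  # ~9e-2 with these toy inputs
    ```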

  18. The Challenging Experience Questionnaire: Characterization of challenging experiences with psilocybin mushrooms

    Science.gov (United States)

    Barrett, Frederick S.; Bradstreet, Matthew P.; Leoutsakos, Jeannie-Marie S.; Johnson, Matthew W.; Griffiths, Roland R.

    2017-01-01

    Acute adverse psychological reactions to classic hallucinogens (“bad trips”, or “challenging experiences”), while usually benign with proper screening, preparation, and support in controlled settings, remain a safety concern in uncontrolled settings (such as illicit use contexts). Anecdotal and case reports suggest potential adverse acute symptoms including affective (panic, depressed mood), cognitive (confusion, feelings of losing sanity), and somatic (nausea, heart palpitation) symptoms. Responses to items from several hallucinogen-sensitive questionnaires (Hallucinogen Rating Scale, the States of Consciousness Questionnaire, and the 5-Dimensional Altered States of Consciousness questionnaire) in an internet survey of challenging experiences with the classic hallucinogen psilocybin were used to construct and validate a Challenging Experience Questionnaire (CEQ). The stand-alone CEQ was then validated in a separate sample. Seven CEQ factors (grief, fear, death, insanity, isolation, physical distress, and paranoia) provide a phenomenological profile of challenging aspects of experiences with psilocybin. Factor scores were associated with the difficulty, meaningfulness, spiritual significance, and change in well-being attributed to the challenging experiences. The factor structure did not differ based on gender or prior struggle with anxiety or depression. The CEQ provides a basis for future investigation of predictors and outcomes of challenging experiences with psilocybin, and should be explored as a measure of challenging experiences with the broad class of classic hallucinogens. PMID:27856683

  19. Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare.

    Science.gov (United States)

    Flannelly, Kevin J; Flannelly, Laura T; Jankowski, Katherine R B

    2018-01-24

    The article defines, describes, and discusses the seven threats to the internal validity of experiments discussed by Donald T. Campbell in his classic 1957 article: history, maturation, testing, instrument decay, statistical regression, selection, and mortality. These concepts are said to be threats to the internal validity of experiments because they pose alternate explanations for the apparent causal relationship between the independent variable and dependent variable of an experiment if they are not adequately controlled. A series of simple diagrams illustrate three pre-experimental designs and three true experimental designs discussed by Campbell in 1957 and several quasi-experimental designs described in his book written with Julian C. Stanley in 1966. The current article explains why each design controls for or fails to control for these seven threats to internal validity.

  20. A broad view of model validation

    International Nuclear Information System (INIS)

    Tsang, C.F.

    1989-10-01

    The safety assessment of a nuclear waste repository requires the use of models. Such models need to be validated to ensure, as far as possible, that they are a good representation of the actual processes occurring in the real system. In this paper we attempt to take a broad view by reviewing the modeling process step by step and bringing out the need to validate every step of this process. Model validation includes not only comparison of modeling results with data from selected experiments, but also evaluation of procedures for the construction of conceptual and calculational models, as well as methodologies for studying data and parameter correlation. The need for advancing basic scientific knowledge in related fields, for multiple assessment groups, and for presenting our modeling efforts in the open literature for public scrutiny is also emphasized. 16 refs

  1. Validation of the factor structure of the adolescent dissociative experiences scale in a sample of trauma-exposed detained youth.

    Science.gov (United States)

    Kerig, Patricia K; Charak, Ruby; Chaplo, Shannon D; Bennett, Diana C; Armour, Cherie; Modrowski, Crosby A; McGee, Andrew B

    2016-09-01

    The inclusion of a dissociative subtype in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) criteria for the diagnosis of posttraumatic stress disorder (PTSD) has highlighted the need for valid and reliable measures of dissociative symptoms across developmental periods. The Adolescent Dissociative Experiences Scale (A-DES) is one of the few measures validated for young persons, but previous studies have yielded inconsistent results regarding its factor structure. Further, research to date on the A-DES has been based upon nonclinical samples of youth or those without a known history of trauma. To address these gaps in the literature, the present study investigated the factor structure and construct validity of the A-DES in a sample of highly trauma-exposed youth involved in the juvenile justice system. A sample of 784 youth (73.7% boys) recruited from a detention center completed self-report measures of trauma exposure and the A-DES, a subset of whom (n = 212) also completed a measure of PTSD symptoms. Confirmatory factor analyses revealed a best-fitting 3-factor structure comprising depersonalization or derealization, amnesia, and loss of conscious control, with configural and metric invariance across gender. Logistic regression analyses indicated that the depersonalization or derealization factor effectively distinguished between those youth who did and did not likely meet criteria for a diagnosis of PTSD, as well as those with PTSD who did and did not likely meet criteria for the dissociative subtype. These results provide support for the multidimensionality of the construct of posttraumatic dissociation and contribute to the understanding of the dissociative subtype of PTSD among adolescents. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
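
    The discrimination analysis reported above can be sketched as follows; the data are synthetic, and the effect size is an assumption, not the study's estimate.

    ```python
    # Sketch of a discrimination analysis: logistic regression testing whether
    # a depersonalization/derealization factor score separates likely-PTSD
    # from non-PTSD cases. All data below are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    n = 212
    depersonalization = rng.normal(0, 1, n)
    # Synthetic outcome: higher factor scores raise the odds of a PTSD flag.
    p = 1 / (1 + np.exp(-(0.9 * depersonalization - 0.5)))
    ptsd = rng.binomial(1, p)

    X = depersonalization.reshape(-1, 1)
    clf = LogisticRegression().fit(X, ptsd)
    auc = roc_auc_score(ptsd, clf.predict_proba(X)[:, 1])
    print(f"odds ratio per SD = {np.exp(clf.coef_[0][0]):.2f}, AUC = {auc:.2f}")
    ```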

  2. Investigation of the uncertainty of a validation experiment due to uncertainty in its boundary conditions

    International Nuclear Information System (INIS)

    Harris, J.; Nani, D.; Jones, K.; Khodier, M.; Smith, B.L.

    2011-01-01

    Elements contributing to uncertainty in experimental repeatability are quantified for data acquisition in a bank of cylinders. The cylinder bank resembles the lower plenum of a high temperature reactor, with cylinders arranged on equilateral triangles with a pitch-to-diameter ratio of 1.7. The 3-D as-built geometry was measured by imaging reflections off the internal surfaces of the facility; this information is useful for building CFD grids for validation studies. Time-averaged Particle Image Velocimetry (PIV) measurements were acquired daily over several months, along with the pressure drop between two cylinders and the atmospheric pressure. The PIV data and pressure drop were correlated with atmospheric conditions and changes in the experimental setup. It was found that atmospheric conditions play little role in the channel velocity, but impact the pressure drop significantly. The adjustments made to the experiment setup did not change the results. However, in some cases, the wake behind a cylinder was shifted significantly from one day to the next; these changes did not correlate with ambient pressure, room temperature, or tear-downs/rebuilds of the facility. (author)
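
    The daily-correlation bookkeeping described above reduces to a small exercise in tabular statistics. The sketch below uses synthetic logs (the column names and magnitudes are assumptions) to show how a pressure drop that tracks atmospheric pressure stands out from a channel velocity that does not.

    ```python
    # Sketch of repeatability bookkeeping: correlate daily-measured quantities
    # with ambient conditions. Synthetic data; the real study correlated PIV
    # channel velocity and cylinder pressure drop with atmospheric conditions.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    days = 90
    p_atm = 86 + rng.normal(0, 0.5, days)          # kPa at altitude (assumed)
    log = pd.DataFrame({
        "p_atm_kPa": p_atm,
        "room_T_C": 21 + rng.normal(0, 1, days),
        # The pressure drop tracks air density (hence p_atm); velocity does not:
        "dp_Pa": 120 * p_atm / 86 + rng.normal(0, 1, days),
        "u_channel_mps": 3.0 + rng.normal(0, 0.02, days),
    })
    print(log.corr().loc[["dp_Pa", "u_channel_mps"],
                         ["p_atm_kPa", "room_T_C"]].round(2))
    ```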

  3. GPM GROUND VALIDATION KCBW NEXRAD GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation KCBW NEXRAD GCPEx dataset was collected during January 9, 2012 to March 12, 2012 for the GPM Cold-season Precipitation Experiment (GCPEx)....

  4. CANDU radiotoxicity inventories estimation: A calculated experiment cross-check for data verification and validation

    International Nuclear Information System (INIS)

    Pavelescu, Alexandru Octavian; Cepraga, Dan Gabriel

    2007-01-01

    This paper concerns the Clearance Potential Index and the Ingestion and Inhalation Hazard Factors of spent nuclear fuel and radioactive wastes. The study required a complex activity consisting of various phases, such as the acquisition, setup, validation and application of procedures, codes and libraries. The paper reflects the validation phase of this study. Its objective was to compare the measured inventories of selected actinide and fission product radionuclides in an element from a Pickering CANDU reactor with inventories predicted using a recent version of ORIGEN-ARP from SCALE 5, coupled with the time-dependent cross-section library CANDU 28.lib produced by the SAS2H sequence of SCALE 4.4a. In this way, the procedures, codes and libraries for the characterization of radioactive material in terms of radioactive inventories, clearance, and biological hazard factors are being qualified and validated, in support of the safety management of radioactive wastes. (authors)
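
    The verification step boils down to calculated-to-experimental (C/E) ratios per nuclide. A minimal sketch, with invented placeholder values rather than the Pickering assay or ORIGEN-ARP output:

    ```python
    # Sketch of a verification step: calculated-to-experimental (C/E) ratios
    # for nuclide inventories. Values are invented placeholders.
    calculated = {"Cs-137": 1.52e15, "Sr-90": 1.08e15, "Pu-239": 2.3e14}  # Bq/kgU (assumed)
    measured   = {"Cs-137": 1.47e15, "Sr-90": 1.15e15, "Pu-239": 2.4e14}

    for nuclide, c in calculated.items():
        e = measured[nuclide]
        print(f"{nuclide}: C/E = {c / e:.3f} ({(c / e - 1) * 100:+.1f}%)")
    ```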

  5. College Experience Scale (EExU

    Directory of Open Access Journals (Sweden)

    Angélica Juárez

    2017-07-01

    Full Text Available The experience of being a university student (University Experiences) has been poorly studied so far. However, research in this field can provide valuable information about the quality of academic life, wellbeing or stress in this population. There is a lack of psychological tests that explore this theoretical construct. The aim of this study was to develop and validate a scale for measuring University Experiences; to this end, 314 college students were invited to participate in the validation. The students were enrolled in different degree programs and their average age was 20 years. The University Experiences Scale (EExU) has adequate psychometric properties. It has a structure of four factors: experience satisfaction, support perception, experience perception and lifestyle adjustment. These factors explain 43.1% of the variance. The grouping of the factors of the College Experience Scale concurs with data reported in the literature on this concept; however, this is the first questionnaire designed to measure it. We anticipate that future studies will verify the performance of the scale in different student populations and analyze its psychometric properties and its possible association with other psychological variables that affect college students and their health.

  6. Thermal-hydraulic codes validation for safety analysis of NPPs with RBMK

    International Nuclear Information System (INIS)

    Brus, N.A.; Ioussoupov, O.E.

    2000-01-01

    This work is devoted to the validation of western thermal-hydraulic codes (RELAP5/MOD3.2 and ATHLET 1.1 Cycle C) as applied to Russian-designed light water reactors. Such validation is needed because of the features of the RBMK reactor design and thermal-hydraulics in comparison with the PWR and BWR reactors for which these codes were developed and validated. The validation studies conclude with a comparison of results calculated with the thermal-hydraulic codes against experimental data obtained earlier at thermal-hydraulic test facilities. (authors)

  7. Validation of gamma irradiator controls for quality and regulatory compliance

    International Nuclear Information System (INIS)

    Harding, R.B.; Pinteric, F.J.A.

    1995-01-01

    Since 1978 the U.S. Food and Drug Administration (FDA) has had both the legal authority and the Current Good Manufacturing Practice (CGMP) regulations in place to require irradiator owners who process medical devices to produce evidence of Irradiation Process Validation. One of the key components of Irradiation Process Validation is the validation of the irradiator controls. However, it is only recently that FDA audits have focused on this component of the process validation. What is Irradiator Control System Validation? What constitutes evidence of control? How do owners obtain evidence? What is the irradiator supplier's role in validation? How does the ISO 9000 Quality Standard relate to the FDA's CGMP requirement for evidence of Control System Validation? This paper presents answers to these questions based on the recent experiences of Nordion's engineering and product management staff, who have worked with several US-based irradiator owners. This topic - Validation of Irradiator Controls - is a significant regulatory compliance and operations issue within the irradiator suppliers' and users' community. (author)

  8. Experimenting with a design experiment

    Directory of Open Access Journals (Sweden)

    Bakker, Judith

    2012-12-01

    Full Text Available The design experiment is an experimental research method that aims to help design and further develop new (policy) instruments. For the development of a set of guidelines for the facilitation of citizens' initiatives by local governments, we are experimenting with this method. It offers good opportunities for modeling interventions by testing their instrumental validity - the usefulness for the intended practical purposes. At the same time, design experiments are also useful for evaluating the empirical validity of theoretical arguments and for further developing these arguments in the light of empirical evidence (e.g., by using the technique of pattern matching). We describe how we have applied this methodology in two cases and discuss our research approach. We encountered some unexpected difficulties, especially in the cooperation with professionals and citizens. These difficulties complicate the valid attribution of causal effects to the use of the new instrument. However, our preliminary conclusion is that design experiments are useful in our field of study.

  9. Child abuse: validation of a questionnaire translated into Brazilian Portuguese

    Directory of Open Access Journals (Sweden)

    Glaucia Marengo

    2013-04-01

    Full Text Available This study sought to validate the Portuguese translation of a questionnaire on maltreatment of children and adolescents, developed by Russell et al., and to test its psychometric properties for use in Brazil. The original questionnaire was translated into Portuguese using a standardized forward-backward linguistic translation method. Both face and content validity were tested in a small pilot study (n = 8). In the main study, a convenience sample of 80 graduate dentistry students with different specialties, from Curitiba, PR, Brazil, were invited to complete the final Brazilian version of the questionnaire. Discriminant validity was assessed by comparing the results obtained from the questionnaire across specialties (pediatric dentistry, for example). The respondents completed the questionnaire again after 4 weeks to evaluate test-retest reliability. The comparison of test versus retest answers showed good agreement (kappa > 0.53, intraclass correlation > 0.84) for most questions. In regard to discriminant validity, a statistically significant difference was observed only in the experience and interest domains, in which pediatric dentists showed more experience with and interest in child abuse compared with dentists of other specialties (Mann-Whitney test, p < 0.05). The Brazilian version of the questionnaire was valid and reliable for assessing knowledge regarding child abuse among Portuguese-speaking dentists.
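
    Test-retest agreement of the kind reported above is typically computed per item with Cohen's kappa (and with the intraclass correlation for ordinal scores). A minimal sketch with synthetic responses:

    ```python
    # Sketch of a per-item test-retest agreement statistic using Cohen's
    # kappa. Responses are synthetic, not from the questionnaire study.
    from sklearn.metrics import cohen_kappa_score

    test   = ["yes", "no", "yes", "yes", "no", "unsure", "yes", "no"]
    retest = ["yes", "no", "yes", "no",  "no", "unsure", "yes", "no"]
    print(f"kappa = {cohen_kappa_score(test, retest):.2f}")
    ```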

  10. Jendl-3.1 iron validation on the PCA-REPLICA (H2O/Fe) shielding benchmark experiment

    International Nuclear Information System (INIS)

    Pescarini, M.; Borgia, M. G.

    1997-03-01

    The PCA-REPLICA (H2O/Fe) neutron shielding benchmark experiment is analysed using the SN 2-D DOT 3.5-E code and the 3-D-equivalent flux synthesis method. This engineering benchmark reproduces the ex-core radial geometry of a PWR, including a mild steel reactor pressure vessel (RPV) simulator, and is designed to test the accuracy of the calculation of the in-vessel neutron exposure parameters. This accuracy is strongly dependent on the quality of the iron neutron cross sections used to describe the nuclear reactions within the RPV simulator. In particular, in this report, the cross sections based on the JENDL-3.1 iron data files are tested through a comparison of the calculated integral and spectral results with the corresponding experimental data. In addition, the present results are compared, on the same benchmark experiment, with those of a preceding ENEA-Bologna validation of the ENDF/B VI iron cross sections. The integral result comparison indicates that, for all the threshold detectors considered (Rh-103 (n,n') Rh-103m, In-115 (n,n') In-115m and S-32 (n,p) P-32), the JENDL-3.1 natural iron data produce satisfactory results similar to those obtained with the ENDF/B VI iron data. On the contrary, when the JENDL-3.1 Fe-56 data file is used, strongly underestimated results are obtained for the lower-energy threshold detectors, Rh-103 and In-115. This fact becomes more evident with increasing neutron penetration depth in the RPV simulator.

  11. Validation of the Nuclear Design Method for MOX Fuel Loaded LWR Cores

    International Nuclear Information System (INIS)

    Saji, E.; Inoue, Y.; Mori, M.; Ushio, T.

    2001-01-01

    The actual batch loading of mixed-oxide (MOX) fuel in light water reactors (LWRs) is now ready to start in Japan. One of the efforts devoted to realizing this batch loading has been validation of the nuclear design methods used to calculate the characteristics of MOX-fuel-loaded LWR cores. This paper summarizes the validation work for the applicability of the CASMO-4/SIMULATE-3 in-core fuel management code system to MOX-fuel-loaded LWR cores. This code system is widely used by a number of electric power companies for the core management of their commercial LWRs. The validation work was performed for both boiling water reactor (BWR) and pressurized water reactor (PWR) applications. Each validation consists of two parts: analyses of critical experiments and core tracking calculations for operating plants. For the critical experiments, we chose a series of experiments known as the VENUS International Program (VIP), performed at the SCK/CEN laboratory in Mol, Belgium. VIP includes both BWR and PWR fuel assembly configurations. As for the core tracking calculations, the operating data of MOX-fuel-loaded BWR and PWR cores in Europe were utilized.

  12. Towards a realistic approach to validation of reactive transport models for performance assessment

    International Nuclear Information System (INIS)

    Siegel, M.D.

    1993-01-01

    Performance assessment calculations are based on geochemical models that assume that interactions among radionuclides, rocks and groundwaters under natural conditions can be estimated or bounded by data obtained from laboratory-scale studies. The data include radionuclide distribution coefficients, measured in saturated batch systems of powdered rocks, and retardation factors measured in short-term column experiments. Traditional approaches to model validation cannot be applied in a straightforward manner to the simple reactive transport models that use these data. An approach to model validation in support of performance assessment is described in this paper. It is based on a recognition of different levels of model validity and is compatible with the requirements of current regulations for high-level waste disposal. Activities being carried out in support of this approach include (1) laboratory and numerical experiments to test the validity of important assumptions inherent in current performance assessment methodologies, (2) integrated transport experiments, and (3) development of a robust coupled reaction/transport code for sensitivity analyses using massively parallel computers.
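
    The laboratory quantities mentioned above enter performance assessment through a simple retardation relation, R = 1 + (rho_b/theta)*Kd. A minimal sketch with generic, assumed parameter values:

    ```python
    # Minimal sketch of the standard retardation model used with
    # laboratory-derived distribution coefficients. All values are generic
    # illustrations, not site data.
    rho_b = 1600.0   # bulk density [kg/m^3] (assumed)
    theta = 0.30     # porosity / volumetric water content (assumed)
    Kd = 0.005       # distribution coefficient [m^3/kg] (assumed)

    R = 1 + (rho_b / theta) * Kd
    v_water = 1e-6   # groundwater velocity [m/s] (assumed)
    print(f"R = {R:.1f}; radionuclide velocity = {v_water / R:.2e} m/s")
    ```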

  13. Report on the evaluation and validation of the ELECTRA WoC control concept

    DEFF Research Database (Denmark)

    Strasser, Thomas; Latif, Aadil; Leimgruber, Fabian

    This report summarizes the evaluation and validation of the ELECTRA Web-of-Cells concept, which has been implemented for proof of concept in selected validation environments provided by the project partners. The experiments were realized in simulations and in laboratory environments and showed the feasibility of the Web-of-Cells concept and its integrated functions for balancing and voltage control. Future work is necessary to further refine the concept and functions and to bring the controller implementations to higher technology readiness levels.

  14. Ensuring the validity of calculated subcritical limits

    International Nuclear Information System (INIS)

    Clark, H.K.

    1977-01-01

    The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, "Validation of Calculational Methods for Nuclear Criticality Safety." The computer codes used for criticality safety computations, which are listed and briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods with experiment are made to establish bias. Occasionally, subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin.

  15. Use of the recognition heuristic depends on the domain's recognition validity, not on the recognition validity of selected sets of objects.

    Science.gov (United States)

    Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E

    2017-07-01

    According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, the success, and thus usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers be sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity) or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between the domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.

  16. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    Science.gov (United States)

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Because of the high stakes in the clinical setting, it is critical to calculate the effect of these assumptions on the CFD simulation results. Existing CFD validation approaches, however, do not quantify the error in the simulation results due to the CFD solver's modeling assumptions; instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
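
    The core of the comparison step can be sketched as follows: interpolate the CFD solution onto the PIV measurement locations along the validation line and summarize the discrepancy. This illustrates only the raw comparison; the methodology described above additionally parses out experimental and numerical error sources. All data below are synthetic.

    ```python
    # Sketch of a CFD-vs-PIV comparison along a validation line; the profiles
    # below are synthetic, not the aneurysm dataset.
    import numpy as np

    s = np.linspace(0.0, 1.0, 50)             # arc length along the line
    piv_u = np.sin(np.pi * s)                 # "measured" velocity profile
    cfd_s = np.linspace(0.0, 1.0, 200)        # CFD sample locations
    cfd_u = 1.05 * np.sin(np.pi * cfd_s)      # "simulated" profile, 5% high

    cfd_on_piv = np.interp(s, cfd_s, cfd_u)   # align the two grids
    rel_err = np.abs(cfd_on_piv - piv_u) / np.abs(piv_u).max()
    print(f"discrepancy: {rel_err.mean() * 100:.2f} +/- {rel_err.std() * 100:.2f} %")
    ```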

  17. ASTEC validation on PANDA SETH

    International Nuclear Information System (INIS)

    Bentaib, Ahmed; Bleyer, Alexandre; Schwarz, Siegfried

    2009-01-01

    The ASTEC code, developed by IRSN and GRS, aims to provide an integral code for the simulation of the whole course of severe accidents in Light-Water Reactors. ASTEC is a complex system of codes for reactor safety assessment; in this validation, only its thermal-hydraulic module is used. ASTEC is a lumped-parameter code able to represent multi-compartment containments. It uses the following main elements: zones (compartments), junctions (liquid and atmospheric) and structures. The zones are connected by junctions and contain steam, water and non-condensable gases. They exchange heat with structures through different heat transfer regimes: convection, radiation and condensation. This paper presents the validation of ASTEC V1.3 on tests T9 and T9bis of the PANDA OECD/SETH experimental program, investigating the impact of injection velocity and steam condensation on the plume shape and on the gas distribution. Dedicated meshes were developed to simulate the test facility with the two vessels DW1 and DW2 and the interconnecting pipe. The numerical results are analyzed and compared to the experiments; the comparison shows good agreement between experiments and calculations. (author)
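
    The zone/junction idea behind such lumped-parameter codes can be illustrated with a minimal data structure. This is a conceptual sketch only, not ASTEC's implementation; the orifice-flow estimate and all values are assumptions.

    ```python
    # Minimal data-structure sketch of a lumped-parameter containment model
    # (zones connected by junctions). Conceptual illustration, not ASTEC code.
    from dataclasses import dataclass

    @dataclass
    class Zone:
        name: str
        volume_m3: float
        pressure_pa: float
        steam_fraction: float

    @dataclass
    class Junction:
        upstream: Zone
        downstream: Zone
        area_m2: float

        def flow_kg_s(self, coeff: float = 0.6) -> float:
            """Crude incompressible orifice estimate from the pressure difference."""
            dp = self.upstream.pressure_pa - self.downstream.pressure_pa
            rho = 1.2  # gas density [kg/m^3], assumed constant for the sketch
            sign = 1.0 if dp >= 0 else -1.0
            return sign * coeff * self.area_m2 * (2 * rho * abs(dp)) ** 0.5

    dw1 = Zone("DW1", 90.0, 1.02e5, 0.4)
    dw2 = Zone("DW2", 90.0, 1.00e5, 0.1)
    pipe = Junction(dw1, dw2, 0.5)
    print(f"{pipe.upstream.name} -> {pipe.downstream.name}: {pipe.flow_kg_s():.2f} kg/s")
    ```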

  18. Validation of the MC{sup 2}-3/DIF3D Code System for Control Rod Worth via the BFS-75-1 Reactor Physics Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Sunghwan; Kim, Sang Ji [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this paper, control rod worths of the BFS-75-1 reactor physics experiment were examined using continuous-energy MCNP models and deterministic MC2-3/DIF3D models based on the ENDF/B-VII.0 library. We conclude that the ENDF/B-VII.0 library shows very good agreement for a small-size metal uranium fueled core surrounded by a depleted uranium blanket. However, the control rod heterogeneity effect reported in the reference is not significant in this problem because the tested control rod models were configured with a single rod. Hence, comparison with other control rod worth measurement data, such as the BFS-109-2A reactor physics experiment, is planned as a future study. The BFS-75-1 critical experiment was carried out in the BFS-1 facility of IPPE in Russia within the framework of validating an early phase of the KALIMER-150 design. A Monte-Carlo model of the BFS-75-1 critical experiment had been developed previously; however, due to incomplete information on the BFS-75-1 experiments, the Monte-Carlo models had been generated for the reference criticality and sodium void reactivity measurements with a disk-wise homogeneous model. Recently, KAERI performed another physics experiment, BFS-109-2A, in collaboration with the Russian IPPE. During the review of the experimental report of the BFS-109-2A critical experiments, valuable information on the BFS-1 facility that can also be used for the BFS-75-1 experiments was discovered.
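
    A control rod worth is commonly extracted from a pair of eigenvalue calculations with the rod withdrawn and inserted. A minimal sketch, with assumed k-eff values rather than BFS-75-1 results:

    ```python
    # Sketch of extracting a control rod worth from two eigenvalue
    # calculations. The k-eff values are invented placeholders.
    k_out = 1.00210   # multiplication factor, rod withdrawn (assumed)
    k_in  = 0.99650   # multiplication factor, rod inserted (assumed)

    worth = (k_out - k_in) / (k_out * k_in)   # reactivity difference
    print(f"rod worth = {worth * 1e5:.0f} pcm")
    ```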

  19. Validation of fracture flow models in the Stripa project

    International Nuclear Information System (INIS)

    Herbert, A.; Dershowitz, W.; Long, J.; Hodgkinson, D.

    1991-01-01

    One of the objectives of Phase III of the Stripa Project is to develop and evaluate approaches for the prediction of groundwater flow and nuclide transport in a specific unexplored volume of the Stripa granite, and to make a comparison with data from field measurements. During the first stage of the project, a prediction of inflow to the D-holes, an array of six parallel, closely spaced 100 m boreholes, was made based on data from six other boreholes. These data included fracture geometry, stress, single-borehole geophysical logging, crosshole and reflection radar and seismic tomograms, head monitoring and single-hole packer test measurements. Maps of fracture traces on the drift walls have also been made. The D-holes are located along a future Validation Drift which will be excavated. The water inflow to the D-holes has been measured in an experiment called the Simulated Drift Experiment. The paper reviews the Simulated Drift Experiment validation exercise. Following a discussion of the approach to validation, the characterization data and their preliminary interpretation are summarised and commented upon. The work has proved that it is feasible to carry through all the complex and interconnected tasks associated with gathering and interpreting characterization data, developing and applying complex models, and comparing predictions with measured inflows. This exercise has provided detailed feedback to the experimental and theoretical work required for measurements and predictions of flow into the Validation Drift. Computer codes used: CHANGE, FRACMAN, MAFIC, NAPSAC and TRINET. 2 figs., 2 tabs., 19 refs

  20. Translation and cultural adaptation of the States of Consciousness Questionnaire (SOCQ and statistical validation of the Mystical Experience Questionnaire (MEQ30 in Brazilian Portuguese

    Directory of Open Access Journals (Sweden)

    EDUARDO EKMAN SCHENBERG

    Full Text Available Abstract Background: The States of Consciousness Questionnaire (SOCQ) was developed to assess the occurrence and features of the change in consciousness induced by psilocybin, and includes the Mystical Experience Questionnaire (MEQ), developed to assess the occurrence of mystical experiences in altered states of consciousness. Objective: To translate the SOCQ to Brazilian Portuguese and validate the 30-item MEQ. Methods: The SOCQ was translated to Brazilian Portuguese and back-translated into English. The two English versions were compared and differences corrected, resulting in a Brazilian translation. Using an internet survey, 1504 Portuguese-speaking subjects answered the translated version of the SOCQ. The 4-factor version of the MEQ30 was analyzed using confirmatory factor analysis and reliability analysis. Results: A Brazilian Portuguese version of the SOCQ was made available. Goodness-of-fit indexes indicated that the data met the factorial structure proposed for the English MEQ30. Factors presented excellent to acceptable reliability according to Cronbach's alpha: mystical (0.95), positive mood (0.71), transcendence of time/space (0.83), and ineffability (0.81). Discussion: The Brazilian Portuguese version of the MEQ30 is validated and fits the factorial structure found for the original English version. The SOCQ is now available to the Brazilian Portuguese-speaking population, allowing studies in different languages to be conducted and compared systematically.

  1. The Bolund Experiment: Overview and Background

    DEFF Research Database (Denmark)

    Bechmann, Andreas; Berg, Jacob; Courtney, Michael

    The Bolund experiment is a measuring campaign performed in 2007 and 2008. The aim of the experiment is to measure the flow field around the Bolund hill in order to provide a dataset for validating numerical flow models. The present report gives an overview of the whole experiment.

  2. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory was confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, a machine function. Validation is defined as determining the degree to which a model and code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13 to 10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield.

  3. ENDF/B VI iron validation on the PCA-REPLICA (H2O/Fe) shielding benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Pescarini, M. [ENEA, Bologna (Italy). Centro Ricerche Energia `E. Clementel` - Area Energia e Innovazione

    1994-05-01

    The PCA-REPLICA (H2O/Fe) neutron shielding benchmark experiment is analysed using the SN 2-D DOT 3.5 code and the 3-D-equivalent flux synthesis method. This engineering benchmark reproduces the ex-core radial geometry of a PWR, including a mild steel reactor pressure vessel (RPV) simulator, and is designed to test the accuracy of the calculation of the in-vessel neutron exposure parameters (fast fluence and iron displacement rates). This accuracy is strongly dependent on the quality of the iron neutron cross sections used to describe the nuclear reactions within the RPV simulator. In particular, in this report, the cross sections based on the ENDF/B VI iron data files are tested through a comparison of the calculated integral and spectral results with the corresponding experimental data. In addition, the present results are compared, on the same benchmark experiment, with those of a preceding ENEA (Italian Agency for Energy, New Technologies and Environment)-Bologna validation of the JEF-2.1 iron cross sections. The integral result comparison indicates that, for all the threshold detectors considered (Rh-103 (n,n') Rh-103m, In-115 (n,n') In-115m and S-32 (n,p) P-32), the ENDF/B VI iron data produce better results than the JEF-2.1 iron data. In particular, in the ENDF/B VI calculations, an improvement of the in-vessel C/E (calculated/experimental) activity ratios for the lower-energy threshold detectors, Rh-103 and In-115, is observed. This improvement becomes more evident with increasing neutron penetration depth in the vessel. It is probably attributable to the fact that the inelastic scattering cross-section values of the ENDF/B VI Fe-56 data file, approximately in the 0.86-1.5 MeV energy range, are lower than the corresponding values of the JEF-2.1 data file.

  4. A systematic review of the reliability and validity of discrete choice experiments in valuing non-market environmental goods

    DEFF Research Database (Denmark)

    Rokotonarivo, Sarobidy; Schaafsma, Marije; Hockley, Neal

    2016-01-01

    DCE results were generally consistent with those of other stated preference techniques (convergent validity), but hypothetical bias was common. Evidence supporting theoretical validity (consistency with the assumptions of rational choice theory) was limited.

  5. Sampling Participants' Experience in Laboratory Experiments: Complementary challenges for more complete data collection

    Directory of Open Access Journals (Sweden)

    Alan eMcAuliffe

    2016-05-01

    Full Text Available Speelman and McGann's (2013) examination of the uncritical way in which the mean is often used in psychological research raises questions both about the average's reliability and about its validity. In the present paper, we argue that interrogating the validity of the mean involves, amongst other things, a better understanding of the person's experiences, and the meaning of their actions, at the time the behaviour of interest is carried out. Recently emerging approaches within Psychology and Cognitive Science have argued strongly that experience should play a more central role in our examination of behavioural data, but the relationship between experience and behaviour remains very poorly understood. We outline some of the history of the science on this fraught relationship, and argue that contemporary methods for studying experience fall into one of two categories. Wide approaches tend to incorporate naturalistic behaviour settings, but sacrifice accuracy and reliability in behavioural measurement. Narrow approaches maintain controlled measurement of behaviour, but involve too specific a sampling of experience, which obscures crucial temporal characteristics. We therefore argue for a novel, mid-range sampling technique that extends Hurlburt's Descriptive Experience Sampling and adapts it for the controlled setting of the laboratory. This Controlled Descriptive Experience Sampling may be an appropriate tool to help calibrate both the mean and the meaning of an experimental situation with one another.

  6. The proportion valid effect in covert orienting: strategic control or implicit learning?

    Science.gov (United States)

    Risko, Evan F; Stolz, Jennifer A

    2010-03-01

    It is well known that the difference in performance between valid and invalid trials in the covert orienting paradigm (i.e., the cueing effect) increases as the proportion of valid trials increases. This proportion valid effect is widely assumed to reflect "strategic" control over the distribution of attention. In the present experiments we determine whether this effect results from an explicit strategy or from implicit learning by probing participants' awareness of the proportion of valid trials. The results support the idea that the proportion valid effect in the covert orienting paradigm reflects implicit learning, not an explicit strategy.

  7. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    Science.gov (United States)

    Wiegers, Ann L

    2003-07-01

    Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only one part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation, by examination and the provision of objective evidence, that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required, and their use must be agreed to by the client (i.e., the end users of the test results, such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  8. Implicit attitudes towards homosexuality: reliability, validity, and controllability of the IAT.

    Science.gov (United States)

    Banse, R; Seise, J; Zerbes, N

    2001-01-01

    Two experiments were conducted to investigate the psychometric properties of an Implicit Association Test (IAT; Greenwald, McGhee, & Schwartz, 1998) adapted to measure implicit attitudes towards homosexuality. In the first experiment, the validity of the Homosexuality-IAT was tested using a known-groups approach. Implicit and explicit attitudes were assessed in heterosexual and homosexual men and women (N = 101). The results provided compelling evidence for the convergent and discriminant validity of the Homosexuality-IAT as a measure of implicit attitudes. No evidence was found for two alternative explanations of IAT effects (familiarity with the stimulus material and stereotype knowledge). The internal consistency of IAT scores was satisfactory (alphas > .80), but retest correlations were lower. In a second experiment (N = 79), it was shown that uninformed participants were able to fake positive explicit but not implicit attitudes. Discrepancies between implicit and explicit attitudes towards homosexuality could be partially accounted for by individual differences in the motivation to control prejudiced behavior, thus providing independent evidence for the validity of the implicit attitude measure. Neither explicit nor implicit attitudes could be changed by persuasive messages. The results of both experiments are interpreted as evidence for a single-construct account of implicit and explicit attitudes towards homosexuality.
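
    For reference, the now-standard IAT scoring, the D measure of Greenwald, Nosek, and Banaji (2003), postdates this study; its core idea, a latency difference scaled by the pooled standard deviation, is sketched below with synthetic latencies.

    ```python
    # Simplified sketch of the core idea behind the IAT D measure (Greenwald,
    # Nosek, & Banaji, 2003): latency difference over pooled SD. The full
    # algorithm adds trimming and error handling; latencies here are synthetic.
    import numpy as np

    rng = np.random.default_rng(4)
    rt_compatible = rng.normal(750, 120, 40)     # ms, synthetic latencies
    rt_incompatible = rng.normal(900, 140, 40)

    pooled_sd = np.concatenate([rt_compatible, rt_incompatible]).std(ddof=1)
    d_score = (rt_incompatible.mean() - rt_compatible.mean()) / pooled_sd
    print(f"D = {d_score:.2f}")
    ```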

  9. Statistical Analysis Methods for Physics Models Verification and Validation

    CERN Document Server

    De Luca, Silvia

    2017-01-01

    The validation and verification process is a fundamental step for any software like Geant4 and GeantV, which aims to perform data simulation using physics models and Monte Carlo techniques. As experimental physicists, we face the problem of comparing the results obtained from simulations with what the experiments actually observed. One way to address this problem is to perform a consistency test. Within the Geant group, we developed a compact C++ library which will be added to the automated validation process on the Geant Validation Portal.
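
    A consistency test of the kind mentioned above often amounts to a chi-square comparison of binned simulation output against observed counts. A minimal sketch with synthetic data (not a Geant4 distribution):

    ```python
    # Sketch of a binned consistency test between simulated and observed
    # distributions via a chi-square statistic. All counts are synthetic.
    import numpy as np
    from scipy.stats import chisquare

    observed = np.array([18, 45, 72, 60, 30, 12])            # counts per bin
    simulated = np.array([20, 50, 70, 55, 28, 14], dtype=float)
    expected = simulated * observed.sum() / simulated.sum()  # match totals

    stat, p = chisquare(observed, expected)
    print(f"chi2 = {stat:.2f}, p = {p:.3f}")
    ```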

  10. You don't have to believe everything you read: background knowledge permits fast and efficient validation of information.

    Science.gov (United States)

    Richter, Tobias; Schroeder, Sascha; Wöhrmann, Britta

    2009-03-01

    In social cognition, knowledge-based validation of information is usually regarded as relying on strategic and resource-demanding processes. Research on language comprehension, in contrast, suggests that validation processes are involved in the construction of a referential representation of the communicated information. This view implies that individuals can use their knowledge to validate incoming information in a routine and efficient manner. Consistent with this idea, Experiments 1 and 2 demonstrated that individuals are able to reject false assertions efficiently when they have validity-relevant beliefs. Validation processes were carried out routinely even when individuals were put under additional cognitive load during comprehension. Experiment 3 demonstrated that the rejection of false information occurs automatically and interferes with affirmative responses in a nonsemantic task (epistemic Stroop effect). Experiment 4 also revealed complementary interference effects of true information with negative responses in a nonsemantic task. These results suggest the existence of fast and efficient validation processes that protect mental representations from being contaminated by false and inaccurate information.

  11. Measuring the Pros and Cons of What It Means to Be a Black Man: Development and Validation of the Black Men's Experiences Scale (BMES).

    Science.gov (United States)

    Bowleg, Lisa; English, Devin; Del Rio-Gonzalez, Ana Maria; Burkholder, Gary J; Teti, Michelle; Tschann, Jeanne M

    2016-04-01

    Although extensive research documents that Black people in the U.S. frequently experience social discrimination, most of this research aggregates these experiences primarily or exclusively by race. Consequently, empirical gaps exist about the psychosocial costs and benefits of Black men's experiences at the intersection of race and gender. Informed by intersectionality, a theoretical framework that highlights how multiple social identities intersect to reflect interlocking social-structural inequality, this study addresses these gaps with the qualitative development and quantitative test of the Black Men's Experiences Scale (BMES). The BMES assesses Black men's negative experiences with overt discrimination and microaggressions, as well as their positive evaluations of what it means to be Black men. First, we conducted focus groups and individual interviews with Black men to develop the BMES. Next, we tested the BMES with 578 predominantly low-income urban Black men between the ages of 18 and 44. Exploratory factor analysis suggested a 12-item, 3-factor solution that explained 63.7% of the variance. We labeled the subscales: Overt Discrimination, Microaggressions, and Positives: Black Men. Confirmatory factor analysis supported the three-factor solution. As hypothesized, the BMES's subscales correlated with measures of racial discrimination, depression, resilience, and social class at the neighborhood level. Preliminary evidence suggests that the BMES is a reliable and valid measure of Black men's experiences at the intersection of race and gender.

  12. Validation of the actuator line/Navier Stokes technique using MEXICO measurements

    DEFF Research Database (Denmark)

    Shen, Wen Zhong; Zhu, Wei Jun; Sørensen, Jens Nørkær

    2010-01-01

    This paper concerns the contribution of DTU MEK to the international research collaboration project MexNext, within the framework of IEA Annex 29, to validate aerodynamic models and CFD codes using the existing measurements made in the previous EU-funded project MEXICO (Model Experiments in Controlled Conditions). The Actuator Line/Navier Stokes (AL/NS) technique developed at DTU is validated against the detailed MEXICO measurements. The AL/NS computations were carried out without modelling the DNW wind tunnel, at tunnel speeds of 10 m/s, 15 m/s and 24 m/s. Comparisons of blade loading between computations and measurements show...

  13. Validation of MCCI models implemented in ASTEC MEDICIS on OECD CCI-2 and CCI-3 experiments and further consideration on reactor cases

    Energy Technology Data Exchange (ETDEWEB)

    Agethen, K.; Koch, M.K., E-mail: agethen@lee.rub.de, E-mail: koch@lee.rub.de [Ruhr-Universitat Bochum, Energy Systems and Energy Economics, Reactor Simulation and Safety Group, Bochum (Germany)

    2014-07-01

    During a severe accident in a light water reactor, a loss of coolant can result in core melting and vessel failure. Afterwards, molten core material may discharge into the containment cavity and interact with the concrete basemat. Concrete erosion releases gases, which lead to exothermic oxidation reactions with the metals in the corium and to the formation of combustible mixtures. In this work the MEDICIS module of the Accident Source Term Evaluation Code (ASTEC) is validated against experiments of the OECD CCI programme. The primary focus is set on the CCI-2 experiment with limestone common sand (LCS) concrete, in which nearly homogeneous erosion appeared, and the CCI-3 experiment with siliceous concrete, in which increased lateral erosion occurred. These experiments enable the analysis of heat transfer in the axial and radial directions from the interior of the melt to the surrounding surfaces, as well as the impact of top flooding. For the simulation of both tests, two existing models in MEDICIS are used and analysed. The simulation results show good agreement of ablation behaviour, layer temperature and energy balance with the experimental results. Furthermore, a quasi-steady state appeared in the long-term energy balance. Finally, the basic data are scaled up to a generic reactor scenario, which shows that this quasi-steady state similarly occurs. (author)
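    For orientation, a lumped (0D) MCCI energy balance of the general kind such codes solve can be written as follows; this is a generic schematic form with illustrative symbols, not the actual MEDICIS equation set:

        \frac{d(M h)}{dt} = P_{res} - \varphi_{ax} A_{ax} - \varphi_{rad} A_{rad} - \varphi_{top} A_{top} + \dot{m}_{abl}\, h_{conc}

    where M h is the pool enthalpy, P_{res} the residual power, the \varphi A terms the axial, radial and top boundary heat flows, and \dot{m}_{abl} h_{conc} the enthalpy carried in by ablated concrete. The quasi-steady state noted above corresponds to d(M h)/dt ≈ 0, i.e. the residual power being balanced by the boundary losses.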

  14. Human Factors methods concerning integrated validation of nuclear power plant control rooms

    International Nuclear Information System (INIS)

    Oskarsson, Per-Anders; Johansson, Bjoern J.E.; Gonzalez, Natalia

    2010-02-01

    The frame of reference for this work was existing recommendations and instructions from the NPP area, experiences from the review of the Turbic Validation, and experiences from system validations performed at the Swedish Armed Forces, e.g. concerning military control rooms and fighter pilots. These enterprises are characterized by complex systems in extreme environments, often with high risks, where human error can lead to serious consequences. A focus group was conducted with representatives responsible for Human Factors issues from all Swedish NPPs. The questions discussed included, among other things, for whom an integrated validation (IV) is performed and its purpose, what should be included in an IV, the comparison with baseline measures, the design process, the role of SSM, which methods of measurement should be used, and how the methods are affected by changes in the control room. The report raises several questions for discussion concerning the validation process. Supplementary methods of measurement for integrated validation are discussed, e.g. dynamic, psychophysiological, and qualitative methods for identification of problems. Supplementary methods for statistical analysis are presented. The study points out a number of deficiencies in the validation process, e.g. the need for common guidelines for validation and design, criteria for different types of measurements, clarification of the role of SSM, and recommendations on the responsibility of external participants in the validation process. The authors propose 12 measures for addressing the identified problems

  15. Physics Validation of the LHC Software

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The LHC Software will be confronted with unprecedented challenges as soon as the LHC turns on. We summarize the main Software requirements coming from the LHC detectors, triggers and physics, and we discuss several examples of Software components developed by the experiments and the LCG project (simulation, reconstruction, etc.), their validation, and their adequacy for LHC physics.

  16. Achieving external validity in home advantage research: generalizing crowd noise effects

    Directory of Open Access Journals (Sweden)

    Tony D Myers

    2014-06-01

    Full Text Available Different factors have been postulated to explain the home advantage phenomenon in sport. One plausible explanation investigated has been the influence of a partisan home crowd on sports officials' decisions. Different types of studies have tested the crowd influence hypothesis, including purposefully designed experiments. However, while experimental studies investigating crowd influences have high levels of internal validity, they suffer from a lack of external validity, with decision-making in a laboratory setting bearing little resemblance to decision-making in live sports settings. This focused review initially considers threats to external validity in applied and theoretical experimental research, discussing how such threats can be addressed using representative design by focusing on a recently published study that arguably provides the first experimental evidence of the impact of live crowd noise on officials in sport. The findings of this controlled experiment conducted in a real tournament setting offer some confirmation of the validity of laboratory experimental studies in the area. Finally, directions for future research and the future conduct of crowd noise studies are discussed.

  17. Validation of CESAR Thermal-hydraulic Module of ASTEC V1.2 Code on BETHSY Experiments

    Science.gov (United States)

    Tregoures, Nicolas; Bandini, Giacomino; Foucher, Laurent; Fleurot, Joëlle; Meloni, Paride

    The ASTEC V1 system code is being jointly developed by the French Institut de Radioprotection et Sûreté Nucléaire (IRSN) and the German Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) to address severe accident sequences in a nuclear power plant. Thermal-hydraulics in the primary and secondary systems is addressed by the CESAR module. The aim of this paper is to present the validation of the CESAR module of the ASTEC V1.2 version on the basis of well-instrumented and qualified integral experiments carried out in the BETHSY facility (CEA, France), which simulates a French 900 MWe PWR reactor. Three tests have been thoroughly investigated with CESAR: the loss of coolant 9.1b test (OECD ISP N° 27), the loss of feedwater 5.2e test, and the multiple steam generator tube rupture 4.3b test. In the present paper, the results of the code for the three analyzed tests are presented in comparison with the experimental data. The thermal-hydraulic behavior of the BETHSY facility during the transient phase is well reproduced by CESAR: the occurrence of major events and the time evolution of the main thermal-hydraulic parameters of both primary and secondary circuits are well predicted.

  18. New set of convective heat transfer coefficients established for pools and validated against CLARA experiments for application to corium pools

    Energy Technology Data Exchange (ETDEWEB)

    Michel, B., E-mail: benedicte.michel@irsn.fr

    2015-05-15

    Highlights: • A new set of 2D convective heat transfer correlations is proposed. • It takes into account different horizontal and lateral superficial velocities. • It is based on previously established correlations. • It is validated against recent CLARA experiments. • It has to be implemented in a 0D MCCI (molten core concrete interaction) code. - Abstract: During a hypothetical Pressurized Water Reactor (PWR) or Boiling Water Reactor (BWR) severe accident with core meltdown and vessel failure, corium would fall directly onto the concrete reactor pit basemat if no water is present. The high temperature of the corium pool, maintained by the residual power, would lead to the erosion of the concrete walls and basemat of the reactor pit. The thermal decomposition of concrete releases a significant amount of gases that modify the corium pool thermal hydraulics. In particular, it affects heat transfer between the corium pool and the concrete, which determines the reactor pit ablation kinetics. A new set of convective heat transfer coefficients in a pool with different lateral and horizontal superficial gas velocities is modeled and validated against the recent CLARA experimental program; 155 tests of this program, in two size configurations and over a wide range of viscosities, have been used to validate the model. A method to define different lateral and horizontal superficial gas velocities in a 0D code is then proposed, together with a discussion of the possible viscosity in the reactor case when the pool is semi-solid. This model will be implemented in the 0D ASTEC/MEDICIS code in order to determine the impact of convective heat transfer on concrete ablation by corium.

  19. Health Services OutPatient Experience questionnaire: factorial validity and reliability of a patient-centered outcome measure for outpatient settings in Italy

    Directory of Open Access Journals (Sweden)

    Coluccia A

    2014-09-01

    Full Text Available Anna Coluccia, Fabio Ferretti, Andrea Pozza, Department of Medical Sciences, Surgery and Neurosciences, Santa Maria alle Scotte University Hospital, University of Siena, Siena, Italy. Purpose: The patient-centered approach to health care does not seem to be sufficiently developed in the Italian context, which is still characterized by the biomedical model. In addition, there is a lack of validated outcome measures to assess outpatient experience as an aspect common to a variety of settings. The current study aimed to evaluate the factorial validity, reliability, and invariance across sex of the Health Services OutPatient Experience (HSOPE) questionnaire, a short ten-item measure of patient-centeredness for Italian adult outpatients. The rationale for unidimensionality of the measure was that it could cover global patient experience as a process common to patients with a variety of diseases and irrespective of the phase of the treatment course. Patients and methods: The HSOPE was completed by 1,532 adult outpatients (51% females; mean age 59.22 years, standard deviation 16.26) receiving care in ten facilities at the Santa Maria alle Scotte University Hospital of Siena, Italy. The sample represented all the age cohorts: 12% were young adults, 57% were adults, and 32% were older adults. Exploratory and confirmatory factor analyses were conducted to evaluate factor structure. Reliability was evaluated as internal consistency using Cronbach's α. Factor invariance was assessed through multigroup analyses. Results: Both exploratory and confirmatory analyses suggested a clearly defined unidimensional structure of the measure, with all ten items having salient loadings on a single factor. Internal consistency was excellent (α = 0.95). Indices of model fit supported a single-factor structure for both male and female outpatient groups. Young adult outpatients had significantly lower scores on perceived patient-centeredness relative to older adults. No
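    The internal-consistency statistic reported here, Cronbach's α, is simple to compute from a respondent-by-item matrix; the following is a minimal Python sketch with hypothetical data, not the study's actual analysis.

        import numpy as np

        def cronbach_alpha(items):
            """items: 2-D array, rows = respondents, columns = scale items."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
            total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
            return k / (k - 1) * (1 - item_vars / total_var)

        # Hypothetical responses of 5 patients to a 4-item scale.
        responses = np.array([[4, 5, 4, 5],
                              [3, 3, 4, 3],
                              [5, 5, 5, 4],
                              [2, 3, 2, 2],
                              [4, 4, 5, 4]])
        print(f"alpha = {cronbach_alpha(responses):.2f}")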

  20. Numerical Simulation of Tuff Dissolution and Precipitation Experiments: Validation of Thermal-Hydrologic-Chemical (THC) Coupled-Process Modeling

    Science.gov (United States)

    Dobson, P. F.; Kneafsey, T. J.

    2001-12-01

    As part of an ongoing effort to evaluate THC effects on flow in fractured media, we performed a laboratory experiment and numerical simulations to investigate mineral dissolution and precipitation. To replicate mineral dissolution by condensate in fractured tuff, deionized water equilibrated with carbon dioxide was flowed for 1,500 hours through crushed Yucca Mountain tuff at 94 °C. The reacted water was collected and sampled for major dissolved species, total alkalinity, electrical conductivity, and pH. The resulting steady-state fluid composition had a total dissolved solids content of about 140 mg/L; silica was the dominant dissolved constituent. A portion of the steady-state reacted water was flowed at 10.8 mL/hr into a 31.7-cm tall, 16.2-cm wide vertically oriented planar fracture with a hydraulic aperture of 31 microns in a block of welded Topopah Spring tuff that was maintained at 80 °C at the top and 130 °C at the bottom. The fracture began to seal within five days. A 1-D plug-flow model using the TOUGHREACT code developed at Berkeley Lab was used to simulate mineral dissolution, and a 2-D model was developed to simulate the flow of mineralized water through a planar fracture, where boiling conditions led to mineral precipitation. Predicted concentrations of the major dissolved constituents for the tuff dissolution were within a factor of 2 of the measured average steady-state compositions. The fracture-plugging simulations result in the precipitation of amorphous silica at the base of the boiling front, leading to a hundred-fold decrease in fracture permeability in less than 6 days, consistent with the laboratory experiment. These results help validate the use of the TOUGHREACT code for THC modeling of the Yucca Mountain system. The experiment and simulations indicate that boiling and concomitant precipitation of amorphous silica could cause significant reductions in fracture porosity and permeability on a local scale. The TOUGHREACT code will be used
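    The reported hundred-fold permeability decrease is consistent with the standard parallel-plate ("cubic law") idealization of a fracture, given below as a worked relation; this is a textbook scaling argument, not necessarily the analysis the authors performed:

        k = \frac{b^2}{12}, \qquad \frac{k'}{k} = \left(\frac{b'}{b}\right)^2

    so reducing the hydraulic aperture tenfold (e.g. from 31 microns to about 3 microns by amorphous silica precipitation) reduces fracture permeability by a factor of 100.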

  1. TRACY transient experiment databook. 2) ramp withdrawal experiment

    International Nuclear Information System (INIS)

    Nakajima, Ken; Yamane, Yuichi; Ogawa, Kazuhiko; Aizawa, Eiju; Yanagisawa, Hiroshi; Miyoshi, Yoshinori

    2002-03-01

    This is a databook of TRACY "ramp withdrawal" experiments. TRACY is a reactor for performing supercritical experiments using low-enriched uranyl nitrate aqueous solution. The excess reactivity of TRACY is 3$ at maximum, and it is inserted by feeding the solution to a core tank or by withdrawing a control rod, called the transient rod, from the core. In the ramp withdrawal experiment, the supercritical experiment is initiated by withdrawing the transient rod from the core at a constant speed using a motor drive system. The data in the present databook consist of datasheets and graphs. Experimental conditions and typical values of measured parameters are tabulated in the datasheets. In the graphs, power and temperature profiles are plotted. These data are useful for the investigation of criticality accidents with fissile solutions, and for validation of criticality accident analysis codes. (author)
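    A ramp reactivity insertion of the kind documented here can be illustrated with one-delayed-group point kinetics. The Python sketch below uses generic, illustrative parameters (not TRACY data) and deliberately omits the strong thermal and radiolytic-gas feedback of solution fuel, so it only shows the initial power excursion mechanism.

        import numpy as np
        from scipy.integrate import solve_ivp

        beta, Lam, lam = 0.0073, 5.0e-5, 0.08   # delayed fraction, generation time (s), precursor decay (1/s)
        ramp_rate, t_ramp = 0.4 * beta, 2.0     # 0.4 $/s ramp for 2 s -> 0.8 $ inserted

        def rho(t):
            return ramp_rate * min(t, t_ramp)   # reactivity ramp, then hold

        def rhs(t, y):
            n, c = y                            # power, delayed-neutron precursors
            dn = (rho(t) - beta) / Lam * n + lam * c
            dc = beta / Lam * n - lam * c
            return [dn, dc]

        y0 = [1.0, beta / (Lam * lam)]          # critical steady state at unit power
        sol = solve_ivp(rhs, (0.0, 5.0), y0, method="LSODA", max_step=1e-3)
        print(f"power after 5 s: {sol.y[0][-1]:.1f} x initial")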

  2. Validation of the ATHLET-code 2.1A by calculation of the ECTHOR experiment

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Andreas; Sarkadi, Peter; Schaffrath, Andreas [TUEV NORD SysTec GmbH und Co. KG, Hamburg (Germany)

    2010-05-15

    Before a numerical code (e.g. ATHLET) is used for the simulation of physical phenomena that are new or unknown to the code and/or the user, the user must ensure the applicability of the code and his own experience in handling it by means of a so-called validation. Parametric studies with the code are executed for this purpose, and the results have to be compared with verified experimental data. Corresponding reference values are available in the form of so-called single-effect tests (e.g. ECTHOR). In this work the system code ATHLET Mod. 2.1 Cycle A is validated by post-test calculation of the ECTHOR experiment under the above-named aspects. With the ECTHOR tests, the clearing of a water-filled model of a loop seal by means of an air stream was investigated, including momentum exchange at the phase interface under adiabatic and atmospheric conditions. The post-test calculations show that the analytical results meet the experimental data within the reproducibility of the experiments. Further findings of the parametric studies are: - The experimental results obtained with the water-air system (ECTHOR) can be assigned to a water-steam system if the densities of the phases are equal in both cases. - The initial water level in the loop seal has no influence on the results as long as the gas mass flow is increased moderately. - The loop seal is appropriately nodalized if the mean length of the control volumes equals approximately 1.5 times the hydraulic pipe diameter. (orig.)

  3. Validation of the ATHLET-code 2.1A by calculation of the ECTHOR experiment

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Andreas; Sarkadi, Peter; Schaffrath, Andreas [TUEV NORD SysTec GmbH und Co. KG, Hamburg (Germany)

    2010-06-15

    Before a numerical code (e.g. ATHLET) is used for the simulation of physical phenomena that are new or unknown to the code and/or the user, the user must ensure the applicability of the code and his own experience in handling it by means of a so-called validation. Parametric studies with the code are executed for this purpose, and the results have to be compared with verified experimental data. Corresponding reference values are available in the form of so-called single-effect tests (e.g. ECTHOR). In this work the system code ATHLET Mod. 2.1 Cycle A is validated by post-test calculation of the ECTHOR experiment under the above-named aspects. With the ECTHOR tests, the clearing of a water-filled model of a loop seal by means of an air stream was investigated, including momentum exchange at the phase interface under adiabatic and atmospheric conditions. The post-test calculations show that the analytical results meet the experimental data within the reproducibility of the experiments. Further findings of the parametric studies are: - The experimental results obtained with the water-air system (ECTHOR) can be assigned to a water-steam system if the densities of the phases are equal in both cases. - The initial water level in the loop seal has no influence on the results as long as the gas mass flow is increased moderately. - The loop seal is appropriately nodalized if the mean length of the control volumes equals approximately 1.5 times the hydraulic pipe diameter. (orig.)
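    The nodalization finding above (control volumes of roughly 1.5 hydraulic diameters) translates directly into a sizing rule; the helper below is a hypothetical Python sketch of that rule, not an ATHLET utility.

        def nodalize(pipe_length_m, hydraulic_diameter_m, factor=1.5):
            """Number and length of control volumes targeting factor * D_h."""
            target = factor * hydraulic_diameter_m
            n = max(1, round(pipe_length_m / target))
            return n, pipe_length_m / n

        # Hypothetical loop seal: 2.0 m long, 50 mm hydraulic diameter.
        n_volumes, dx = nodalize(2.0, 0.05)
        print(n_volumes, f"{dx:.3f} m")   # 27 volumes of about 0.074 m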

  4. Threats to validity in the design and conduct of preclinical efficacy studies: a systematic review of guidelines for in vivo animal experiments.

    Directory of Open Access Journals (Sweden)

    Valerie C Henderson

    Full Text Available The vast majority of medical interventions introduced into clinical development prove unsafe or ineffective. One prominent explanation for the dismal success rate is flawed preclinical research. We conducted a systematic review of preclinical research guidelines and organized recommendations according to the type of validity threat (internal, construct, or external) or programmatic research activity they primarily address. We searched MEDLINE, Google Scholar, Google, and the EQUATOR Network website for all preclinical guideline documents published up to April 9, 2013 that addressed the design and conduct of in vivo animal experiments aimed at supporting clinical translation. To be eligible, documents had to provide guidance on the design or execution of preclinical animal experiments and represent the aggregated consensus of four or more investigators. Data from included guidelines were independently extracted by two individuals for discrete recommendations on the design and implementation of preclinical efficacy studies. These recommendations were then organized according to the type of validity threat they addressed. A total of 2,029 citations were identified through our search strategy. From these, we identified 26 guidelines that met our eligibility criteria, most of which were directed at neurological or cerebrovascular drug development. Together, these guidelines offered 55 different recommendations. Some of the most common recommendations included performance of a power calculation to determine sample size, randomized treatment allocation, and characterization of disease phenotype in the animal model prior to experimentation. By identifying the most recurrent recommendations among preclinical guidelines, we provide a starting point for developing preclinical guidelines in other disease domains. We also provide a basis for the study and evaluation of preclinical research practice. Please see later in the article for the Editors' Summary.

  5. Threats to validity in the design and conduct of preclinical efficacy studies: a systematic review of guidelines for in vivo animal experiments.

    Science.gov (United States)

    Henderson, Valerie C; Kimmelman, Jonathan; Fergusson, Dean; Grimshaw, Jeremy M; Hackam, Dan G

    2013-01-01

    The vast majority of medical interventions introduced into clinical development prove unsafe or ineffective. One prominent explanation for the dismal success rate is flawed preclinical research. We conducted a systematic review of preclinical research guidelines and organized recommendations according to the type of validity threat (internal, construct, or external) or programmatic research activity they primarily address. We searched MEDLINE, Google Scholar, Google, and the EQUATOR Network website for all preclinical guideline documents published up to April 9, 2013 that addressed the design and conduct of in vivo animal experiments aimed at supporting clinical translation. To be eligible, documents had to provide guidance on the design or execution of preclinical animal experiments and represent the aggregated consensus of four or more investigators. Data from included guidelines were independently extracted by two individuals for discrete recommendations on the design and implementation of preclinical efficacy studies. These recommendations were then organized according to the type of validity threat they addressed. A total of 2,029 citations were identified through our search strategy. From these, we identified 26 guidelines that met our eligibility criteria--most of which were directed at neurological or cerebrovascular drug development. Together, these guidelines offered 55 different recommendations. Some of the most common recommendations included performance of a power calculation to determine sample size, randomized treatment allocation, and characterization of disease phenotype in the animal model prior to experimentation. By identifying the most recurrent recommendations among preclinical guidelines, we provide a starting point for developing preclinical guidelines in other disease domains. We also provide a basis for the study and evaluation of preclinical research practice. Please see later in the article for the Editors' Summary.
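    The most frequently recurring recommendation, an a priori power calculation for sample size, is easy to illustrate; the Python sketch below uses statsmodels with conventional placeholder values (the effect size, alpha and power shown are not guideline-prescribed numbers).

        from statsmodels.stats.power import TTestIndPower

        # Two-sided, two-sample t-test; illustrative design values.
        n_per_group = TTestIndPower().solve_power(effect_size=0.8,   # Cohen's d
                                                  alpha=0.05,
                                                  power=0.80,
                                                  alternative="two-sided")
        print(f"animals per group: {n_per_group:.1f}")  # about 26 per group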

  6. Experimental validation of lead cross sections for scale and MCNP

    International Nuclear Information System (INIS)

    Henrikson, D.J.

    1995-01-01

    Moving spent nuclear fuel between facilities often requires the use of lead-shielded casks. Criticality safety that is based upon calculations requires experimental validation of the fuel matrix and lead cross section libraries. A series of critical experiments using a high-enriched uranium-aluminum fuel element with a variety of reflectors, including lead, has been identified. Twenty-one configurations were evaluated in this study. The fuel element was modelled for KENO V.a and MCNP 4a using various cross section sets. The experiments addressed in this report can be used to validate lead-reflected calculations. Factors influencing the calculated k-eff which require further study include diameters of styrofoam inserts and homogenization
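    Validation against such experiments typically ends in a bias statement for the calculated k-eff; the Python sketch below shows that arithmetic with hypothetical results for 21 critical configurations (not the report's numbers).

        import numpy as np

        # Hypothetical calculated k-eff values for 21 critical benchmarks (k_exp = 1).
        k_calc = np.array([0.9952, 0.9987, 1.0011, 0.9968, 0.9990, 0.9943, 1.0002,
                           0.9975, 0.9961, 0.9996, 0.9984, 0.9958, 1.0005, 0.9971,
                           0.9949, 0.9992, 0.9980, 0.9966, 0.9998, 0.9955, 0.9973])
        bias = k_calc.mean() - 1.0
        sigma = k_calc.std(ddof=1)
        print(f"bias = {bias:+.4f}, sigma = {sigma:.4f}")
        # An upper subcritical limit would then subtract the bias, its uncertainty
        # and an administrative margin from 1.0.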

  7. The concept of validation of numerical models for consequence analysis

    International Nuclear Information System (INIS)

    Borg, Audun; Paulsen Husted, Bjarne; Njå, Ove

    2014-01-01

    Numerical models such as computational fluid dynamics (CFD) models are increasingly used in life safety studies and other types of analyses to calculate the effects of fire and explosions. The validity of these models is usually established by benchmark testing. This is done to quantitatively measure the agreement between the predictions provided by the model and the real world represented by observations in experiments. This approach assumes that all variables in the real world relevant for the specific study are adequately measured in the experiments and in the predictions made by the model. In this paper the various definitions of validation for CFD models used for hazard prediction are investigated to assess their implication for consequence analysis in a design phase. In other words, how is uncertainty in the prediction of future events reflected in the validation process? The sources of uncertainty are viewed from the perspective of the safety engineer. An example of the use of a CFD model is included to illustrate the assumptions the analyst must make and how these affect the prediction made by the model. The assessments presented in this paper are based on a review of standards and best practice guides for CFD modeling and the documentation from two existing CFD programs. Our main thrust has been to assess how validation work is performed and communicated in practice. We conclude that the concept of validation adopted for numerical models is adequate in terms of model performance. However, it does not address the main sources of uncertainty from the perspective of the safety engineer. Uncertainty in the input quantities describing future events, which are determined by the model user, outweighs the inaccuracies in the model as reported in validation studies. - Highlights: • Examine the basic concept of validation applied to models for consequence analysis. • Review standards and guides for validation of numerical models. • Comparison of the validation

  8. GPM Ground Validation Southern Appalachian Rain Gauge IPHEx V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Southern Appalachian Rain Gauge IPHEx dataset was collected during the Integrated Precipitation and Hydrology Experiment (IPHEx) field...

  9. [Ethical review of clinical experiments on medical devices in medical institutions].

    Science.gov (United States)

    Shuai, Wanjun; Chao, Yong; Wang, Ning; Xu, Shining

    2011-07-01

    Clinical experiments are commonly used to evaluate the safety and validity of medical devices. These experiments are of two types: clinical trials and clinical testing. Ethical review must be performed by the ethics committee of a medical department qualified for clinical research, and approval must be obtained before the experiments begin. In order to ensure the safety and validity of clinical experiments on medical devices in medical institutions, the contents, process and approval criteria of the ethical review were analyzed and discussed.

  10. A validation study for the gas migration modelling of the compacted bentonite using existing experiment data

    International Nuclear Information System (INIS)

    Tawara, Y.; Mori, K.; Tada, K.; Shimura, T.; Sato, S.; Yamamoto, S.; Hayashi, H.

    2010-01-01

    Document available in extended abstract form only. After the field-scale Gas Migration Test (GMT) was carried out at the Grimsel Test Site (GTS) in Switzerland from 1997 through 2005, a study on advanced gas migration modelling was conducted as part of the R and D programs of the RWMC (Radioactive Waste Management Funding and Research Center) to evaluate the long-term behaviour of the Engineered Barrier System (EBS) for the TRU waste disposal system in Japan. One of the main objectives of this modelling study is to provide qualified models and parameters in order to predict long-term gas migration behaviour in compacted bentonite. In addition, from the perspective of coupled THMC (Thermal, Hydrological, Mechanical and Chemical) processes, the specific processes which may have a considerable impact on gas migration behaviour are discussed by means of scoping calculations. A literature survey was conducted to collect experimental data related to gas migration in compacted bentonite in order to discuss the applicability of existing gas migration models in bentonite. The well-known flow-rate-controlled gas injection experiment by Horseman et al. and the pressure-controlled gas injection tests by Graham et al., covering a wide range of clay densities and water contents, were selected. These studies show the following characteristic behaviour of gas migration in highly compacted and water-saturated bentonite. The gas flow rate observed at the outlet in the experiment by Horseman et al. was numerically reproduced using different conceptual models and computer codes, and the applicability of the models and the identified key parameters, such as relative permeability and capillary pressure, were discussed. Helium gas was repeatedly injected into fully water-saturated and isotropically consolidated MX-80 bentonite (dry density: 1.6 Mg/m3) in the experiment. One of the most important conclusions from this experiment is that it's impossible for

  11. Integration and Validation of Hysteroscopy Simulation in the Surgical Training Curriculum.

    Science.gov (United States)

    Elessawy, Mohamed; Skrzipczyk, Moritz; Eckmann-Scholz, Christel; Maass, Nicolai; Mettler, Liselotte; Guenther, Veronika; van Mackelenbergh, Marion; Bauerschlag, Dirk O; Alkatout, Ibrahim

    The primary objective of our study was to test the construct validity of the HystSim hysteroscopic simulator to determine whether simulation training can improve the acquisition of hysteroscopic skills regardless of the previous levels of experience of the participants. The secondary objective was to analyze the performance of a selected task, using specially designed scoring charts to help reduce the learning curve for both novices and experienced surgeons. The teaching of hysteroscopic intervention has received only scant attention, focusing mainly on the development of physical models and box simulators. This encouraged our working group to search for a suitable hysteroscopic simulator module and to test its validation. We decided to use the HystSim hysteroscopic simulator, which is one of the few such simulators that has already completed a validation process, with high ratings for both realism and training capacity. As a testing tool for our study, we selected the myoma resection task. We analyzed the results using the multimetric score system suggested by HystSim, allowing a more precise interpretation of the results. Between June 2014 and May 2015, our group collected data on 57 participants of minimally invasive surgical training courses at the Kiel School of Gynecological Endoscopy, Department of Gynecology and Obstetrics, University Hospitals Schleswig-Holstein, Campus Kiel. The novice group consisted of 42 medical students and residents with no prior experience in hysteroscopy, whereas the expert group consisted of 15 participants with more than 2 years of experience of advanced hysteroscopy operations. The overall results demonstrated that all participants attained significant improvements in hysteroscopic skills between their pretest and posttest, independent of their previous levels of experience, proving adequate construct validity of the HystSim. Using the multimetric scoring system enabled a more accurate analysis of the performance of the

  12. Development and validation of the Patient Experience with Treatment and Self-management (PETS): a patient-reported measure of treatment burden.

    Science.gov (United States)

    Eton, David T; Yost, Kathleen J; Lai, Jin-Shei; Ridgeway, Jennifer L; Egginton, Jason S; Rosedahl, Jordan K; Linzer, Mark; Boehm, Deborah H; Thakur, Azra; Poplau, Sara; Odell, Laura; Montori, Victor M; May, Carl R; Anderson, Roger T

    2017-02-01

    The purpose of this study was to develop and validate a new comprehensive patient-reported measure of treatment burden, the Patient Experience with Treatment and Self-management (PETS). A conceptual framework was used to derive the PETS, with items reviewed and cognitively tested with patients. A survey battery, including a pilot version of the PETS, was mailed to 838 multi-morbid patients from two healthcare institutions for validation. A total of 332 multi-morbid patients returned completed surveys. Diagnostics supported deletion and consolidation of some items and domains. Confirmatory factor analysis supported a domain model for scaling comprised of 9 factors: medical information, medications, medical appointments, monitoring health, interpersonal challenges, medical/healthcare expenses, difficulty with healthcare services, role/social activity limitations, and physical/mental exhaustion. Scales showed good internal consistency (α range 0.79-0.95). Higher PETS scores, indicative of greater treatment burden, were correlated with more distress, less satisfaction with medications, lower self-efficacy, worse physical and mental health, and lower convenience of healthcare. Patients with lower health literacy, less adherence to medications, and more financial difficulties reported higher PETS scores (Ps < 0.01). A comprehensive patient-reported measure of treatment burden can help to better characterize the impact of treatment and self-management burden on patient well-being and guide care toward minimally disruptive medicine.

  13. GCFR thermal-hydraulic experiments

    International Nuclear Information System (INIS)

    Schlueter, G.; Baxi, C.B.; Dalle Donne, M.; Gat, U.; Fenech, H.; Hanson, D.; Hudina, M.

    1980-01-01

    The thermal-hydraulic experimental studies performed and planned for the Gas-Cooled Fast Reactor (GCFR) core assemblies are described. The experiments consist of basic studies performed to obtain correlations, and bundle experiments which provide input for code validation and design verification. These studies have been performed and are planned at European laboratories, US national laboratories, Universities in the US, and at General Atomic Company

  14. Multiscale GPS tomography during COPS: validation and applications

    Science.gov (United States)

    Champollion, Cédric; Flamant, Cyrille; Masson, Frédéric; Gégout, Pascal; Boniface, Karen; Richard, Evelyne

    2010-05-01

    Accurate 3D description of the water vapour field is of interest for process studies such as convection initiation. None of the current techniques (LIDAR, satellite, radio soundings, GPS) can provide an all-weather, continuous 3D field of moisture. The combination of GPS tomography with radio soundings (and/or LIDAR) has been used for such process studies, exploiting both the advantages of vertically resolved soundings and the high temporal density of GPS measurements. GPS tomography has been used at short scale (10 km horizontal resolution in a 50 km² area) for process studies such as the ESCOMPTE experiment (Bastin et al., 2005) and at larger scale (50 km horizontal resolution) during IHOP_2002, but no extensive statistical validation has been done so far. The overarching goal of the COPS field experiment is to advance the quality of forecasts of orographically induced convective precipitation by four-dimensional observations and modeling of its life cycle, identifying the physical and chemical processes responsible for deficiencies in QPF over low-mountain regions. During the COPS field experiment, a network of about 100 GPS stations operated continuously for three months in an area of 500 km² in the east of France (Vosges Mountains) and the west of Germany (Black Forest). While the mean spacing between the GPS stations is about 50 km, an east-west GPS profile with a density of about 10 km is dedicated to high-resolution tomography. One major goal of the GPS COPS experiment is to validate GPS tomography at different spatial resolutions. Validation is based on additional radio soundings and airborne/ground-based LIDAR measurements. The number and high quality of vertically resolved water vapor observations provide a unique data set for GPS tomography validation. Numerous tests have been done on real data to show the type of water vapor structures that can be imaged by GPS tomography depending on the assimilation of additional data (radio soundings), the
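    At its core, GPS tomography inverts integrated slant delays for voxel refractivity; the toy Python sketch below (fabricated three-voxel geometry, Tikhonov damping) shows the structure of that inversion, not the COPS processing chain.

        import numpy as np

        # A[i, j]: path length of ray i through voxel j (fabricated geometry, km).
        A = np.array([[1.0, 0.5, 0.0],
                      [0.2, 1.2, 0.3],
                      [0.0, 0.6, 1.1]])
        d = np.array([2.1, 3.4, 2.8])        # slant wet delays (fabricated)
        lam = 0.1                            # Tikhonov damping factor
        # Damped least squares: x = (A^T A + lam I)^-1 A^T d
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ d)
        print(x)                             # voxel wet-refractivity estimates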

  15. Experimental validation for combustion analysis of GOTHIC 6.1b code in 2-dimensional premixed combustion experiments

    International Nuclear Information System (INIS)

    Lee, J. Y.; Lee, J. J.; Park, K. C.

    2003-01-01

    In this study, the prediction capability of the GOTHIC code for hydrogen combustion phenomena was validated against the results of a two-dimensional premixed hydrogen combustion experiment performed by Seoul National University. The experimental results confirmed hydrogen flame propagation characteristics such as the buoyancy effect and the flame front shape. The combustion time of the tests was about 0.1 s. In the GOTHIC analysis results, the GOTHIC code could predict the overall hydrogen flame propagation characteristics, but the buoyancy effect and flame shape did not compare well with the experimental results. In particular, in the case where the flame propagates to the dead-end, GOTHIC predicted that the flame was not affected by the flow, which led to flame propagation quite different from the experimental results. Moreover, the combustion time of the analyses was about 1 s, roughly ten times longer than the experimental result. To obtain more reasonable analysis results, the combustion model parameters in the GOTHIC code need to be applied appropriately, and hydrogen flame characteristics should be reflected in the governing equations

  16. Development and validation of a Chinese music quality rating test.

    Science.gov (United States)

    Cai, Yuexin; Zhao, Fei; Zheng, Yiqing

    2013-09-01

    The present study aims to develop and validate a Chinese music quality rating test (MQRT). In Experiment 1, 22 music pieces were initially selected and paired as a 'familiar music piece' and an 'unfamiliar music piece' based on their familiarity amongst the general public, in the categories of classical music (6), Chinese folk music (8), and pop music (8). Following the selection criteria, one pair of music pieces from each music category was selected and used for the MQRT in Experiment 2. In Experiment 2, the MQRT was validated using these music pieces on the dimensions 'Pleasantness', 'Naturalness', 'Fullness', 'Roughness', and 'Sharpness'. Seventy-two adult participants and 30 normal-hearing listeners were recruited in Experiments 1 and 2, respectively. Significant differences between the familiar and unfamiliar music pieces were found in the pleasantness rating for folk and pop music pieces, as well as in the sharpness rating for pop music pieces. The comparison of music-category effects on the MQRT found significant differences in pleasantness, fullness, and sharpness ratings. The Chinese MQRT developed in the present study is an effective tool for assessing music quality.

  17. In search of truth: the regulatory necessity of validation

    International Nuclear Information System (INIS)

    Niederer, U.

    1991-01-01

    A look at modern ideas of how scientific truth is achieved shows that theories are not really proved but accepted by a consensus of the experts, borne out by often-repeated experience showing a theory to work well. In the same sense, the acceptability of models in waste disposal is mostly based on consensus. To obtain the consensus of the relevant experts, including regulators, all models which considerably influence the results of a safety assessment have to be validated. This is particularly important for the models of geospheric migration because scientific experience with the deep underground is scarce. Validation plays a special role in public acceptance, where regulators and other groups, which act as intermediaries between the public and the project manager, have to be convinced that all the relevant models are correct

  18. JENDL-3.1 iron validation on the PCA-REPLICA (H2O/Fe) shielding benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Pescarini, M.; Borgia, M. G. [ENEA, Centro Ricerche "Ezio Clementel", Bologna (Italy). Dipt. Energia

    1997-03-01

    The PCA-REPLICA (H2O/Fe) neutron shielding benchmark experiment is analysed using the SN 2-D DOT 3.5-E code and the 3-D-equivalent flux synthesis method. This engineering benchmark reproduces the ex-core radial geometry of a PWR, including a mild steel reactor pressure vessel (RPV) simulator, and is designed to test the accuracy of the calculation of the in-vessel neutron exposure parameters. This accuracy is strongly dependent on the quality of the iron neutron cross sections used to describe the nuclear reactions within the RPV simulator. In particular, in this report, the cross sections based on the JENDL-3.1 iron data files are tested through a comparison of the calculated integral and spectral results with the corresponding experimental data. In addition, the present results are compared, on the same benchmark experiment, with those of a preceding ENEA-Bologna validation of the ENDF/B-VI iron cross sections. The integral result comparison indicates that, for all the threshold detectors considered (Rh-103 (n, n') Rh-103m, In-115 (n, n') In-115m and S-32 (n, p) P-32), the JENDL-3.1 natural iron data produce satisfactory results similar to those obtained with the ENDF/B-VI iron data. On the contrary, when the JENDL-3.1 Fe-56 data file is used, strongly underestimated results are obtained for the lower-energy threshold detectors, Rh-103 and In-115. This effect becomes more evident with increasing neutron penetration depth in the RPV simulator.

  19. Processing and validation of intermediate energy evaluated data files

    International Nuclear Information System (INIS)

    2000-01-01

    Current accelerator-driven and other intermediate energy technologies require accurate nuclear data to model the performance of the target/blanket assembly, neutron production, activation, heating and damage. In a previous WPEC subgroup, SG13 on intermediate energy nuclear data, various aspects of intermediate energy data, such as nuclear data needs, experiments, model calculations and file formatting issues were investigated and categorized to come to a joint evaluation effort. The successor of SG13, SG14 on the processing and validation of intermediate energy evaluated data files, goes one step further. The nuclear data files that have been created with the aforementioned information need to be processed and validated in order to be applicable in realistic intermediate energy simulations. We emphasize that the work of SG14 excludes the 0-20 MeV data part of the neutron evaluations, which is supposed to be covered elsewhere. This final report contains the following sections: section 2: a survey of the data files above 20 MeV that have been considered for validation in SG14; section 3: a summary of the review of the 150 MeV intermediate energy data files for ENDF/B-VI and, more briefly, the other libraries; section 4: validation of the data library against an integral experiment with MCNPX; section 5: conclusions. (author)

  20. Electrically Driven Thermal Management: Flight Validation, Experiment Development, Future Technologies

    Science.gov (United States)

    Didion, Jeffrey R.

    2018-01-01

    Electrically Driven Thermal Management is an active research and technology development initiative incorporating ISS technology flight demonstrations (STP-H5), development of Microgravity Science Glovebox (MSG) flight experiment, and laboratory-based investigations of electrically based thermal management techniques. The program targets integrated thermal management for future generations of RF electronics and power electronic devices. This presentation reviews four program elements: i.) results from the Electrohydrodynamic (EHD) Long Term Flight Demonstration launched in February 2017 ii.) development of the Electrically Driven Liquid Film Boiling Experiment iii.) two University based research efforts iv.) development of Oscillating Heat Pipe evaluation at Goddard Space Flight Center.

  1. Principles of validation of diagnostic assays for infectious diseases

    International Nuclear Information System (INIS)

    Jacobson, R.H.

    1998-01-01

    Assay validation requires a series of inter-related processes. Assay validation is an experimental process: reagents and protocols are optimized by experimentation to detect the analyte with accuracy and precision. Assay validation is a relative process: its diagnostic sensitivity and diagnostic specificity are calculated relative to test results obtained from reference animal populations of known infection/exposure status. Assay validation is a conditional process: classification of animals in the target population as infected or uninfected is conditional upon how well the reference animal population used to validate the assay represents the target population; accurate predictions of the infection status of animals from test results (PV+ and PV-) are conditional upon the estimated prevalence of disease/infection in the target population. Assay validation is an incremental process: confidence in the validity of an assay increases over time when use confirms that it is robust, as demonstrated by accurate and precise results; the assay may also achieve increasing levels of validity as it is upgraded and extended by adding reference populations of known infection status. Assay validation is a continuous process: the assay remains valid only insofar as it continues to provide accurate and precise results as proven through statistical verification. Therefore, the work required for validation of diagnostic assays for infectious diseases does not end with a time-limited series of experiments based on a few reference samples; rather, assuring valid test results from an assay requires constant vigilance and maintenance of the assay, along with reassessment of its performance characteristics for each unique population of animals to which it is applied. (author)
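    The conditional dependence of predictive values on prevalence emphasized above follows directly from Bayes' theorem. With diagnostic sensitivity Se, diagnostic specificity Sp and prevalence P:

        PV^{+} = \frac{Se \cdot P}{Se \cdot P + (1 - Sp)(1 - P)}, \qquad
        PV^{-} = \frac{Sp \cdot (1 - P)}{(1 - Se) \cdot P + Sp \cdot (1 - P)}

    For example, with Se = 0.99 and Sp = 0.98, a target population at P = 0.01 prevalence gives PV+ ≈ 0.33, whereas P = 0.30 gives PV+ ≈ 0.95: the same validated assay predicts very differently in different target populations.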

  2. Studies on calibration and validation of data provided by the Global Ozone Monitoring Experiment GOME on ERS-2 (CAVEAT). Final report

    Energy Technology Data Exchange (ETDEWEB)

    Burrows, J.P.; Kuenzi, K.; Ladstaetter-Weissenmayer, A.; Langer, J. [Bremen Univ. (Germany). Inst. fuer Umweltphysik; Neuber, R.; Eisinger, M. [Alfred-Wegener-Institut fuer Polar- und Meeresforschung, Potsdam (Germany)

    2000-04-01

    The Global Ozone Monitoring Experiment (GOME) was launched on 21 April 1995 as one of six scientific instruments on board the second European remote sensing satellite (ERS-2) of the ESA. The investigations presented here aimed at assessing and improving the accuracy of the GOME measurements of sun-normalized and absolute radiance and of the derived data products. For this purpose, the GOME data were compared with measurements of ground-based, airborne and satellite-borne systems. For scientific reasons, the measurements focused on the medium and high latitudes of both hemispheres, although equatorial regions were investigated as well. In the first stage, operational data products of GOME were validated, i.e. radiance measurements (spectra, level-1 product) and trace gas column densities (level-2 product). (orig.)

  3. Integrated Disposal Facility FY 2016: ILAW Verification and Validation of the eSTOMP Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Freedman, Vicky L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bacon, Diana H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fang, Yilin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-05-13

    This document describes two sets of simulations carried out to further verify and validate the eSTOMP simulator. In this report, a distinction is made between verification and validation, and the focus is on verifying eSTOMP through a series of published benchmarks on cementitious wastes, and validating eSTOMP based on a lysimeter experiment for the glassified waste. These activities are carried out within the context of a scientific view of validation that asserts that models can only be invalidated, and that model validation (and verification) is a subjective assessment.

  4. The ALICE Software Release Validation cluster

    International Nuclear Information System (INIS)

    Berzano, D; Krzewicki, M

    2015-01-01

    One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service; in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample "golden" dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long-Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future. (paper)

  5. A validation system for data preservation in HEP

    International Nuclear Information System (INIS)

    Kemp, Yves; Strutz, Marco; Hessling, Hermann

    2012-01-01

    Preserving data from past experiments and preserving the ability to perform analysis with old data is of growing importance in many domains of science, including High Energy Physics (HEP). A study group on this issue, DPHEP, has been established in this field to provide guidelines and a structure for international collaboration on data preservation projects in HEP. This contribution presents a framework that allows experimentalists to validate their software against a previously defined set of tests in an automated way. The framework has been designed with a special focus on longevity, as it makes use of open protocols, has a modular design and is based on simple communication mechanisms. On the fabric side, tests are carried out in a virtual environment using a cloud infrastructure. Within the framework, it is easy to run validation tests on different hardware platforms, or different major or minor versions of operating systems. Experts from IT or the experiments can automatically detect failures in the test procedure with the help of reporting tools. Hence, appropriate actions can be taken in a timely manner. The design and important implementation aspects of the framework are shown, and first experiences from early-bird users will be presented.

  6. TRACY transient experiment databook. 3) Ramp feed experiment

    Energy Technology Data Exchange (ETDEWEB)

    Nakajima, Ken; Yamane, Yuichi; Ogawa, Kazuhiko; Aizawa, Eiju; Yanagisawa, Hiroshi; Miyoshi, Yoshinori [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2002-03-01

    This is a databook of TRACY "ramp feed" experiments. TRACY is a reactor for performing supercritical experiments using low-enriched uranyl nitrate aqueous solution. The excess reactivity of TRACY is 3$ at maximum, and it is inserted by feeding the solution to a core tank or by withdrawing a control rod, called the transient rod, from the core. In the ramp feed experiment, the supercritical experiment is initiated by feeding the fuel solution to the core tank at a constant feed rate. The data in the present databook consist of datasheets and graphs. Experimental conditions and typical values of measured parameters are tabulated in the datasheets. In the graphs, power and temperature profiles are plotted. These data are useful for the investigation of criticality accidents with fissile solutions, and for validation of criticality accident analysis codes. (author)

  7. Model validation of GAMMA code with heat transfer experiment for KO TBM in ITER

    International Nuclear Information System (INIS)

    Yum, Soo Been; Lee, Eo Hwak; Lee, Dong Won; Park, Goon Cherl

    2013-01-01

    Highlights: ► In this study, a helium supply system was constructed. ► Preparation for a heat transfer experiment under KO TBM conditions using the helium supply system progressed. ► To obtain more applicable results, the test matrix was made to cover KO TBM conditions. ► Using the CFD code CFX 11, validation and modification of the system code GAMMA was performed. -- Abstract: By considering the requirements for a DEMO-relevant blanket concept, Korea (KO) has proposed a He cooled molten lithium (HCML) test blanket module (TBM) for testing in ITER. A performance analysis for the thermal–hydraulics and a safety analysis for the KO TBM have been carried out using a commercial CFD code, ANSYS-CFX, and a system code, GAMMA (GAs multicomponent mixture analysis), which was developed within the gas-cooled reactor program in Korea. To verify the codes, a preliminary study was performed by Lee using a single TBM first wall (FW) mock-up made from the same material as the KO TBM, ferritic martensitic steel, in a 6 MPa nitrogen gas loop. The test was performed at pressures of 1.1, 1.9 and 2.9 MPa, and over a range of flow rates from 0.0105 to 0.0407 kg/s with a constant wall temperature condition. In the present study, a thermal–hydraulic test was performed with the newly constructed helium supply system, in which the design pressure and temperature were 9 MPa and 500 °C, respectively. The same mock-up was used, and the test was performed at 3 MPa pressure, 30 °C inlet temperature and 70 m/s helium velocity, which are almost the same conditions as the KO TBM FW. One side of the mock-up was heated with a constant heat flux of 0.3–0.5 MW/m² using a graphite heating system, KoHLT-2 (Korea heat load test facility-2). Because the comparison between CFX 11 and GAMMA showed a differing tendency, the heat transfer correlation included in GAMMA was modified; the modified GAMMA showed strong parity with CFX
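
    For context, system codes of this kind typically evaluate forced-convection channel cooling through a correlation of the Dittus-Boelter form, Nu = 0.023 Re^0.8 Pr^0.4. The sketch below evaluates that baseline form for helium near the quoted FW test conditions; the channel diameter and fluid properties are rough illustrative assumptions, not the actual KO TBM values or the correlation finally adopted in GAMMA.

        # Rough sketch: heat transfer coefficient from the Dittus-Boelter
        # correlation, Nu = 0.023 Re^0.8 Pr^0.4 (heating). Channel diameter
        # and helium properties are illustrative assumptions only.
        D   = 0.010    # hydraulic diameter [m] (assumed)
        v   = 70.0     # helium velocity [m/s] (from the test conditions)
        rho = 4.8      # density [kg/m^3] at ~3 MPa, ~300 K (approximate)
        mu  = 2.0e-5   # dynamic viscosity [Pa s] (approximate)
        k   = 0.15     # thermal conductivity [W/m K] (approximate)
        cp  = 5193.0   # specific heat [J/kg K]

        Re = rho * v * D / mu
        Pr = cp * mu / k
        Nu = 0.023 * Re**0.8 * Pr**0.4
        h  = Nu * k / D
        print(f"Re = {Re:.3g}, Pr = {Pr:.2f}, Nu = {Nu:.0f}, h = {h:.3g} W/m^2 K")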

  8. Design for validation: An approach to systems validation

    Science.gov (United States)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and the system life-cycle) are provided, showing how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  9. Validation and Error Characterization for the Global Precipitation Measurement

    Science.gov (United States)

    Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.

    2003-01-01

    The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the GPM validation effort to provide quantitative estimates of the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration

  10. Validation in the Absence of Observed Events.

    Science.gov (United States)

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to consider the reason why decisionmakers seek validation, and from that basis we redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world, and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best-use-of-available-data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests--Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.

  11. Validation of the Persian version of the Daily Spiritual Experiences Scale (DSES) in Pregnant Women: A Proper Tool to Assess Spirituality Related to Mental Health.

    Science.gov (United States)

    Saffari, Mohsen; Amini, Hossein; Sheykh-Oliya, Zarindokht; Pakpour, Amir H; Koenig, Harold G

    2017-12-01

    Assessing spirituality in healthy pregnant women may lead to supportive interventions that will improve their care. A psychometrically valid measure such as the Daily Spiritual Experiences Scale (DSES) may be helpful in this regard. The current study sought to adapt a Persian version of the DSES for use in pregnancy. A total of 377 pregnant women were recruited from three general hospitals located in Tehran, Iran. Administered scales were the DSES, the Duke University Religion Index, the Santa Clara Strength of Religious Faith scale, and the Depression Anxiety Stress Scale, as well as demographic measures. Reliability of the DSES was tested using Cronbach's alpha for internal consistency and the intraclass correlation coefficient (ICC) for test-retest stability. Scale validity was assessed by criterion-related tests, known-groups comparison, and exploratory factor analysis. Participants' mean age was 27.7 (4.1) years, and most were nulliparous (70%). The correlation coefficient between individual items on the scale and the total score was greater than 0.30 in most cases. Cronbach's alpha for the scale was 0.90. The ICC for 2-week test-retest reliability was high (0.86). Relationships between similar and dissimilar scales indicated acceptable convergent and divergent validity. The factor structure of the scale indicated a single factor that explained 59% of the variance. The DSES was found to be a reliable and valid measure of spirituality in pregnant Iranian women. This scale may be used to examine the relationship between spirituality and health outcomes, research that may lead to supportive interventions in this population.
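
    As a reminder of the mechanics behind the reported reliability figure, Cronbach's alpha can be computed directly from a respondents-by-items score matrix; the sketch below uses simulated data standing in for the 16-item DSES responses, not the study's actual data.

        # Sketch: Cronbach's alpha for internal consistency, computed from
        # a respondents x items score matrix. Data are simulated, standing
        # in for the 16-item DSES responses of 377 participants.
        import numpy as np

        def cronbach_alpha(scores):
            """scores: 2-D array, rows = respondents, columns = items."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1)      # per-item variances
            total_var = scores.sum(axis=1).var(ddof=1)  # variance of sum score
            return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(377, 1))                 # common factor
        items = latent + 0.8 * rng.normal(size=(377, 16))  # 16 noisy items
        print(f"alpha = {cronbach_alpha(items):.2f}")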

  12. Design and Analysis of simulation experiments : Tutorial

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2017-01-01

    This tutorial reviews the design and analysis of simulation experiments. These experiments may have various goals: validation, prediction, sensitivity analysis, optimization (possibly robust), and risk or uncertainty analysis. These goals may be realized through metamodels. Two types of metamodels

  13. Validation of the newborn larynx modeling with aerodynamical experimental data.

    Science.gov (United States)

    Nicollas, R; Giordano, J; Garrel, R; Medale, M; Caminat, P; Giovanni, A; Ouaknine, M; Triglia, J M

    2009-06-01

    Many authors have studied modeling of the adult larynx, but the mechanisms of newborn voice production have very rarely been investigated. After validating a numerical model with acoustic data, studies were performed on larynges of human fetuses in order to validate this model with aerodynamical experiments. Anatomical measurements were performed and a simplified numerical model was built using Fluent® with the vocal folds in phonatory position. The results obtained are in good agreement with those obtained by laser Doppler velocimetry (LDV) and high-frame-rate particle image velocimetry (HFR-PIV) on an experimental bench with excised human fetus larynges. It appears that computing with first-cry physiological parameters leads to a model which is close to those obtained in experiments with real organs.

  14. Test plan for validation of the radiative transfer equation.

    Energy Technology Data Exchange (ETDEWEB)

    Ricks, Allen Joseph; Grasser, Thomas W.; Kearney, Sean Patrick; Jernigan, Dann A.; Blanchat, Thomas K.

    2010-09-01

    As the capabilities of numerical simulations increase, decision makers are increasingly relying upon simulations rather than experiments to assess risks across a wide variety of accident scenarios, including fires. There are still, however, many aspects of fires that are either not well understood or are difficult to treat from first principles due to the computational expense. For a simulation to be truly predictive and to provide decision makers with information which can be reliably used for risk assessment, the remaining physical processes must be studied and suitable models developed to capture their effects. A set of experiments is outlined in this report which will provide soot volume fraction/temperature data and heat flux (intensity) data for the validation of models for the radiative transfer equation. In addition, a complete set of boundary condition measurements will be taken to allow full fire predictions for validation of the entire fire model. The experiments will be performed with a lightly-sooting liquid hydrocarbon fuel fire in the fully turbulent scale range (2 m diameter).
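
    For reference, the equation being validated is the radiative transfer equation; in its standard form for a gray absorbing, emitting, and scattering medium (textbook notation, not quoted from the report itself) it reads:

        \frac{dI(\mathbf{r},\hat{\Omega})}{ds}
          = -(\kappa + \sigma_s)\, I(\mathbf{r},\hat{\Omega})
            + \kappa\, I_b(T)
            + \frac{\sigma_s}{4\pi} \int_{4\pi} I(\mathbf{r},\hat{\Omega}')\, \Phi(\hat{\Omega}',\hat{\Omega})\, d\Omega'

    where I is the radiative intensity along direction Ω̂, κ the absorption coefficient, σ_s the scattering coefficient, I_b the blackbody intensity at the local temperature T, and Φ the scattering phase function. The soot volume fraction and temperature measurements described above constrain κ and I_b, while the heat flux data test the integrated solution.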

  15. Validation issues for depletion and criticality analysis in burnup credit

    International Nuclear Information System (INIS)

    Parks, C.V.; Broadhead, B.L.; Dehart, M.D.; Gauld, I.C.

    2001-01-01

    This paper reviews validation issues associated with implementation of burnup credit in transport, dry storage, and disposal. The issues discussed are ones that have been identified by one or more constituents of the United States technical community (national laboratories, licensees, and regulators) that have been exploring the use of burnup credit. There is not necessarily agreement on the importance of the various issues, which sometimes is what creates the issue. The broad issues relate to the paucity of available experimental data (radiochemical assays and critical experiments) covering the full range and characteristics of spent nuclear fuel in away-from-reactor systems. The paper will also introduce recent efforts initiated at Oak Ridge National Laboratory (ORNL) to provide technical information that can help better assess the value of different experiments. The focus of the paper is on experience with validation issues related to use of burnup credit for transport and dry storage applications. (author)

  16. Elements of a pragmatic approach for dealing with bias and uncertainty in experiments through predictions: experiment design and data conditioning; "real space" model validation and conditioning; hierarchical modeling and extrapolative prediction.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente Jose

    2011-11-01

    This report explores some important considerations in devising a practical and consistent framework and methodology for utilizing experiments and experimental data to support modeling and prediction. A pragmatic and versatile 'Real Space' approach is outlined for confronting experimental and modeling bias and uncertainty to mitigate risk in modeling and prediction. The elements of experiment design and data analysis, data conditioning, model conditioning, model validation, hierarchical modeling, and extrapolative prediction under uncertainty are examined. An appreciation can be gained for the constraints and difficulties at play in devising a viable end-to-end methodology. Rationale is given for the various choices underlying the Real Space end-to-end approach. The approach adopts and refines some elements and constructs from the literature and adds pivotal new elements and constructs. Crucially, the approach reflects a pragmatism and versatility derived from working many industrial-scale problems involving complex physics and constitutive models, steady-state and time-varying nonlinear behavior and boundary conditions, and various types of uncertainty in experiments and models. The framework benefits from a broad exposure to integrated experimental and modeling activities in the areas of heat transfer, solid and structural mechanics, irradiated electronics, and combustion in fluids and solids.

  17. The brief negative symptom scale: validation of the German translation and convergent validity with self-rated anhedonia and observer-rated apathy.

    Science.gov (United States)

    Bischof, Martin; Obermann, Caitriona; Hartmann, Matthias N; Hager, Oliver M; Kirschner, Matthias; Kluge, Agne; Strauss, Gregory P; Kaiser, Stefan

    2016-11-22

    Negative symptoms are considered core symptoms of schizophrenia. The Brief Negative Symptom Scale (BNSS) was developed to measure this symptomatic dimension according to a current consensus definition. The present study examined the psychometric properties of the German version of the BNSS. To expand former findings on convergent validity, we employed the Temporal Experience of Pleasure Scale (TEPS), a hedonic self-report that distinguishes between consummatory and anticipatory pleasure. Additionally, we addressed convergent validity with an observer-rated assessment of apathy, the Apathy Evaluation Scale (AES), which was completed by the patient's primary nurse. Data were collected from 75 in- and outpatients of the Psychiatric Hospital, University of Zurich, diagnosed with either schizophrenia or schizoaffective disorder. We assessed convergent and discriminant validity, internal consistency and inter-rater reliability. We largely replicated the findings for the original version, showing good psychometric properties of the BNSS. In addition, the primary nurse's evaluation correlated moderately with the interview-based clinician rating. The BNSS anhedonia items showed good convergent validity with the TEPS. Overall, the German BNSS shows good psychometric properties comparable to the original English version. Convergent validity extends beyond interview-based assessments of negative symptoms to self-rated anhedonia and observer-rated apathy.

  18. Assessing students' communication skills: validation of a global rating.

    Science.gov (United States)

    Scheffer, Simone; Muehlinghaus, Isabel; Froehmel, Annette; Ortwein, Heiderose

    2008-12-01

    Communication skills training is an accepted part of undergraduate medical programs nowadays. In addition to learning experiences, its importance should be emphasised by performance-based assessment. As detailed checklists have been shown, for various reasons, to be not well suited for the assessment of communication skills, this study aimed to validate a global rating scale. A Canadian instrument was translated into German and adapted to assess students' communication skills during an end-of-semester OSCE. Subjects were second- and third-year medical students at the reformed track of the Charité-Universitaetsmedizin Berlin. Different groups of raters were trained to assess students' communication skills using the global rating scale. Validity testing included concurrent validity and construct validity: judgements of different groups of raters were compared to expert ratings as a defined gold standard. Furthermore, the amount of agreement between scores obtained with this global rating scale and a different instrument for assessing communication skills was determined. Results show that communication skills can be validly assessed by trained non-expert raters as well as standardised patients using this instrument.

  19. Validation of the generic medical interview satisfaction scale: the G-MISS questionnaire.

    Science.gov (United States)

    Maurice-Szamburski, Axel; Michel, Pierre; Loundou, Anderson; Auquier, Pascal

    2017-02-14

    Patients have about seven medical consultations a year. Despite the importance of medical interviews in the healthcare process, there is no generic instrument to assess patients' experiences in general practices, medical specialties, and surgical specialties. The main objective was to validate a questionnaire assessing patients' experiences with medical consultations in various practices. The G-MISS study was a prospective multi-center trial that enrolled patients from May to July 2016. A total of 2055 patients were included from general practices, medical specialties, and surgical specialties. Patients filled out a questionnaire assessing various aspects of their experience and satisfaction within 1 week after their medical interview. The validation process relied on item response theory. Internal validity was examined using exploratory factorial analysis. The statistical model used the root mean square error of approximation, comparative fit index, and standardized root mean square residual as fit indices. Scalability and reliability were assessed with the Rasch model and Cronbach's alpha coefficients, respectively. Scale properties across the three subgroups were explored with differential item functioning. The G-MISS final questionnaire contained 16 items, structured in three dimensions of patients' experiences: "Relief", "Communication", and "Compliance". A global index of patients' experiences was computed as the mean of the dimension scores. All fit indices from the statistical model were satisfactory (RMSEA = 0.03, CFI = 0.98, SRMR = 0.06). The overall scalability had a good fit to the Rasch model. Each dimension was reliable, with Cronbach's alpha ranging from 0.73 to 0.86. Differential item functioning across the three consultation settings was negligible. Patients undergoing medical or surgical specialties reported higher scores in the "Relief" dimension compared with general practice (83.0 ± 11.6 or 82.4 ± 11.6 vs. 73.2 ± 16
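
    The Rasch scalability check mentioned above rests on a simple response model: the probability that a person with latent level θ endorses an item of difficulty b is a logistic function of θ − b. A minimal illustration of the dichotomous form of the model follows; the person and item parameters are simulated, not the G-MISS estimates.

        # Sketch of the dichotomous Rasch model used in scalability checks:
        # P(X=1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)).
        # Person and item parameters are simulated, not G-MISS estimates.
        import numpy as np

        def rasch_prob(theta, b):
            """Endorsement probability for ability theta, item difficulty b."""
            return 1.0 / (1.0 + np.exp(-(theta - b)))

        rng = np.random.default_rng(1)
        theta = rng.normal(size=2055)        # person parameters (one per patient)
        b = np.linspace(-1.5, 1.5, 16)       # 16 item difficulties
        p = rasch_prob(theta[:, None], b[None, :])
        responses = rng.random(p.shape) < p  # simulated 0/1 item responses
        print("mean endorsement per item:", p.mean(axis=0).round(2))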

  20. Validating the Patient Experience with Treatment and Self-Management (PETS), a patient-reported measure of treatment burden, in people with diabetes

    Directory of Open Access Journals (Sweden)

    Rogers EA

    2017-11-01

    Aims: To validate a comprehensive general measure of treatment burden, the Patient Experience with Treatment and Self-Management (PETS), in people with diabetes. Methods: We conducted a secondary analysis of a cross-sectional survey study with 120 people diagnosed with type 1 or type 2 diabetes and at least one additional chronic illness. Surveys included established patient-reported outcome measures and a 48-item version of the PETS, a new measure comprised of multi-item scales assessing the burden of chronic illness treatment and self-care as it relates to nine domains: medical information, medications, medical appointments, monitoring health, interpersonal challenges, health care expenses, difficulty with health care services, role activity limitations, and physical/mental exhaustion from self-management. Internal reliability of the PETS scales was determined using Cronbach's alpha. Construct validity was determined through correlation of PETS scores with established measures (measures of chronic condition distress, medication satisfaction, self-efficacy, and global well-being), and known-groups validity through comparisons of PETS scores across clinically distinct groups. In an exploratory test of predictive validity, step-wise regressions were used to determine which PETS scales were most associated with outcomes of chronic condition

  1. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, Cetin; Williams, Brian; McClure, Patrick; Nelson, Ralph A.

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, experiments were the primary tool for design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate the models underlying predictive simulations. Cost-saving goals of programs will require us to minimize the required number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities in the validation of predictive tools, and traditional methodologies will have to be modified to address these arising issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behaviors in both normal and safety-related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantifying uncertainties so that results can be extended to areas of the validation domain that are not directly tested with experiments, which might include extension of the modeling and simulation (M&S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for
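
    Calibration through data assimilation, as invoked above, can be illustrated in its simplest Bayesian form: a model parameter is given a prior, and experimental data with known error bars update it to a posterior. The sketch below is a generic one-parameter illustration under assumed data and a stand-in model, not the methodology of the paper itself.

        # Sketch: calibration by Bayesian data assimilation, updating one
        # model parameter from experiments with Gaussian errors. The model,
        # prior, and data are generic illustrations.
        import numpy as np

        def model(k, t):
            return np.exp(-k * t)            # stand-in physics model

        t_exp = np.array([0.5, 1.0, 2.0])    # measurement times
        y_exp = np.array([0.61, 0.36, 0.14]) # measured responses (illustrative)
        sigma = 0.05                         # experimental 1-sigma uncertainty

        k_grid = np.linspace(0.1, 2.0, 400)                 # parameter grid
        prior = np.exp(-0.5 * ((k_grid - 1.0) / 0.5) ** 2)  # Gaussian prior
        resid = y_exp[None, :] - model(k_grid[:, None], t_exp[None, :])
        like = np.exp(-0.5 * np.sum((resid / sigma) ** 2, axis=1))
        post = prior * like
        post /= np.trapz(post, k_grid)                      # normalized posterior

        k_map = k_grid[np.argmax(post)]
        print(f"calibrated k (posterior mode) = {k_map:.3f}")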

  3. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.
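
    One standard tool for step (3), reconciling an analysis model with test data, is the modal assurance criterion (MAC), which measures the correlation between analytical and experimental mode shapes. A minimal sketch follows; the mode shapes here are random placeholders, not data from the project.

        # Sketch: modal assurance criterion (MAC) between analytical and
        # test-derived mode shapes, a common consistency check in structural
        # dynamics model validation. Mode shapes are placeholders.
        import numpy as np

        def mac(phi_a, phi_e):
            """MAC matrix; columns of phi_a / phi_e are mode shape vectors."""
            num = np.abs(phi_a.T @ phi_e) ** 2
            den = np.outer(np.sum(phi_a * phi_a, axis=0),
                           np.sum(phi_e * phi_e, axis=0))
            return num / den

        rng = np.random.default_rng(2)
        phi_analysis = rng.normal(size=(20, 3))   # 20 DOFs, 3 modes (placeholder)
        phi_test = phi_analysis + 0.1 * rng.normal(size=(20, 3))  # "measured"
        print(np.round(mac(phi_analysis, phi_test), 2))  # near-identity = match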

  4. Validating Bayesian truth serum in large-scale online human experiments.

    Science.gov (United States)

    Frank, Morgan R; Cebrian, Manuel; Pickard, Galen; Rahwan, Iyad

    2017-01-01

    Bayesian truth serum (BTS) is an exciting new method for improving honesty and information quality in multiple-choice surveys, but, despite the method's mathematical reliance on large sample sizes, the existing literature about BTS focuses only on small experiments. Given the prevalence of online survey platforms, such as Amazon's Mechanical Turk, which facilitate surveys with hundreds or thousands of participants, BTS must be shown effective in large-scale experiments if it is to become a readily accepted tool in real-world applications. We demonstrate that BTS quantifiably improves honesty in large-scale online surveys where the "honest" distribution of answers is known in expectation on aggregate. Furthermore, we explore a marketing application where "honest" answers cannot be known, but find that BTS treatment impacts the resulting distributions of answers.
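
    For readers unfamiliar with the mechanics: each BTS respondent supplies both an answer and a prediction of how others will answer, and is scored on the "surprisingly common" principle. A minimal sketch of the standard scoring rule in Prelec's formulation follows; the answers and predictions are simulated, not data from the study.

        # Sketch of Bayesian truth serum scoring (Prelec's rule):
        # score = information score + alpha * prediction score.
        # Answers and predictions below are simulated.
        import numpy as np

        def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
            """answers: (n,) ints in 0..K-1; predictions: (n, K) shares."""
            n, K = predictions.shape
            xbar = np.bincount(answers, minlength=K) / n           # actual shares
            ybar = np.exp(np.log(predictions + eps).mean(axis=0))  # geometric mean
            info = np.log((xbar[answers] + eps) / (ybar[answers] + eps))
            pred = alpha * np.sum(xbar * np.log((predictions + eps) /
                                                (xbar + eps)), axis=1)
            return info + pred

        rng = np.random.default_rng(3)
        answers = rng.integers(0, 2, size=1000)        # binary survey question
        predictions = rng.dirichlet((4, 4), size=1000) # each row sums to 1
        print("mean BTS score:", bts_scores(answers, predictions).mean().round(3))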

  5. Do qualitative methods validate choice experiment-results? A case study on the economic valuation of peatland restoration in Central Kalimantan, Indonesia

    Energy Technology Data Exchange (ETDEWEB)

    Schaafsma, M.; Van Beukering, P.J.H.; Davies, O.; Oskolokaite, I.

    2009-05-15

    This study explores the benefits of combining independent results of qualitative focus group discussions (FGD) with a quantitative choice experiment (CE) in a developing country context. The assessment addresses the compensation needed by local communities in Central Kalimantan to cooperate in peatland restoration programs by using a CE combined with a series of FGD to validate and explain the CE results. The main conclusion of this study is that a combination of qualitative and quantitative methods is necessary to assess the economic value of ecological services in monetary terms and to better understand the underlying attitudes and motives that drive these outcomes. The FGD not only cross-validate results of the CE, but also help to interpret the differences in preferences of respondents arising from environmental awareness and ecosystem characteristics. The FGD confirm that the CE results provide accurate information for ecosystem valuation. In addition to the advantages of FGD listed in the literature, this study finds that FGD provide the possibility to identify the specific terms and conditions on which respondents will accept land-use change scenarios. The results show that FGD may help to address problems regarding the effects of the distribution of costs and benefits over time that neo-classical economic theory poses for the interpretation of economic valuation results, given the demands it places on the rationality of trade-offs and the required calculations.

  6. The X-Ray Pebble Recirculation Experiment (X-PREX): Facility Description, Preliminary Discrete Element Method Simulation Validation Studies, and Future Test Program

    International Nuclear Information System (INIS)

    Laufer, Michael R.; Bickel, Jeffrey E.; Buster, Grant C.; Krumwiede, David L.; Peterson, Per F.

    2014-01-01

    This paper presents a facility description, preliminary results, and future test program of the new X-Ray Pebble Recirculation Experiment (X-PREX), which is now operational and being used to collect data on the behavior of slow dense granular flows relevant to pebble bed reactor core designs. The X-PREX facility uses digital x-ray tomography methods to track both the translational and rotational motion of spherical pebbles, which provides unique experimental results that can be used to validate discrete element method (DEM) simulations of pebble motion. The validation effort supported by the X-PREX facility provides a means to build confidence in analysis of pebble bed configuration and residence time distributions that impact the neutronics, thermal hydraulics, and safety analysis of pebble bed reactor cores. Preliminary experimental and DEM simulation results are reported for silo drainage, a classical problem in the granular flow literature, at several hopper angles. These studies include conventional converging and novel diverging geometries that provide additional flexibility in the design of pebble bed reactor cores. Excellent agreement is found between the X-PREX experimental and DEM simulation results. Finally, this paper discusses additional studies in progress relevant to the design and analysis of pebble bed reactor cores including pebble recirculation in cylindrical core geometries and evaluation of forces on shut down blades inserted directly into a packed pebble bed. (author)
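
    DEM simulations of the kind validated here resolve pebble-pebble contacts with an explicit force law; a common choice is the Hertzian normal contact model, sketched below for two pebble-sized spheres. The material values are illustrative graphite-like placeholders, not the actual X-PREX pebble properties or the force model used in the study.

        # Sketch: Hertzian normal contact force between two spheres, the
        # kind of force law a DEM pebble-flow simulation resolves at each
        # contact. Material properties are illustrative placeholders.
        import numpy as np

        def hertz_normal_force(delta, R1, R2, E1, E2, nu1, nu2):
            """Normal force for overlap delta between spheres (R, E, nu)."""
            E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
            R_star = 1.0 / (1.0 / R1 + 1.0 / R2)
            return (4.0 / 3.0) * E_star * np.sqrt(R_star) * delta**1.5

        # Two 3 cm pebbles, graphite-like stiffness (illustrative values)
        F = hertz_normal_force(delta=1e-5, R1=0.015, R2=0.015,
                               E1=1e10, E2=1e10, nu1=0.2, nu2=0.2)
        print(f"contact force at 10 um overlap: {F:.1f} N")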

  7. Construct Validity and Case Validity in Assessment

    Science.gov (United States)

    Teglasi, Hedwig; Nebbergall, Allison Joan; Newman, Daniel

    2012-01-01

    Clinical assessment relies on both "construct validity", which focuses on the accuracy of conclusions about a psychological phenomenon drawn from responses to a measure, and "case validity", which focuses on the synthesis of the full range of psychological phenomena pertaining to the concern or question at hand. Whereas construct validity is…

  8. GPM GROUND VALIDATION ENVIRONMENT CANADA (EC) SNOW SURVEYS GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Environment Canada Snow Surveys GCPEx dataset was manually collected during the GPM Cold-season Precipitation Experiment (GCPEx), which...

  9. GPM GROUND VALIDATION ENVIRONMENT CANADA (EC) VAISALA CEILOMETER GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Environment Canada (EC) VAISALA Ceilometer GCPEx dataset was collected during the GPM Cold-season Precipitation Experiment (GCPEx) in...

  10. The development and preliminary validation of the Challenge and Threat in Sport (CAT-Sport) Scale

    OpenAIRE

    Rossato, C.J.L.; Uphill, M.A.; Swain, J.; Coleman, D.

    2016-01-01

    This paper outlines the development and preliminary validation of a sport-specific measure of athletes' experience of challenge and threat. Three independent studies assess the content validity, factor structure, criterion validity and internal consistency of the Challenge and Threat in Sport (CAT-Sport) Scale. In study 1, a group of 25 athletes and 2 experts assessed the content validity of items derived from existing measures of challenge and threat. Participants examined a p...

  11. INL Experimental Program Roadmap for Thermal Hydraulic Code Validation

    Energy Technology Data Exchange (ETDEWEB)

    Glenn McCreery; Hugh McIlroy

    2007-09-01

    Advanced computer modeling and simulation tools and protocols will be heavily relied on for a wide variety of system studies, engineering design activities, and other aspects of the Next Generation Nuclear Plant (NGNP) Very High Temperature Reactor (VHTR), the DOE Global Nuclear Energy Partnership (GNEP), and light-water reactors. The goal is for all modeling and simulation tools to be demonstrated accurate and reliable through a formal Verification and Validation (V&V) process, especially where such tools are to be used to establish safety margins and support regulatory compliance, or to design a system in a manner that reduces the role of expensive mockups and prototypes. Recent literature identifies specific experimental principles that must be followed in order to ensure that experimental data meet the standards required for a “benchmark” database. Even for well conducted experiments, missing experimental details, such as geometrical definition, data reduction procedures, and manufacturing tolerances, have led to poor benchmark calculations. The INL has a long and deep history of research in thermal hydraulics, especially in the 1960s through 1980s when many programs such as LOFT and Semiscale were devoted to light-water reactor safety research, the EBR-II fast reactor was in operation, and a strong geothermal energy program was established. The past can serve as a partial guide for reinvigorating thermal hydraulic research at the laboratory. However, new research programs need to fully incorporate modern experimental methods such as measurement techniques using the latest instrumentation, computerized data reduction, and scaling methodology. The path forward for establishing experimental research for code model validation will require benchmark experiments conducted in suitable facilities located at the INL. This document describes thermal hydraulic facility requirements and candidate buildings and presents examples of suitable validation experiments related

  12. A Systematic Method for Verification and Validation of Gyrokinetic Microstability Codes

    Energy Technology Data Exchange (ETDEWEB)

    Bravenec, Ronald [Fourth State Research, Austin, TX (United States)

    2017-11-14

    My original proposal for the period Feb. 15, 2014 through Feb. 14, 2017 called for an integrated validation and verification effort carried out by myself with collaborators. The validation component would require experimental profile and power-balance analysis. In addition, it would require running the gyrokinetic codes varying the input profiles within experimental uncertainties to seek agreement with experiment before discounting a code as invalidated. Therefore, validation would require a major increase of effort over my previous grant periods which covered only code verification (code benchmarking). Consequently, I had requested full-time funding. Instead, I am being funded at somewhat less than half time (5 calendar months per year). As a consequence, I decided to forego the validation component and to only continue the verification efforts.

  13. Experimental validation of the containment codes ASTARTE and SEURBNUK

    International Nuclear Information System (INIS)

    Kendall, K.C.; Arnold, L.A.; Broadhouse, B.J.; Jones, A.; Yerkess, A.; Benuzzi, A.

    1979-10-01

    The fast reactor containment codes ASTARTE and SEURBNUK are being validated against data from the COVA series of small-scale experiments performed jointly by the UKAEA and JRC Ispra. The experimental programme is nearly complete, and data are given. (U.K.)

  14. An Assessment of the Title III, ESEA Validation Effort, 1973-74. Final Report.

    Science.gov (United States)

    Shaffer, J. Stephen, Jr.

    Evaluation materials and instructional content relevant to the IVD process for 1973-74 are presented. Section 1 of this report is an analysis of on-site experience of the validators in the employment of the Validator Self-Analysis Forms. Section 2 is Scientific Management Associates Educational Systems Division (SMA/ESD) evaluation of the 10…

  15. SP-100 from ground demonstration to flight validation

    International Nuclear Information System (INIS)

    Buden, D.

    1989-01-01

    The SP-100 program is in the midst of developing and demonstrating the technology of a liquid-metal-cooled fast reactor using thermoelectric thermal-to-electric conversion devices for space power applications in the range of tens to hundreds of kilowatts. The current ground engineering system (GES) design and development phase will demonstrate the readiness of the technology building blocks and the system to proceed to flight system validation. This phase includes the demonstration of a 2.4-MW(thermal) reactor in the nuclear assembly test (NAT) and aerospace subsystem in the integrated assembly test (IAT). The next phase in the SP-100 development, now being planned, is to be a flight demonstration of the readiness of the technology to be incorporated into future military and civilian missions. This planning will answer questions concerning the logical progression of the GES to the flight validation experiment. Important issues in planning the orderly transition include answering the need to plan for a second reactor ground test, the method to be used to test the SP-100 for acceptance for flight, the need for the IAT prior to the flight-test configuration design, the efficient use of facilities for GES and the flight experiment, and whether the NAT should be modified based on flight experiment planning

  17. Neutronics experiments and analyses in preparation of DT operations at JET

    International Nuclear Information System (INIS)

    Villari, R.; Batistoni, P.; Angelone, M.; Catalan, J.P.; Colling, B.; Croft, D.; Fischer, U.; Flammini, D.; Klix, A.; Loreti, S.; Lilley, S.; Moro, F.; Naish, J.; Packer, L.; Pereslavtsev, P.; Popovichev, S.; Sauvan, P.; Syme, B.

    2016-01-01

    Highlights: • Neutronics experiments are in preparation at JET for the DTE2 campaign. • Shutdown dose rate benchmark experiments will be performed to validate ITER-relevant tools. • Advanced D1S, MCR2S, R2Smesh and R2SUNED are the European tools under validation. • Satisfactory results were obtained in the last DD benchmark experiment, but efforts to reduce the uncertainties were needed. • The current computational and experimental efforts seem promising for improving the benchmark accuracy. - Abstract: In the frame of the WPJET3-DT Technology project within the EUROfusion Consortium program, neutronics experiments are in preparation for the future deuterium–tritium campaign on JET (DTE2). The experiments will be conducted with the purpose of validating the neutronics codes and tools used in ITER, thus reducing the related uncertainties and the associated risks in machine operation. This paper summarizes the status of previous shutdown dose rate benchmark experiments and analyses performed at JET and focuses on the computational and experimental efforts conducted in preparation of the future DTE2 experiments. In particular, preliminary calculations and studies to select detectors and positions aimed at reducing uncertainties in the shutdown dose rate experiment are presented and discussed.

  18. Latinas/os in Community College Developmental Education: Increasing Moments of Academic and Interpersonal Validation

    Science.gov (United States)

    Acevedo-Gil, Nancy; Santos, Ryan E.; Alonso, LLuliana; Solorzano, Daniel G.

    2015-01-01

    This qualitative study examines the experiences of Latinas/os in community college English and math developmental education courses. Critical race theory in education and the theory of validation serve as guiding frameworks. The authors find that institutional agents provide academic validation by emphasizing high expectations, focusing on social…

  19. Cultural Adaptation of the Portuguese Version of the "Sniffin' Sticks" Smell Test: Reliability, Validity, and Normative Data.

    Science.gov (United States)

    Ribeiro, João Carlos; Simões, João; Silva, Filipe; Silva, Eduardo D; Hummel, Cornelia; Hummel, Thomas; Paiva, António

    2016-01-01

    The cross-cultural adaptation and validation of the Sniffin' Sticks test for the Portuguese population is described. Over 270 people participated in four experiments. In Experiment 1, 67 participants rated the familiarity of presented odors, and seven descriptors of the original test were adapted to the Portuguese context. In Experiment 2, the Portuguese version of the Sniffin' Sticks test was administered to 203 healthy participants. Older age, male gender and active smoking status were confirmed as confounding factors. The third experiment showed the validity of the Portuguese version of the Sniffin' Sticks test in discriminating healthy controls from patients with olfactory dysfunction. In Experiment 4, the test-retest reliability for both the composite score (r(71) = 0.86) and the identification test (r(71) = 0.62) was established. Normative data for the Portuguese version of the Sniffin' Sticks test are provided, showing good validity and reliability and effectively distinguishing patients from healthy controls with high sensitivity and specificity. The Portuguese version of the Sniffin' Sticks identification test is a clinically suitable screening tool in routine outpatient Portuguese settings.

  20. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for the detection of genotoxic carcinogens: I. Summary of pre-validation study results.

    Science.gov (United States)

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Burlinson, Brian; Escobar, Patricia A; Kraynak, Andrew R; Nakagawa, Yuzuki; Nakajima, Madoka; Pant, Kamala; Asano, Norihide; Lovell, David; Morita, Takeshi; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this validation effort was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The purpose of the pre-validation studies (i.e., Phase 1 through 3), conducted in four or five laboratories with extensive comet assay experience, was to optimize the protocol to be used during the definitive validation study. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Validation of intermediate end points in cancer research.

    Science.gov (United States)

    Schatzkin, A; Freedman, L S; Schiffman, M H; Dawsey, S M

    1990-11-21

    Investigations using intermediate end points as cancer surrogates are quicker, smaller, and less expensive than studies that use malignancy as the end point. We present a strategy for determining whether a given biomarker is a valid intermediate end point between an exposure and incidence of cancer. Candidate intermediate end points may be selected from case series, ecologic studies, and animal experiments. Prospective cohort and sometimes case-control studies may be used to quantify the intermediate end point-cancer association. The most appropriate measure of this association is the attributable proportion. The intermediate end point is a valid cancer surrogate if the attributable proportion is close to 1.0, but not if it is close to 0. Usually, the attributable proportion is close to neither 1.0 nor 0; in this case, valid surrogacy requires that the intermediate end point mediate an established exposure-cancer relation. This would in turn imply that the exposure effect would vanish if adjusted for the intermediate end point. We discuss the relative advantages of intervention and observational studies for the validation of intermediate end points. This validation strategy also may be applied to intermediate end points for adverse reproductive outcomes and chronic diseases other than cancer.
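
    For a dichotomous marker, the attributable proportion invoked above can be written in the standard Levin form (a textbook formulation, not quoted from the paper):

        AP = \frac{P(D) - P(D \mid \bar{M})}{P(D)} = \frac{p\,(RR - 1)}{1 + p\,(RR - 1)}

    where D denotes cancer, M the intermediate end point (marker), \bar{M} its absence, p the marker prevalence, and RR the marker-cancer relative risk. An AP close to 1.0 means essentially all cancer cases arise through the marker, which is the condition for valid surrogacy described above.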

  2. Experimental validation of waveform relaxation technique for power ...

    Indian Academy of Sciences (India)

    Two systems are considered: an HVDC controller tested with a detailed model of the converters, and a TCSC-based damping controller tested with a low-frequency model of a power system. The results are validated against those obtained using simulated models of the controllers. We also present results of an experiment in ...
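
    For readers unfamiliar with the method: waveform relaxation partitions a coupled system, solves each subsystem over the whole time window using the other subsystem's waveform from the previous iteration, and repeats until the waveforms converge. A generic Gauss-Jacobi sketch on a two-state linear system follows; it is not the paper's HVDC or TCSC model.

        # Sketch of Gauss-Jacobi waveform relaxation for a coupled linear
        # system x' = a*x + b*y, y' = c*x + d*y: each subsystem is integrated
        # over the full window using the other's waveform from the previous
        # iteration. The system is generic, not the paper's models.
        import numpy as np

        a, b, c, d = -1.0, 0.5, 0.3, -2.0
        t = np.linspace(0.0, 5.0, 501)
        dt = t[1] - t[0]

        x = np.ones_like(t)        # initial waveform guesses
        y = np.ones_like(t)
        for iteration in range(20):
            x_new, y_new = np.empty_like(t), np.empty_like(t)
            x_new[0], y_new[0] = 1.0, 1.0
            for n in range(len(t) - 1):        # explicit Euler per subsystem
                x_new[n + 1] = x_new[n] + dt * (a * x_new[n] + b * y[n])
                y_new[n + 1] = y_new[n] + dt * (c * x[n] + d * y_new[n])
            change = max(np.max(np.abs(x_new - x)), np.max(np.abs(y_new - y)))
            x, y = x_new, y_new
            if change < 1e-8:                  # waveforms have converged
                break
        print(f"converged after {iteration + 1} sweeps")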

  3. Validation of a δ2Hn-alkane-δ18Ohemicellulose based paleohygrometer: Implications from a climate chamber experiment

    Science.gov (United States)

    Hepp, Johannes; Kathrin Schäfer, Imke; Tuthorn, Mario; Wüthrich, Lorenz; Zech, Jana; Glaser, Bruno; Juchelka, Dieter; Rozanski, Kazimierz; Zech, Roland; Mayr, Christoph; Zech, Michael

    2017-04-01

    Leaf wax-derived biomarkers, e.g. long-chain n-alkanes and fatty acids, and their hydrogen isotopic composition have proven to be of value in paleoclimatology/-hydrology research. However, the alteration of the isotopic signal as a result of the often unknown amount of leaf water enrichment challenges a direct reconstruction of the isotopic composition of paleoprecipitation. Coupling 2H/1H results of leaf wax-derived biomarkers with 18O/16O results of hemicellulose-derived sugars has the potential to overcome this limitation and additionally allows reconstructing relative air humidity (RH) (Zech et al., 2013). This approach was recently validated by Tuthorn et al. (2015) by applying it to topsoil samples along a climate transect in Argentina. Accordingly, the biomarker-derived RH values correlate significantly with modern actual RH values from the respective study sites, showing the potential of the established 'paleohygrometer' approach. However, a climate chamber validation study answering open questions regarding this approach, e.g. how robust the biosynthetic fractionation factors are, was still missing. Here we present coupled δ2Hn-alkane-δ18Ohemicellulose results obtained for leaf material from a climate chamber experiment in which Eucalyptus globulus, Vicia faba and Brassica oleracea were grown under controlled conditions (Mayr, 2003). First, the 2H and 18O enrichment of leaf water strongly reflects the actual RH values of the climate chambers. Second, the biomarker-based reconstructed RH values correlate well with the actual RH values of the respective climate chamber, validating the proposed 'paleohygrometer' approach. And third, the calculated fractionation factors between the investigated leaf biomarkers (n-C29 and n-C31 for alkanes; arabinose and xylose for hemicellulose) and leaf water are close to the expected ones reported in the literature (+27‰ for hemicellulose; −155‰ for n-alkanes). Nevertheless, minor dependencies of these

  4. Explaining customer experience of digital financial advice

    NARCIS (Netherlands)

    van Raaij, W.F.

    2017-01-01

    The goal of this paper is to analyze the customer experience of digital-driven financial advice systems. It is the objective of this study to develop a cross-cultural model for validating customer experiences of digital financial advice. In doing so, both objective and subjective system aspects have

  5. Achieving external validity in home advantage research: generalizing crowd noise effects.

    Science.gov (United States)

    Myers, Tony D

    2014-01-01

    Different factors have been postulated to explain the home advantage phenomenon in sport. One plausible explanation investigated has been the influence of a partisan home crowd on sports officials' decisions. Different types of studies have tested the crowd influence hypothesis, including purposefully designed experiments. However, while experimental studies investigating crowd influences have high levels of internal validity, they suffer from a lack of external validity; decision-making in a laboratory setting bears little resemblance to decision-making in live sports settings. This focused review initially considers threats to external validity in applied and theoretical experimental research, and discusses how such threats can be addressed using representative design, focusing on a recently published study that arguably provides the first experimental evidence of the impact of live crowd noise on officials in sport. The findings of this controlled experiment, conducted in a real tournament setting, offer a level of confirmation of the findings of laboratory studies in the area. Finally, directions for future research and the future conduct of crowd noise studies are discussed.

  6. A Systematic Review Comparing the Acceptability, Validity and Concordance of Discrete Choice Experiments and Best-Worst Scaling for Eliciting Preferences in Healthcare.

    Science.gov (United States)

    Whitty, Jennifer A; Oliveira Gonçalves, Ana Sofia

    2018-06-01

    The aim of this study was to compare the acceptability, validity and concordance of discrete choice experiment (DCE) and best-worst scaling (BWS) stated preference approaches in health. A systematic search of the EMBASE, Medline, AMED, PubMed, CINAHL, Cochrane Library and EconLit databases was undertaken from October to December 2016 without date restriction. Studies were included if they were published in English, presented empirical data related to the administration or findings of traditional-format DCE and object-, profile- or multiprofile-case BWS, and were related to health. Study quality was assessed using the PREFS checklist. Fourteen articles describing 12 studies were included, comparing DCE with profile-case BWS (9 studies), DCE and multiprofile-case BWS (1 study), and profile- and multiprofile-case BWS (2 studies). Although limited and inconsistent, the balance of evidence suggests that preferences derived from DCE and profile-case BWS may not be concordant, regardless of the decision context. Preferences estimated from DCE and multiprofile-case BWS may be concordant (single study). Profile- and multiprofile-case BWS appear more statistically efficient than DCE, but no evidence is available to suggest they have greater response efficiency. Little evidence suggests superior validity for one format over another. Participant acceptability may favour DCE, which had a lower self-reported task difficulty and was preferred over profile-case BWS in a priority-setting context but not necessarily in other decision contexts. DCE and profile-case BWS may be of equal validity but give different preference estimates regardless of the health context; thus, they may be measuring different constructs. Therefore, choice between methods is likely to be based on normative considerations related to coherence with theoretical frameworks and on pragmatic considerations related to ease of data collection.
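
    Both DCE and BWS responses are typically analysed with random-utility choice models; the workhorse for a DCE is the conditional logit, in which the probability of choosing an alternative from a choice set is a softmax of the alternatives' utilities. A minimal sketch follows; the attribute levels and taste coefficients are invented for illustration only.

        # Sketch: conditional logit choice probabilities for one DCE choice
        # set. Attribute levels and taste coefficients are invented.
        import numpy as np

        # Rows = alternatives in the choice set; columns = attributes
        # (e.g. waiting time in weeks, out-of-pocket cost, effectiveness).
        X = np.array([[2.0, 50.0, 0.7],
                      [6.0, 10.0, 0.8],
                      [4.0, 30.0, 0.9]])
        beta = np.array([-0.3, -0.02, 3.0])  # taste weights (illustrative)

        v = X @ beta                         # deterministic utilities
        p = np.exp(v - v.max())              # softmax, numerically stabilized
        p /= p.sum()
        print("choice probabilities:", p.round(3))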

  7. Explicating Validity

    Science.gov (United States)

    Kane, Michael T.

    2016-01-01

    How we choose to use a term depends on what we want to do with it. If "validity" is to be used to support a score interpretation, validation would require an analysis of the plausibility of that interpretation. If validity is to be used to support score uses, validation would require an analysis of the appropriateness of the proposed…

  8. S.E.T., CSNI Separate Effects Test Facility Validation Matrix

    International Nuclear Information System (INIS)

    1997-01-01

1 - Description of test facility: The SET matrix of experiments is suitable for the developmental assessment of thermal-hydraulic transient system computer codes by selecting individual tests from selected facilities, relevant to each phenomenon. Test facilities differ from one another in geometrical dimensions, geometrical configuration and operating capabilities or conditions. Correlations between SET facilities and phenomena were established on the basis of suitability for model validation (which means that a facility is designed in such a way as to simulate the phenomena assumed to occur in a plant and is sufficiently instrumented); limited suitability for model validation (which means that a facility is designed in such a way as to simulate the phenomena assumed to occur in a plant but has problems associated with imperfect scaling, different test fluids or insufficient instrumentation); and unsuitability for model validation. 2 - Description of test: Whereas integral experiments are usually designed to follow the behaviour of a reactor system in various off-normal or accident transients, separate effects tests focus on the behaviour of a single component, or on the characteristics of one thermal-hydraulic phenomenon. The construction of a separate effects test matrix is an attempt to collect together the best sets of openly available test data for code validation, assessment and improvement, from the wide range of experiments that have been carried out world-wide in the field of thermal hydraulics. In all, 2094 tests are included in the SET matrix

  9. The predictive validity of safety climate.

    Science.gov (United States)

    Johnson, Stephen E

    2007-01-01

In addition, safety behavior and accident experience data were collected for 5 months following the survey and were statistically analyzed (structural equation modeling, confirmatory factor analysis, exploratory factor analysis, etc.) to identify correlations, associations, internal consistency, and factorial structures. Results revealed that the ZSCQ: (a) was psychometrically reliable and valid, (b) served as an effective predictor of safety-related outcomes (behavior and accident experience), and (c) could be trimmed to an 11-item survey with little loss of explanatory power. Practitioners and researchers can use the ZSCQ with reasonable certainty of the questionnaire's reliability and validity. This provides a solid foundation for the development of meaningful organizational interventions and/or continued research into social factors affecting industrial accident experience.
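
    The internal consistency reported for the ZSCQ is conventionally summarized with Cronbach's alpha. A minimal sketch of that computation, using synthetic Likert-style responses rather than the study's actual survey data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Synthetic 5-point Likert responses to an 11-item scale (illustrative only):
# a shared latent attitude plus item-level noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
responses = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(200, 11))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```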

  10. Validation needs of seismic probabilistic risk assessment (PRA) methods applied to nuclear power plants

    International Nuclear Information System (INIS)

    Kot, C.A.; Srinivasan, M.G.; Hsieh, B.J.

    1985-01-01

    An effort to validate seismic PRA methods is in progress. The work concentrates on the validation of plant response and fragility estimates through the use of test data and information from actual earthquake experience. Validation needs have been identified in the areas of soil-structure interaction, structural response and capacity, and equipment fragility. Of particular concern is the adequacy of linear methodology to predict nonlinear behavior. While many questions can be resolved through the judicious use of dynamic test data, other aspects can only be validated by means of input and response measurements during actual earthquakes. A number of past, ongoing, and planned testing programs which can provide useful validation data have been identified, and validation approaches for specific problems are being formulated

  11. Validation of SCALE-4 for burnup credit applications

    International Nuclear Information System (INIS)

    Bowman, S.M.; DeHart, M.D.; Parks, C.V.

    1995-01-01

In the past, a criticality analysis of PWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. If credit is allowed for fuel burnup in the design of casks that are used in the transport of spent light water reactor fuel to a repository, the increase in payload can lead to a significant reduction in the cost of transport and a potential reduction in the risk to the public. A portion of the work has been performed at ORNL in support of the US DOE efforts to demonstrate a validation approach for criticality safety methods to be used in burnup credit cask design. To date, the SCALE code system developed at ORNL has been the primary computational tool used by DOE to investigate technical issues related to burnup credit. The ANSI/ANS-8.1 criticality safety standard requires validation and benchmarking of the calculational methods used in evaluating criticality safety limits for applications outside reactors by correlation against critical experiments that are applicable. Numerous critical experiments for fresh PWR-type fuel in storage and transport configurations exist and can be used as part of a validation database. However, there are no critical experiments with burned PWR-type fuel in storage and transport configurations. As an alternative, commercial reactors offer an excellent source of measured critical configurations. The results reported demonstrate the ability of the ORNL SCALE-4 methodology to predict a value of k_eff very close to the known value of 1.0, both for fresh fuel criticals and for the more complex reactor criticals. Beyond these results, additional work in the determination of biases and uncertainties is necessary prior to use in burnup credit applications
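
    The bias determination mentioned above is commonly done by comparing calculated k_eff values against the known critical state (k = 1) over a suite of benchmarks. A minimal sketch of that standard approach, with hypothetical benchmark results rather than the SCALE-4 values from the study:

```python
import numpy as np

# Hypothetical calculated k_eff values for benchmark experiments that are all
# known to be exactly critical (expected k = 1.0).
k_calc = np.array([0.9982, 1.0011, 0.9975, 0.9969, 1.0004, 0.9958])

bias = k_calc.mean() - 1.0         # systematic offset of the code/data
uncertainty = k_calc.std(ddof=1)   # spread of the benchmark results

print(f"bias = {bias:+.4f}, uncertainty = {uncertainty:.4f}")
# An upper subcritical limit is then typically set by subtracting the bias,
# a multiple of its uncertainty, and an administrative margin from 1.0
# (details vary by methodology).
```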

  12. Verification and Validation of Carbon-Fiber Laminate Low Velocity Impact Simulations.

    Energy Technology Data Exchange (ETDEWEB)

    English, Shawn Allen; Nelson, Stacy Michelle; Briggs, Timothy; Brown, Arthur A.

    2014-10-01

Presented is a model verification and validation effort using low-velocity impact (LVI) of carbon fiber reinforced polymer laminate experiments. A flat cylindrical indenter impacts the laminate with enough energy to produce delamination, matrix cracks and fiber breaks. Included in the experimental efforts are ultrasonic scans of the damage for qualitative validation of the models. However, the primary quantitative metrics of validation are the force time history measured through the instrumented indenter and initial and final velocities. The simulations, which are run on Sandia's Sierra finite element codes, consist of all physics and material parameters of importance as determined by a sensitivity analysis conducted on the LVI simulation. A novel orthotropic damage and failure constitutive model that is capable of predicting progressive composite damage and failure is described in detail and material properties are measured, estimated from micromechanics or optimized through calibration. A thorough verification and calibration to the accompanying experiments are presented. Special emphasis is given to the four-point bend experiment. For all simulations of interest, the mesh and material behavior is verified through extensive convergence studies. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution which is then compared to experimental output. The result is a quantifiable confidence in material characterization and model physics when simulating this phenomenon in structures of interest.
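
    The ensemble step described above amounts to Monte Carlo propagation of parameter uncertainty through the model. A minimal sketch of that idea, with a hypothetical surrogate response function standing in for the Sierra finite element model (all parameter distributions and the measured value are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

def peak_force(stiffness, strength):
    # Hypothetical surrogate for the simulated peak contact force [kN].
    return 0.8 * np.sqrt(stiffness) + 0.02 * strength

# Sample uncertain material parameters (assumed ~10% scatter, illustrative).
stiffness = rng.normal(50.0, 5.0, 1000)    # GPa
strength = rng.normal(600.0, 60.0, 1000)   # MPa

ensemble = peak_force(stiffness, strength)  # predicted response distribution
measured = 17.5                             # hypothetical experimental value [kN]

lo, hi = np.percentile(ensemble, [2.5, 97.5])
print(f"predicted 95% band: [{lo:.1f}, {hi:.1f}] kN; measured: {measured} kN")
```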

  13. Brand experiences in engaging marketing campaigns

    OpenAIRE

    Reisegg, Kristin

    2012-01-01

This research examines the effects of engaging marketing campaigns on brand experiences and the potential outcomes for affective commitment and loyalty. In doing this, it also tests the validity of the brand experience scale in a new setting, during a short-term marketing campaign. The research was conducted as a natural experiment during a marketing event arranged by Litago. Data were collected from participants and a control group, and the survey was sent out through the online survey tool...

  14. GPM GROUND VALIDATION ENVIRONMENT CANADA (EC) MANUAL PRECIPITATION MEASUREMENTS GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Environment Canada (EC) Manual Precipitation Measurements GCPEx dataset was collected during the GPM Cold-season Precipitation Experiment...

  15. Experiments at the GELINA facility for the validation of the self-indication neutron resonance densitometry technique

    Directory of Open Access Journals (Sweden)

    Rossa Riccardo

    2017-01-01

Full Text Available Self-Indication Neutron Resonance Densitometry (SINRD) is a passive non-destructive method that is being investigated to quantify the 239Pu content in a spent fuel assembly. The technique relies on the energy dependence of total cross-sections for neutron-induced reactions. The cross-sections show resonance structures that can be used to quantify the presence of materials in objects; e.g., the total cross-section of 239Pu shows a strong resonance close to 0.3 eV. This resonance will cause a reduction of the number of neutrons emitted from spent fuel when 239Pu is present. Hence such a reduction can be used to quantify the amount of 239Pu present in the fuel. A neutron detector with a high sensitivity to neutrons in this energy region is used to enhance the sensitivity to 239Pu. This principle is similar to self-indication cross-section measurements. An appropriate detector can be realized by surrounding a 239Pu-loaded fission chamber with appropriate neutron absorbing material. In this contribution, experiments performed at the GELINA time-of-flight facility of the JRC at Geel (Belgium) to validate the simulations are discussed. The results confirm that the strongest sensitivity to the target material was achieved with the self-indication technique, highlighting the importance of using a 239Pu fission chamber for the SINRD measurements.
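
    The physics the technique exploits can be illustrated with a single-level Breit-Wigner resonance: a strong peak in the total cross-section depresses the transmitted neutron flux near the resonance energy. The parameters below are placeholders chosen only to mimic a resonance near 0.3 eV, not evaluated 239Pu data:

```python
import numpy as np

# Illustrative Breit-Wigner (Lorentzian) total cross-section.
E = np.linspace(0.01, 1.0, 500)        # neutron energy [eV]
E0, gamma, sigma0 = 0.3, 0.1, 4000.0   # resonance energy, width [eV], peak [barn]
sigma = sigma0 / (1.0 + ((E - E0) / (gamma / 2.0)) ** 2)

n_areal = 1e-4                            # areal density [atoms/barn], hypothetical
transmission = np.exp(-n_areal * sigma)   # fraction of neutrons passing the sample

i = np.argmin(transmission)
print(f"transmission dips to {transmission[i]:.2f} at E = {E[i]:.2f} eV")
```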

  16. Recommended nuclear criticality safety experiments in support of the safe transportation of fissile material

    International Nuclear Information System (INIS)

    Tollefson, D.A.; Elliott, E.P.; Dyer, H.R.; Thompson, S.A.

    1993-01-01

    Validation of computer codes and nuclear data (cross-section) libraries using benchmark quality critical (or certain subcritical) experiments is an essential part of a nuclear criticality safety evaluation. The validation results establish the credibility of the calculational tools for use in evaluating a particular application. Validation of the calculational tools is addressed in several American National Standards Institute/American Nuclear Society (ANSI/ANS) standards, with ANSI/ANS-8.1 being the most relevant. Documentation of the validation is a required part of all safety analyses involving significant quantities of fissile materials. In the case of transportation of fissile materials, the safety analysis report for packaging (SARP) must contain a thorough discussion of benchmark experiments, detailing how the experiments relate to the significant packaging and contents materials (fissile, moderating, neutron absorbing) within the package. The experiments recommended in this paper are needed to address certain areas related to transportation of unirradiated fissile materials in drum-type containers (packagings) for which current data are inadequate or are lacking

  17. A new generation of crystallographic validation tools for the protein data bank.

    Science.gov (United States)

    Read, Randy J; Adams, Paul D; Arendall, W Bryan; Brunger, Axel T; Emsley, Paul; Joosten, Robbie P; Kleywegt, Gerard J; Krissinel, Eugene B; Lütteke, Thomas; Otwinowski, Zbyszek; Perrakis, Anastassis; Richardson, Jane S; Sheffler, William H; Smith, Janet L; Tickle, Ian J; Vriend, Gert; Zwart, Peter H

    2011-10-12

This report presents the conclusions of the X-ray Validation Task Force of the worldwide Protein Data Bank (PDB). The PDB has expanded massively since current criteria for validation of deposited structures were adopted, allowing a much more sophisticated understanding of all the components of macromolecular crystals. The size of the PDB creates new opportunities to validate structures by comparison with the existing database, and the now-mandatory deposition of structure factors creates new opportunities to validate the underlying diffraction data. These developments highlighted the need for a new assessment of validation criteria. The Task Force recommends that a small set of validation data be presented in an easily understood format, relative to both the full PDB and the applicable resolution class, with greater detail available to interested users. Most importantly, we recommend that referees and editors judging the quality of structural experiments have access to a concise summary of well-established quality indicators.

  18. Validation of simulation codes for future systems: motivations, approach, and the role of nuclear data

    International Nuclear Information System (INIS)

    Palmiotti, G.; Salvatores, M.; Aliberti, G.

    2007-01-01

The validation of advanced simulation tools will still play a very significant role in several areas of reactor system analysis. This is the case of reactor physics and neutronics, where nuclear data uncertainties still play a crucial role for many core and fuel cycle parameters. The present paper gives a summary of validation motivations, objectives and approach. A validation effort is in particular necessary in the frame of advanced (e.g. Generation-IV or GNEP) reactor and associated fuel cycle assessment and design. Validation of simulation codes is complementary to the 'verification' process. In fact, 'verification' addresses the question 'are we solving the equations correctly?' while validation addresses the question 'are we solving the correct equations with the correct parameters?'. Verification implies comparisons with 'reference' equation solutions or with analytical solutions, when they exist. Most of what is called 'numerical validation' falls in this category. Validation strategies differ according to the relative weight of the methods and of the parameters that enter into the simulation tools. Most validation is based on experiments, and this paper focuses on neutronics, a field where a 'robust' physics description model exists as a function of 'input' parameters that are not fully known. In fact, in the case of reactor core, shielding and fuel cycle physics the model (theory) is well established (the Boltzmann and Bateman equations) and the parameters are the nuclear cross-sections, decay data, etc. Two types of validation approaches can and have been used: (a) mock-up experiments ('global' validation), which require a very close experimental simulation of a reference configuration; bias factors cannot be extrapolated beyond the reference configuration; (b) use of 'clean', 'representative' integral experiments (the 'bias factor and adjustment' method), which allows bias factors and uncertainties to be defined and can be used for a wide range of applications.

  19. Literature research concerning alternative methods for validation of criticality calculation systems; Literaturrecherche zu alternativen Daten und Methoden zur Validierung von Kritikalitaetsrechensystemen

    Energy Technology Data Exchange (ETDEWEB)

    Behler, Matthias

    2016-05-15

Besides radiochemical analysis of irradiated fuel and critical experiments, which have become well-established bases for the validation of depletion codes and criticality codes respectively, the results of oscillation experiments and the operating conditions of power and research reactors can also provide useful information for the validation of the above-mentioned codes. Based on a literature review, the potential of utilizing oscillation experiment measurements for the validation of criticality codes is estimated. It is found that the reactivity measurements for actinides and fission products within the CERES program on the reactors DIMPLE (Winfrith, UK) and MINERVE (Cadarache, France) can be a valuable addition to the commonly used critical experiments for criticality code validation. However, while approaches exist, there is as yet no generally satisfactory solution for integrating the reactivity measurements into a quantitative bias determination for the neutron multiplication factor of typical application cases, including irradiated spent fuel outside reactor cores, calculated using common criticality codes.

  20. Validation of Helicopter Gear Condition Indicators Using Seeded Fault Tests

    Science.gov (United States)

    Dempsey, Paula; Brandon, E. Bruce

    2013-01-01

    A "seeded fault test" in support of a rotorcraft condition based maintenance program (CBM), is an experiment in which a component is tested with a known fault while health monitoring data is collected. These tests are performed at operating conditions comparable to operating conditions the component would be exposed to while installed on the aircraft. Performance of seeded fault tests is one method used to provide evidence that a Health Usage Monitoring System (HUMS) can replace current maintenance practices required for aircraft airworthiness. Actual in-service experience of the HUMS detecting a component fault is another validation method. This paper will discuss a hybrid validation approach that combines in service-data with seeded fault tests. For this approach, existing in-service HUMS flight data from a naturally occurring component fault will be used to define a component seeded fault test. An example, using spiral bevel gears as the targeted component, will be presented. Since the U.S. Army has begun to develop standards for using seeded fault tests for HUMS validation, the hybrid approach will be mapped to the steps defined within their Aeronautical Design Standard Handbook for CBM. This paper will step through their defined processes, and identify additional steps that may be required when using component test rig fault tests to demonstrate helicopter CI performance. The discussion within this paper will provide the reader with a better appreciation for the challenges faced when defining a seeded fault test for HUMS validation.

  1. A new type of In-core sensor validation outline

    International Nuclear Information System (INIS)

    Figedy, S.

    2006-01-01

In this paper, a new type of in-core sensor validation method is outlined, which is based on a combination of correlation coefficients and mutual information indices. First experience with this approach is described, and further improvements to enhance the reliability of the outcome are proposed, namely through computational intelligence techniques (Authors)
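
    A minimal sketch of the two ingredients the method combines, a correlation coefficient and a mutual-information index, applied to a pair of simulated sensor signals. The histogram-based estimator and the synthetic signals are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information estimate in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                     # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Two simulated in-core sensor signals: a shared trend plus independent noise.
rng = np.random.default_rng(1)
trend = np.sin(np.linspace(0, 6 * np.pi, 2000))
a = trend + 0.3 * rng.normal(size=2000)
b = trend + 0.3 * rng.normal(size=2000)

rho = np.corrcoef(a, b)[0, 1]
print(f"correlation = {rho:.2f}, "
      f"mutual information = {mutual_information(a, b):.2f} nats")
```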

  2. Class 1E software verification and validation: Past, present, and future

    Energy Technology Data Exchange (ETDEWEB)

    Persons, W.L.; Lawrence, J.D.

    1993-10-01

    This paper discusses work in progress that addresses software verification and validation (V&V) as it takes place during the full software life cycle of safety-critical software. The paper begins with a brief overview of the task description and discussion of the historical evolution of software V&V. A new perspective is presented which shows the entire verification and validation process from the viewpoints of a software developer, product assurance engineer, independent V&V auditor, and government regulator. An account of the experience of the field test of the Verification Audit Plan and Report generated from the V&V Guidelines is presented along with sample checklists and lessons learned from the verification audit experience. Then, an approach to automating the V&V Guidelines is introduced. The paper concludes with a glossary and bibliography.

  3. Content validation of the Tilburg Frailty Indicator from the perspective of frail elderly

    DEFF Research Database (Denmark)

    Andreasen, Jane; Lund, Hans; Aadahl, Mette

    2015-01-01

    validation. AIM: To validate the Tilburg Frailty Indicator on content in relation to the physical, psychological and social domain by exploring the experience of daily life of community dwelling frail elderly. METHODS: The design was a qualitative content validation study. The participants were acutely...... admitted frail elderly discharged to home and interviewed one week after discharge. A deductive content analysis, with categories structured in advance, was performed. RESULTS: A total of 422 meaning units were extracted from the transcriptions; 131 units related to the physical domain, 106 units...

  4. Current status of validation for robotic surgery simulators - a systematic review.

    Science.gov (United States)

    Abboudi, Hamid; Khan, Mohammed S; Aboumarzouk, Omar; Guru, Khurshid A; Challacombe, Ben; Dasgupta, Prokar; Ahmed, Kamran

    2013-02-01

To analyse studies validating the effectiveness of robotic surgery simulators. The MEDLINE®, EMBASE® and PsycINFO® databases were systematically searched until September 2011. References from retrieved articles were reviewed to broaden the search. The simulator name, training tasks, participant level, training duration and evaluation scoring were extracted from each study. We also extracted data on feasibility, validity, cost-effectiveness, reliability and educational impact. We identified 19 studies investigating simulation options in robotic surgery. There are five different robotic surgery simulation platforms available on the market. In all, 11 studies sought opinion and compared performance between two different groups: 'expert' and 'novice'. Experts ranged in experience from 21 to 2200 robotic cases. The novice groups consisted of participants with no prior experience on a robotic platform and were often medical students or junior doctors. The Mimic dV-Trainer®, ProMIS®, SimSurgery Educational Platform® (SEP) and Intuitive systems have shown face, content and construct validity. The Robotic Surgical Simulator™ system has only been face and content validated. All of the simulators except SEP have shown educational impact. Feasibility and cost-effectiveness of simulation systems were not evaluated in any trial. Virtual reality simulators were shown to be effective training tools for junior trainees. Simulation training holds the greatest potential to be used as an adjunct to traditional training methods to equip the next generation of robotic surgeons with the skills required to operate safely. However, current simulation models have only been validated in small studies. There is no evidence to suggest one type of simulator provides more effective training than any other. More research is needed to validate simulated environments further and investigate the effectiveness of animal and cadaveric training in robotic surgery.

  5. Site characterization and validation - equipment design and techniques used in single borehole hydraulic testing, simulated drift experiment and crosshole testing

    International Nuclear Information System (INIS)

    Holmes, D.C.; Sehlstedt, M.

    1991-10-01

    This report describes the equipment and techniques used to investigate the variation of hydrogeological parameters within a fractured crystalline rock mass. The testing program was performed during stage 3 of the site characterization and validation programme at the Stripa mine in Sweden. This programme used a multidisciplinary approach, combining geophysical, geological and hydrogeological methods, to determine how groundwater moved through the rock mass. The hydrogeological work package involved three components. Firstly, novel single borehole techniques (focused packer testing) were used to determine the distribution of hydraulic conductivity and head along individual boreholes. Secondly, water was abstracted from boreholes which were drilled to simulate a tunnel (simulated drift experiment). Locations and magnitudes of flows were measured together with pressure responses at various points in the SCV rock mass. Thirdly, small scale crosshole tests, involving detailed interference testing, were used to determine the variability of hydrogeological parameters within previously identified, significant flow zones. (au)

  6. Readout electronics validation and target detector assessment for the Neutrinos Angra experiment

    International Nuclear Information System (INIS)

    Alvarenga, T.A.; Anjos, J.C.; Azzi, G.; Cerqueira, A.S.; Chimenti, P.; Costa, J.A.; Dornelas, T.I.; Farias, P.C.M.A.; Guedes, G.P.; Gonzalez, L.F.G.; Kemp, E.; Lima, H.P.; Machado, R.; Nóbrega, R.A.; Pepe, I.M.; Ribeiro, D.B.S.; Simas Filho, E.F.; Valdiviesso, G.A.; Wagner, S.

    2016-01-01

A compact surface detector designed to identify the inverse beta decay interaction produced by anti-neutrinos coming from nearby operating nuclear reactors is being developed by the Neutrinos Angra Collaboration. In this document we describe and test the detector and its readout system by means of cosmic-ray data acquisition. In this measurement campaign, the target detector was equipped with 16 8-in PMTs, and two scintillator paddles were used to trigger cosmic ray events. The results revealed the main operational characteristics of the Neutrinos Angra system and have been used to assess the detector and to validate its readout system.

  7. Synthesis of clad motion experiments interpretation: codes and validation

    International Nuclear Information System (INIS)

    Papin, J.; Fortunato, M.; Seiler, J.M.

    1983-04-01

This communication deals with clad melting and relocation phenomena related to LMFBR safety analysis of loss of flow accidents. We present the physical models developed at DSN/CEN Cadarache in single-channel and bundle geometry, and the interpretation with these models of experiments performed by the STT (CEN Grenoble). It turns out that we have now obtained a good understanding of the phenomena involved in single-channel geometry. On the other hand, further studies are necessary for a better knowledge of clad motion phenomena in bundle cases under conditions close to reactor ones

  8. A newly designed multichannel scaling system: Validated by Feynman-α experiment in EHWZPR

    Energy Technology Data Exchange (ETDEWEB)

    Arkani, Mohammad, E-mail: markani@aeoi.org.ir; Mataji-Kojouri, Naimeddin

    2016-08-15

Highlights: • An embedded measuring system with enhanced operational capabilities is introduced. • The design is low cost and reprogrammable. • The system design is dedicated to multi-detector experiments with huge data collection. • A Feynman-α experiment free of count-loss effects is performed in EHWZPR. • The results are compared with an endogenous/inherent pulsed neutron source experiment. - Abstract: In this work, an embedded multi-input multi-million-channel MCS of a new design is constructed for multi-detector experimental research applications. Important characteristics of the system can be tuned to the experimental case study, utilizing the reprogrammable nature of the silicon. By differentiation of the integrated counts registered in memory, this system features zero channel advance time, making it an ideal measuring tool for experiments on time-correlated random processes. Using this equipment, a Feynman-α experiment was performed in the Esfahan Heavy Water Zero Power Reactor (EHWZPR) utilizing three different in-core neutron detectors. One million channels of data were collected by the system with a 5 ms gate time from each neutron detector simultaneously. As heavy-water-moderated reactors are significantly slow systems, a huge number of data channels must be collected. Using the bunching method, the data were then analyzed and the prompt neutron decay constant of the system was estimated for each neutron detector positioned in the core. The results are compared with the information provided by an endogenous pulsed neutron source experiment, and good agreement is seen within the statistical uncertainties of the results. This equipment makes possible further in-depth research in a range of stochastic experiments in nuclear physics, such as cross-correlation analysis of multi-detector experiments.
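
    The bunching step merges adjacent base gates into progressively wider gates and computes the Feynman-Y statistic (variance-to-mean ratio minus one) at each width; the prompt neutron decay constant α is then obtained by fitting Y(T) = Y∞[1 − (1 − e^(−αT))/(αT)]. A minimal sketch of the bunching computation on synthetic counts (a real analysis would use the measured 5 ms channel data and perform the fit):

```python
import numpy as np

def feynman_y(counts: np.ndarray, bunch_factors) -> dict:
    """Variance-to-mean ratio minus one for successively bunched gate widths."""
    out = {}
    for m in bunch_factors:
        n = (len(counts) // m) * m
        bunched = counts[:n].reshape(-1, m).sum(axis=1)  # merge m base gates
        out[m] = bunched.var(ddof=1) / bunched.mean() - 1.0
    return out

# Synthetic correlated count series standing in for one detector's gate data.
rng = np.random.default_rng(2)
rate = 50.0 * (1.0 + 0.2e-3 * rng.standard_normal(200_000).cumsum())
counts = rng.poisson(np.clip(rate, 1.0, None))

for m, y in feynman_y(counts, [1, 2, 4, 8, 16, 32]).items():
    print(f"gate = {m:3d} x 5 ms, Y = {y:.3f}")
```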

  9. Fusion decay power: Validation of FISPACT and FENDL/A-2.0

    International Nuclear Information System (INIS)

    Sublet, J.C.; Forrest, R.A.

    1999-01-01

Integral experiments are a rich source of information with which a wide range of validation and comparison exercises can be made in the activation data field. Material samples have been irradiated in a wide range of simulated D-T neutron fields at three European laboratories and at JAERI FNS. The latter experiment is unique because decay heat rather than activity was measured. Some results from that experiment are reported here, with some details of data corrections that have been made for EAF-99. (author)

  10. Validation of Hydrodynamic Numerical Model of a Pitching Wave Energy Converter

    DEFF Research Database (Denmark)

    López, Maria del Pilar Heras; Thomas, Sarah; Kramer, Morten Mejlhede

    2017-01-01

Validation of numerical models is essential in the development of new technologies. Commercially available software and codes for simulating wave energy converters (WECs) have not yet been proven to work for all available and upcoming technologies. The present paper presents the first stages of the validation process of a hydrodynamic numerical model for a pitching wave energy converter. The development of dry tests and of wave flume and wave basin experiments is explained, lessons learned are shared, and results are presented.

  11. Convergent validity test, construct validity test and external validity test of the David Liberman algorithm

    Directory of Open Access Journals (Sweden)

    David Maldavsky

    2013-08-01

Full Text Available The author first presents a complement to a previous test of convergent validity, then a construct validity test and finally an external validity test of the David Liberman algorithm (DLA). The first part of the paper focuses on a complementary aspect, the differential sensitivity of the DLA (1) in an external comparison (to other methods), and (2) in an internal comparison (between two ways of using the same method, the DLA). The construct validity test presents the concepts underlying the DLA, their operationalization and some corrections emerging from several empirical studies we carried out. The external validity test examines the possibility of using the investigation of a single case and its relation to the investigation of a more extended sample.

  12. Results from the FELIX experiments on electromagnetic effects in hollow cylinders

    International Nuclear Information System (INIS)

    Turner, L.R.; Gunderson, G.R.; Knott, M.J.; McGhee, D.G.; Praeg, W.F.; Wehrle, R.B.

    1985-01-01

    The early experiments with the FELIX (Fusion Electromagnetic Induction eXperiments) facility have been devoted to obtaining data which can be used to validate eddy current computer codes. This paper describes experiments on field variation inside conducting cylinders

  13. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    Science.gov (United States)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that "requirement is the origin, design is the basis". So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method of the requirements are introduced in detail, in the hope of providing this experience to other civil jet product designs.

  14. Validation of system codes RELAP5 and SPECTRA for natural convection boiling in narrow channels

    Energy Technology Data Exchange (ETDEWEB)

    Stempniewicz, M.M., E-mail: stempniewicz@nrg.eu; Slootman, M.L.F.; Wiersema, H.T.

    2016-10-15

Highlights: • Computer codes RELAP5/Mod3.3 and SPECTRA 3.61 validated for boiling in narrow channels. • Validated codes can be used for LOCA analyses in research reactors. • Code validation based on natural convection boiling experiments in narrow channels. - Abstract: Safety analyses of LOCA scenarios in nuclear power plants are performed with so-called thermal–hydraulic system codes, such as RELAP5. Such codes are validated for the typical fuel geometries applied in nuclear power plants. The question considered by this article is whether the codes can be applied to LOCA analyses in research reactors, in particular when exceeding CHF in very narrow channels. In order to answer this question, validation calculations were performed with two thermal–hydraulic system codes: RELAP and SPECTRA. The validation was based on natural convection boiling experiments in narrow channels, performed by Prof. Monde et al. in the years 1990–2000. In total, 42 vertical tube and annulus experiments were simulated with both codes. Good agreement of the calculated values with the measured data was observed. The main conclusions are: • The computer codes RELAP5/Mod 3.3 (US NRC version) and SPECTRA 3.61 have been validated for natural convection boiling in narrow channels using the experiments of Monde. The dimensions applied in the experiments cover the range of values observed in typical research reactors. Therefore it is concluded that both codes are validated and can be used for LOCA analyses in research reactors, including natural convection boiling. The applicability range of the present validation is: hydraulic diameters of 1.1 ⩽ D_hyd ⩽ 9.0 mm, heated lengths of 0.1 ⩽ L ⩽ 1.0 m, pressures of 0.10 ⩽ P ⩽ 0.99 MPa. In most calculations the burnout was predicted to occur at lower power than that observed in the experiments; in several cases the burnout was predicted at higher power. The overprediction was not larger than 16% in RELAP and 15% in SPECTRA.

  15. Validation of single-fluid and two-fluid magnetohydrodynamic models of the helicity injected torus spheromak experiment with the NIMROD code

    International Nuclear Information System (INIS)

    Akcay, Cihan; Victor, Brian S.; Jarboe, Thomas R.; Kim, Charlson C.

    2013-01-01

We present a comparison study of 3-D pressureless resistive MHD (rMHD) and 3-D pressureless two-fluid MHD models of the Helicity Injected Torus with Steady Inductive helicity injection (HIT-SI). HIT-SI is a current drive experiment that uses two geometrically asymmetric helicity injectors to generate and sustain toroidal plasmas. The comparable size of the collisionless ion skin depth d_i to the resistive skin depth indicates the importance of the Hall term for HIT-SI. The simulations are run with NIMROD, an initial-value, 3-D extended MHD code. The modeled plasma density and temperature are assumed uniform and constant. The helicity injectors are modeled as oscillating normal magnetic and parallel electric field boundary conditions. The simulations use parameters that closely match those of the experiment. The simulation output is compared to the formation time, plasma current, and internal and surface magnetic fields. Results of the study indicate 2fl-MHD shows quantitative agreement with the experiment while rMHD only captures the qualitative features. The validity of each model is assessed based on how accurately it reproduces the global quantities as well as the temporal and spatial dependence of the measured magnetic fields. 2fl-MHD produces the current amplification (I_tor/I_inj) and formation time τ_f demonstrated by HIT-SI with similar internal magnetic fields. rMHD underestimates (I_tor/I_inj) and exhibits a much longer τ_f. Biorthogonal decomposition (BD), a powerful mathematical tool for reducing large data sets, is employed to quantify how well the simulations reproduce the measured surface magnetic fields without resorting to a probe-by-probe comparison. BD shows that 2fl-MHD captures the dominant surface magnetic structures and the temporal behavior of these features better than rMHD.
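
    Biorthogonal decomposition of a space-time data matrix is, in practice, a singular value decomposition: the left singular vectors give temporal "chronos" modes, the right singular vectors give spatial "topos" modes, and the singular values rank their energy content. A minimal sketch on synthetic surface-probe data (the probe count and signal content are illustrative assumptions, not HIT-SI data):

```python
import numpy as np

# Synthetic surface-probe signals: rows = time samples, columns = probes.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 1000)[:, None]
theta = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)[None, :]  # 32 probes
data = (np.sin(2 * np.pi * 10 * t) * np.cos(theta)              # dominant mode
        + 0.4 * np.cos(2 * np.pi * 10 * t) * np.sin(2 * theta)  # weaker mode
        + 0.05 * rng.normal(size=(1000, 32)))                   # probe noise

# Biorthogonal decomposition via SVD: U holds chronos, Vt holds topos modes.
U, S, Vt = np.linalg.svd(data, full_matrices=False)
energy = S**2 / np.sum(S**2)
print("energy fraction of first 4 modes:", np.round(energy[:4], 3))
```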

  16. Affective commitment to the employer brand: Development and validation of a scale

    Directory of Open Access Journals (Sweden)

    Susana Fernandez-Lores

    2016-01-01

Full Text Available In recent years employer branding has become increasingly important as a source of sustainable competitive advantage. Companies are trying to engender affective commitment in the best employees in a global labour market. In this study, we develop and validate a multidimensional scale to measure the strength of an employee's affective commitment to the employer brand in five separate studies. In Studies 1 and 2 the Affective Commitment to the Employer Brand (ACEB) scale was developed and tested for its structure, reliability and convergent validity. Study 3 examines additional reliability and discriminant validity. Study 4 provides evidence of external validity. Study 5 examines the scale's nomological validity, showing that a positive experience with the employer brand is important in making the employee develop affective commitment towards it. The limitations of the scale and the boundary conditions of its applicability are also discussed.

  17. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    Science.gov (United States)

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results on generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not appropriate. By contrast, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
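
    The contrast between the two tests can be reproduced with a small self-contained simulation: cross-validate a simple classifier on labels unrelated to the features, then compute a binomial p-value (which wrongly treats the pooled predictions as independent Bernoulli trials) and a permutation p-value (which re-runs the whole cross-validation under shuffled labels). The nearest-centroid classifier and sample sizes below are illustrative choices, not those of the paper:

```python
import numpy as np
from math import comb

def cv_accuracy(X, y, k=5):
    """k-fold cross-validated accuracy of a nearest-centroid classifier."""
    idx = np.arange(len(y))
    correct = 0
    for fold in range(k):
        test = idx[fold::k]
        train = np.setdiff1d(idx, test)
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(X[test] - c1, axis=1)
                < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
        correct += int((pred == y[test]).sum())
    return correct / len(y)

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 10))      # random features
y = np.array([0, 1] * 20)          # labels unrelated to the features

acc = cv_accuracy(X, y)
n, n_correct = len(y), round(acc * len(y))

# Binomial p-value: assumes independent Bernoulli(0.5) trials, the very
# assumption that cross-validation violates.
p_binom = sum(comb(n, i) for i in range(n_correct, n + 1)) / 2**n

# Permutation p-value: repeat the full cross-validation on shuffled labels.
null = [cv_accuracy(X, rng.permutation(y)) for _ in range(500)]
p_perm = (np.sum(np.array(null) >= acc) + 1) / (len(null) + 1)

print(f"accuracy = {acc:.2f}, binomial p = {p_binom:.3f}, "
      f"permutation p = {p_perm:.3f}")
```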

  18. GPM GROUND VALIDATION OKLAHOMA CLIMATOLOGICAL SURVEY MESONET MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Oklahoma Climatological Survey Mesonet MC3E data were collected during the Midlatitude Continental Convective Clouds Experiment (MC3E) in...

  19. BIOMOVS: an international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1988-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (author)

  1. Dynamically Scaled Model Experiment of a Mooring Cable

    Directory of Open Access Journals (Sweden)

    Lars Bergdahl

    2016-01-01

Full Text Available The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have the correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to the incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with a length of 1240 m, i.e., at a length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved to be successful for the presented experiments. The experimental data are made available here for the validation of other numerical codes through published digitised time series of two of the experiments.
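
    The quoted 1:37.6 scale and the matched elastic wave celerity are consistent with Froude-type similitude, under which each physical quantity scales as a fixed power of the geometric scale. A minimal sketch of those scale factors (the Froude convention is an assumption here; the paper works with its own set of dimensionless parameters):

```python
# Froude-similitude scale factors from a geometric scale lam = L_proto/L_model
# (assumed scaling convention; the paper's dimensionless set may differ).
lam = 1240.0 / 33.0  # ~37.6, as in the experiment

factors = {
    "length": lam,
    "time": lam**0.5,              # t ~ sqrt(L/g), g identical at both scales
    "velocity": lam**0.5,
    "force": lam**3,               # same fluid density assumed
    "elastic celerity": lam**0.5,  # needed so tension waves scale with time
}
for name, f in factors.items():
    print(f"{name:16s} 1:{f:8.1f}")
```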

  2. Validation of SCALE for High Temperature Gas-Cooled Reactors Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ilas, Germina [ORNL; Ilas, Dan [ORNL; Kelly, Ryan P [ORNL; Sunny, Eva E [ORNL

    2012-08-01

This report documents verification and validation studies carried out to assess the performance of the SCALE code system methods and nuclear data for modeling and analysis of High Temperature Gas-Cooled Reactor (HTGR) configurations. Validation data were available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhE Handbook), prepared by the International Reactor Physics Experiment Evaluation Project, for two different HTGR designs: prismatic and pebble bed. SCALE models have been developed for HTTR, a prismatic fuel design reactor operated in Japan, and HTR-10, a pebble bed reactor operated in China. The models were based on benchmark specifications included in the 2009, 2010, and 2011 releases of the IRPhE Handbook. SCALE models for the HTR-PROTEUS pebble bed configuration at the PROTEUS critical facility in Switzerland have also been developed, based on benchmark specifications included in a 2009 IRPhE draft benchmark. The development of the SCALE models has involved a series of investigations to identify particular issues associated with modeling the physics of HTGRs and to understand and quantify the effect of particular modeling assumptions on calculation-to-experiment comparisons.

  3. A proposed framework for computational fluid dynamics code calibration/validation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1993-01-01

The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and "validation experiment" is discussed, along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance

  4. A Measure of Perceived Chronic Social Adversity: Development and Validation

    Directory of Open Access Journals (Sweden)

    Jingqiu Zhang

    2017-12-01

Full Text Available The goal of this study was to develop a measure that assesses negative daily social encounters. Specifically, we examined the concept of perceived chronic social adversity and its assessment, the Perceived Chronic Social Adversity Questionnaire (PCSAQ). The PCSAQ focuses on the subjective processing of daily social experiences. Psychometric properties were examined within two non-clinical samples (N = 331 and N = 390) and one clinical sample (N = 86). Exploratory and confirmatory factor analyses supported a three-factor model of the PCSAQ, which corresponds to three types of daily social stressors. The final 28-item PCSAQ was shown to be internally consistent, and to have good construct validity in terms of factor structure and group differences. It was also shown to have good concurrent validity in terms of association with outcome variables (sense of control, happiness, and mood and anxiety symptoms). Perceived chronic social adversity was also shown to be correlated with PTSD severity. Taken together, these findings suggest that the PCSAQ is a reliable, valid, and useful measure that can be used to assess negative social and clinical aspects of personal experiences. This study is an important exploratory step in improving our understanding of the relationship between the cumulative effect of negative social encounters and psychological difficulty.

  5. Experimental validation of UTDefect

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, A.S. [ABB Tekniska Roentgencentralen AB, Taeby (Sweden); Bostroem, A.; Wirdelius, H. [Chalmers Univ. of Technology, Goeteborg (Sweden). Div. of Mechanics

    1997-01-01

This study reports on experiments and computer simulations of ultrasonic nondestructive testing (NDT). Experiments and simulations are compared with the purpose of validating the simulation program UTDefect. UTDefect simulates ultrasonic NDT of cracks and some other defects in isotropic and homogeneous materials. Simulations for the detection of surface breaking cracks are compared with experiments in pulse-echo mode on surface breaking cracks in carbon steel plates. The echo dynamics are plotted and compared with the simulations. The experiments are performed on a plate with a thickness of 36 mm, and the crack depths are 7.2 mm and 18 mm. L- and T-probes with frequencies of 1, 2 and 4 MHz and angles of 45, 60 and 70 deg are used. In most cases the probe and the crack are on opposite sides of the plate, but in some cases they are on the same side. Several cracks are scanned from two directions. In total, 53 experiments are reported for 33 different combinations. Generally the simulations agree well with the experiments, and UTDefect is shown to be able to, within certain limits, perform simulations that are close to experiments. It may be concluded that: For corner echoes the eight 45 deg cases and the eight 60 deg cases show good agreement between experiments and UTDefect, especially for the 7.2 mm crack. The amplitudes differ more for some cases where the defect is close to the probe and for the corner of the 18 mm crack. For the two 70 deg cases there are too few experimental values to compare the curve shapes, but the amplitudes do not differ too much. The tip diffraction echoes also agree well in general. For some cases, where the defect is close to the probe, the amplitudes differ by more than 10-15 dB, but for all but two cases the difference in amplitude is less than 7 dB. 6 refs.

  6. Development and validation of three-dimensional CFD techniques for reactor safety applications. Final report

    International Nuclear Information System (INIS)

Buchholz, Sebastian; Palazzo, Simone; Papukchiev, Angel; Scheurer, Martina

    2016-12-01

The overall goal of the project RS 1506 'Development and Validation of Three-Dimensional CFD Methods for Reactor Safety Applications' is the validation of Computational Fluid Dynamics (CFD) software for the simulation of three-dimensional thermo-hydraulic heat and fluid flow phenomena in nuclear reactors. For this purpose a wide spectrum of validation and test cases was selected, covering fluid flow and heat transfer phenomena in the downcomer and in the core of pressurized water reactors. In addition, the coupling of the system code ATHLET with the CFD code ANSYS CFX was further developed and validated. The first choice was the UPTF experiments, where turbulent single- and two-phase flows were investigated in a 1:1 scaled model of a German KONVOI reactor. The scope of the CFD calculations covers thermal mixing and stratification, including condensation in single- and two-phase flows. In the complex core region, the flow in a fuel assembly with spacer grid was simulated as defined in the OECD/NEA benchmark MATIS-H. Good agreement is achieved when the geometrical and physical boundary conditions are reproduced as realistically as possible. This includes, in particular, the consideration of heat transfer to walls. The influence of wall modelling on CFD results was investigated on the TALL-3D T01 experiment. In this case, dynamic three-dimensional fluid flow and heat transfer phenomena were simulated in a Generation IV liquid metal cooled reactor. Concurrently with the validation work, the coupling of the system code ATHLET with the ANSYS CFX software was optimized and extended to two-phase flows. Different coupling approaches were investigated in order to overcome the large difference between the CPU-time requirements of system and CFD codes. Finally, the coupled simulation system was validated by applying it to the simulation of the PSI double T-junction experiment, the LBE flow in the MYRRA Spallation experiment and a demonstration test case simulating a pump trip.

  7. Validation of SCALE-4 criticality sequences using ENDF/B-V data

    International Nuclear Information System (INIS)

    Bowman, S.M.; Wright, R.Q.; DeHart, M.D.; Taniuchi, H.

    1993-01-01

The SCALE code system developed at Oak Ridge National Laboratory contains criticality safety analysis sequences that include the KENO V.a Monte Carlo code for calculation of the effective multiplication factor. These sequences are widely used for criticality safety analyses performed both in the United States and abroad. The purpose of the current work is to validate the SCALE-4 criticality sequences with an ENDF/B-V cross-section library for future distribution with SCALE-4. The library used for this validation is a broad-group library (44 groups) collapsed from the 238-group SCALE library. Extensive data testing of both the 238-group and the 44-group libraries included 10 fast and 18 thermal CSEWG benchmarks and 5 other fast benchmarks. Both libraries contain approximately 300 nuclides and are, therefore, capable of modeling most systems, including those containing spent fuel or radioactive waste. The validation of the broad-group library used 93 critical experiments as benchmarks. The range of experiments included 60 light-water-reactor fuel rod lattices, 13 mixed-oxide fuel rod lattices, and 15 other low- and high-enriched uranium critical assemblies

  8. The validity of the general similarity law for electrical breakdown of gases

    International Nuclear Information System (INIS)

    Osmokrovic, Predrag; Zivic, Tamara; Loncar, Boris; Vasic, Aleksandra

    2006-01-01

This paper investigates the validity of the similarity law for dc and pulse breakdown of gases. Geometrically similar systems insulated with SF6 gas were used in the experiments. It is shown that the similarity law is valid for dc breakdown voltages if the electron mean free path is included among the geometrical parameters of the system, but not for pulse breakdown voltages. The explanation for this lies in the mechanism of the pulse discharge. The similarity law was expanded to take into account the mechanisms of pulse breakdown initiation. Thus, the general similarity law is obtained, the validity of which in the case of a pulse breakdown is established experimentally
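
    For dc breakdown, the similarity law is the familiar Paschen behaviour: the breakdown voltage depends on the pressure-gap product pd, so geometrically similar gaps at inversely scaled pressures break down at the same voltage. A minimal sketch with generic air-like constants (illustrative placeholders, not SF6 data):

```python
import numpy as np

def paschen_voltage(pd, A=11.0, B=274.0, gamma=0.01):
    """Paschen breakdown voltage [V] as a function of pd [Pa*m].

    A [1/(Pa*m)] and B [V/(Pa*m)] are gas-dependent constants; the values
    here are generic air-like placeholders, not SF6 data."""
    return B * pd / np.log(A * pd / np.log(1.0 + 1.0 / gamma))

# Similar gaps at inversely scaled pressures share the same pd, hence the
# same dc breakdown voltage: the similarity the paper tests.
for p, d in [(1e5, 1e-3), (2e5, 5e-4), (4e5, 2.5e-4)]:
    print(f"p = {p:.0e} Pa, d = {d:.1e} m, V_b = {paschen_voltage(p * d):.0f} V")
```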

  9. experience in school

    Directory of Open Access Journals (Sweden)

    Maria da Graça B. B. Dias

    2005-01-01

Full Text Available An experiment investigated the effect of a make-believe fantasy mode of problem presentation on reasoning about valid conditional syllogisms in three groups of 5-year-old children: (a) school children from middle-class families in England; (b) school children from middle-class families in Brazil; and (c) children from low-SES families in Brazil who had never gone to school. Previous investigations had reported that the use of a fantasy context elicited significantly more logically appropriate responses from school children than did other contexts, and that children with school experience made significantly more logically appropriate responses than did children without school experience. The present investigation extended these findings to show that the beneficial effects of a fantasy context extended to lower-class illiterate children who had never been exposed to schooling

  10. The Validity and Precision of the Comparative Interrupted Time-Series Design: Three Within-Study Comparisons

    Science.gov (United States)

    St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D.

    2016-01-01

    We explore the conditions under which short, comparative interrupted time-series (CITS) designs represent valid alternatives to randomized experiments in educational evaluations. To do so, we conduct three within-study comparisons, each of which uses a unique data set to test the validity of the CITS design by comparing its causal estimates to…

  11. The validation of evacuation simulation models through the analysis of behavioural uncertainty

    International Nuclear Information System (INIS)

    Lovreglio, Ruggiero; Ronchi, Enrico; Borri, Dino

    2014-01-01

    Both experimental and simulation data on fire evacuation are influenced by a component of uncertainty caused by the impact of the unexplained variance in human behaviour, namely behavioural uncertainty (BU). Evacuation model validation studies should include the study of this type of uncertainty during the comparison of experiments and simulation results. An evacuation model validation procedure is introduced in this paper to study the impact of BU. This methodology is presented through a case study for the comparison between repeated experimental data and simulation results produced by FDS+Evac, an evacuation model for the simulation of human behaviour in fire, which makes use of distribution laws. - Highlights: • Validation of evacuation models is investigated. • Quantitative evaluation of behavioural uncertainty is performed. • A validation procedure is presented through an evacuation case study
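
    One concrete way to compare a distribution of repeated experimental evacuation times against a distribution of stochastic simulation runs is a two-sample Kolmogorov-Smirnov test. The sketch below uses synthetic data and is one possible comparison metric, not necessarily the procedure applied in the paper:

```python
import numpy as np
from scipy import stats

# Hypothetical total evacuation times [s] from repeated experimental trials
# and from repeated stochastic simulation runs (synthetic illustration only).
rng = np.random.default_rng(5)
experiment = rng.lognormal(mean=np.log(95.0), sigma=0.10, size=40)
simulation = rng.lognormal(mean=np.log(100.0), sigma=0.12, size=200)

# Two-sample KS test: do the two samples plausibly share one distribution?
stat, p = stats.ks_2samp(experiment, simulation)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
```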

  12. INTRA - Maintenance and Validation. Final Report

    International Nuclear Information System (INIS)

    Edlund, Ove; Jahn, Hermann; Yitbarek, Z.

    2002-05-01

    The INTRA code is specified by the ITER Joint Central Team and the European Community as a reference code for safety analyses of Tokamak-type fusion reactors. INTRA has been developed by GRS and Studsvik EcoSafe to analyse integrated behaviour such as pressurisation, chemical reactions and temperature transients inside the plasma chamber and adjacent rooms following postulated accidents, e.g. ingress of coolant water or air. Important results of the ICE and EVITA experiments, which became available in early 2001, were used to validate and improve specific INTRA models. Large efforts were spent on the behaviour of water and steam injection into low-pressure volumes at high temperature, as well as on the modelling of boiling of water in contact with hot surfaces. As a result, a new version, INTRA/Mod4, was documented and issued. The work included implementation and validation of selected physical models in the code, maintenance of code versions, preparation, review and distribution of code documents, and monitoring of the code-related activities performed by GRS under a separate contract. The INTRA/Mod4 Manual and Code Description is documented in four volumes: Volume 1 - Physical Modelling, Volume 2 - User's Manual, Volume 3 - Code Structure and Volume 4 - Validation

  13. Class 1E software verification and validation: Past, present, and future

    International Nuclear Information System (INIS)

    Persons, W.L.; Lawrence, J.D.

    1993-10-01

    This paper discusses work in progress that addresses software verification and validation (V&V) as it takes place during the full software life cycle of safety-critical software. The paper begins with a brief overview of the task description and discussion of the historical evolution of software V&V. A new perspective is presented which shows the entire verification and validation process from the viewpoints of a software developer, product assurance engineer, independent V&V auditor, and government regulator. An account of the experience of the field test of the Verification Audit Plan and Report generated from the V&V Guidelines is presented along with sample checklists and lessons learned from the verification audit experience. Then, an approach to automating the V&V Guidelines is introduced. The paper concludes with a glossary and bibliography

  14. Class 1E software verification and validation: Past, present, and future

    International Nuclear Information System (INIS)

    Persons, W.L.; Lawrence, J.D.

    1994-01-01

    This paper discusses work in progress that addresses software verification and validation (V&V) as it takes place during the full software life cycle of safety-critical software. The paper begins with a brief overview of the task description and discussion of the historical evolution of software V&V. A new perspective is presented which shows the entire verification and validation process from the viewpoints of a software developer, product assurance engineer, independent V&V auditor, and government regulator. An account of the experience of the field test of the Verification Audit Plan and Report generated from the V&V Guidelines is presented along with sample checklists and lessons learned from the verification audit experience. Then, an approach to automating the V&V Guidelines is introduced. The paper concludes with a glossary and bibliography

  15. Can 3D Gamified Simulations Be Valid Vocational Training Tools for Persons with Intellectual Disability? An Experiment Based on a Real-life Situation.

    Science.gov (United States)

    von Barnekow, Ariel; Bonet-Codina, Núria; Tost, Dani

    2017-03-23

    To investigate whether 3D gamified simulations can be valid vocational training tools for persons with intellectual disability, a 3D gamified simulation composed of a set of training tasks for cleaning work in hostelry was developed in collaboration with professionals of a real hostel and pedagogues of a special-needs school. The learning objectives focus on the acquisition of vocabulary skills, work procedures, social abilities and risk prevention. Several accessibility features were developed to make the tasks easy to perform from a technological point of view. A pilot experiment was conducted to test the pedagogical efficacy of this tool with intellectually disabled workers and students. User scores in the gamified simulation follow a curve of increasing progression. When confronted with reality, participants recognized the scenario and tried to reproduce what they had learned in the simulation. Finally, they were interested in the tool, showed a strong feeling of immersion and engagement, and reported having fun. On the basis of this experiment we believe that 3D gamified simulations can be efficient tools to train the social and professional skills of persons with intellectual disabilities, thus contributing to fostering their social inclusion through work.

  16. Modeling the Effects of Argument Length and Validity on Inductive and Deductive Reasoning

    Science.gov (United States)

    Rotello, Caren M.; Heit, Evan

    2009-01-01

    In an effort to assess models of inductive reasoning and deductive reasoning, the authors, in 3 experiments, examined the effects of argument length and logical validity on evaluation of arguments. In Experiments 1a and 1b, participants were given either induction or deduction instructions for a common set of stimuli. Two distinct effects were…

  17. Validation Study of CODES Dragonfly Network Model with Theta Cray XC System

    Energy Technology Data Exchange (ETDEWEB)

    Mubarak, Misbah [Argonne National Lab. (ANL), Argonne, IL (United States); Ross, Robert B. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-05-31

    This technical report describes the experiments performed to validate the MPI performance measurements reported by the CODES dragonfly network simulation with the Theta Cray XC system at the Argonne Leadership Computing Facility (ALCF).

  18. Assessment of TRACE code against CHF experiments

    International Nuclear Information System (INIS)

    Audrius Jasiulevicius; Rafael Macian-Juan; Paul Coddington

    2005-01-01

    This paper reports on the validation of the USNRC 'consolidated' code TRACE with data obtained during Critical Heat Flux (CHF) experiments in single channels of round and annular tube geometry. CHF is one of the key reactor safety parameters, because it determines the conditions for the onset of transition boiling in the core rod bundles, leading to the low heat transfer rates characteristic of the post-CHF heat transfer regime. In the context of the participation of PSI in the International Programme for uncertainty analysis BEMUSE, we have carried out extensive work for the validation of some important TRACE models. The present work is aimed at assessing the range of validity of the CHF correlations and post-CHF heat transfer models currently included in TRACE. The heat transfer experiments selected for the assessment were performed at the Royal Institute of Technology (RIT) in Stockholm, Sweden and at the Atomic Energy Establishment in Winfrith, UK. The experimental investigations of CHF and post-CHF heat transfer at RIT, for flow of water in vertical tubes and an annulus, were performed at pressures ranging from 1 to 20 MPa and coolant mass fluxes from 500 to 3000 kg/m²s. The liquid was subcooled by 10 deg. C and 40 deg. C at the inlet of the test section. The experiments were performed on two different types of test sections. Experiments with uniformly heated single 7.0 m long tubes were carried out with three different inner tube diameters of 10, 14.9 and 24.7 mm. A series of experiments with non-uniform axial power distribution were also conducted in order to study the effect of the axial heat flux distribution on the CHF conditions in both 7.0 m long single tubes and a 3.65 m long annulus. Several different axial power profiles were employed, with bottom, middle and top power peaks as well as double-humped axial power profiles. In total, more than 100 experiments with uniform axial heat flux distribution and several hundreds…

  19. Domestic design and validation of natural circulation steam generator of China 1000 MWe PWR NPP

    International Nuclear Information System (INIS)

    Liu, H.Y.; Wang, X.Y.; Wu, G.; Qin, J.M.; Xiong, Ch.H.; Wang, W.; Chen, J.L.; Cheng, H.P.; Zuo, Ch.P.

    2005-01-01

    In order to meet the requirements of the domestic design of China's planned NPP projects, the Research Institute of Nuclear Power Operation (RINPO) has completed the design of a 1000 MWe NPP steam generator, called RINSG-1000 (meaning a 1000 MWe SG designed by RINPO), based on SG research, experiments and service experience accumulated by RINPO over more than 40 years. Testing validation of two steam generator key technologies, the advanced moisture separator device and the sludge collector, was accomplished during the period 2000 to 2002. This paper describes the design features of RINSG-1000 and provides some validation test results. (authors)

  20. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    Science.gov (United States)

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is controversial in public discussion. Thus, from a biometrical point of view, an optimal sample size should be aimed at in these projects. Statistical sample size calculation is usually the appropriate methodology for planning medical research projects. However, the required information is often not valid or only becomes available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
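
    As a hedged illustration of the formal calculation whose validity the article questions: a standard a-priori power analysis for a two-group comparison needs an assumed effect size, which is precisely the quantity that is often not reliably known before a basic-research animal experiment. The effect size and error rates below are assumptions, not values from the article.

        # Sketch of a conventional a-priori sample size calculation (two-sample t-test).
        from statsmodels.stats.power import TTestIndPower

        effect_size = 1.0   # assumed standardized difference (Cohen's d); often only a guess
        n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                                  alpha=0.05, power=0.80,
                                                  alternative='two-sided')
        print(f"animals needed per group: {n_per_group:.1f}")  # about 17 for d = 1.0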

  1. GPM GROUND VALIDATION AIRBORNE SECOND GENERATION PRECIPITATION RADAR (APR-2) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Airborne Second Generation Precipitation Radar (APR-2) GCPEx dataset was collected during the GPM Cold-season Precipitation Experiment...

  2. The virtual lover: variable and easily guided 3D fish animations as an innovative tool in mate-choice experiments with sailfin mollies-II. Validation.

    Science.gov (United States)

    Gierszewski, Stefanie; Müller, Klaus; Smielik, Ievgen; Hütwohl, Jan-Marco; Kuhnert, Klaus-Dieter; Witte, Klaudia

    2017-02-01

    The use of computer animation in behavioral research is a state-of-the-art method for designing and presenting animated animals to live test animals. The major advantages of computer animations are: (1) the creation of animated animal stimuli with high variability of morphology and even behavior; (2) animated stimuli provide highly standardized, controlled and repeatable testing procedures; and (3) they allow a reduction in the number of live test animals regarding the 3Rs principle. But the use of animated animals should be attended by a thorough validation for each test species to verify that behavior measured with live animals toward virtual animals can also be expected with natural stimuli. Here we present results on the validation of a custom-made simulation for animated 3D sailfin mollies Poecilia latipinna and show that responses of live test females were as strong to an animated fish as to a video or a live male fish. Movement of an animated stimulus was important but female response was stronger toward a swimming 3D fish stimulus than to a "swimming" box. Moreover, male test fish were able to discriminate between animated male and female stimuli; hence, rendering the animated 3D fish a useful tool in mate-choice experiments with sailfin mollies.

  3. CFD validation of a supercritical water flow for SCWR design heat and mass fluxes

    International Nuclear Information System (INIS)

    Roelofs, F.; Lycklama a Nijeholt, J.A.; Komen, E.M.J.; Lowenberg, M.; Starflinger, J.

    2007-01-01

    The applicability of Computational Fluid Dynamics (CFD) to water under supercritical conditions in supercritical water reactors (SCWR) still has to be verified. In the recent past, CFD validation analyses were performed by various institutes for supercritical water in vertical tubes based on the well-known experimental data of Yamagata. However, validation using data from experiments with working conditions closer to the actual operational conditions of such reactors is needed. From a literature survey, the experiments performed by Herkenrath were selected to perform validation analyses at higher heat fluxes and a higher mass flux. The accuracy of CFD using RANS (Reynolds-Averaged Navier-Stokes) turbulence modelling for supercritical fluids under conditions close to the operational conditions of a supercritical water reactor is determined. It is concluded that the wall temperature can be predicted by RANS CFD, using the RNG k-ε turbulence model, with an accuracy in the range of 5% for heat fluxes up to 1100 kW/m² and for a bulk enthalpy up to 2200 kJ/kg. For a bulk enthalpy exceeding 2200 kJ/kg, a significantly better accuracy of the CFD predictions (about 3%) is found for the simulations of the Yamagata experiments than for the simulations of the Herkenrath experiments, for which the accuracy is about 18%. This might be a result of the fact that the CFD analyses do not reproduce the flattening of the temperature profile at about 2200 kJ/kg which is found in the experiments of Herkenrath. However, the obtained accuracies, ranging from 3% to 18%, are still deemed acceptable for many design purposes. (authors)

  4. Convergent validity between a discrete choice experiment and a direct, open-ended method: comparison of preferred attribute levels and willingness to pay estimates.

    Science.gov (United States)

    van der Pol, Marjon; Shiell, Alan; Au, Flora; Johnston, David; Tough, Suzanne

    2008-12-01

    The Discrete Choice Experiment (DCE) has become increasingly popular as a method for eliciting patient or population preferences. If DCE estimates are to inform health policy, it is crucial that the answers they provide are valid. Convergent validity is tested in this paper by comparing the results of a DCE exercise with the answers obtained from direct, open-ended questions. The two methods are compared in terms of preferred attribute levels and willingness to pay (WTP) values. Face-to-face interviews were held with 292 women in Calgary, Canada. Similar values were found between the two methods with respect to preferred levels for two out of three of the attributes examined. The DCE predicted less well for levels outside the range than for levels inside the range reaffirming the importance of extensive piloting to ensure appropriate level range in DCEs. The mean WTP derived from the open-ended question was substantially lower than the mean derived from the DCE. However, the two sets of willingness to pay estimates were consistent with each other in that individuals who were willing to pay more in the open-ended question were also willing to pay more in the DCE. The difference in mean WTP values between the two approaches (direct versus DCE) demonstrates the importance of continuing research into the different biases present across elicitation methods.
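
    As background on how WTP is conventionally derived from a DCE (a hedged illustration, not the paper's own derivation): with a linear-in-attributes utility specification estimated by conditional logit,

        U_{ij} = \sum_k \beta_k x_{ijk} + \beta_c \, c_{ij} + \varepsilon_{ij},

    the marginal willingness to pay for attribute k is the coefficient ratio

        WTP_k = -\beta_k / \beta_c,

    whereas the open-ended method elicits a money amount directly; the two estimates need not coincide, which is exactly what the convergent-validity comparison tests.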

  5. Lessons learned from recent geomagnetic disturbance model validation activities

    Science.gov (United States)

    Pulkkinen, A. A.; Welling, D. T.

    2017-12-01

    Due to concerns pertaining to the impact of geomagnetically induced currents on ground-based infrastructure, there has been significantly elevated interest in applying models for local geomagnetic disturbance or "delta-B" predictions. Correspondingly, there has been an elevated need for testing the quality of the delta-B predictions generated by modern empirical and physics-based models. To address this need, community-wide activities were launched under the GEM Challenge framework, and one culmination of these activities was the validation and selection of models that were transitioned into operations at NOAA SWPC. The community-wide delta-B action is continued under the CCMC-facilitated International Forum for Space Weather Capabilities Assessment and its "Ground Magnetic Perturbations: dBdt, delta-B, GICs, FACs" working group. The new delta-B working group builds on the past experiences and expands the collaborations to cover the entire international space weather community. In this paper, we discuss the key lessons learned from the past delta-B validation exercises and lay out the path forward for building on that experience under the new delta-B working group.

  6. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to incorrect estimation of the consequences of accidents, and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. "agreement is within 10%". A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
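
    The record does not reproduce the paper's metric, but a minimal sketch of one conventional quantitative figure of merit makes the ranking idea concrete; the data and code names below are hypothetical.

        import numpy as np

        def rms_relative_error(predicted, measured):
            """Root-mean-square relative deviation of a code prediction from experiment."""
            predicted, measured = np.asarray(predicted), np.asarray(measured)
            return np.sqrt(np.mean(((predicted - measured) / measured) ** 2))

        # Hypothetical data set: one experiment, three competing codes.
        measured = np.array([1.02, 0.98, 1.10, 0.95])
        codes = {"code_A": [1.00, 1.01, 1.05, 0.97],
                 "code_B": [1.10, 0.90, 1.20, 0.99],
                 "code_C": [1.02, 0.97, 1.11, 0.94]}
        for name in sorted(codes, key=lambda c: rms_relative_error(codes[c], measured)):
            print(name, f"{rms_relative_error(codes[name], measured):.2%}")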

  7. The role of CFD combustion modelling in hydrogen safety management – VI: Validation for slow deflagration in homogeneous hydrogen-air-steam experiments

    Energy Technology Data Exchange (ETDEWEB)

    Cutrono Rakhimov, A., E-mail: cutrono@nrg.eu [Nuclear Research and Consultancy Group (NRG), Westerduinweg 3, 1755 ZG Petten (Netherlands); Visser, D.C., E-mail: visser@nrg.eu [Nuclear Research and Consultancy Group (NRG), Westerduinweg 3, 1755 ZG Petten (Netherlands); Holler, T., E-mail: tadej.holler@ijs.si [Jožef Stefan Institute (JSI), Jamova cesta 39, 1000 Ljubljana (Slovenia); Komen, E.M.J., E-mail: komen@nrg.eu [Nuclear Research and Consultancy Group (NRG), Westerduinweg 3, 1755 ZG Petten (Netherlands)

    2017-01-15

    Highlights: • Deflagration of hydrogen-air-steam homogeneous mixtures is modeled in a medium-scale containment. • Adaptive mesh refinement is applied on flame front positions. • The influence of steam on combustion modeling capabilities is investigated. • The mean pressure rise is predicted with 18% under-prediction when steam is involved. • The peak pressure is evaluated with 5% accuracy when steam is involved. - Abstract: Large quantities of hydrogen can be generated during a severe accident in a water-cooled nuclear reactor. When released into the containment, the hydrogen can create a potential deflagration risk. The dynamic pressure loads resulting from hydrogen combustion can be detrimental to the structural integrity of the reactor. Therefore, accurate prediction of these pressure loads is an important safety issue. In previous papers, we validated a Computational Fluid Dynamics (CFD) based method to determine the pressure loads from a fast deflagration. The combustion model applied in the CFD method is based on the Turbulent Flame Speed Closure (TFC). In our last paper, we presented the extension of this combustion model, the Extended Turbulent Flame Speed Closure (ETFC), and its validation against hydrogen deflagration experiments in the slow deflagration regime. During a severe accident, cooling water will enter the containment as steam. Therefore, the effect of steam on hydrogen deflagration is important to capture in a CFD model. The primary objectives of the present paper are to further validate the TFC and ETFC combustion models, and to investigate their capability to predict the effect of steam. The peak pressures, the trends of the flame velocity, and the pressure rise with an increase in the initial steam dilution are captured reasonably well by both combustion models. In addition, the ETFC model appeared to be more robust to mesh resolution changes. The mean pressure rise is evaluated with 18% under-prediction and the peak pressure with 5% accuracy.

  8. SEA effectiveness criteria-equally valid in all countries? The case of Italy

    International Nuclear Information System (INIS)

    Fischer, Thomas B.; Gazzola, Paola

    2006-01-01

    Recent years have seen the introduction of various sets of strategic environmental assessment (SEA) effectiveness criteria in the professional international literature. Content analysis of key international SEA publications suggests that these have been developed based on the experiences of a selected number of countries only, and to date the question of whether they are fully valid in all systems and countries world-wide has not been addressed sufficiently. In this context, the paper discusses the validity of effectiveness criteria for Italy, a country whose authors have contributed only to a very limited extent to the international SEA literature. It is concluded that, particularly in the light of experiences with a 'flexible' but ineffective EIA system, SEA in Italy needs to be applied in a systematic and rigorous manner, aided by strong enforcement mechanisms.

  9. Face and content validity of the virtual reality simulator 'ScanTrainer®'.

    Science.gov (United States)

    Alsalamah, Amal; Campo, Rudi; Tanos, Vasilios; Grimbizis, Gregoris; Van Belle, Yves; Hood, Kerenza; Pugh, Neil; Amso, Nazar

    2017-01-01

    Ultrasonography is a first-line imaging modality in the investigation of women's irregular bleeding and other gynaecological pathologies, e.g. ovarian cysts and early pregnancy problems. However, teaching ultrasound, especially transvaginal scanning, remains a challenge for health professionals. New technology such as simulation may potentially facilitate and expedite the process of learning ultrasound. Simulation may prove to be realistic, very close to the real patient scanning experience for the sonographer, and objectively able to assist the development of basic skills such as image manipulation, hand-eye coordination and examination technique. The aim of this study was to determine the face and content validity of a virtual reality simulator (ScanTrainer®, MedaPhor plc, Cardiff, Wales, UK) as reflective of real transvaginal ultrasound (TVUS) scanning. A questionnaire with 14 simulator-related statements was distributed to a number of participants with differing levels of sonography experience in order to determine the level of agreement between the use of the simulator in training and real practice. There were 36 participants: novices (n = 25) and experts (n = 11), who rated the simulator. Median scores of the face validity statements between experts and non-experts, using 10-point visual analogue scale (VAS) ratings, ranged between 7.5 and 9.0 (p > 0.05), indicating a high level of agreement. Experts' median scores of the content validity statements ranged from 8.4 to 9.0. The findings confirm that the simulator has the feel and look of real-time scanning, with high face validity. Similarly, its tutorial structures and learning steps confirm the content validity.

  10. Failure mode and effects analysis outputs: are they valid?

    Science.gov (United States)

    Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick

    2012-06-10

    Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted. Face validity: by comparing the FMEA participants' mapped processes with observational work. Content validity: by presenting the FMEA findings to other healthcare professionals. Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database. Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number (RPN). Face validity was positive, as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses, yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident report database.
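
    A small worked example of the scale-property objection the study raises against the RPN; the scores below are hypothetical and chosen only to show how multiplying ordinal ratings conflates very different failure modes.

        # The conventional FMEA risk priority number multiplies three ordinal 1-10 ratings.
        def rpn(severity, occurrence, detectability):
            return severity * occurrence * detectability

        # Multiplication treats ordinal ranks as ratio-scale quantities, so very
        # different failure modes can receive an identical priority:
        print(rpn(10, 2, 3))  # 60 - catastrophic, rare, fairly detectable
        print(rpn(2, 10, 3))  # 60 - minor, but occurring constantly
        print(rpn(5, 4, 3))   # 60 - middling on every axis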

  11. Ensemble Kalman filter regularization using leave-one-out data cross-validation

    KAUST Repository

    Rayo Schiappacasse, Lautaro Jerónimo

    2012-09-19

    In this work, the classical leave-one-out cross-validation method for selecting a regularization parameter for the Tikhonov problem is implemented within the EnKF framework. Following the original concept, the regularization parameter is selected such that it minimizes the predictive error. Some ideas about the implementation, suitability and conceptual interest of the method are discussed. Finally, what will be called the data cross-validation regularized EnKF (dCVr-EnKF) is implemented in a 2D 2-phase synthetic oil reservoir experiment and the results analyzed.
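
    A minimal sketch of the leave-one-out principle the record builds on, applied to a plain Tikhonov problem; the paper embeds this selection inside the EnKF update, which is not reproduced here, and all data below are synthetic.

        import numpy as np

        def loo_tikhonov_lambda(A, y, lambdas):
            """Pick the Tikhonov parameter that minimizes leave-one-out predictive error."""
            n = len(y)
            scores = []
            for lam in lambdas:
                err = 0.0
                for i in range(n):
                    keep = np.arange(n) != i
                    # Regularized normal equations on the data set with observation i left out
                    x = np.linalg.solve(A[keep].T @ A[keep] + lam * np.eye(A.shape[1]),
                                        A[keep].T @ y[keep])
                    err += (A[i] @ x - y[i]) ** 2
                scores.append(err / n)
            return lambdas[int(np.argmin(scores))]

        # Synthetic ill-conditioned problem
        rng = np.random.default_rng(0)
        A = rng.normal(size=(30, 10))
        A[:, 1] = A[:, 0] + 1e-3 * rng.normal(size=30)   # near-collinear columns
        y = A @ rng.normal(size=10) + 0.05 * rng.normal(size=30)
        print(loo_tikhonov_lambda(A, y, np.logspace(-6, 1, 20)))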

  12. Criticality safety validation of MCNP5 using continuous energy libraries

    International Nuclear Information System (INIS)

    Salome, Jean A.D.; Pereira, Claubia; Assuncao, Jonathan B.A.; Veloso, Maria Auxiliadora F.; Costa, Antonella L.; Silva, Clarysson A.M. da

    2013-01-01

    The study of subcritical systems is very important in the design, installation and operation of various devices, mainly nuclear reactors and power plants. The information generated by these systems guides the decisions to be taken in the executive project, the economic viability analysis and the safety measures to be employed in a nuclear facility. By simulating selected experiments from the International Handbook of Evaluated Criticality Safety Benchmark Experiments, the code MCNP5 was validated for nuclear criticality analysis. Its continuous-energy libraries were used. The average values and standard deviations (SD) were evaluated. The results obtained with the code are very similar to the values obtained in the benchmark experiments. (author)

  13. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    Science.gov (United States)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they are performing well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also pay attention to the cross-validation method of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign is carried out at the middle reaches of the Heihe River Basin, China in 2012. Flux measurements from an observation matrix composed of 22 EC and 4 LAS are acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including the empirical statistical model, the one-source and two-source models, the Penman-Monteith equation based model, the Priestley-Taylor equation based model, and the complementary relationship based model, are used to perform an intercomparison. All the results from the two cases of RS_ET validation showed that the proposed validation methods are reasonable and feasible.

  14. Validation of NCSSHP for highly enriched uranium systems containing beryllium

    International Nuclear Information System (INIS)

    Krass, A.W.; Elliott, E.P.; Tollefson, D.A.

    1994-01-01

    This document describes the validation of KENO V.a using the 27-group ENDF/B-IV cross section library for highly enriched uranium and beryllium neutronic systems, and is in accordance with ANSI/ANS-8.1-1983(R1988) requirements for calculational methods. The validation has been performed on a Hewlett Packard 9000/Series 700 Workstation at the Oak Ridge Y-12 Plant Nuclear Criticality Safety Department using the Oak Ridge Y-12 Plant Nuclear Criticality Safety Software code package. Critical experiments from LA-2203, UCRL-4975, ORNL-2201, and ORNL/ENG-2 have been identified as having the constituents desired for this validation as well as sufficient experimental detail to allow accurate construction of KENO V.a calculational models. The results of these calculations establish the safety criteria to be employed in future calculational studies of these types of systems

  15. Validating a perceptual distraction model in a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict a user's perceived distraction caused by audio-on-audio interference, e.g., two competing audio sources within the same listening space. Originally, the distraction model was trained with music-on-music stimuli... using a simple loudspeaker setup, consisting of only two loudspeakers, one for the target sound source and the other for the interfering sound source. Recently, the model was successfully validated in a complex personal sound-zone system with speech-on-music stimuli. A second round of validations was... conducted by physically altering the sound-zone system and running a set of new listening experiments utilizing two sound zones within the sound-zone system, thus validating the model using a different sound-zone system with both speech-on-music and music-on-speech stimuli sets. Preliminary results show...

  16. Validation of OPERA3D PCMI Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    Jeun, Ji Hoon; Choi, Jae Myung; Yoo, Jong Sung [KEPCO Nuclear Fuel Co., Daejeon (Korea, Republic of); Cheng, G.; Sim, K. S.; Chassie, Girma [Candu Energy INC.,Ontario (Canada)

    2013-10-15

    This report describes the validation of the OPERA3D code and presents validation results that are directly related to PCMI phenomena. OPERA3D was developed for PCMI analysis and validated using in-pile measurement data. Fuel centerline temperature and clad strain calculation results show close agreement with the measurement data. Moreover, the 3D FEM fuel model of OPERA3D shows slight hourglassing behavior of the fuel pellet in the contact case. Further optimization will be conducted for future applications of the OPERA3D code. A nuclear power plant consists of many complicated systems, and one important objective across all of them is maintaining nuclear fuel integrity. However, PCMI (Pellet Cladding Mechanical Interaction) phenomena are inevitable both in current operating reactors and in next-generation reactors designed for advanced safety and economics. To evaluate PCMI behavior, many studies are ongoing to develop 3-dimensional fuel performance evaluation codes. Moreover, these codes are essential for setting safety limits based on best-estimate PCMI predictions aimed at high-burnup fuel.

  17. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical applications of classification with cross-validation.
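
    A minimal sketch of the comparison the study performs, reproduced on toy data: classify pure noise with cross-validation, then test the mean accuracy with a binomial test (which wrongly assumes independent trials) and with a permutation test (which re-runs the whole CV pipeline under shuffled labels). Dimensions, classifier and permutation count are assumptions, not the study's settings.

        import numpy as np
        from scipy.stats import binomtest
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score, permutation_test_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 20))     # pure-noise features
        y = np.repeat([0, 1], 20)         # arbitrary two-class labels

        clf = LogisticRegression(max_iter=1000)
        acc = cross_val_score(clf, X, y, cv=5).mean()

        # Binomial test: treats the 40 CV predictions as independent Bernoulli trials.
        p_binom = binomtest(int(round(acc * len(y))), n=len(y), p=0.5).pvalue

        # Permutation test: builds the null distribution by repeating CV on shuffled labels.
        _, _, p_perm = permutation_test_score(clf, X, y, cv=5, n_permutations=200,
                                              random_state=0)
        print(f"accuracy={acc:.2f}  p_binomial={p_binom:.3f}  p_permutation={p_perm:.3f}")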

  18. Airborne campaigns for CryoSat prelaunch calibration and validation

    DEFF Research Database (Denmark)

    Skourup, Henriette; Hanson, Susanne; Hvidegaard, Sine Munk

    2011-01-01

    After the successful launch of CryoSat-2 in April 2010, the first direct validation campaign of the satellite is planned for spring 2011. DTU Space has been involved in ESA's CryoSat Validation Experiment (CryoVEx) with airborne activities since 2003. To validate the prelaunch performance... of the CryoSat radar altimeter (SIRAL), an airborne version of the SIRAL altimeter (ASIRAS) has been flown together with a laser scanner in 2006 and 2008. Of particular interest is the study of the penetration depth of the radar altimeter over both land and sea ice. This can be done by comparing the radar... and laser measurements with in situ observations. Here, an overview of the prelaunch airborne campaigns is given, together with results on the ASIRAS performance over land and sea ice. The observations used in this study are obtained from the Greenland ice sheet and from both multiyear and first-year sea ice.

  19. Member checking: a tool to enhance trustworthiness or merely a nod to validation?

    OpenAIRE

    Birt, Linda; Scott, Suzanne; Cavers, Deborah; Campbell, Christine; Walter, Fiona M

    2016-01-01

    The trustworthiness of results is the bedrock of high quality qualitative research. Member checking, also known as participant or respondent validation, is a technique for exploring the credibility of results. Data or results are returned to participants to check for accuracy and resonance with their experiences. Member checking is often mentioned as one in a list of validation techniques. This simplistic reporting might not acknowledge the value of using the method, nor its juxtaposition wit...

  20. Development and preliminary validation of an interactive remote physical therapy system.

    Science.gov (United States)

    Mishra, Anup K; Skubic, Marjorie; Abbott, Carmen

    2015-01-01

    In this paper, we present an interactive physical therapy system (IPTS) for remote quantitative assessment of clients in the home. The system consists of two different interactive interfaces connected through a network for a real-time, low-latency video conference using audio, video, skeletal, and depth data streams from a Microsoft Kinect. To test the potential of IPTS, experiments were conducted with five independent-living senior subjects in Kansas City, MO. Also, experiments were conducted in the lab to validate the real-time biomechanical measures calculated from the skeletal data of the Microsoft Xbox 360 Kinect and Microsoft Xbox One Kinect against ground-truth data from a Vicon motion capture system. Good agreement was found in the validation tests. The results show the potential of the IPTS system to provide remote physical therapy to clients, especially older adults, who may find it difficult to visit the clinic.

  1. Application of Sensitivity and Uncertainty Analysis Methods to a Validation Study for Weapons-Grade Mixed-Oxide Fuel

    International Nuclear Information System (INIS)

    Dunn, M.E.

    2001-01-01

    At the Oak Ridge National Laboratory (ORNL), sensitivity and uncertainty (S/U) analysis methods and a Generalized Linear Least-Squares Methodology (GLLSM) have been developed to quantitatively determine the similarity, or lack thereof, between critical benchmark experiments and an application of interest. The S/U and GLLSM methods provide a mathematical approach, less judgment-based than traditional validation procedures, to assess system similarity and estimate the calculational bias and uncertainty for an application of interest. The objective of this paper is to gain experience with the S/U and GLLSM methods by revisiting a criticality safety evaluation and the associated traditional validation for the shipment of weapons-grade (WG) MOX fuel in the MO-1 transportation package. In the original validation, critical experiments were selected based on a qualitative assessment of the MO-1 and MOX contents relative to the available experiments. Subsequently, traditional trending analyses were used to estimate the Δk bias and associated uncertainty. In this paper, the S/U and GLLSM procedures are used to re-evaluate the suite of critical experiments associated with the original MO-1 evaluation. Using the S/U procedures developed at ORNL, critical experiments that are similar to the undamaged and damaged MO-1 package are identified based on sensitivity and uncertainty analyses of the criticals and the MO-1 package configurations. Based on the trending analyses developed for the S/U and GLLSM procedures, the Δk bias and uncertainty for the most reactive MO-1 package configurations are estimated and used to calculate an upper subcritical limit (USL) for the MO-1 evaluation. The calculated bias and uncertainty from the S/U and GLLSM analyses lead to a calculational USL that supports the original validation study for the MO-1.

  2. Five year experience in management of perforated peptic ulcer and validation of common mortality risk prediction models - are existing models sufficient? A retrospective cohort study.

    Science.gov (United States)

    Anbalakan, K; Chua, D; Pandya, G J; Shelat, V G

    2015-02-01

    Emergency surgery for perforated peptic ulcer (PPU) is associated with significant morbidity and mortality. Accurate and early risk stratification is important. The primary aim of this study is to validate the various existing MRPMs; the secondary aim is to audit our experience of managing PPU. 332 patients who underwent emergency surgery for PPU at a single institution from January 2008 to December 2012 were studied. Clinical and operative details were collected. Four MRPMs were validated: the American Society of Anesthesiologists (ASA) score, Boey's score, the Mannheim peritonitis index (MPI) and the Peptic ulcer perforation (PULP) score. Median age was 54.7 years (range 17-109 years) with male predominance (82.5%). 61.7% presented within 24 h of onset of abdominal pain. Median length of stay was 7 days (range 2-137 days). Intra-abdominal collection, leakage, re-operation and 30-day mortality rates were 8.1%, 2.1%, 1.2% and 7.2% respectively. All four MRPMs predicted intra-abdominal collection and mortality; however, only the MPI predicted leak (p = 0.01) and re-operation (p = 0.02) rates. The area under the curve for predicting mortality was 75%, 72%, 77.2% and 75% for the ASA score, Boey's score, MPI and PULP score respectively. Emergency surgery for PPU has low morbidity and mortality in our experience. The MPI is the only scoring system which predicts all four outcomes: intra-abdominal collection, leak, re-operation and mortality. All four MRPMs had similar and fair accuracy in predicting mortality; however, due to geographic and demographic diversity and the inherent weaknesses of existing MRPMs, the quest for an ideal model should continue. Copyright © 2015 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  3. USING CFD TO ANALYZE NUCLEAR SYSTEMS BEHAVIOR: DEFINING THE VALIDATION REQUIREMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Richard Schultz

    2012-09-01

    A recommended protocol for formulating numeric tool specifications and validation needs, in concert with practices accepted by regulatory agencies for advanced reactors, is described. The protocol is based on the plant type and the perceived transient and accident envelopes, which translate to boundary conditions for a process that yields: (a) the key phenomena and figures-of-merit which must be analyzed to ensure that the advanced plant can be licensed, (b) the specification of the numeric tool capabilities necessary to perform the required analyses, including bounding calculational uncertainties, and (c) the specification of the validation matrices and experiments, including the desired validation data. The result of applying the process enables a complete program to be defined, including costs, for creating and benchmarking transient and accident analysis methods for advanced reactors. By following a process that is in concert with regulatory agency licensing requirements from start to finish, based on historical acceptance of past licensing submittals, the methods derived and validated have a high probability of regulatory agency acceptance.

  4. A validated methodology for evaluating burnup credit in spent fuel casks

    International Nuclear Information System (INIS)

    Brady, M.C.; Sanders, T.L.

    1991-01-01

    The concept of allowing reactivity credit for the transmuted state of spent fuel offers both economic and risk incentives. This paper presents a general overview of the technical work being performed in support of the US Department of Energy (DOE) program to resolve issues related to the implementation of burnup credit. An analysis methodology is presented along with information representing the validation of the method against available experimental data. The experimental data that are applicable to burnup credit include chemical assay data for the validation of the isotopic prediction models, fresh fuel critical experiments for the validation of criticality calculations for various cask geometries, and reactor restart critical data to validate criticality calculations with spent fuel. The methodology has been specifically developed to be simple and generally applicable, therefore giving rise to uncertainties or sensitivities which are identified and quantified in terms of a percent bias in k_eff. Implementation issues affecting licensing requirements and operational procedures are discussed briefly

  5. A validated methodology for evaluating burnup credit in spent fuel casks

    International Nuclear Information System (INIS)

    Brady, M.C.; Sanders, T.L.

    1991-01-01

    The concept of allowing reactivity credit for the transmuted state of spent fuel offers both economic and risk incentives. This paper presents a general overview of the technical work being performed in support of the U.S. Department of Energy (DOE) program to resolve issues related to the implementation of burnup credit. An analysis methodology is presented along with information representing the validation of the method against available experimental data. The experimental data that are applicable to burnup credit include chemical assay data for the validation of the isotopic prediction models, fresh fuel critical experiments for the validation of criticality calculations for various cask geometries, and reactor restart critical data to validate criticality calculations with spent fuel. The methodology has been specifically developed to be simple and generally applicable, therefore giving rise to uncertainties or sensitivities which are identified and quantified in terms of a percent bias in k_eff. Implementation issues affecting licensing requirements and operational procedures are discussed briefly. (Author)

  6. A validated methodology for evaluating burnup credit in spent fuel casks

    International Nuclear Information System (INIS)

    Brady, M.C.; Sanders, T.L.

    1991-01-01

    The concept of allowing reactivity credit for the transmuted state of spent fuel offers both economic and risk incentives. This paper presents a general overview of the technical work being performed in support of the US Department of Energy (DOE) program to resolve issues related to the implementation of burnup credit. An analysis methodology is presented along with information representing the validation of the method against available experimental data. The experimental data that are applicable to burnup credit include chemical assay data for the validation of the isotopic prediction models, fresh fuel critical experiments for the validation of criticality calculations for various cask geometries, and reactor restart critical data to validate criticality calculations with spent fuel. The methodology has been specifically developed to be simple and generally applicable, therefore giving rise to uncertainties or sensitivities which are identified and quantified in terms of a percent bias in k_eff. Implementation issues affecting licensing requirements and operational procedures are discussed briefly. 24 refs., 3 tabs

  7. Optimal Design of Shock Tube Experiments for Parameter Inference

    KAUST Repository

    Bisetti, Fabrizio

    2014-01-06

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions. The control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial-based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP-based approach is used to estimate the expected information gain of the proposed experiments and to select the experimental set-ups yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and for extending the experimental design methodology to cases where the control parameters are noisy.
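
    The record describes a MAP-based estimator built on polynomial chaos surrogates; as background, the sketch below shows the textbook nested Monte Carlo estimator of expected information gain instead, with an assumed Arrhenius-type observable, to make the design-selection idea concrete. All model choices and numbers are assumptions.

        import numpy as np
        from scipy.special import logsumexp

        def expected_information_gain(design, prior_sample, forward, noise_sd, n_inner=200):
            """Nested Monte Carlo estimate of the expected information gain of a design."""
            rng = np.random.default_rng(1)
            gains = []
            for theta in prior_sample:
                y = forward(theta, design) + noise_sd * rng.normal()
                loglik = lambda t: -0.5 * ((y - forward(t, design)) / noise_sd) ** 2
                # Marginal likelihood of y, averaged over an inner prior sample
                log_evid = logsumexp([loglik(t) for t in prior_sample[:n_inner]]) - np.log(n_inner)
                gains.append(loglik(theta) - log_evid)
            return float(np.mean(gains))

        # Hypothetical observable: ln k = ln A - Ea / (R * T); the design variable is T.
        R = 8.314
        forward = lambda theta, T: theta[0] - theta[1] / (R * T)   # theta = (ln A, Ea)
        rng = np.random.default_rng(2)
        prior = np.column_stack([rng.normal(20.0, 1.0, 300),       # ln A
                                 rng.normal(1.5e5, 2.0e4, 300)])   # Ea in J/mol
        for T in (1000.0, 1500.0, 2000.0):
            print(T, expected_information_gain(T, prior, forward, noise_sd=0.1))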

  8. An introduction to use of the USACE HTRW program's data validation guidelines engineering manual

    International Nuclear Information System (INIS)

    Becker, L.D.; Coats, K.H.

    1994-01-01

    Data validation has been defined by regulatory agencies as a systematic process (consisting of data editing, screening, checking, auditing, verification, certification, and review) for comparing data to established criteria in order to provide assurance that data are adequate for their intended use. A problem for the USACE HTRW Program was that clearly defined data validation guidelines were available only for analytical data quality level IV. These functional data validation guidelines were designed for the validation of data produced using protocols from the US E.P.A.'s Contract Laboratory Program (CLP). Unfortunately, USACE experience demonstrated that these level IV functional data validation guidelines were being used to validate data not produced under the CLP. The resulting data validation product was less than satisfactory for USACE HTRW needs. Therefore, the HTRW-MCX initiated an Engineering Manual (EM) for the validation of analytical data quality levels other than IV. This EM is entitled "USACE HTRW Data Validation Guidelines." Use of the EM is required for validation of analytical data relating to projects under the jurisdiction of the Department of the Army, Corps of Engineers, Hazardous, Toxic, and Radioactive Waste Program. These data validation guidelines include procedures and checklists for technical review of analytical data at quality levels I, II, III, and V.

  9. Towards natural language question generation for the validation of ontologies and mappings.

    Science.gov (United States)

    Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos

    2016-08-08

    The increasing number of open-access ontologies and their key role in several applications, such as decision-support systems, highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point of view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as multiple-choice questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach on the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mapping validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reduce the number of questions and validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mappings over time and highlights the importance of semi-automatic validation.

  10. GPM GROUND VALIDATION NCAR CLOUD MICROPHYSICS PARTICLE PROBES MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation NCAR Cloud Microphysics Particle Probes MC3E dataset was collected during the Midlatitude Continental Convective Clouds Experiment (MC3E),...

  11. Validation of the Large Interface Method of NEPTUNE_CFD 1.0.8 for Pressurized Thermal Shock (PTS) applications

    Energy Technology Data Exchange (ETDEWEB)

    Coste, P., E-mail: pierre.coste@cea.fr [CEA, DEN, DER/SSTH, F-38054 Grenoble (France); Lavieville, J. [Electricite de France, Chatou (France); Pouvreau, J. [CEA, DEN, DER/SSTH, F-38054 Grenoble (France); Baudry, C.; Guingo, M.; Douce, A. [Electricite de France, Chatou (France)

    2012-12-15

    Highlights: • The two-phase Pressurized Thermal Shock (PTS) is a key thermohydraulics issue for PWR safety. • The dynamic and condensation models are firstly validated separately. • Then the global validation is done with the COSI experiment. • All the calculations performed with the same set of models both in the Large Interface Method and in the k-ε approach for turbulence substantiate the application of the tool to PTS. - Abstract: NEPTUNE_CFD is a code based on a 3D transient Eulerian two-fluid model. One of the main application targets is the two-phase Pressurized Thermal Shock (PTS), which is related to PWR Reactor Pressure Vessel (RPV) lifetime safety studies, when sub-cooled water from the Emergency Core Cooling (ECC) system is injected into the possibly uncovered cold leg and penetrates into the RPV downcomer. Five experiments were selected for the validation, a selection reviewed by a panel of European experts. The dynamic models are validated with a co-current smooth and wavy air-water stratified flow in a rectangular channel with detailed measurements of turbulence and velocities. The condensation models are validated with a co-current smooth and wavy steam-water stratified flow in a rectangular channel with measurements of the steam flow rates. The dynamic models are validated in the situation of a jet impinging a pool free surface with two experiments dealing with a water jet impingement on a water pool free surface in air environment. Finally, all the models involved in the reactor conditions are validated with the COSI experiment. The calculations are done with the same set of Large Interface Method models and a RANS (k-ε) approach for turbulence. They substantiate the application of the tool to PTS studies.

  12. Validation of CTF Droplet Entrainment and Annular/Mist Closure Models using Riso Steam/Water Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Wysocki, Aaron J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-02-01

    This report summarizes the work done to validate the droplet entrainment and de-entrainment models as well as two-phase closure models in the CTF code by comparison with experimental data obtained at Riso National Laboratory. The Riso data included a series of over 250 steam/water experiments that were performed in both tube and annulus geometries over a range of various pressures and outlet qualities. Experimental conditions were set so that the majority of cases were in the annular/mist flow regime. Measurements included liquid film flow rate, droplet flow rate, film thickness, and two-phase pressure drop. CTF was used to model 180 of the tubular geometry cases, matching experimental geometry, outlet pressure, and outlet flow quality to experimental values. CTF results were compared to the experimental data at the outlet of the test section in terms of vapor and entrained liquid flow fractions, pressure drop per unit length, and liquid film thickness. The entire process of generating CTF input decks, running cases, extracting data, and generating comparison plots was scripted using Python and Matplotlib for a completely automated validation process. All test cases and scripting tools have been committed to the COBRA-TF master repository and selected cases have been added to the continuous testing system to serve as regression tests. The differences between the CTF- and experimentally-calculated flow fraction values were consistent with previous calculations by Wurtz, who applied the same entrainment correlation to the same data. It has been found that CTF's entrainment/de-entrainment predictive capability in the annular/mist flow regime for this particular facility is comparable to the licensed industry code, COBRAG. While film and droplet predictions are generally good, it has been found that accuracy is diminished at lower flow qualities. This finding is consistent with the noted deficiencies in the Wurtz entrainment model employed by CTF. The CTF predicted two-phase pressure drop in

  13. Validating Bayesian truth serum in large-scale online human experiments

    OpenAIRE

    Pickard, Galen; Frank, Morgan Ryan; Cebrian, Manuel; Rahwan, Iyad

    2016-01-01

    Bayesian truth serum (BTS) is an exciting new method for improving honesty and information quality in multiple-choice surveys, but, despite the method's mathematical reliance on large sample sizes, the existing literature on BTS focuses only on small experiments....
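
    For reference, the BTS scoring rule the record relies on (Prelec, Science, 2004) combines an information score for the chosen answer with an alpha-weighted prediction score. Below is a minimal sketch with invented answers and predictions; it illustrates the published formula, not the authors' experimental code.

        # Prelec's BTS score: log(xbar_k / ybar_k) for the chosen answer k
        # (xbar = answer frequencies, ybar = geometric mean of predictions),
        # plus an alpha-weighted prediction score.
        import numpy as np

        def bts_scores(answers, predictions, alpha=1.0):
            """answers: (n,) int array of chosen options 0..K-1.
            predictions: (n, K) array, each row a predicted distribution."""
            n, K = predictions.shape
            xbar = np.bincount(answers, minlength=K) / n      # answer frequencies
            log_ybar = np.log(predictions).mean(axis=0)        # log geometric mean
            info = np.log(xbar[answers]) - log_ybar[answers]   # information score
            with np.errstate(divide="ignore", invalid="ignore"):
                pred = alpha * np.where(xbar > 0,
                                        xbar * (np.log(predictions) - np.log(xbar)),
                                        0.0).sum(axis=1)       # prediction score
            return info + pred

        answers = np.array([0, 0, 1, 0])
        predictions = np.array([[0.6, 0.4], [0.7, 0.3], [0.5, 0.5], [0.8, 0.2]])
        print(bts_scores(answers, predictions))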

  14. Overt head movements and persuasion: a self-validation analysis.

    Science.gov (United States)

    Briñol, Pablo; Petty, Richard E

    2003-06-01

    The authors report 3 experiments that examine a new mechanism by which overt head movements can affect attitude change. In each experiment, participants were induced either to nod or to shake their heads while listening to a persuasive message. When the message arguments were strong, nodding produced more persuasion than shaking. When the arguments were weak, the reverse occurred. These effects were most pronounced when elaboration was high. These findings are consistent with the "self-validation" hypothesis, which postulates that head movements either enhance (nodding) or undermine (shaking) confidence in one's thoughts about the message. In a 4th experiment, the authors extended this result to another overt behavior (writing with the dominant or nondominant hand) and a different attitude domain (self-esteem).

  15. Method validation for uranium content analysis using a potentiometer T-90

    International Nuclear Information System (INIS)

    Torowati; Ngatijo; Rahmiati

    2016-01-01

    An experimental method validation has been conducted for uranium content analysis using a Potentiometer T-90. The method validation experiment was performed in the quality control laboratory of the Experimental Fuel Element Installation, PTBBN - BATAN. The objective is to determine the precision and accuracy of analytical results for uranium analysis according to the latest American Standard Test Method (ASTM), ASTM C1267-11, which modifies the reference method by reducing reagent consumption by 10% relative to the original method. The ASTM C1267-11 reference is a new ASTM standard that replaces the older ASTM C799, Vol. 12.01, 2003. It is, therefore, necessary to validate the renewed method. The tool used for the analysis of uranium was a Potentiometer T-90 and the material used was standard uranium oxide powder CRM (Certified Reference Material). Validation of the method was done by analyzing the standard uranium powder with seven replicate weighings and seven replicate analyses. The analysis results were used to determine the accuracy, precision, Relative Standard Deviation (RSD), Horwitz coefficient of variation, and the limits of detection and quantitation. The average uranium content obtained in this method validation is 84.36% with a Standard Deviation (SD) of 0.12%, a Relative Standard Deviation (RSD) of 0.14% and a 2/3 Horwitz coefficient of variation (CV Horwitz) of 2.05%. The results show that the RSD value is smaller than the (2/3) CV Horwitz value, which means that this method has high precision. The accuracy value obtained is 0.48%, and since the acceptance limit for a high level of accuracy is an accuracy value <2.00%, this method is regarded as having a high degree of accuracy [1]. The limit of detection (LOD) and the limit of quantitation (LOQ) are 0.0145 g/L and 0.0446 g/L, respectively. It is concluded that the ASTM C1267-11 reference method is valid for use. (author)
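
    The precision criterion used above can be reproduced in a few lines: the relative standard deviation of the replicates must fall below two-thirds of the Horwitz coefficient of variation, CV(%) = 2^(1 - 0.5 log10 C), where C is the analyte mass fraction. A sketch with illustrative replicate values chosen to average 84.36% (the actual raw data are not given in the record):

        # Worked precision check: RSD of replicates vs (2/3) of the Horwitz CV.
        import math
        import statistics

        replicates = [84.21, 84.30, 84.45, 84.38, 84.29, 84.47, 84.42]  # % U, assumed

        mean = statistics.mean(replicates)
        sd = statistics.stdev(replicates)                # sample standard deviation
        rsd = 100.0 * sd / mean                          # relative SD, %

        c = mean / 100.0                                 # analyte mass fraction
        cv_horwitz = 2.0 ** (1.0 - 0.5 * math.log10(c))  # Horwitz CV, %

        print(f"mean = {mean:.2f} %, RSD = {rsd:.2f} %")
        print(f"(2/3) CV Horwitz = {2.0 / 3.0 * cv_horwitz:.2f} %")
        print("precision OK" if rsd < 2.0 / 3.0 * cv_horwitz else "precision FAIL")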

  16. Benchmarking - a validation of UTDefect

    International Nuclear Information System (INIS)

    Niklasson, Jonas; Bostroem, Anders; Wirdelius, Haakan

    2006-06-01

    New and stronger demands on the reliability of NDE/NDT procedures and methods have stimulated the development of simulation tools for NDT. Modelling of ultrasonic non-destructive testing is useful for a number of reasons, e.g. physical understanding, parametric studies and the qualification of procedures and personnel. The traditional way of qualifying a procedure is to generate a technical justification by employing experimental verification of the chosen technique. The manufacturing of test pieces is often very expensive and time consuming. It also tends to introduce a number of possible misalignments between the actual NDT situation and the proposed experimental simulation. The UTDefect computer code (SUNDT/simSUNDT) has been developed, together with the Dept. of Mechanics at Chalmers Univ. of Technology, over a decade and simulates the entire ultrasonic testing situation. A thoroughly validated model has the ability to be an alternative and a complement to the experimental work in order to reduce the extensive cost. The validation can be accomplished by comparisons with other models, but ultimately by comparisons with experiments. This project addresses the last alternative, but provides an opportunity to, at a later stage, compare with other software when all data are made public and available. The comparison has been made with experimental data from an international benchmark study initiated by the World Federation of NDE Centers. The experiments have been conducted with planar and spherically focused immersion transducers. The defects considered are side-drilled holes, flat-bottomed holes, and a spherical cavity. The data from the experiments are a reference signal used for calibration (the signal from the front surface of the test block at normal incidence) and the raw output from the scattering experiment. In all, more than forty cases have been compared. The agreement between UTDefect and the experiments was in general good (deviation less than 2 dB) when the
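
    The 2 dB acceptance figure quoted above corresponds to an amplitude ratio expressed in decibels after both signals are calibrated against the same reference echo. A one-line check, with illustrative amplitudes:

        # Signed model/experiment amplitude deviation in decibels.
        import math

        def deviation_db(a_model, a_experiment):
            return 20.0 * math.log10(a_model / a_experiment)

        print(f"{deviation_db(0.92, 1.05):+.2f} dB")  # about -1.15 dB: within 2 dB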

  17. Validation studies of thermal-hydraulic code for safety analysis of nuclear power plants

    International Nuclear Information System (INIS)

    Haapalehto, T.

    1995-01-01

    The thesis gives an overview of the validation process for thermal-hydraulic system codes and presents in more detail the assessment and validation of the French code CATHARE for VVER calculations. Three assessment cases are presented: loop seal clearing, core reflooding and flow in a horizontal steam generator. The experience gained during these assessment and validation calculations has been used to analyze the behavior of the horizontal steam generator and the natural circulation in the geometry of the Loviisa nuclear power plant. A large part of the work has been performed in cooperation with the CATHARE team in Grenoble, France. (41 refs., 11 figs., 8 tabs.)

  18. FACTAR validation

    International Nuclear Information System (INIS)

    Middleton, P.B.; Wadsworth, S.L.; Rock, R.C.; Sills, H.E.; Langman, V.J.

    1995-01-01

    A detailed strategy to validate fuel channel thermal-mechanical behaviour codes for use in current power reactor safety analysis is presented. The strategy is derived from a validation process that has recently been adopted industry-wide. The discussion focuses on the validation plan for the code FACTAR, as applied to assessing fuel channel integrity safety concerns during a large break loss of coolant accident (LOCA). (author)

  19. Complex terrain experiments in the New European Wind Atlas

    DEFF Research Database (Denmark)

    Mann, Jakob; Angelou, Nikolas; Arnqvist, Johan

    2017-01-01

    The New European Wind Atlas project will create a freely accessible wind atlas covering Europe and Turkey, develop the model chain to create the atlas and perform a series of experiments on flow in many different kinds of complex terrain to validate the models. This paper describes the experiment...

  20. Identifying Attrition Risk Based on the First Year Experience

    Science.gov (United States)

    Naylor, Ryan; Baik, Chi; Arkoudis, Sophia

    2018-01-01

    Using data collected from a recent national survey of Australian first-year students, this paper defines and validates four scales--belonging, feeling supported, intellectual engagement and workload stress--to measure the student experience of university. These scales provide insights into the university experience for both groups and individual…

  1. Endogenous protein "barcode" for data validation and normalization in quantitative MS analysis.

    Science.gov (United States)

    Lee, Wooram; Lazar, Iulia M

    2014-07-01

    Quantitative proteomic experiments with mass spectrometry detection are typically conducted by using stable isotope labeling and label-free quantitation approaches. Proteins with housekeeping functions and stable expression levels, such as actin, tubulin, and glyceraldehyde-3-phosphate dehydrogenase, are frequently used as endogenous controls. Recent studies have shown that the expression level of such common housekeeping proteins is, in fact, dependent on various factors such as cell type, cell cycle, or disease status and can change in response to a biochemical stimulation. The interference of such phenomena can, therefore, substantially compromise their use for data validation, alter the interpretation of results, and lead to erroneous conclusions. In this work, we advance the concept of a protein "barcode" for data normalization and validation in quantitative proteomic experiments. The barcode comprises a novel set of proteins that was generated from cell cycle experiments performed with MCF7, an estrogen receptor-positive breast cancer cell line, and MCF10A, a nontumorigenic immortalized breast cell line. The protein set was selected from a list of ~3700 proteins identified in different cellular subfractions and cell cycle stages of MCF7/MCF10A cells, based on the stability of spectral count data generated with an LTQ ion trap mass spectrometer. A total of 11 proteins qualified as endogenous standards for the nuclear barcode and 62 for the cytoplasmic barcode. The validation of the protein sets was performed with a complementary SKBR3/Her2+ cell line.
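
    The selection step described above, keeping proteins whose spectral counts are most stable across conditions, can be sketched as a coefficient-of-variation screen. The count table and cut-off below are invented for illustration and are not the paper's actual data or threshold:

        # Stability screen: keep proteins with low spectral-count variability.
        import statistics

        spectral_counts = {            # protein -> counts across conditions
            "GAPDH": [52, 48, 61, 40],
            "ACTB":  [95, 90, 93, 96],
            "CCNB1": [5, 30, 2, 18],   # cell-cycle regulated: should be rejected
        }

        def cv(counts):
            """Coefficient of variation of a count series."""
            return statistics.stdev(counts) / statistics.mean(counts)

        barcode = sorted(p for p, c in spectral_counts.items() if cv(c) < 0.15)
        print(barcode)                 # ['ACTB'] with these made-up numbers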

  2. Experimental Validation of Mathematical Framework for Fast Switching Valves used in Digital Hydraulic Machines

    DEFF Research Database (Denmark)

    Nørgård, Christian; Roemer, Daniel Beck; Bech, Michael Møller

    2015-01-01

    of 10 kW during switching (mean of approximately 250 W) and a pressure loss below 0.5 bar at 600 l/min. The main goal of this article is to validate parts of the mathematical framework based on a series of experiments. Furthermore, this article aims to document the experience gained from the experimental...

  3. Development and validation of a viscoelastic and nonlinear liver model for needle insertion

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Yo [Waseda University, Consolidated Research Institute for Advanced Science and Medical Care, Shinjuku, Tokyo (Japan); Onishi, Akinori; Hoshi, Takeharu; Kawamura, Kazuya [Waseda University, Graduate School of Science and Engineering, Shinjuku (Japan); Hashizume, Makoto [Kyushu University Hospital, Center for the Integration of Advanced Medicine and Innovative Technology, Fukuoka (Japan); Fujie, Masakatsu G. [Waseda University, Graduate School of Science and Engineering, Faculty of Science and Engineering, Shinjuku (Japan)

    2009-01-15

    The objective of our work is to develop and validate a viscoelastic and nonlinear physical liver model for organ model-based needle insertion, in which the deformation of an organ is estimated and predicted, and the needle path is determined with organ deformation taken into consideration. First, an overview is given of the development of the physical liver model. The material properties of the liver, considering viscoelasticity and nonlinearity, are modeled based on measured data collected from a pig's liver. The method to develop the liver model using FEM is also shown. Second, the experimental method to validate the model is explained. Both in vitro and in vivo experiments that made use of a pig's liver were conducted for comparison with the simulation using the model. Results of the in vitro experiment showed that the model reproduces the nonlinear and viscoelastic response of displacement at an internally located point with high accuracy. For a force up to 0.45 N, the maximum error is below 1 mm. Results of the in vivo experiment showed that the model reproduces the nonlinear increase of load upon the needle during insertion. Based on these results, the liver model developed and validated in this work reproduces the physical response of a liver in both in vitro and in vivo situations. (orig.)

  4. Development of a corrosion detection experiment to evaluate conventional and advanced NDI techniques

    Energy Technology Data Exchange (ETDEWEB)

    Roach, D.

    1995-12-31

    The Aging Aircraft NDI Validation Center (AANC) was established by the Federal Aviation Administration Technical Center (FAATC) at Sandia National Laboratories in August of 1991. The goal of the AANC is to provide independent validation of technologies intended to enhance the structural inspection of aging commuter and transport aircraft. The deliverables from the AANC's validation activities are assessments of the reliability of existing and emerging inspection technologies as well as analyses of the cost benefits to be derived from their implementation. This paper describes the methodology developed by the AANC to assess the performance of NDI techniques. In particular, an experiment being developed to evaluate corrosion detection devices will be presented. The experiment uses engineered test specimens, as well as complete aircraft test beds, to provide metrics for NDI validation.

  5. Validity and validation of expert (Q)SAR systems.

    Science.gov (United States)

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal) principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in a predictivity of ≥64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.
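
    An ECOSAR-style prediction of the kind described above amounts to a per-class linear regression of log toxicity on log Kow. A minimal sketch with invented training values:

        # Per-class QSAR in miniature: regress log10(LC50) on log Kow,
        # then predict a new compound within the same chemical class.
        import numpy as np

        log_kow = np.array([1.2, 2.0, 2.8, 3.5, 4.1])      # hydrophobicity
        log_lc50 = np.array([0.9, 0.4, -0.2, -0.7, -1.1])  # log10 LC50, mg/L

        slope, intercept = np.polyfit(log_kow, log_lc50, 1)

        def predict_log_lc50(kow_log):
            return slope * kow_log + intercept

        print(f"log10(LC50) = {slope:.2f} * logKow + {intercept:.2f}")
        print(f"LC50 at logKow=3.0: {10 ** predict_log_lc50(3.0):.2f} mg/L")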

  6. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
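
    The calibration loop described above can be sketched compactly: a genetic algorithm searches the uncertain inputs to minimize a weighted, normalized discrepancy over several SRQs. In the sketch below the model function is an illustrative stand-in for a RELAP5 run, and the bounds, targets, and weights are invented:

        # GA input calibration: minimize a weighted, normalized discrepancy
        # between simulated and measured system response quantities (SRQs).
        import random

        BOUNDS = [(0.5, 2.0), (0.0, 1.0)]             # two uncertain inputs
        TARGETS = {"max_flow": 12.0, "period": 40.0}  # measured SRQs
        WEIGHTS = {"max_flow": 1.0, "period": 1.0}

        def model(params):
            """Placeholder for the simulation code: params -> SRQs."""
            a, b = params
            return {"max_flow": 10.0 * a - 2.0 * b, "period": 30.0 / a + 15.0 * b}

        def fitness(params):
            srqs = model(params)
            return sum(WEIGHTS[k] * ((srqs[k] - TARGETS[k]) / TARGETS[k]) ** 2
                       for k in TARGETS)

        def mutate(p):
            """Gaussian mutation, clamped to the parameter bounds."""
            return [min(hi, max(lo, x + random.gauss(0, 0.05 * (hi - lo))))
                    for x, (lo, hi) in zip(p, BOUNDS)]

        pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(30)]
        for _ in range(60):                           # generations
            pop.sort(key=fitness)
            parents = pop[:10]                        # truncation selection
            children = [mutate([random.choice(genes) for genes in
                                zip(*random.sample(parents, 2))])
                        for _ in range(20)]           # uniform crossover + mutation
            pop = parents + children

        best = min(pop, key=fitness)
        print(best, fitness(best))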

  8. Validation of the reactor dynamics code TRAB

    International Nuclear Information System (INIS)

    Raety, H.; Kyrki-Rajamaeki, R.; Rajamaeki, M.

    1991-05-01

    The one-dimensional reactor dynamics code TRAB (Transient Analysis code for BWRs) developed at VTT was originally designed for BWR analyses, but in its present version it can be used for various modelling purposes. The core model of TRAB can be used separately for LWR calculations. For PWR modelling, the core model of TRAB has been coupled to the circuit model SMABRE to form the SMATRA code. The versatile modelling capabilities of TRAB have also been utilized in analyses of, e.g., the heating reactor SECURE and the RBMK-type reactor (Chernobyl). The report summarizes the extensive validation of TRAB. TRAB has been validated with benchmark problems, comparative calculations against independent analyses, analyses of start-up experiments of nuclear power plants and real plant transients. Comparative RBMK-type reactor calculations have been made against Soviet simulations, and the initial power excursion of the Chernobyl reactor accident has also been calculated with TRAB.

  9. [Validity and Reliability of Korean Version of the Spiritual Care Competence Scale].

    Science.gov (United States)

    Chung, Mi Ja; Park, Youngrye; Eun, Young

    2016-12-01

    The aim of this study was to examine the validity and reliability of the Korean Version of the Spiritual Care Competence Scale (K-SCCS). A cross-sectional study design was used. The K-SCCS consisted of 26 questions measuring the spiritual care competence of nurses. Participants were 228 nurses with more than 3 years' experience as a nurse, who completed the survey. Confirmatory factor analysis was used to examine the construct validity, and correlations between the K-SCCS and spiritual well-being (SWB) were used to examine the criterion validity of the K-SCCS. Cronbach's alpha was used to test internal consistency. The construct and the criterion-related validity of the K-SCCS were supported as measures of spiritual care competence. Cronbach's alpha was .95. Factor loadings of the 26 questions ranged from .60 to .96. Construct validity of the K-SCCS was verified by confirmatory factor analysis (RMSEA=.08, CFI=.90, NFI=.85). Criterion validity compared to the SWB showed a significant correlation (r=.44). The findings support the K-SCCS as a valid and reliable measure of spiritual care competence. However, further study is needed to retest the verification of the factor analysis related to factor 2 (professionalisation and improving the quality of spiritual care) and factor 3 (personal support and patient counseling). Therefore, we recommend using the total score without distinguishing subscales.
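
    The internal-consistency statistic reported above (Cronbach's alpha = .95) is straightforward to compute from an item-score matrix. A minimal sketch with an invented matrix:

        # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
        import numpy as np

        def cronbach_alpha(scores):
            """scores: (n_respondents, n_items) array of item responses."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars / total_var)

        demo = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3]]
        print(f"alpha = {cronbach_alpha(demo):.2f}")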

  10. Verification and validation of models

    International Nuclear Information System (INIS)

    Herbert, A.W.; Hodgkinson, D.P.; Jackson, C.P.; Lever, D.A.; Robinson, P.C.

    1986-12-01

    The numerical accuracy of the computer models for groundwater flow and radionuclide transport that are to be used in repository safety assessment must be tested, and their ability to describe experimental data assessed: they must be verified and validated respectively. Also appropriate ways to use the codes in performance assessments, taking into account uncertainties in present data and future conditions, must be studied. These objectives are being met by participation in international exercises, by developing bench-mark problems, and by analysing experiments. In particular the project has funded participation in the HYDROCOIN project for groundwater flow models, the Natural Analogues Working Group, and the INTRAVAL project for geosphere models. (author)

  11. Brand Experience in Banking Industry: Direct and Indirect Relationship to Loyalty

    Directory of Open Access Journals (Sweden)

    Nuri WULANDARI

    2016-02-01

    In marketing, the meaning of value is rapidly shifting from service and relationships to experiences. It is believed that the traditional value proposition is no longer effective to compete in the market and to gain customer loyalty. By adapting the brand experience model, this study tries to validate the model in the banking industry, which is currently facing intense competition to retain customers. The brand experience construct is tested for its direct and indirect relationship toward loyalty. It is postulated that satisfaction and brand authenticity might be instrumental in mediating brand experience to loyalty. Research was conducted via in-depth interviews and a quantitative survey, targeting bank customers in Jakarta. The results confirmed that brand experience contributes to loyalty both directly and indirectly, in a significant and positive manner. The research contributes by validating previous studies, with a rare emphasis on the banking sector. The results imply that brand experience is an important driver of customer loyalty in this area and a subject of the growing research on experience marketing.

  12. French validation of the Barcelona Music Reward Questionnaire

    Directory of Open Access Journals (Sweden)

    Joe Saliba

    2016-03-01

    Background. The Barcelona Music Reward Questionnaire (BMRQ) investigates the main facets of music experience that could explain the variance observed in how people experience reward associated with music. Currently, only English and Spanish versions of this questionnaire are available. The objective of this study is to validate a French version of the BMRQ. Methods. The original BMRQ was translated and adapted into an international French version. The questionnaire was then administered through an online survey aimed at adults aged over 18 years who were fluent in French. Statistical analyses were performed and compared to the original English and Spanish versions for validation purposes. Results. A total of 1,027 participants completed the questionnaire. Most responses were obtained from France (89.4%). Analyses revealed that congruence values between the rotated loading matrix and the ideal loading matrix ranged between 0.88 and 0.96. Factor reliabilities of the subscales (i.e., Musical Seeking, Emotion Evocation, Mood Regulation, Social Reward and Sensory-Motor) also ranged between 0.88 and 0.96. In addition, the reliability of the overall factor score (i.e., Music Reward) was 0.91. Finally, the internal consistency of the overall scale was 0.85. The factorial structure obtained in the French translation was similar to that of the original Spanish and English samples. Conclusion. The French version of the BMRQ appears valid and reliable. Potential applications of the BMRQ include its use as a valuable tool in music reward and emotion research, whether in healthy individuals or in patients suffering from a wide variety of cognitive, neurologic and auditory disorders.
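
    The congruence values quoted above (0.88 to 0.96) are Tucker's congruence coefficients: column-wise cosine similarities between the rotated loading matrix and the ideal (target) matrix. A minimal sketch with invented loadings:

        # Tucker's congruence coefficient, one value per factor.
        import numpy as np

        def congruence(rotated, ideal):
            """Per-factor congruence between two (items x factors) loading matrices."""
            num = (rotated * ideal).sum(axis=0)
            den = np.sqrt((rotated ** 2).sum(axis=0) * (ideal ** 2).sum(axis=0))
            return num / den

        rotated = np.array([[0.82, 0.10], [0.75, 0.05], [0.12, 0.79], [0.08, 0.71]])
        ideal = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
        print(congruence(rotated, ideal))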

  13. Validation of a two-fluid model used for the simulation of dense fluidized beds

    Energy Technology Data Exchange (ETDEWEB)

    Boelle, A.

    1997-02-17

    A two-fluid model applied to the simulation of gas-solid dense fluidized beds is validated on the micro scale and on the macro scale. Phase coupling is carried out in the momentum and energy transport equations of both phases. The modeling is built on the kinetic theory of granular media, in which the gas action has been taken into account in order to get correct expressions for the transport coefficients. A description of hydrodynamic interactions between particles in high-Stokes-number flow is also incorporated in the model. The micro-scale validation uses Lagrangian numerical simulations viewed as numerical experiments. The first validation case refers to a gas-particle simple shear flow. It allows validation of the competition between two dissipation mechanisms: drag and particle collisions. The second validation case is concerned with sedimenting particles in high-Stokes-number flow. It allows validation of our approach to hydrodynamic interactions. This last case led us to develop an original Lagrangian simulation with two-way coupling between the fluid and the particles. The macro-scale validation uses the results of Eulerian simulations of a dense fluidized bed. Bed height, particle circulation and the characteristics of spontaneously created bubbles are studied and compared to experimental measurements, looking at both physical and numerical parameters. (author) 159 refs.

  14. Validation of the VTT's reactor physics code system

    International Nuclear Information System (INIS)

    Tanskanen, A.

    1998-01-01

    At VTT Energy several international reactor physics codes and nuclear data libraries are used in a variety of applications. The codes and libraries are under constant development, and updated versions are released from time to time, which are taken into use as soon as they have been validated at VTT Energy. The primary aim of the validation is to ensure that the code works properly and that it can be used correctly. Moreover, the applicability of the codes and libraries is studied in order to establish their advantages and weak points. The capability of generating program-specific nuclear data for different reactor physics codes starting from the same evaluated data is sometimes of great benefit. VTT Energy has acquired a nuclear data processing system based on the NJOY-94.105 and TRANSX-2.15 processing codes. The validity of the processing system has been demonstrated by generating pointwise (MCNP) and groupwise (ANISN) temperature-dependent cross section sets for the benchmark calculations of the Doppler coefficient of reactivity. At VTT Energy the KENO-VI three-dimensional Monte Carlo code is used in criticality safety analyses. The KENO-VI code and the 44GROUPNDF5 data library have been validated at VTT Energy against the ZR-6 and LR-0 critical experiments. Burnup credit refers to the reduction in reactivity of burned nuclear fuel due to the change in composition during irradiation. VTT Energy has participated in the calculational VVER-440 burnup credit benchmark in order to validate criticality safety calculation tools. (orig.)

  15. Validation of MCNP and WIMS-AECL/DRAGON/RFSP for ACR-1000 applications

    International Nuclear Information System (INIS)

    Bromley, Blair P.; Adams, Fred P.; Zeller, Michael B.; Watts, David G.; Shukhman, Boris V.; Pencer, Jeremy

    2008-01-01

    This paper gives a summary of the validation of the reactor physics codes WIMS-AECL, DRAGON, RFSP and MCNP5, which are being used in the design, operation, and safety analysis of the ACR-1000. The standards and guidelines being followed for code validation of the suite are established in CSA Standard N286.7-99 and ANS Standard ANS-19.3-2005. These codes are being validated for the calculation of key output parameters associated with various reactor physics phenomena of importance during normal operations and postulated accident conditions in an ACR-1000 reactor. Experimental data from a variety of sources are being used for validation. The bulk of the validation data is from critical experiments in the ZED-2 research reactor with ACR-type lattices. To supplement and complement ZED-2 data, qualified and applicable data are being taken from other power and research reactors, such as existing CANDU units, FUGEN, NRU and SPERT research reactors, and the DCA critical facility. MCNP simulations of the ACR-1000 are also being used for validating WIMS-AECL/DRAGON/RFSP, which involves extending the validation results for MCNP through the assistance of TSUNAMI analyses. Code validation against commissioning data in the first-build ACR-1000 will be confirmatory. The code validation is establishing the biases and uncertainties in the calculations of the WIMS-AECL/DRAGON/RFSP suite for the evaluation of various key parameters of importance in the reactor physics analysis of the ACR-1000. (authors)

  16. Design and Validation of Affective Warning Pictorial on Cigarette Labels

    Directory of Open Access Journals (Sweden)

    Chanduen Pat-Arin

    2016-01-01

    The purpose of the present study was to design and validate affective warning pictorials for cigarette labels in Thailand. Brainstorming and survey techniques were used to collect ideas for possible warning pictorials. All ideas were grouped to find candidate pictorials. Then, a preliminary set of sixty warning pictorials was collected and equally classified into three affective groups: positive, neutral, and negative. Sixty Thai male engineering students participated in the affective validation of the warning pictorials using SAM ratings. The International Affective Picture System (IAPS) was used to bring participants to a neutral affective state before the experiments. The results revealed that all affective warning pictorials successfully evoked the target affective states in participants. After refinement, thirty affective warning pictorials were provided as positive, neutral, and negative affective warning pictorials for use on cigarette labels. Implications for affective warning pictorial design and validation are discussed.

  17. A practical guide for operational validation of discrete simulation models

    Directory of Open Access Journals (Sweden)

    Fabiano Leal

    2011-04-01

    As the number of simulation experiments increases, the necessity for validation and verification of these models demands special attention on the part of simulation practitioners. An analysis of the current scientific literature shows that the descriptions of operational validation presented in many papers do not agree on the importance assigned to this process or on the techniques applied, whether subjective or objective. With the aim of orienting professionals, researchers and students in simulation, this article elaborates a practical guide through the compilation of statistical techniques for the operational validation of discrete simulation models. Finally, the guide's applicability was evaluated by using two study objects, which represent two manufacturing cells, one from the automobile industry and the other from a Brazilian tech company. For each application, the guide identified distinct steps, due to the different aspects that characterize the analyzed distributions.
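
    As an example of the objective techniques such a guide compiles, simulated output can be compared with data from the real system using a two-sample test. A minimal sketch using Welch's t-test on invented cycle-time samples:

        # Welch's two-sample t-test: does the model output differ from the
        # real system's measurements at the chosen significance level?
        from scipy import stats

        real = [10.2, 11.1, 9.8, 10.6, 10.9, 11.4, 10.1]        # measured, minutes
        simulated = [10.4, 10.8, 10.0, 11.0, 10.5, 10.7, 10.3]  # model output

        t_stat, p_value = stats.ttest_ind(real, simulated, equal_var=False)
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
        if p_value > 0.05:
            print("no significant difference: model not invalidated at the 5% level")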

  18. Social validation of vocabulary selection: ensuring stakeholder relevance.

    Science.gov (United States)

    Bornman, Juan; Bryen, Diane Nelson

    2013-06-01

    The vocabulary needs of individuals who are unable to spell their messages continue to be of concern in the field of augmentative and alternative communication (AAC). Social validation of vocabulary selection has been suggested as one way to improve the effectiveness and relevance of service delivery in AAC. Despite increased emphasis on stakeholder accountability, social validation is not frequently used in AAC research. This paper describes an investigation of the social validity of a vocabulary set identified in earlier research. A previous study used stakeholder focus groups to identify vocabulary that could be used by South African adults who use AAC to disclose their experiences as victims of crime or abuse. Another study used this vocabulary to create communication boards for use by adults with complex communication needs. In this current project, 12 South African adults with complex communication needs who use AAC systems used a 5-point Likert scale to score the importance of each of the previously identified 57 vocabulary items. This two-step process of first using stakeholder focus groups to identify vocabulary, and then having literate persons who use AAC provide information on social validity of the vocabulary on behalf of their peers who are illiterate, appears to hold promise as a culturally relevant vocabulary selection approach for sensitive topics such as crime and abuse.

  19. CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave Boundary-Layer Interaction

    Science.gov (United States)

    Davis, David O.

    2015-01-01

    Preliminary results of an experimental investigation of a Mach 2.5 two-dimensional axisymmetric shock-wave/boundary-layer interaction (SWBLI) are presented. The purpose of the investigation is to create a SWBLI dataset specifically for CFD validation purposes. Presented herein are the details of the facility and preliminary measurements characterizing the facility and interaction region. The results will serve to define the region of interest where more detailed mean and turbulence measurements will be made.
